
SUDHARSAN ENGINEERING COLLEGE
Sathiyamangalam 622 501, Pudukkottai District.
Department of Computer Science & Engineering

OPERATING SYSTEMS
LECTURE NOTES

Prepared by
P. Parvathi
Assistant Professor
Dept. of CSE


UNIT-I

PROCESSES AND THREADS

Chapter-1 Introduction
What is an Operating System?

 A program that acts as an intermediary between a user of a computer and the
computer hardware.
 Operating system goals:
o Execute user programs and make solving user problems easier.
o Make the computer system convenient to use.
o Use the computer hardware in an efficient manner.

The Evolution of Operating Systems:

1.Mainframe Systems

 Simple batch systems


 Multiprogrammed batch systems
 Time-sharing systems

Simple Batch Systems

 Users prepare a job and submit it to a computer operator
 Users get their output some time later
 No interaction between the user and the computer system
 Operator batches together jobs with similar needs to speed up processing

Task of OS: automatically transfers control from one job to another.

 OS always resident in memory
 Disadvantages of one job at a time:
 CPU idle during I/O
 I/O devices idle when CPU busy

Memory Layout for a Simple Batch System


Multiprogrammed Batch Systems:

 Keep more than one job in memory simultaneously
 When a job performs I/O, the OS switches to another job
 Increases CPU utilization
 All jobs that enter the system are kept in the job pool on disk; the scheduler brings
jobs from the pool into memory

OS Features Needed for Multiprogramming:

 Job scheduling: which jobs in the job pool should be brought into memory?
 Memory management: the system must allocate the memory to several jobs.
 CPU scheduling: choose among jobs in memory that are ready to run.
 Allocation of devices: what if more than one job wants to use a device?
 Multiple jobs running concurrently should not affect one another

Memory Layout for a Multiprogrammed Batch Systems

Time-Sharing Systems:

 Like multiprogrammed batch, except that CPU switches between jobs occur so
frequently that users can interact with a running program
 Users give instructions to the OS or a program, and wait for immediate
results
 Requires low response time


 Allow many users to share the computer simultaneously, users have the
impression that they have their own machine
 CPU is multiplexed among several jobs that are kept in memory and on
disk

OS Features Needed for Time-Sharing

 Job synchronization and communication


 Deadlock handling
 File system

2.Desktop Systems:

Personal computers: computer systems dedicated to a single user.

 CPU utilization is not a prime concern; the goal is to maximize user convenience and
responsiveness.
 Can adopt technologies developed for mainframe operating systems: virtual
memory, file systems, multiprogramming
 File protection needed due to interconnections of computers
 Operating systems for PCs: Windows, Mac OS, Linux

3.Multiprocessor Systems:

 Also known as parallel systems or tightly coupled systems


 More than one processor in close communication, sharing computer bus, clock,
memory, and usually peripheral Devices
 Communication usually takes place through the shared memory.

Advantages:

 Increased throughput: speed-up ratio with N processors < N


 Economy of scale: cheaper than multiple single-processor systems
 Increased reliability: graceful degradation, fault tolerant

Types of Multiprocessor Systems:

a.Symmetric multiprocessing (SMP):

 Each processor runs an identical copy of the operating system.


 All processors are peers: any processor can work on any task
 OS can distribute load evenly over the processors.


 Most modern operating systems support SMP

b.Asymmetric multiprocessing:

 Master-slave relationship: a master processor controls the system and assigns work
to other processors
 Each processor is assigned a specific task
 Don't have the flexibility to assign processes to the least loaded CPU
 More common in extremely large systems

4.Distributed Systems:

 Loosely coupled system – each processor has its own local memory and clock;
processors communicate with one another through a network.
 Need protocol to communicate: TCP/IP is the most common and best supported
protocol
 Three types of networks (based on distances between nodes)
 Local area network (LAN): exists within a room/floor/building/campus
 Wide area network (WAN): span a country or continent
 Metropolitan area network (MAN): span a city
 Transmission media: copper wires, optical fiber, radio

Types of Distributed Systems:

a. Client-server systems

 Client sends requests to server


 Server sends reply to client
 Two types of servers

i) File servers: allow clients to create, read, update, and delete files
ii) Compute servers: execute actions requested by clients and send back results to
clients

b. Peer-to-peer systems:

 all machines are equals, serve both as clients and servers


 File-sharing networks: Gnutella, eDonkey 2000

5. Clustered Systems


 Clustering allows two or more systems to share storage


 Provides high reliability
 Asymmetric clustering: one server runs the application or
applications while the other servers stand by
 Symmetric clustering: all N hosts are running the application or
applications

6. Real-Time Systems

 Often used as a control device in a dedicated application such as controlling


scientific experiments, medical imaging systems, industrial control systems, and
some display systems
 Well-defined fixed-time constraints
 Real-Time systems may be either hard or soft real-time

Hard real-time:

 Secondary storage limited or absent, data stored in short term memory, or read-
only memory (ROM)
 Conflicts with time-sharing systems, not supported by general-purpose operating
systems

Soft real-time:

 Limited utility in industrial control of robotics


 Can be integrated with time-sharing systems
 Useful in applications (multimedia, virtual reality) requiring
tight response times

7. Handheld Systems

 Personal Digital Assistants (PDAs)


 Cellular telephones
 Issues:
 Limited memory
 Slow processors
 Small display screens


Chapter-2 Computer System Structures

Computer System Structure

 Computer system can be divided into four components


o Hardware – provides basic computing resources
 CPU, memory, I/O devices
o Operating system
 Controls and coordinates use of hardware among various
applications and users
o Application programs – define the ways in which the system resources are
used to solve the computing problems of the users
 Word processors, compilers, web browsers, database systems,
video games
o Users
 People, machines, other computers

Four Components of a Computer System


Computer Startup

 bootstrap program is loaded at power-up or reboot


o Typically stored in ROM or EPROM, generally known as firmware
o Initializes all aspects of the system
o Loads operating system kernel and starts execution

Computer System Organization:

 Computer-system operation
 One or more CPUs, device controllers connect through common bus
providing access to shared memory
 Concurrent execution of CPUs and devices competing for memory cycles


Computer-System Operation:

 I/O devices and the CPU can execute concurrently.


 Each device controller is in charge of a particular device type.
 Each device controller has a local buffer.
 CPU moves data from/to main memory to/from local buffers
 I/O is from the device to local buffer of controller.
 Device controller informs CPU that it has finished its operation by causing
an interrupt.

Common Functions of Interrupts:

 Interrupt transfers control to the interrupt service routine generally,


through the interrupt vector, which contains the addresses of all the
service routines.
 Interrupt architecture must save the address of the interrupted instruction.
 Incoming interrupts are disabled while another interrupt is being
processed to prevent a lost interrupt.
 A trap is a software-generated interrupt caused either by an error or a user
request.
 An operating system is interrupt driven.

Interrupt Handling:

 The operating system preserves the state of the CPU by storing registers
and the program counter.
 Determines which type of interrupt has occurred:
 polling
 vectored interrupt system

 Separate segments of code determine what action should be taken for each type of
interrupt

Interrupt Timeline


I/O Structure

 After I/O starts, control returns to user program only upon I/O completion.
 Wait instruction idles the CPU until the next interrupt
 Wait loop (contention for memory access).
 At most one I/O request is outstanding at a time, no simultaneous I/O
processing.

 After I/O starts, control returns to user program without waiting for I/O
completion.
 System call – request to the operating system to allow user to wait for I/O
completion.
 Device-status table contains entry for each I/O device indicating its type,
address, and state.
 Operating system indexes into I/O device table to determine device status
and to modify table entry to include interrupt.

Two I/O Methods:

Synchronous and Asynchronous


Device-Status Table

Direct Memory Access Structure:

 Used for high-speed I/O devices able to transmit information at close to memory
speeds.
 Device controller transfers blocks of data from buffer storage directly to main
memory without CPU intervention.
 Only one interrupt is generated per block, rather than one interrupt per byte.

Storage Structure:

 Main memory – only large storage media that the CPU can access directly.


 Secondary storage – extension of main memory that provides large nonvolatile


storage capacity.
 Magnetic disks – rigid metal or glass platters covered with magnetic recording
material
 Disk surface is logically divided into tracks, which are subdivided into
sectors.
 The disk controller determines the logical interaction between the device
and the computer.

Storage Hierarchy:

 Storage systems organized in hierarchy.


 Speed
 Cost
 Volatility
 Caching – copying information into a faster storage system; main memory can be
viewed as the last cache for secondary storage.

Storage-Device Hierarchy:

Caching:

 Important principle, performed at many levels in a computer (in hardware,


operating system, software)
 Information in use copied from slower to faster storage temporarily
 Faster storage (cache) checked first to determine if information is there
 If it is, information used directly from the cache (fast)
 If not, data copied to cache and used there
 Cache smaller than storage being cached


 Cache management important design problem


 Cache size and replacement policy

Performance of Various Levels of Storage:

 Movement between levels of storage hierarchy can be explicit or implicit

 Multiprogramming needed for efficiency


 Single user cannot keep CPU and I/O devices busy at all times
 Multiprogramming organizes jobs (code and data) so CPU always has one
to execute
 A subset of total jobs in system is kept in memory
 One job selected and run via job scheduling
 When it has to wait (for I/O for example), OS switches to another job

 Timesharing (multitasking) is logical extension in which CPU switches jobs so


frequently that users can interact with each job while it is running, creating
interactive computing
 Response time should be < 1 second
 Each user has at least one program executing in memory, called a process
 If several jobs are ready to run at the same time, CPU scheduling is needed
 If processes don’t fit in memory, swapping moves them in and out to run
 Virtual memory allows execution of processes not completely in memory

Memory Layout for Multiprogrammed System:


Chapter 3: Operating System Structures

 Operating System Services


 User Operating System Interface
 System Calls
 Types of System Calls
 System Programs
 Operating System Design and Implementation
 Operating System Structure
 Virtual Machines
 Operating System Generation
 System Boot

Objectives:

 To describe the services an operating system provides to users, processes, and


other systems
 To discuss the various ways of structuring an operating system
 To explain how operating systems are installed and customized and how they boot

Operating System Services:

 One set of operating-system services provides functions that are helpful to the
user:
 User interface - Almost all operating systems have a user interface (UI)
 Varies between Command-Line Interface (CLI), Graphical User Interface (GUI), and
Batch
 Program execution - The system must be able to load a program into
memory and to run that program, end execution, either normally or
abnormally (indicating error)
 I/O operations - A running program may require I/O, which may involve
a file or an I/O device.
 File-system manipulation - The file system is of particular interest.
Obviously, programs need to read and write files and directories, create
and delete them, search them, list file information, and manage
permissions.
 Communications – Processes may exchange information, on the same
computer or between computers over a network
 Communications may be via shared memory or through message passing
(packets moved by the OS)


 Error detection – OS needs to be constantly aware of possible errors


 May occur in the CPU and memory hardware, in I/O devices, in
user program
 For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
 Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system

 Another set of OS functions exists for ensuring the efficient operation of the
system itself via resource sharing

 Resource allocation - When multiple users or multiple jobs running


concurrently, resources must be allocated to each of them
 Many types of resources - Some (such as CPU
cycles, main memory, and file storage) may have special allocation code,
others (such as I/O devices) may have general request and release code.
 Accounting - To keep track of which users use how much and what kinds
of computer resources
 Protection and security - The owners of information stored in a multiuser
or networked computer system may want to control use of that information,
concurrent processes should not interfere with each other
 Protection involves ensuring that all access to system resources is
controlled
 Security of the system from outsiders requires user authentication,
extends to defending external I/O devices from invalid access attempts
 If a system is to be protected and secure, precautions must be
instituted throughout it. A chain is only as strong as its weakest link.

System Calls:

 Programming interface to the services provided by the OS


 Typically written in a high-level language (C or C++)
 Mostly accessed by programs via a high-level Application Program Interface
(API) rather than direct system call use
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-
based systems (including virtually all versions of UNIX, Linux, and Mac OS X),
and Java API for the Java virtual machine (JVM)
 Why use APIs rather than system calls?


Example of System Calls:

System call sequence to copy the contents of one file to another file

Example of Standard API

 Consider the ReadFile() function in the Win32 API—a function for reading from a file

A description of the parameters passed to ReadFile()


 HANDLE file—the file to be read


 LPVOID buffer—a buffer where the data will be read into and written from
 DWORD bytesToRead—the number of bytes to be read into the buffer
 LPDWORD bytesRead—the number of bytes read during the last read
 LPOVERLAPPED ovl—indicates if overlapped I/O is being used

System Call Implementation:

 Typically, a number associated with each system call


o System-call interface maintains a table indexed according to these
numbers
 The system call interface invokes intended system call in OS kernel and returns
status of the system call and any return values
 The caller need know nothing about how the system call is implemented
o Just needs to obey API and understand what OS will do as a result call
o Most details of OS interface hidden from programmer by API
 Managed by run-time support library (set of functions built into
libraries included with the compiler)

API – System Call – OS Relationship

Standard C Library Example


 C program invoking printf() library call, which calls write() system call
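A minimal sketch of this relationship (assuming a POSIX system): the printf() call goes through the buffered C library, while write() below is the system call the library eventually issues.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "Greetings\n";

    /* library call: buffered, portable */
    printf("%s", msg);
    fflush(stdout);

    /* roughly what the C library does underneath: the write() system call */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}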


System Call Parameter Passing:

 Often, more information is required than simply identity of desired system call
o Exact type and amount of information vary according to OS and call
 Three general methods used to pass parameters to the OS
o Simplest: pass the parameters in registers
 In some cases, may be more parameters than registers
o Parameters stored in a block, or table, in memory, and address of block
passed as a parameter in a register
 This approach taken by Linux and Solaris
o Parameters placed, or pushed, onto the stack by the program and popped
off the stack by the operating system
o Block and stack methods do not limit the number or length of parameters
being passed

Parameter Passing via Table
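On Linux, the number-plus-registers scheme described above can be exercised directly through the generic syscall() wrapper (a sketch, assuming Linux and glibc; SYS_write comes from <sys/syscall.h>):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello via a raw system call\n";

    /* SYS_write is the system-call number; the wrapper places the three
       arguments in registers before trapping into the kernel */
    syscall(SYS_write, 1, msg, strlen(msg));
    return 0;
}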


Types of System Calls:

 Process control
 File management
 Device management
 Information maintenance
 Communications

MS-DOS execution:

(a) At system startup (b) running a program

FreeBSD Running Multiple Programs


System Programs:

 System programs provide a convenient environment for program development


and execution. They can be divided into:
 File manipulation
 Status information
 File modification
 Programming language support
 Program loading and execution
 Communications
 Application programs
 Most users’ view of the operating system is defined by system programs, not the
actual system calls

 Provide a convenient environment for program development and execution


 Some of them are simply user interfaces to system calls; others are
considerably more complex
 File management - Create, delete, copy, rename, print, dump, list, and generally
manipulate files and directories
 Status information
 Some ask the system for info - date, time, amount of available memory,
disk space, number of users
 Others provide detailed performance, logging, and debugging information
 Typically, these programs format and print the output to the terminal or
other output devices
 Some systems implement a registry - used to store and retrieve
configuration information
 File modification


 Text editors to create and modify files


 Special commands to search contents of files or perform transformations
of the text
 Programming-language support - Compilers, assemblers, debuggers and
interpreters sometimes provided
 Program loading and execution- Absolute loaders, relocatable loaders, linkage
editors, and overlay-loaders, debugging systems for higher-level and machine
language
 Communications - Provide the mechanism for creating virtual connections among
processes, users, and computer systems
 Allow users to send messages to one another’s screens, browse web pages,
send electronic-mail messages, log in remotely, transfer files from one
machine to another

Simple Structure
 MS-DOS – written to provide the most functionality in the least space
 Not divided into modules
 Although MS-DOS has some structure, its interfaces and levels of
functionality are not well separated

MS-DOS Layer Structure:

Layered Approach:


 The operating system is divided into a number of layers (levels), each built on top
of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N)
is the user interface.
 With modularity, layers are selected such that each uses functions (operations)
and services of only lower-level layers
Layered Operating System:

UNIX
 UNIX – limited by hardware functionality, the original UNIX operating system
had limited structuring. The UNIX OS consists of two separable parts
o Systems programs
o The kernel
 Consists of everything below the system-call interface and
above the physical hardware
 Provides the file system, CPU scheduling, memory
management, and other operating-system functions; a large
number of functions for one level

UNIX System Structure:


Microkernel System Structure :

 Moves as much as possible from the kernel into "user" space


 Communication takes place between user modules using message passing
 Benefits:
o Easier to extend a microkernel
o Easier to port the operating system to new architectures
o More reliable (less code is running in kernel mode)
o More secure
 Detriments:
o Performance overhead of user space to kernel space communication

Mac OS X Structure:

Virtual Machines:


 A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware
 A virtual machine provides an interface identical to the underlying bare hardware
 The operating system creates the illusion of multiple processes, each executing on
its own processor with its own (virtual) memory

Process Management

Chapter 4: Process

 Process Concept
 Process Scheduling
 Operations on Processes
 Cooperating Processes
 Interprocess Communication
 Communication in Client-Server Systems


Process Concept:

 An operating system executes a variety of programs:


o Batch system – jobs
o Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must progress in sequential
fashion
 A process includes:
o program counter
o stack
o data section

Process in Memory:

Process State:

 As a process executes, it changes state


 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution

Diagram of Process State:


Process Control Block (PCB):

Information associated with each process


 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information
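A simplified C sketch of a PCB holding the fields listed above (the field names are illustrative only, not taken from any real kernel):

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier */
    enum proc_state state;            /* process state */
    unsigned long   program_counter;  /* saved program counter */
    unsigned long   registers[16];    /* saved CPU registers */
    int             priority;         /* CPU-scheduling information */
    void           *page_table;       /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status information */
    struct pcb     *next;             /* link used by scheduling queues */
};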

CPU Switch From Process to Process:


Process Scheduling Queues:

 Job queue – set of all processes in the system


 Ready queue – set of all processes residing in main memory, ready and waiting
to execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate among the various queues

Ready Queue And Various I/O Device Queues


Representation of Process Scheduling:

Schedulers:

 Long-term scheduler (or job scheduler) – selects which processes should be


brought into the ready queue
 Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates CPU

Addition of Medium Term Scheduling

 Short-term scheduler is invoked very frequently (milliseconds), so it must be fast
 Long-term scheduler is invoked very infrequently (seconds, minutes), so it may be
slow
 The long-term scheduler controls the degree of multiprogramming
 Processes can be described as either:
 I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts


 CPU-bound process – spends more time doing computations; few very long
CPU bursts

Context Switch:

 When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process
 Context-switch time is overhead; the system does no useful work while switching
 Time dependent on hardware support

Process Creation:

 Parent process create children processes, which, in turn create other processes,
forming a tree of processes
 Resource sharing
o Parent and children share all resources
o Children share subset of parent’s resources
o Parent and child share no resources
 Execution
o Parent and children execute concurrently
o Parent waits until children terminate
 Address space
o Child duplicate of parent
o Child has a program loaded into it
 UNIX examples
o fork system call creates new process
o exec system call used after a fork to replace the process’ memory space
with a new program

C Program Forking Separate Process


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
pid_t pid;
/* fork another process */
pid = fork();


if (pid < 0) { /* error occurred */


fprintf(stderr, "Fork Failed");
exit(-1);
}
else if (pid == 0) { /* child process */
execlp("/bin/ls", "ls", NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait (NULL);
printf ("Child Complete");
exit(0);
}
}

A tree of processes on a typical Solaris:

Process Termination:

 Process executes last statement and asks the operating system to delete it (exit)
o Output data from child to parent (via wait)
o Process’ resources are deallocated by operating system
 Parent may terminate execution of children processes (abort)
o Child has exceeded allocated resources
o Task assigned to child is no longer required
o If parent is exiting


 Some operating systems do not allow a child to continue if its parent
terminates
– All children terminated - cascading termination

Cooperating Processes:

 Independent process cannot affect or be affected by the execution of another


process
 Cooperating process can affect or be affected by the execution of another process
 Advantages of process cooperation
 Information sharing
 Computation speed-up
 Modularity
 Convenience

Producer-Consumer Problem:

 Paradigm for cooperating processes, producer process produces information that


is consumed by a consumer process
 unbounded-buffer places no practical limit on the size of the buffer
 bounded-buffer assumes that there is a fixed buffer size

Bounded-Buffer – Shared-Memory Solution:


 Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
 Solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer – Insert() Method:

while (true) {
/* Produce an item */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing -- no free buffers */


buffer[in] = item;
in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer – Remove() Method:


while (true) {
while (in == out)
; // do nothing -- nothing to consume

// remove an item from the buffer


item = buffer[out];
out = (out + 1) % BUFFER_SIZE;
return item;
}
Interprocess Communication (IPC):

 Mechanism for processes to communicate and to synchronize their actions


 Message system – processes communicate with each other without resorting to
shared variables
 IPC facility provides two operations:
o send(message) – message size fixed or variable
o receive(message)
 If P and Q wish to communicate, they need to:
o establish a communication link between them
o exchange messages via send/receive
 Implementation of communication link
o physical (e.g., shared memory, hardware bus)
o logical (e.g., logical properties)

Communications Models :


Direct Communication:

 Processes must name each other explicitly:


o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q
 Properties of communication link
o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair there exists exactly one link
o The link may be unidirectional, but is usually bi-directional

Indirect Communication:

 Messages are directed and received from mailboxes (also referred to as ports)
o Each mailbox has a unique id
o Processes can communicate only if they share a mailbox
 Properties of communication link
o Link established only if processes share a common mailbox
o A link may be associated with many processes
o Each pair of processes may share several communication links
o Link may be unidirectional or bi-directional
 Operations
o create a new mailbox
o send and receive messages through mailbox
o destroy a mailbox
 Primitives are defined as:
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
 Mailbox sharing
o P1, P2, and P3 share mailbox A
o P1, sends; P2 and P3 receive
o Who gets the message?
 Solutions
o Allow a link to be associated with at most two processes
o Allow only one process at a time to execute a receive operation
o Allow the system to select arbitrarily the receiver. Sender is notified who
the receiver was.
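POSIX message queues behave much like the mailboxes described above: any processes that open the same queue name can exchange messages through it. A minimal single-process sketch (assuming Linux; the queue name /demo_mailbox is made up for illustration, and the program links with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    char buf[64];

    /* create (or open) mailbox A */
    mqd_t mbox = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0644, &attr);

    /* send(A, message) */
    mq_send(mbox, "hello", strlen("hello") + 1, 0);

    /* receive(A, message) -- the buffer must be at least mq_msgsize bytes */
    mq_receive(mbox, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/demo_mailbox");
    return 0;
}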

Synchronization:

 Message passing may be either blocking or non-blocking


 Blocking is considered synchronous


o Blocking send has the sender block until the message is received
o Blocking receive has the receiver block until a message is available
 Non-blocking is considered asynchronous
o Non-blocking send has the sender send the message and continue
o Non-blocking receive has the receiver receive a valid message or null

Buffering:

 Queue of messages attached to the link; implemented in one of three ways


1.Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3.Unbounded capacity – infinite length
Sender never waits
Client-Server Communication:
 Sockets
 Remote Procedure Calls
 Remote Method Invocation (Java)
Sockets:
 A socket is defined as an endpoint for communication
 Concatenation of IP address and port
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Communication takes place between a pair of sockets

Socket Communication:
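A minimal sketch of the client side of socket communication (assuming a POSIX system; 161.25.19.8:1625 is the illustrative endpoint from the notes, not a real server):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* the remote endpoint: IP address 161.25.19.8, port 1625 */
    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    /* create the local socket and connect the pair of endpoints */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(sock, (struct sockaddr *) &server, sizeof(server)) == 0) {
        const char *req = "hello\n";
        write(sock, req, strlen(req));   /* communication between the two sockets */
    }
    close(sock);
    return 0;
}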

Remote Procedure Calls:

 Remote procedure call (RPC) abstracts procedure calls between processes on


networked systems.
 Stubs – client-side proxy for the actual procedure on the server.
 The client-side stub locates the server and marshalls the parameters.
 The server-side stub receives this message, unpacks the marshalled parameters,
and performs the procedure on the server.


Execution of RPC

Remote Method Invocation:

 Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.


 RMI allows a Java program on one machine to invoke a method on a remote
object.

Marshalling Parameters:


Chapter 5: Threads

 Overview
 Multithreading Models
 Threading Issues
 Pthreads
 Windows XP Threads
 Linux Threads
 Java Threads

Single and Multithreaded Processes


Benefits:

 Responsiveness
 Resource Sharing
 Economy
 Utilization of MP Architectures

User Threads:

 Thread management done by user-level threads library


 Three primary thread libraries:
o POSIX Pthreads
o Win32 threads
o Java threads

Kernel Threads:

 Supported by the Kernel


 Examples
o Windows XP/2000
o Solaris
o Linux
o Tru64 UNIX
o Mac OS X

Multithreading Models:


 Many-to-One
 One-to-One
 Many-to-Many

Many-to-One:

 Many user-level threads mapped to single kernel thread


 Examples:
o Solaris Green Threads
o GNU Portable Threads

Many-to-One Model:

One-to-One:

 Each user-level thread maps to kernel thread


 Examples
o Windows NT/XP/2000
o Linux
o Solaris 9 and later

One-to-one Model:


Many-to-Many Model:

 Allows many user level threads to be mapped to many kernel threads


 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
 Windows NT/2000 with the ThreadFiber package

Two-level Model:

 Similar to M:M, except that it allows a user thread to be bound to kernel thread
 Examples
o IRIX
o HP-UX
o Tru64 UNIX
o Solaris 8 and earlier


Two-level Model:

Threading Issues:

 Semantics of fork() and exec() system calls


 Thread cancellation
 Signal handling
 Thread pools
 Thread specific data
 Scheduler activations

Semantics of fork() and exec():

 Does fork() duplicate only the calling thread or all threads?

Thread Cancellation:

 Terminating a thread before it has finished


 Two general approaches:
o Asynchronous cancellation terminates the target thread immediately
o Deferred cancellation allows the target thread to periodically check if it
should be cancelled
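With Pthreads, deferred cancellation looks roughly like the sketch below (assuming a POSIX system): the target thread is cancelled only when it reaches a cancellation point such as pthread_testcancel().

#include <pthread.h>

void *worker(void *arg)
{
    int old;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);
    while (1) {
        /* ... perform one unit of work ... */
        pthread_testcancel();   /* periodic check for a pending cancel */
    }
    return NULL;                /* not reached */
}

/* elsewhere, another thread requests cancellation with: pthread_cancel(tid); */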

Signal Handling:

 Signals are used in UNIX systems to notify a process that a particular event has
occurred
 A signal handler is used to process signals
o Signal is generated by particular event
o Signal is delivered to a process
o Signal is handled
 Options:


o Deliver the signal to the thread to which the signal applies


o Deliver the signal to every thread in the process
o Deliver the signal to certain threads in the process
o Assign a specific thread to receive all signals for the process

Thread Pools:

 Create a number of threads in a pool where they await work


 Advantages:
o Usually slightly faster to service a request with an existing thread than
create a new thread
o Allows the number of threads in the application(s) to be bound to the size
of the pool

Thread Specific Data:

 Allows each thread to have its own copy of data


 Useful when you do not have control over the thread creation process (i.e., when
using a thread pool)

Scheduler Activations:

 Both M:M and Two-level models require communication to maintain the


appropriate number of kernel threads allocated to the application
 Scheduler activations provide upcalls - a communication mechanism from the
kernel to the thread library
 This communication allows an application to maintain the correct number of kernel
threads

Pthreads:

 A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
 API specifies behavior of the thread library; implementation is up to the developers
of the library
 Common in UNIX operating systems (Solaris, Linux, Mac OS X)
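A minimal Pthreads sketch: create one thread and wait for it to finish (a fuller example that also sets scheduling attributes appears under Thread Scheduling in Unit II):

#include <pthread.h>
#include <stdio.h>

void *runner(void *param)
{
    printf("hello from the new thread\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, runner, NULL);  /* create the thread */
    pthread_join(tid, NULL);                   /* wait for it to terminate */
    return 0;
}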

Windows XP Threads:

 Implements the one-to-one mapping


 Each thread contains
o A thread id
o Register set
o Separate user and kernel stacks


o Private data storage area


 The register set, stacks, and private storage area are known as the context of the
threads
 The primary data structures of a thread include:
o ETHREAD (executive thread block)
o KTHREAD (kernel thread block)
o TEB (thread environment block)

Linux Threads:

 Linux refers to them as tasks rather than threads


 Thread creation is done through clone() system call
 clone() allows a child task to share the address space of the parent task (process)

Java Threads:

 Java threads are managed by the JVM

 Java threads may be created by:


o Extending Thread class
o Implementing the Runnable interface

Java Thread States :


UNIT-II

PROCESS SCHEDULING AND SYNCHRONIZATION

Chapter 6: CPU Scheduling


 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms
 Multiple-Processor Scheduling
 Real-Time Scheduling
 Thread Scheduling
 Operating Systems Examples
 Java Thread Scheduling
 Algorithm Evaluation

Basic Concepts:

 Maximum CPU utilization obtained with multiprogramming


 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution
and I/O wait
 CPU burst distribution

Alternating Sequence of CPU And I/O Bursts


Histogram of CPU-burst Times

CPU Scheduler:

 Selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them
 CPU scheduling decisions may take place when a process:
1.Switches from running to waiting state
2.Switches from running to ready state
3.Switches from waiting to ready


4.Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive

Dispatcher:
 Dispatcher module gives control of the CPU to the process selected by the short-
term scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start
another running

Scheduling Criteria:

 CPU utilization – keep the CPU as busy as possible


 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until
the first response is produced, not output (for time-sharing environment)

Optimization Criteria:

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:


P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17

 Suppose that the processes arrive in the order


P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect: short process behind long process

Shortest-Job-First (SJF) Scheduling:

 Associate with each process the length of its next CPU burst. Use these lengths
to schedule the process with the shortest time
 Two schemes:
o nonpreemptive – once the CPU is given to the process it cannot be preempted
until it completes its CPU burst
o preemptive – if a new process arrives with CPU burst length less than the
remaining time of the currently executing process, preempt. This scheme is
known as Shortest-Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting time for a given set of
processes


Example of Non-Preemptive SJF:

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

 SJF (non-preemptive)

P1 P3 P2 P4

0 7 8 12 16

 Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Example of Preemptive SJF:


Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

 SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

 Average waiting time = (9 + 1 + 0 +2)/4 = 3


Determining Length of Next CPU Burst:

 Can only estimate the length


 Can be done by using the length of previous CPU bursts, using exponential
averaging

1. t n  actual length of n th CPU burst


2.  n 1  predicted value for the next CPU burst
3.  , 0    1
4. Define : n1   tn  1    n .

Prediction of the Length of the Next CPU Burst:

Examples of Exponential Averaging:

 If $\alpha = 0$
   $\tau_{n+1} = \tau_n$
   Recent history does not count
 If $\alpha = 1$
   $\tau_{n+1} = t_n$
   Only the actual last CPU burst counts
 If we expand the formula, we get:
   $\tau_{n+1} = \alpha t_n + (1 - \alpha) \alpha t_{n-1} + \dots + (1 - \alpha)^j \alpha t_{n-j} + \dots + (1 - \alpha)^{n+1} \tau_0$
 Since both $\alpha$ and $(1 - \alpha)$ are less than or equal to 1, each successive term has less
weight than its predecessor
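 A short worked example (numbers chosen only for illustration): with $\alpha = 0.5$ and an
initial guess $\tau_0 = 10$, suppose the actual bursts are 6, 4, 6. Then
   $\tau_1 = 0.5(6) + 0.5(10) = 8$
   $\tau_2 = 0.5(4) + 0.5(8) = 6$
   $\tau_3 = 0.5(6) + 0.5(6) = 6$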


Priority Scheduling:

 A priority number (integer) is associated with each process


 The CPU is allocated to the process with the highest priority (smallest integer ≡
highest priority)
o Preemptive
o Nonpreemptive
 SJF is a priority scheduling where priority is the predicted next CPU burst time
 Problem: Starvation – low-priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of the process

Round Robin (RR):

 Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to
the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
 Performance
o q large  FIFO
o q small  q must be large with respect to context switch, otherwise
overhead is too high

Example of RR with Time Quantum = 20

Process Burst Time


P1 53
P2 17
P3 68
P4 24

 The Gantt chart is:


P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162


 Typically, higher average turnaround than SJF, but better response
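 Worked out from the Gantt chart above (all four processes arrive at time 0):
   Waiting time for P1 = (77 - 20) + (121 - 97) = 81; P2 = 20;
   P3 = 37 + (97 - 57) + (134 - 117) = 94; P4 = 57 + (117 - 77) = 97
   Average waiting time = (81 + 20 + 94 + 97)/4 = 73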

Time Quantum and Context Switch Time:

Turnaround Time Varies With The Time Quantum:

Multilevel Queue:


 Ready queue is partitioned into separate queues:


foreground (interactive)
background (batch)
 Each queue has its own scheduling algorithm
o foreground – RR
o background – FCFS
 Scheduling must be done between the queues
o Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
o Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
o 20% to background in FCFS

Multilevel Queue Scheduling:

Multilevel Feedback Queue:

 A process can move between the various queues; aging can be implemented this
way
 Multilevel-feedback-queue scheduler defined by the following parameters:
o number of queues
o scheduling algorithms for each queue
o method used to determine when to upgrade a process
o method used to determine when to demote a process
o method used to determine which queue a process will enter when that
process needs service

Example of Multilevel Feedback Queue:

 Three queues:
o Q0 – RR with time quantum 8 milliseconds
o Q1 – RR time quantum 16 milliseconds


o Q2 – FCFS
 Scheduling
o A new job enters queue Q0 which is served FCFS. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is
moved to queue Q1.
o At Q1 job is again served FCFS and receives 16 additional milliseconds.
If it still does not complete, it is preempted and moved to queue Q2.

Multilevel Feedback Queues:

Multiple-Processor Scheduling:

 CPU scheduling more complex when multiple CPUs are available


 Homogeneous processors within a multiprocessor
 Load sharing
 Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing

Real-Time Scheduling:

 Hard real-time systems – required to complete a critical task within a guaranteed


amount of time
 Soft real-time computing – requires that critical processes receive priority over
less fortunate ones

Thread Scheduling:

 Local Scheduling – How the threads library decides which thread to put onto an
available LWP

 Global Scheduling – How the kernel decides which kernel thread to run next


Pthread Scheduling API:

#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param); /* each thread begins control in this function */

int main(int argc, char *argv[])
{
int i;
pthread_t tid[NUM_THREADS];
pthread_attr_t attr;
/* get the default attributes */
pthread_attr_init(&attr);
/* set the scheduling algorithm to PROCESS or SYSTEM */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* set the scheduling policy - FIFO, RR, or OTHER */
pthread_attr_setschedpolicy(&attr, SCHED_OTHER);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i], &attr, runner, NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
printf("I am a thread\n");
pthread_exit(0);
}

Operating System Examples:

 Solaris scheduling
 Windows XP scheduling
 Linux scheduling


Solaris 2 Scheduling:

Solaris Dispatch Table :

Windows XP Priorities:


Linux Scheduling:

 Two algorithms: time-sharing and real-time


 Time-sharing
o Prioritized credit-based – process with most credits is scheduled next
o Credit subtracted when timer interrupt occurs
o When credit = 0, another process chosen
o When all processes have credit = 0, recrediting occurs
 Based on factors including priority and history
 Real-time
o Soft real-time
o Posix.1b compliant – two classes
 FCFS and RR
 Highest priority process always runs first

The Relationship between Priorities and Time-slice length:

List of Tasks Indexed According to Priorities:


Algorithm Evaluation:

 Deterministic modeling – takes a particular predetermined workload and


defines the performance of each algorithm for that workload
 Queueing models
 Implementation

Chapter-7 Process Synchronization

 Background
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Atomic Transactions

Background:

 Concurrent access to shared data may result in data inconsistency


 Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
 Suppose that we wanted to provide a solution to the consumer-producer problem
that fills all the buffers. We can do so by having an integer count that keeps track
of the number of full buffers. Initially, count is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by the consumer after
it consumes a buffer.
Producer
while (true) {


/* produce an item and put in nextProduced */


while (count == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer
while (true) {
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;

/* consume the item in nextConsumed


}
Race Condition
 count++ could be implemented as

register1 = count
register1 = register1 + 1
count = register1
 count-- could be implemented as

register2 = count
register2 = register2 - 1
count = register2
 Consider this execution interleaving with "count = 5" initially:

S0: producer execute register1 = count {register1 = 5}


S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
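The same race can be reproduced with two threads sharing an unsynchronized counter; in the small sketch below (assuming Pthreads), the final value is usually less than the expected 2000000 because the increments interleave exactly as in the trace above.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int count = 0;                        /* shared data, no synchronization */

void *increment(void *arg)
{
    int i;
    for (i = 0; i < ITERATIONS; i++)
        count++;                      /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* typically prints a value below 2000000 because of the race */
    printf("count = %d\n", count);
    return 0;
}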

Solution to Critical-Section Problem:

1.Mutual Exclusion - If process Pi is executing in its critical section, then no other


processes can be executing in their critical sections
2.Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes that
will enter the critical section next cannot be postponed indefinitely


3.Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes

Peterson’s Solution:
 Two process solution
 Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted.
 The two processes share two variables:
o int turn;
o Boolean flag[2]
 The variable turn indicates whose turn it is to enter the critical section.
 The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready.

Algorithm for Process Pi


while (true) {
flag[i] = TRUE;
turn = j;
while ( flag[j] && turn == j);

// CRITICAL SECTION

flag[i] = FALSE;

// REMAINDER SECTION
}

Synchronization Hardware
 Many systems provide hardware support for critical section code
 Uniprocessors – could disable interrupts
o Currently running code would execute without preemption
o Generally too inefficient on multiprocessor systems
 Operating systems using this are not broadly scalable
 Modern machines provide special atomic hardware instructions
 Atomic = non-interruptable
o Either test memory word and set value
o Or swap contents of two memory words

Test And Set Instruction :


 Definition:

boolean TestAndSet (boolean *target)


{
boolean rv = *target;
*target = TRUE;
return rv;
}

Solution using TestAndSet:

 Shared boolean variable lock., initialized to false.


 Solution:
while (true) {
while ( TestAndSet (&lock ))
; /* do nothing */

// critical section

lock = FALSE;

// remainder section
}

Swap Instruction
 Definition:

void Swap (boolean *a, boolean *b)


{
boolean temp = *a;
*a = *b;
*b = temp;
}

Solution using Swap

 Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key.
 Solution:
while (true) {
key = TRUE;
while ( key == TRUE)


Swap (&lock, &key );

// critical section

lock = FALSE;

// remainder section
}

Semaphore

 Synchronization tool that does not require busy waiting


 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
o Originally called P() and V()
 Less complicated
 Can only be accessed via two indivisible (atomic) operations
wait (S) {
while S <= 0
; // no-op
S--;
}
signal (S) {
S++;
}
Semaphore as General Synchronization Tool

 Counting semaphore – integer value can range over an unrestricted domain


 Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
o Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion
o Semaphore S; // initialized to 1
o wait (S);
Critical Section
signal (S);

Semaphore Implementation

 Must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time


 Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section.
o Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and therefore this
is not a good solution.

Semaphore Implementation with no Busy waiting

 With each semaphore there is an associated waiting queue. Each semaphore has two data items:
o value (of type integer)
o pointer to a list of waiting processes

 Two operations:
o block – place the process invoking the operation on the appropriate
waiting queue.
o wakeup – remove one of processes in the waiting queue and place it in the
ready queue.

 Implementation of wait:

wait (S){
value--;
if (value < 0) {
add this process to waiting queue
block(); }
}

 Implementation of signal:

signal (S) {
value++;
if (value <= 0) {
remove a process P from the waiting queue
wakeup(P); }
}
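A minimal sketch of the semaphore record that the wait()/signal() code above assumes (the PCB type and field names are illustrative):

typedef struct process process;    /* PCB type, assumed to be defined elsewhere    */

typedef struct {
    int value;                     /* may go negative: |value| = number of waiters */
    process *list;                 /* queue of processes blocked on this semaphore */
} semaphore;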

Deadlock and Starvation

 Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1


P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
 Starvation – indefinite blocking. A process may never be removed from the
semaphore queue in which it is suspended.

Classical Problems of Synchronization

 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem

Bounded-Buffer Problem

 N buffers, each can hold one item


 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N.

 The structure of the producer process

while (true) {
// produce an item

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
}

 The structure of the consumer process

while (true) {
wait (full);
wait (mutex);


// remove an item from buffer

signal (mutex);
signal (empty);

// consume the removed item
}
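A compilable sketch of the producer/consumer pair using POSIX semaphores and threads; the buffer size, item type, and iteration counts are illustrative assumptions:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                               /* number of buffer slots (illustrative) */

static int buffer[N];
static int in = 0, out = 0;
static sem_t mutex, full_slots, empty_slots;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty_slots);           /* wait (empty)  */
        sem_wait(&mutex);                 /* wait (mutex)  */
        buffer[in] = item;                /* add the item to the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                 /* signal (mutex) */
        sem_post(&full_slots);            /* signal (full)  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < 32; k++) {
        sem_wait(&full_slots);            /* wait (full)   */
        sem_wait(&mutex);                 /* wait (mutex)  */
        int item = buffer[out];           /* remove an item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                 /* signal (mutex) */
        sem_post(&empty_slots);           /* signal (empty) */
        printf("consumed %d\n", item);    /* consume the removed item */
    }
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);               /* mutex initialized to 1  */
    sem_init(&full_slots, 0, 0);          /* full initialized to 0   */
    sem_init(&empty_slots, 0, N);         /* empty initialized to N  */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}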

Readers-Writers Problem

 A data set is shared among a number of concurrent processes


 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write.
 Problem – allow multiple readers to read at the same time. Only one single writer
can access the shared data at the same time.

 Shared Data
 Data set
 Semaphore mutex initialized to 1.
 Semaphore wrt initialized to 1.
 Integer readcount initialized to 0.

 The structure of a writer process

while (true) {
wait (wrt) ;

// writing is performed

signal (wrt) ;
}

 The structure of a reader process

while (true) {
wait (mutex) ;
readcount ++ ;
if (readcount == 1) wait (wrt) ;
signal (mutex) ;

// reading is performed


wait (mutex) ;
readcount-- ;
if (readcount == 0) signal (wrt) ;
signal (mutex) ;
}
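For reference, POSIX provides a read-write lock that enforces a similar policy directly; a brief sketch (data_set and the function names are illustrative):

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int data_set = 0;

int read_data(void)
{
    pthread_rwlock_rdlock(&rw);    /* many readers may hold the lock at once */
    int v = data_set;              /* reading is performed                   */
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_data(int v)
{
    pthread_rwlock_wrlock(&rw);    /* a writer gets exclusive access */
    data_set = v;                  /* writing is performed           */
    pthread_rwlock_unlock(&rw);
}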

Dining-Philosophers Problem

 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1
 The structure of Philosopher i:

while (true) {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );

// eat

signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think
}

Problems with Semaphores:

 Incorrect use of semaphore operations:


o signal (mutex) …. wait (mutex)
o wait (mutex) … wait (mutex)

o Omitting of wait (mutex) or signal (mutex) (or both)


Monitors

 A high-level abstraction that provides a convenient and effective mechanism for


process synchronization
 Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    Initialization code ( …. ) { … }
}

Schematic view of a Monitor

Condition Variables

 condition x, y;

 Two operations on a condition variable:


 x.wait () – a process that invokes this operation is suspended until another process invokes x.signal ()
 x.signal () – resumes one of the processes (if any) that invoked x.wait ()

Monitor with Condition Variables
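In C, a monitor can be approximated with a pthread mutex (giving "one process active at a time") and a pthread condition variable standing in for x.wait()/x.signal(); this sketch uses an illustrative shared counter:

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* the monitor lock     */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;    /* condition variable x */
static int count = 0;

void deposit(void)                     /* a "monitor procedure" */
{
    pthread_mutex_lock(&m);            /* enter the monitor */
    count++;
    pthread_cond_signal(&x);           /* x.signal(): resume one waiter, if any */
    pthread_mutex_unlock(&m);          /* leave the monitor */
}

void withdraw(void)
{
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&x, &m);     /* x.wait(): release the monitor and suspend */
    count--;
    pthread_mutex_unlock(&m);
}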

Solution to Dining Philosophers

monitor DP
{
enum { THINKING, HUNGRY, EATING } state[5];
condition self [5];

void pickup (int i) {


state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}

void putdown (int i) {


state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
void test (int i) {

if ( (state[(i + 4) % 5] != EATING) &&


(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {


state[i] = EATING ;
self[i].signal () ;
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
 Each philosopher i invokes the operations pickup()
and putdown() in the following sequence:

dp.pickup (i)

EAT

dp.putdown (i)

Monitor Implementation Using Semaphores

 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next-count = 0;
 Each procedure F will be replaced by

wait(mutex);

body of F;


if (next-count > 0)
signal(next);
else
signal(mutex);
 Mutual exclusion within a monitor is ensured.

Monitor Implementation


 For each condition variable x, we have:

semaphore x-sem; // (initially = 0)


int x-count = 0;
 The operation x.wait can be implemented as:

x-count++;
if (next-count > 0)
signal(next);
else
signal(mutex);
wait(x-sem);
x-count--;

 The operation x.signal can be implemented as:


if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}

Synchronization Examples

 Solaris
 Windows XP
 Linux
 Pthreads

Solaris Synchronization

 Implements a variety of locks to support multitasking, multithreading (including


real-time threads), and multiprocessing
 Uses adaptive mutexes for efficiency when protecting data from short code
segments
 Uses condition variables and readers-writers locks when longer sections of code
need access to data
 Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock


Windows XP Synchronization

 Uses interrupt masks to protect access to global resources on uniprocessor


systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects which may act as either mutexes or semaphores
 Dispatcher objects may also provide events
o An event acts much like a condition variable

Linux Synchronization

 Linux: disables interrupts to implement short critical sections
 Linux provides:
o semaphores
o spin locks

Pthreads Synchronization

 Pthreads API is OS-independent


 It provides:
o mutex locks
o condition variables
 Non-portable extensions include:
o read-write locks
o spin locks

Atomic Transactions

 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions

System Model

 Assures that a set of operations happens as a single logical unit of work, either in its entirety or
not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer system failures


 Transaction - collection of instructions or operations that performs single logical


function
o Here we are concerned with changes to stable storage – disk
o Transaction is series of read and write operations
o Terminated by commit (transaction successful) or abort (transaction
failed) operation
o Aborted transaction must be rolled back to undo any changes it performed

Types of Storage Media

 Volatile storage – information stored here does not survive system crashes
o Example: main memory, cache
 Nonvolatile storage – Information usually survives crashes
o Example: disk and tape
 Stable storage – Information never lost
o Not actually possible, so approximated via replication or RAID to devices
with independent failure modes

Log-Based Recovery

 Record to stable storage information about all modifications by a transaction


 Most common is write-ahead logging
o Log on stable storage, each log record describes single transaction write
operation, including
 Transaction name
 Data item name
 Old value
 New value
o <Ti starts> written to log when transaction Ti starts
o <Ti commits> written when Ti commits
 Log entry must reach stable storage before operation on data occurs
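A minimal sketch of one such log record; the field types are illustrative assumptions (a real system would log arbitrary byte images rather than ints):

typedef struct {
    char *transaction_name;   /* e.g. "T0"                         */
    char *data_item_name;     /* e.g. "A"                          */
    int   old_value;          /* value before the write (for undo) */
    int   new_value;          /* value after the write  (for redo) */
} log_record;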

Log-Based Recovery Algorithm

 Using the log, system can handle any volatile memory errors
o Undo(Ti) restores value of all data updated by Ti
o Redo(Ti) sets values of all data in transaction Ti to new values
 Undo(Ti) and redo(Ti) must be idempotent
o Multiple executions must have the same result as one execution
 If system fails, restore state of all updated data via log
o If log contains <Ti starts> without <Ti commits>, undo(Ti)
o If log contains <Ti starts> and <Ti commits>, redo(Ti)


Checkpoints

 Log could become long, and recovery could take long


 Checkpoints shorten log and recovery time.
 Checkpoint scheme:
o Output all log records currently in volatile storage to stable storage
o Output all modified data from volatile to stable storage
o Output a log record <checkpoint> to the log on stable storage
 Now recovery only includes Ti such that Ti started executing before the most recent checkpoint, and all transactions after Ti; all other transactions are already on stable storage

Concurrent Transactions

 Must be equivalent to serial execution – serializability


 Could perform all transactions in critical section
o Inefficient, too restrictive
 Concurrency-control algorithms provide serializability

Serializability

 Consider two data items A and B


 Consider Transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence called schedule
 Atomically executed transaction order called serial schedule
 For N transactions, there are N! valid serial schedules

Schedule 1: T0 then T1

Nonserial Schedule

 Nonserial schedule allows overlapped execution


o Resulting execution not necessarily incorrect


 Consider schedule S, operations Oi, Oj


o Conflict if access same data item, with at least one write
 If Oi, Oj consecutive and operations of different transactions & Oi and Oj don’t
conflict
o Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping nonconflicting operations
o S is conflict serializable

Schedule 2: Concurrent Serializable Schedule

Locking Protocol

 Ensure serializability by associating lock with each data item


o Follow locking protocol for access control
 Locks
o Shared – Ti has shared-mode lock (S) on item Q, Ti can read Q but not
write Q
o Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can read and write Q
 Require every transaction on item Q acquire appropriate lock
 If lock already held, new request may have to wait
o Similar to readers-writers algorithm

Two-phase Locking Protocol

 Generally ensures conflict serializability


 Each transaction issues lock and unlock requests in two phases
o Growing – obtaining locks
o Shrinking – releasing locks
 Does not prevent deadlock


Timestamp-based Protocols

 Select order among transactions in advance – timestamp-ordering


 Transaction Ti associated with timestamp TS(Ti) before Ti starts
o TS(Ti) < TS(Tj) if Ti entered system before Tj
o TS can be generated from system clock or as logical counter incremented
at each entry of transaction
 Timestamps determine serializability order
o If TS(Ti) < TS(Tj), system must ensure produced schedule equivalent to
serial schedule where Ti appears before Tj

Timestamp-based Protocol Implementation

 Data item Q gets two timestamps


o W-timestamp(Q) – largest timestamp of any transaction that executed
write(Q) successfully
o R-timestamp(Q) – largest timestamp of successful read(Q)
o Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting read and write executed in
timestamp order
 Suppose Ti executes read(Q)
o If TS(Ti) < W-timestamp(Q), Ti needs to read value of Q that was already
overwritten
 read operation rejected and Ti rolled back
o If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-timestamp(Q),
TS(Ti))

Timestamp-ordering Protocol

 Suppose Ti executes write(Q)


o If TS(Ti) < R-timestamp(Q), value Q produced by Ti was needed
previously and Ti assumed it would never be produced
 Write operation rejected, Ti rolled back
o If TS(Ti) < W-tiimestamp(Q), Ti attempting to write obsolete value of Q
 Write operation rejected and Ti rolled back
o Otherwise, write executed
 Any rolled back transaction Ti is assigned new timestamp and restarted
 Algorithm ensures conflict serializability and freedom from deadlock
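An illustrative sketch of these read/write checks in C; the item structure and the convention that a return value of 0 means "reject the operation and roll back Ti" are assumptions of the sketch:

/* One shared data item Q with its R- and W-timestamps */
typedef struct {
    int r_ts;     /* R-timestamp(Q): largest TS of a successful read  */
    int w_ts;     /* W-timestamp(Q): largest TS of a successful write */
    int value;
} item;

/* Returns 1 if the read is executed, 0 if it is rejected (Ti must be rolled back). */
int ts_read(item *q, int ts_ti, int *out)
{
    if (ts_ti < q->w_ts)              /* Ti would read an already overwritten value */
        return 0;
    *out = q->value;
    if (ts_ti > q->r_ts)
        q->r_ts = ts_ti;              /* R-timestamp(Q) = max(R-timestamp(Q), TS(Ti)) */
    return 1;
}

/* Returns 1 if the write is executed, 0 if it is rejected (Ti must be rolled back). */
int ts_write(item *q, int ts_ti, int new_value)
{
    if (ts_ti < q->r_ts || ts_ti < q->w_ts)
        return 0;                     /* value was needed earlier, or write is obsolete */
    q->value = new_value;
    q->w_ts  = ts_ti;
    return 1;
}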


Schedule Possible Under Timestamp Protocol


Chapter 8: Deadlocks

 The Deadlock Problem


 System Model
 Deadlock Characterization
 Methods for Handling Deadlocks
 Deadlock Prevention
 Deadlock Avoidance
 Deadlock Detection
 Recovery from Deadlock

Objectives

 To develop a description of deadlocks, which prevent sets of concurrent processes


from completing their tasks
 To present a number of different methods for preventing or avoiding deadlocks in
a computer system.

The Deadlock Problem

 A set of blocked processes each holding a resource and waiting to acquire a


resource held by another process in the set.
 Example
o System has 2 disk drives.
o P1 and P2 each hold one disk drive and each needs another one.
 Example


o semaphores A and B, initialized to 1

        P0               P1
     wait (A);        wait (B);
     wait (B);        wait (A);

Bridge Crossing Example

 Traffic only in one direction.


 Each section of a bridge can be viewed as a resource.
 If a deadlock occurs, it can be resolved if one car backs up (preempt resources and
rollback).
 Several cars may have to be backed up if a deadlock occurs.
 Starvation is possible.

System Model

 Resource types R1, R2, . . ., Rm


CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
o request
o use
o release

Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously.


 Mutual exclusion: only one process at a time can use a resource.
 Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.


 No preemption: a resource can be released only voluntarily by the process


holding it, after that process has completed its task.
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Resource-Allocation Graph

A set of vertices V and a set of edges E.


 V is partitioned into two types:
o P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system.
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi

 Graph notation: a process Pi is drawn as a circle; a resource type Rj is drawn as a rectangle with one dot per instance (e.g. four dots for four instances); an edge Pi → Rj means Pi requests an instance of Rj; an edge Rj → Pi means Pi is holding an instance of Rj.

Example of a Resource Allocation Graph


Resource Allocation Graph With A Deadlock

Graph With A Cycle But No Deadlock

Basic Facts


 If graph contains no cycles ⇒ no deadlock.
 If graph contains a cycle ⇒
o if only one instance per resource type, then deadlock.
o if several instances per resource type, possibility of deadlock.

Methods for Handling Deadlocks

 Ensure that the system will never enter a deadlock state.


 Allow the system to enter a deadlock state and then recover.
 Ignore the problem and pretend that deadlocks never occur in the system;
used by most operating systems, including UNIX.

Deadlock Prevention

Restrain the ways request can be made.


 Mutual Exclusion – not required for sharable resources; must hold for
nonsharable resources.
 Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources.
o Require process to request and be allocated all its resources before it
begins execution, or allow process to request resources only when
the process has none.
o Low resource utilization; starvation possible.
 No Preemption –
o If a process that is holding some resources requests another resource
that cannot be immediately allocated to it, then all resources
currently being held are released.
o Preempted resources are added to the list of resources for which the
process is waiting.
o Process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.
 Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.

Deadlock Avoidance

Requires that the system has some additional a priori information


available.
 Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
 The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.


 Resource-allocation state is defined by the number of available and allocated


resources, and the maximum demands of the processes.

Safe State

 When a process requests an available resource, system must decide if immediate


allocation leaves the system in a safe state.
 System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources + the resources held by all the Pj, with j < i.
 That is:
o If Pi resource needs are not immediately available, then Pi can wait until
all Pj have finished.
o When Pj is finished, Pi can obtain needed resources, execute, return
allocated resources, and terminate.
o When Pi terminates, Pi +1 can obtain its needed resources, and so on.

Basic Facts

 If a system is in safe state ⇒ no deadlocks.
 If a system is in unsafe state ⇒ possibility of deadlock.
 Avoidance ⇒ ensure that a system will never enter an unsafe state.

Safe, Unsafe, Deadlock State

Avoidance algorithms

 Single instance of a resource type. Use a resource-allocation graph

 Multiple instances of a resource type. Use the banker’s algorithm

Resource-Allocation Graph Scheme


 Claim edge Pi → Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.
 Claim edge converts to request edge when a process requests a resource.
 Request edge converted to an assignment edge when the resource is allocated to
the process.

 When a resource is released by a process, assignment edge reconverts to a claim


edge.
 Resources must be claimed a priori in the system.

Resource-Allocation Graph

Unsafe State In Resource-Allocation Graph

Resource-Allocation Graph Algorithm

 Suppose that process Pi requests a resource Rj


 The request can be granted only if converting the request edge to an assignment
edge does not result in the formation of a cycle in the resource allocation graph

Banker’s Algorithm

 Multiple instances.
 Each process must a priori claim maximum use.
 When a process requests a resource it may have to wait.
 When a process gets all its resources it must return them in a finite amount of
time.

Data Structures for the Banker’s Algorithm

Let n = number of processes, and m = number of resources types.


 Available: Vector of length m. If available [j] = k, there are k instances of
resource type Rj available.
 Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k
instances of resource type Rj.
 Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of Rj.
 Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its task.

Need [i,j] = Max[i,j] – Allocation [i,j].

Safety Algorithm

1.Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n – 1
2.Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3.Work = Work + Allocationi
Finish[i] = true
go to step 2.
4.If Finish[i] == true for all i, then the system is in a safe state.
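
An illustrative C implementation of this safety algorithm; the row-major flat matrices and the function name is_safe are assumptions of the sketch:

#include <stdbool.h>

/* n processes, m resource types; matrices are row-major flat arrays */
bool is_safe(int n, int m, const int *available,
             const int *allocation, const int *need)
{
    int  work[m];                      /* Work = Available            */
    bool finish[n];                    /* Finish[i] = false initially */

    for (int j = 0; j < m; j++) work[j] = available[j];
    for (int i = 0; i < n; i++) finish[i] = false;

    for (;;) {
        bool advanced = false;
        for (int i = 0; i < n; i++) {             /* find i with Finish[i] == false */
            if (finish[i]) continue;
            bool fits = true;                     /* and Need_i <= Work             */
            for (int j = 0; j < m; j++)
                if (need[i * m + j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < m; j++)       /* Work = Work + Allocation_i     */
                    work[j] += allocation[i * m + j];
                finish[i] = true;
                advanced = true;
            }
        }
        if (!advanced) break;                     /* no such i exists               */
    }
    for (int i = 0; i < n; i++)                   /* safe iff Finish[i] true for all i */
        if (!finish[i]) return false;
    return true;
}

Called with the Allocation, Need, and Available values from the example below, is_safe returns true, consistent with the safe sequence <P1, P3, P4, P2, P0> stated there.
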
Resource-Request Algorithm for Process Pi
Request = request vector for process Pi. If Requesti [j] = k then process Pi wants k
instances of resource type Rj.


1.If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2.If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3.Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
o If safe ⇒ the resources are allocated to Pi.
o If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.

Example of Banker’s Algorithm


 5 processes P0 through P4;
3 resource types:
A (10 instances), B (5 instances), and C (7 instances).
 Snapshot at time T0:

          Allocation   Max       Available
          A B C        A B C     A B C
     P0   0 1 0        7 5 3     3 3 2
     P1   2 0 0        3 2 2
     P2   3 0 2        9 0 2
     P3   2 1 1        2 2 2
     P4   0 0 2        4 3 3

 The content of the matrix Need is defined to be Max – Allocation.

          Need
          A B C
     P0   7 4 3
     P1   1 2 2
     P2   6 0 0
     P3   0 1 1
     P4   4 3 1
 The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies
safety criteria.

Example: P1 Request (1,0,2)


 Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.

          Allocation   Need      Available
          A B C        A B C     A B C
     P0   0 1 0        7 4 3     2 3 0
     P1   3 0 2        0 2 0
     P2   3 0 2        6 0 0
     P3   2 1 1        0 1 1
     P4   0 0 2        4 3 1
 Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2> satisfies
safety requirement.
 Can request for (3,3,0) by P4 be granted?
 Can request for (0,2,0) by P0 be granted?

Deadlock Detection

 Allow system to enter deadlock state


 Detection algorithm
 Recovery scheme

Single Instance of Each Resource Type

 Maintain wait-for graph


o Nodes are processes.
o Pi → Pj if Pi is waiting for Pj.
 Periodically invoke an algorithm that searches for a cycle in the graph. If there is
a cycle, there exists a deadlock.

 An algorithm to detect a cycle in a graph requires an order of n² operations,


where n is the number of vertices in the graph.

Resource-Allocation Graph and Wait-for Graph

(a) Resource-Allocation Graph   (b) Corresponding wait-for graph

Several Instances of a Resource Type


 Available: A vector of length m indicates the number of available resources of
each type.


 Allocation: An n x m matrix defines the number of resources of each type


currently allocated to each process.
 Request: An n x m matrix indicates the current request of each process. If Requesti[j] = k, then process Pi is requesting k more instances of resource type Rj.

Detection Algorithm

1.Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2.Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3.Work = Work + Allocationi
Finish[i] = true
go to step 2.
4.If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked.

The algorithm requires an order of O(m × n²) operations to detect whether the system is in a deadlocked state.

Example of Detection Algorithm

 Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
 Snapshot at time T0:

          Allocation   Request   Available
          A B C        A B C     A B C
     P0   0 1 0        0 0 0     0 0 0
     P1   2 0 0        2 0 2
     P2   3 0 3        0 0 0
     P3   2 1 1        1 0 0
     P4   0 0 2        0 0 2

 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.

 P2 requests an additional instance of type C.


          Request
          A B C
     P0   0 0 0
     P1   2 0 1
     P2   0 0 1
     P3   1 0 0
     P4   0 0 2

 State of system?
o Can reclaim resources held by process P0, but insufficient resources to fulfill other processes' requests.
o Deadlock exists, consisting of processes P1, P2, P3, and P4.

Detection-Algorithm Usage

 When, and how often, to invoke depends on:


o How often a deadlock is likely to occur?
o How many processes will need to be rolled back?
 one for each disjoint cycle
 If detection algorithm is invoked arbitrarily, there may be many cycles in the
resource graph and so we would not be able to tell which of the many deadlocked
processes "caused" the deadlock.

Recovery from Deadlock: Process Termination


 Abort all deadlocked processes.
 Abort one process at a time until the deadlock cycle is eliminated.
 In which order should we choose to abort?
o Priority of the process.
o How long process has computed, and how much longer to completion.
o Resources the process has used.
o Resources process needs to complete.
o How many processes will need to be terminated.
o Is process interactive or batch?

Recovery from Deadlock: Resource Preemption

 Selecting a victim – minimize cost.


 Rollback – return to some safe state, restart process for that state.
 Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.


UNIT-III
