
CCIT Module 1 Operating System

INTRODUCTION TO OPERATING SYSTEMS

DEFINITION OF OPERATING SYSTEM

It is a program that acts as an intermediary between the user of a computer and the computer
hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner

Conceptual View of Operating System

Objectives of operating system

i. Efficient use: To provide efficient use of a computer's resources


ii. User convenience: To provide a convenient method of using a computer system
iii. Non-interference: To prevent interference in the activities of its users

Factors to consider when choosing an operating system

i. Compatibility with the available hardware e.g. hard disk, memory, processor speed etc
ii. Upgradability – the operating system should be able to accommodate updates, if any
iii. Number of users to share the computer resources
iv. Minimum RAM requirement for the OS
v. Applications to be installed in the computer
vi. Initial cost

Services provided by the Operating System:

i. Program development: - The OS provides a variety of facilities and services, such as editors and debuggers, to assist the programmer in creating programs. These services are supplied with the OS in the form of utility programs and are referred to as application program development tools.
ii. Program execution: - A number of tasks need to be performed to execute programs.
Instructions and data must be loaded into main memory, I/O devices and files must be
initialized, and other resources must be prepared. The OS handles these scheduling duties
for the user.
iii. Access to I/O devices: - Each I/O device requires its own set of instructions or control
signals for operation. The OS provides a uniform interface that hides these details so
that the programmer can access such devices using simple reads and writes.
iv. Controlled access to files: - In the case of file access, control must include a detailed
understanding of not only the nature of the I/O device (disk drive, tape drive) but also
the structure of the data contained in the files on the storage medium. In the case of a
system with multiple users, the OS may provide protection mechanisms to control
access to files.
v. System access: - For shared or public systems, the OS controls access to the system as a
whole and to specific system resources. The access functions must provide protection of
resources and data from unauthorized users and must resolve conflicts over resource
contention.
vi. Error detection and response: - When a computer is running, a number of errors can occur,
such as internal and external hardware errors (e.g. a memory error, a device failure or
malfunction) and software errors (such as arithmetic overflow, an attempt to access a
forbidden memory location, or the inability of the OS to grant the request of an application).
In any of these cases, the OS must provide a response that clears the error condition with
the least impact on running applications.
The response may range from ending the program that caused the error, to retrying the
operation, to simply reporting the error to the application.
vii. Accounting: - An OS will collect usage statistics for various resources and monitor
performance parameters such as response time. This information is useful in anticipating the
need for future enhancements and in tuning the system to improve performance; on a
multi-user system, the information can be used for billing purposes.

DEFINITION OF TERMS IN OPERATING SYSTEM

(a) System call/monitor call

It is a request made by any program to the operating system to perform a task. It is used
whenever a program needs to access a restricted resource.

Type of system calls:


i. Process control (e.g., create and terminate processes; load and execute; end, abort; wait for and signal events, etc.)
ii. File management (e.g., open and close files; create and delete files; read, write)
iii. Device management (e.g., read and write operations; request or release a device)
iv. Information maintenance (e.g., get time or date; set process, file or device attributes; get system data)
v. Communication (e.g., send and receive messages)

System calls allow user-level processes to request services from the operating system which
the process itself is not allowed to perform. For example, to do I/O a process issues a system
call telling the OS to read or write a particular area, and this request is satisfied by the operating
system.
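As a minimal sketch (assuming a POSIX system such as Linux; the file name is made up), the C program below asks the operating system to create a file and write to it. The open, write and close library calls are thin wrappers around system calls, so each request crosses into kernel mode and is carried out by the OS on the program's behalf.

    /* Minimal sketch of a program requesting OS services through system calls
     * (assumes a POSIX system such as Linux). */
    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* write, close */
    #include <stdio.h>      /* perror */

    int main(void)
    {
        /* Ask the OS to create/open a file; the process cannot touch the disk directly. */
        int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) {
            perror("open");              /* the kernel reported an error */
            return 1;
        }

        const char msg[] = "written via the write() system call\n";
        /* write() traps into the kernel, which performs the actual I/O. */
        if (write(fd, msg, sizeof msg - 1) == -1)
            perror("write");

        close(fd);                       /* release the kernel-managed file descriptor */
        return 0;
    }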


System programs

Provide a convenient environment for program development (editors, compilers) and execution
(shells). Some of them are simply user interfaces to system calls; others are considerably more
complex. They can be divided into these categories:

Types of System Programs

i. File management: These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.
ii. Status information/management: Some programs simply ask the system for the date,
time, amount of available memory or disk space, number of users, or similar status
information. That information is then formatted and printed to the terminal or other
output device or file.
iii. File modification: Several text editors may be available to create and modify the content
of files stored on disk or tape.
iv. Programming-language support: Compilers, assemblers, and interpreters for common
programming languages (such as C, C++, Java, Visual Basic, and PERL) are often
provided to the user with the operating system, although some of these programs are now
priced and provided separately.
v. Program loading and execution: Once a program is assembled or compiled, it must be
loaded into memory to be executed. The system may provide absolute loaders, re-
locatable loaders, linkage editors, and overlay loaders. Debugging systems for either
higher-level languages or machine language are needed also.
vi. Communications: These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another's screens, to browse web pages, to send electronic mail
messages, to log in remotely, or to transfer files from one machine to another.

(b) The Shell

The shell is the outermost part of an operating system that interacts with user commands. After
verifying that the commands are valid, the shell sends them to the command processor to be
executed.

Operating system shells generally fall into one of two categories:

i) Command-line shells provide a command-line interface (CLI) to the operating system


ii) Graphical shells provide a graphical user interface (GUI).

Features of a GUI

i. Windows – each window runs a self-contained program, isolated from other programs


ii. Icon – acts as a shortcut to an action the computer performs


iii. Menu – a list of commands or options, presented as text or icons, from which the user can select
iv. Pointer – an on-screen symbol that represents the movement of a physical device (such as a mouse) that the user controls

The primary purpose of the shell is to invoke or "launch" another program; however, shells
frequently have additional capabilities such as viewing the contents of directories. The best
choice is often determined by the way in which a computer will be used.
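A hedged sketch of the "launch another program" idea on a POSIX system is the core loop of a command-line shell: read a command, fork a child process, and have the child replace itself with the requested program while the shell waits. Real shells add argument parsing, pipes and job control on top of this; the shell name "mysh" and the single-word command handling are assumptions for the example.

    /* Sketch of a tiny command-line shell loop (POSIX assumed; no argument parsing). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>     /* fork, execlp */
    #include <sys/wait.h>   /* waitpid */

    int main(void)
    {
        char line[256];

        for (;;) {
            printf("mysh> ");
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                          /* end of input: exit the shell */
            line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
            if (line[0] == '\0')
                continue;
            if (strcmp(line, "exit") == 0)
                break;

            pid_t pid = fork();                 /* create a child process */
            if (pid == 0) {
                /* Child: replace this process image with the requested program. */
                execlp(line, line, (char *)NULL);
                perror("execlp");               /* only reached if exec failed */
                _exit(127);
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);          /* the shell waits for the child */
            } else {
                perror("fork");
            }
        }
        return 0;
    }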

(c) The Kernel

The kernel is the central part of an operating system that directly controls the computer
hardware. It is the only way through which programs (all programs, including the shell) can
access the hardware.

Functions of the kernel

i. Process management
ii. Device management
iii. Memory management
iv. Interrupt handling
v. I/O communication
vi. File system
Operating system tasks are done differently by different kernels, depending on their design and
implementation. While monolithic kernels will try to achieve these goals by executing all the
operating system code in the same address space to increase the performance of the system,
microkernels run most of the operating system services in user space as servers, aiming to
improve maintainability and modularity of the operating system. A range of possibilities exists
between these two extremes.

Kernel components that work with the I/O manager

i. Cache manager
The cache manager handles file caching for all file systems. It can dynamically increase or
decrease the size of the cache devoted to a particular file as the amount of available
physical memory varies
ii. File system drivers
The I/O manager treats a file system as just another device driver and routes I/O requests
for file system volumes to the appropriate software driver for that volume. The file
system, in turn, sends I/O requests to the software drivers that manage the hardware device
adapter
iii. Network drivers
These provide the I/O manager with integrated networking capabilities and support for
remote file systems
iv. Hardware device drivers
These are software drivers that access the hardware registers of the peripheral devices
using entry points in the kernel's hardware abstraction layer


Kernel Basic Facilities

The kernel's primary purpose is to manage the computer's resources and allow other
programs to run and use these resources. Typically, the resources consist of:

i. The Central Processing Unit (CPU, the processor). The kernel takes responsibility
for deciding at any time which of the many running programs should be allocated to
the processor or processors (each of which can usually run only one program at a
time)
ii. The computer's memory. The kernel is responsible for deciding which memory each
process can use, and determining what to do when not enough is available.
iii. Any Input/output (I/O). The kernel allocates requests from applications to perform
I/O to an appropriate device and provides convenient methods for using the device
(typically abstracted to the point where the application does not need to know
implementation details of the device).

Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC).

A kernel may implement these features itself, or rely on some of the processes it runs
to provide the facilities to other processes, although in this case it must provide some
means of IPC to allow processes to access the facilities provided by each other.

Finally, a kernel must provide running programs with a method to make requests to
access these facilities.

What is the difference between kernel and shell?

The Shell is a program which allows the user to access the computer system; it acts as an
interface between the user and the kernel.

The Kernel is the only way through which the programs (all programs including shell)
can access the hardware. It’s a layer between the application programs and hardware. It is
the core of most of the operating systems and manages everything including the
communication between the hardware and software.


(d) Virtual Machines (VM)

A virtual machine (VM) is a separate and independent software instance that includes a full copy
of an operating system and application software. A physical server prepared with a server
virtualization hypervisor such as Microsoft Hyper-V, VMware vSphere or Citrix XenServer can
host multiple VMs while maintaining logical isolation between each machine. Each instance can
then share the server's computing resources -- dramatically increasing physical server hardware
usage.

Benefits of Using the VM

i. Increased hardware utilization: A typical non-virtualized application server may reach
just 5% to 10% utilization, but a virtual server that hosts multiple VMs can easily reach
50% to 80% utilization
ii. Decreased capital and operating costs through sharing among a number of VMs: The net result is
that more virtual machines can be hosted on fewer physical servers, translating into lower
costs for hardware acquisition, maintenance, energy and cooling system usage.
iii. High availability and security
iv. A VM can be used from anywhere on the intranet
v. You can run programs for operating systems other than the one running on the physical machine.

Disadvantages

i. If the host server is shut down, we cannot access the VMs


ii. Increased processor overhead.

(e) A process

A process is a program in execution. A process is more than a program, because it is associated
with resources such as registers (program counter, stack pointer), a list of open files, etc. Moreover,
multiple processes may be associated with one program (e.g., running the same program, a web
browser, twice).

(f) Virtual Memory

Virtual memory is a computer system technique which gives an application program the
impression that it has contiguous working memory (an address space), while in fact it may be
physically fragmented and may even overflow on to disk storage.

Systems that use this technique make programming of large applications easier and use real
physical memory (e.g. RAM) more efficiently than those without virtual memory
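To make the address-space idea concrete, the toy C fragment below (an illustration only, assuming simple paging with hypothetical 4 KB pages and a made-up page table) splits a virtual address into a page number and an offset and maps the page to a physical frame; a real MMU and OS do this in hardware with much richer table structures and backing store on disk.

    /* Toy paging translation: virtual address -> physical address.
     * Assumes 4 KB pages and a tiny, made-up page table (illustration only). */
    #include <stdio.h>

    #define PAGE_SIZE  4096u
    #define PAGE_SHIFT 12          /* 2^12 = 4096 */

    int main(void)
    {
        /* page_table[v] = physical frame number for virtual page v (hypothetical). */
        unsigned page_table[4] = { 7, 3, 12, 5 };

        unsigned vaddr  = 0x1A3C;                   /* some virtual address */
        unsigned vpage  = vaddr >> PAGE_SHIFT;      /* virtual page number  */
        unsigned offset = vaddr & (PAGE_SIZE - 1);  /* offset within page   */
        unsigned frame  = page_table[vpage];
        unsigned paddr  = (frame << PAGE_SHIFT) | offset;

        printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
               vaddr, vpage, offset, paddr);
        return 0;
    }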


Thrashing

It’s a phenomenon in virtual memory schemes in which the processor spend most of its time
swapping pages rather than executing instructions

g) File

The file abstraction hides away the peculiarities of disks and input/output devices and provides
the programmer with an easy way to create, retrieve and modify files.

HISTORY OF OPERATING SYSTEM

Since operating systems have historically been closely tied to the architecture of the computers on
which they run, we will look at successive generations of computers to see what their operating
systems were like.

1. First generation computers (1945 - 1955). They used Vacuum tubes

The earliest electronic digital computers had no operating systems. Machines of the time were so
primitive that programs were often entered one bit at a time on rows of mechanical switches (plug
boards). Programming languages were unknown (not even assembly languages). Operating
systems were unheard of.

In these early days a single group of people (usually engineers) designed, built, programmed,
operated and maintained each machine.

2. Second generation computers (1955 - 1965). They used Transistors and batch systems
These computers improved with the introduction of punch cards. The General Motors Research
Laboratories implemented the first operating systems in the early 1950's for their IBM 701. This
system ran one job at a time. They were called single-stream batch processing systems because
programs and data were submitted in groups or batches.
These computers were mostly used for scientific and engineering calculations, such as the partial
differential equations that often occur in physics and engineering. They were largely
programmed in FORTRAN and assembly language. Typical operating systems were FMS (the Fortran
Monitor System) and IBSYS, IBM's OS for the 7094

3. Third generation computers (1965 - 1980). They used Integrated Circuits (ICs) and
multiprogramming
The systems of the 1960's were also batch processing systems, but they were able to take better
advantage of the computer's resources by running several jobs at once.

Main features of this operating system

i) Multiprogramming
Operating system designers developed the concept of multiprogramming, in which
several jobs are in main memory at once; the processor is switched from job to job as
needed to keep several jobs advancing while keeping the peripheral devices in use.
While one job was waiting for I/O to complete, another job could be using the CPU.
ii) Spooling (simultaneous peripheral operations on line).
In spooling, a high-speed device like a disk is interposed between a running program
and a low-speed device involved with the program in input/output. Instead of writing
directly to a printer, for example, outputs are written to the disk. Programs can run to
completion faster, and other programs can be initiated sooner; when the printer
becomes available, the outputs may be printed.
iii) Time-sharing technique

Timesharing systems were developed to multiprogram large numbers of simultaneous interactive users.

4. Fourth generation computers (1980 - 1989). They used Large Scale Integration

With the development of Large Scale Integration (LSI) circuits, or chips, operating systems entered
the personal computer and workstation age. Microprocessor technology evolved to the
point that it became possible to build desktop computers as powerful as the mainframes of the
1970s.

Two operating systems dominated the personal computer scene: MS-DOS, written by Microsoft,
Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX,
which was dominant on the large personal computers using the Motorola 68000 CPU family.

5. Fifth generation computers (1990 - Present)

This generation is characterized by the merging of telecommunications with computer
technology. Scientists are working on this generation to bring machines with genuine IQ: the
ability to reason logically and with real knowledge of the world. The anticipated computer will
have the following characteristics

• It is expected to do parallel processing


• It will be based on logical inference operations
• It's expected to make use of artificial intelligence (AI)

STRUCTURE OF OPERATING SYSTEMS



As modern operating systems are large and complex, careful engineering is required. Three
different structures are shown in this document in order to give some idea of the spectrum of
possibilities. They are by no means exhaustive, but they give an idea of some designs that have
been tried in practice.

(a) Monolithic Systems:

In this approach the entire operating system runs as a single program in kernel mode. The
operating system is written as a collection of thousands of procedures, each of which can call any
of the others whenever it needs to, without restriction, which makes the system difficult to
understand.

When this approach is used, one compiles all the individual procedures and then binds them all
together into a single executable file using the system linker. In terms of information hiding,
there is essentially none: every procedure is visible to every other one (as opposed to a structure
containing modules or packages, in which much of the information is local to a module, and only
officially designated entry points can be called from outside the module).

A simple structuring model for a monolithic system: a main procedure calls service procedures, which in turn call utility procedures.

Problems with monolithic structure

i. Difficult to maintain
ii. Difficult to take care of concurrency due to multiple users/jobs


Differences between monolithic and non-monolithic operating systems

Monolithic: made up of a single kernel; occupies more memory; less efficient.
Non-monolithic: made up of several layers; uses less memory; more efficient.

(b) Layered System:

The operating system is broken up into a number of layers (or levels), each on top of lower
layers. Each layer is an implementation of an abstract object that is the encapsulation of data and
operations that can manipulate these data. The operating system is organized as a hierarchy of
layers, each one constructed upon the one below it.

This operating system structure has 6 layers.

Layer   Function
5       The operator
4       User programs
3       I/O management
2       Operator-process communication
1       Memory and drum management
0       Processor allocation and multiprogramming

Layer 0 was responsible for the multiprogramming aspects of the operating system. It
decided which process was allocated to the CPU. It dealt with interrupts and performed the
context switches when a process change was required.

Layer 1 was concerned with allocating memory to processes. It allocated space for
processes in main memory and on a 512k word drum used for holding parts of processes (pages)
for which there was no room in main memory. Above layer 1, processes did not have to worry
about whether they were in memory or on the drum; the layer 1 software took care of making
sure pages were brought into memory whenever they were needed.

Layer 2 dealt with inter-process communication and communication between the operating
system and the console.

Layer 3 managed all I/O between the devices attached to the computer. This included
buffering information from the various devices. It also dealt with abstract I/O devices with nice
properties, instead of real devices with many peculiarities.

Layer 4 was where the user programs were found. They did not have to worry about process,
memory, console, or I/O management.

Layer 5 was the overall control of the system (called the system operator)

(c) Client-server operating Model:

This model divides the OS into several processes, each of which implements a single set of
services.

In the client-server model, all the kernel does is handle the communication between clients and
servers. By splitting the operating system up into parts, each of which only handles one facet of
the system, such as file service, process service, terminal service, or memory service, each part
becomes small and manageable. The kernel validates messages, passes them between the
components and grants access to the hardware.

Furthermore, because all the servers run as user-mode processes, and not in kernel mode, they do
not have direct access to the hardware. As a consequence, if a bug in the file server is triggered,
the file service may crash, but this will not usually bring the whole machine down.

Client-server structure: client applications and servers (e.g. the file server and display server) run in user mode and communicate by sending requests to and receiving replies from the microkernel, which runs in kernel mode above the hardware.
Benefits include

i. Can result in a minimal kernel


ii. As each server is managing one part of the operating system, the procedures can be better
structured and more easily maintained.
iii. If a server crashes it is less likely to bring the entire machine down as it won’t be running
in kernel mode. Only the service that has crashed will be affected.
iv. Adaptability to use in distributed system. If a client communicates with a server by
sending it messages, the client need not know whether the message is handled locally in
its own machine, or whether it was sent across a network to a server on a remote
machine. As far as the client is concerned, the same thing happens in both cases: a
request was sent and a reply came back.


NB: In the client-server model the OS is divided into modules instead of layers. The modules are treated
more or less equally. Instead of calling each other like procedures, they communicate by
sending messages via an external message handler


TYPES / CLASSIFICATION OF OPERATING SYSTEM

1. Batch processing

It’s the earliest OS to be develops. It refers to a single processor OS that controls a single
microprocessor which is centralized. They allow one job to run at a time e.g. an OS of the 2nd
generation whereby the job are processed serially. Programs and data are submitted to the
computer in form of a “Job”. The job has to be completed for the next to be loaded and
processed.

In batch systems several jobs are collected and processed once as a group then the next is
processed. The processes are is in sequential one job after another. Consequently many support
one user at a time there is little or no interaction between the user and the executing program.
Thus the OS is not user friendly and is tedious

2. Multiprocessor operating system (Multiprocessing OS)

Multiprocessing - An operating system capable of supporting and utilizing more than one
computer processor at a time.

Advantages

i. Reduces CPU idle time


ii. Reduces incidences of peripheral bound operations
iii. Increases productivity of the computer

Disadvantages

i. Requires expensive CPU


ii. It's complex and difficult to operate

3. Distributed operating system

Distributed operating system is an operating system which manages a number of computers and
hardware devices which make up a distributed system.

With the advent of computer networks, in which many computers are linked together and are
able to communicate with one another, distributed computing became feasible. A distributed
computation is one that is carried out on more than one machine in a cooperative manner: a
group of linked computers working cooperatively on tasks

Such an operating system has a number of functions:


i. It manages the communication between entities on the system


ii. It imposes a security policy on the users of the system
iii. It manages a distributed file system
iv. It monitors problems with hardware and software
v. It manages the connections between application programs and itself
vi. It allocates resources such as file storage to the individual users of the system.

NB A good distributed operating system should give the user the impression that they are
interacting with a single computer.

Advantages

i. Reduction of load of the host computer


ii. Requires low cost mini computers
iii. Reduction in delay
iv. Better service to customers

Disadvantages

i. Expensive because of the extra cost of communication devices


ii. Data duplication is high
iii. Programming problem
iv. Extra training needed for the users

Others

4. Interactive OS

a) Single-user operating system


In essence, a single-user operating system provides access to the computer system by a single
user at a time. If another user needs access to the computer system, they must wait till the current
user finishes what they are doing and leaves. In this instance there is one keyboard and one
monitor that you interact with. There may also be a printer for the printing of documents and
images.

Operating systems such as Windows 95, Windows NT Workstation and Windows 2000
professional are essentially single user operating systems.

b) Multi-User Operating System


A multi-user operating system lets more than one user access the computer system at one time.
Access to the computer system is normally provided via a network, so that users access the
computer remotely using a terminal or other computer.

Today, these terminals are generally personal computers and use a network to send and receive
information to the multi-user computer system. Examples of multi-user operating systems are
UNIX, Linux (a UNIX clone) and mainframes such as the IBM AS400.

A multi-user operating system must manage and run all user requests, ensuring they do not
interfere with each other. Devices that are serial in nature (devices which can only be used by
one user at a time, like printers and disks) must be shared amongst all those requesting them (so
that all the output documents are not jumbled up).

If each user tried to send their document to the printer at the same time, the end result would be
garbage. Instead, documents are sent to a queue, and each document is printed in its entirety
before the next document to be printed is retrieved from the queue. When you wait in-line at the
cafeteria to be served you are in a queue. Imagine that all the people in the queue are documents
waiting to be printed and the cashier at the end of the queue is the printer.

5. Networking operating system (NOS)

A networking operating system is an operating system that contains components and programs
that allow a computer on a network to serve requests from other computers for data and provide
access to other resources such as printers and file systems.

Features

• Add, remove and manage users who wish to use resources on the network.
• Allow users to have access to the data on the network. This data commonly resides on the
server.
• Allow users to access data found on other networks such as the internet.
• Allow users to access hardware connected to the network.
• Protect data and services located on the network.
• Enables the user to pass documents on the attached network.


JOB CONTROL
Job control is the management of multiple tasks/jobs: ensuring that each has access to adequate resources
to perform correctly, that competition for limited resources does not cause a deadlock where two
or more jobs are unable to complete, resolving such situations where they do occur, and
terminating jobs that, for any reason, are not performing as expected.

i. Job control language (JCL)


Job Control Language is a means of communicating with the operating system. JCL
statements provide information that the operating system needs to execute a job.

ii. Command languages


The shell is a command language interpreter: a layer of software that separates the user from the
rest of the machine. The user expresses commands to the shell, which in turn invokes low-level
and kernel services as appropriate to implement the command. It creates processes and pipes and
connects them with files and devices as needed to carry out the command.

Command language interfaces use an artificial language, much like a programming language.
They usually permit a user to combine constructs in new and complex ways, hence they are more
powerful for advanced users. For them, the command language provides a strong feeling that they
are in charge and that they are taking the initiative rather than responding to the computer.

Command language users must learn the syntax, but they can often express complex
operations without distracting prompts. Command language interfaces are also the
style most amenable to programming, that is, writing programs or scripts of user input commands

iii. System messages


System Message is a utility for looking up the descriptive text that corresponds to a
Windows system error number. System Message can be used either as a desktop application
or as a simple out-of-process Object Linking and Embedding (OLE) Automation server.


PROCESS MANAGEMENT

Process Management

Process management is the collection of activities involved in planning and monitoring the performance of a process.


In multiprogramming systems the OS must allocate resources to processes, enable processes to
share and exchange information, protect the resources of each process from other processes and
enable synchronization among processes. To meet these requirements, the OS must maintain a
data structure for each process, which describes the state and resource ownership of that
process, and which enables the OS to exert control over each process

What is a process?
A process is a sequential program in execution. The components of a process are the following:

i. The object program to be executed ( called the program text in UNIX)


ii. The data on which the program will execute (obtained from a file or interactively from
the process's user)
iii. Resources required by the program ( for example, files containing requisite information)
iv. The status of the process's execution

Two concepts emerge:

• Uni-programming: - case whereby a system allows one process at a time.


• Multi-programming: - system that allows more than one process, multiple processes at a
time.

A process comes into being or is created in response to a user command to the OS. Processes
may also be created by other processes e.g. in response to exception conditions such as errors or
interrupts.

DESCRIPTION OF THE PROCESS MODEL / STATES

PROCESS STATES

As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Process state determines the effect of the instructions i.e. everything that
can affect, or be affected by the process. It usually includes code, particular data values, open
files, registers, memory, signal management information etc. We can characterize the behavior of
an individual process by listing the sequence of instructions that execute for that process. Such
a listing is called the trace of the process

Each process may be in one of the following states


i. New: The process is being created


ii. Running: The process is being executed i.e. actually using the CPU at that instant
iii. Waiting/blocked: The process is waiting for some event to occur (e.g., waiting for I/O
completion) such as completion of another process that provides the first process with
necessary data, for a synchronistic signal from another process, I/O or timer interrupt etc.
iv. Ready: The process is waiting to be assigned to a processor i.e. It can execute as soon as
CPU is allocated to it.
v. Terminated: The process has finished execution

A transition from one process state to another is triggered by various conditions such as interrupts and
user instructions to the OS. Execution of a program involves creating & running to completion a
set of programs which require varying amounts of CPU, I/O and memory resources.

Process life cycle

Process Control Block (PCB) / Task Control Block.

The OS must know specific information about processes in order to manage and control them.
Also to implement the process model, the OS maintains a table (an array of structures), called the
process table, with one entry per process.

PCB information is usually grouped into two categories: Process State Information and Process
Control Information. Including these:


PCB

i. Process state. The state may be new, ready, running, waiting, halted, and so on.
ii. Program counter. The counter indicates the address of the next instruction to be executed
for this process.
iii. CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.
iv. CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
v. Memory-management information. This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on
the memory system used by the OS.
vi. Accounting information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
vii. I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
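As an illustration of the information listed above, a hypothetical, much-simplified PCB could be declared as in the C sketch below; the field names and sizes are illustrative only and do not correspond to any particular operating system's actual structure.

    /* Illustrative, simplified process control block (not from any real OS). */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* process identifier                  */
        enum proc_state state;           /* new, ready, running, waiting, ...   */
        uint64_t        program_counter; /* address of next instruction         */
        uint64_t        registers[16];   /* saved general-purpose registers     */
        int             priority;        /* CPU-scheduling information          */
        uint64_t        base, limit;     /* memory-management information       */
        uint64_t        cpu_time_used;   /* accounting information              */
        int             open_files[16];  /* I/O status: open file descriptors   */
        struct pcb     *next;            /* link for ready/waiting queues       */
    };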

The OS must make sure that processes don't interfere with each other; this means:

i. Making sure each gets a chance to run (scheduling).


ii. Making sure they don’t modify each other’s state (protection).

The dispatcher (short-term scheduler) is the innermost portion of the OS that runs processes:

- Run processes for a while.


- Save state
- Load state of another process.
- Run it etc.
It only runs processes; the decision on which process to run next is made by a separate
scheduler.

When a process is not running, its state must be saved in its process control block. Items saved
include:

i. Program counter
ii. Process status word (condition codes etc.).
iii. General purpose registers
iv. Floating - point registers etc.

When no longer needed, a process (but not the underlying program) can be deleted via the OS,
which means that all record of the process is obliterated and any resources currently allocated to
it are released.


THERE ARE VARIOUS MODELS THAT CAN BE USED

i. Two state process model


ii. Three state process model
iii. Five state model

1. Two state process model

The principal responsibility of the OS is to control the execution of processes; this includes
determining the interleaving pattern for execution and allocating resources to processes.
We can construct the simplest model by observing that a process is either being executed or not,
i.e. running or not running

Each process must be represented in some way so that the OS can keep track of it, i.e. by the process
control block. Processes that are not running must be kept in some sort of a queue, waiting their
turn to execute. There is a single queue in which each entry is a pointer to the PCB of a particular
process.

Two state transition diagram: states Not Running and Running; transitions Enter, Dispatch, Pause and Exit

2. Three state process model

Three state transition diagram: states Ready, Running (Active) and Blocked; transitions Submit, Dispatch, Delay/Suspend, Wake up/Resume and Completion

i. Ready: The process is waiting to be assigned to a processor i.e. It can execute as soon as
CPU is allocated to it.
ii. Running: The process is being executed i.e. actually using the CPU at that instant
iii. Waiting/blocked: The process is waiting for some event to occur (e.g., waiting for I/O
completion) such as completion of another process that provides the first process with
necessary data, for a synchronistic signal from another process, I/O or timer interrupt etc.

3. Five State

In this model two states have been added to the three state model, i.e. the new and exit states. The
new state corresponds to a process that has just been defined, e.g. a new user trying to log onto a
time sharing system. In this instance, any tables needed to manage the process are allocated and
built.

In the new state the OS has performed the necessary action to create the process but has not
committed itself to the execution of the process i.e. the process is not in the main memory.

In the exit state a process may exit for two reasons:

i. Termination when it reaches a natural completion point


ii. When it is aborted due to an unrecoverable error, or when another process with appropriate
authority causes it to stop

Five state transition diagram: states New, Ready, Running, Blocked and Exit; transitions Admit, Dispatch, Time out, Event wait, Event occurs and Release

i. Running: The process is currently being executed i.e. actually using the CPU at that
instant

ii. Ready: The process is waiting to be assigned to a processor i.e. It can execute as soon as
CPU is allocated to it.
iii. Waiting/blocked: The process is waiting for some event to occur (e.g., waiting for I/O
completion) such as completion of another process that provides the first process with
necessary data, for a synchronistic signal from another process, I/O or timer interrupt etc.
iv. New: The process has just been created but has not yet been admitted to the pool of
executable processes by the OS, i.e. the new process has not been loaded into main
memory although its PCB has been created.
v. Terminated/exit: The process has finished execution or the process has been released
from the pool of executable processes by the OS either because it halted or because it was
aborted for some reasons.

Creation and termination of process

When a new process is to be added to those currently being managed, the OS builds the data
structures that are used to manage the process and allocates address space to the process

Reasons for process creation

i. New batch job


The OS is provided with a batch job control stream, usually on tape or disk. When the OS
is prepared to take on new work, it reads the next sequence of job control
commands
ii. Interactive log on
A user at a terminal logs on to the system
iii. Created by the OS to provide a service
The OS can create a process to perform functions on behalf of a user program without the
user having to wait
iv. Spawned by an existing process
For purposes of modularity or to exploit parallelism, a user program can request the creation of
a number of processes

Reasons for process termination

i. Normal completion
The process executes an OS service call to indicate that it has completed running
ii. Time limit exceeded
The process has run longer than the specified total time limit
iii. Memory unavailable: The process requires more memory than the system can provide
iv. Bounds violation
The process tries to access a memory location that it is not allowed to access
v. Protection error
The process attempts to use a resource or a file that it is not allowed to use, or it tries to use
it in an improper way, such as writing to a read-only file
vi. Arithmetic error
The process tries a prohibited computation, e.g. division by zero, or tries to store a number
larger than the hardware can accommodate
vii. Time overrun
The process has waited longer than a specified maximum time for a certain event to occur
viii. I/O failure
An error occurs during I/O, such as inability to find a file, or failure to read or write
after a specified number of attempts
ix. Invalid instruction
The process attempts to execute a non-existing instruction
x. Data misuse
A piece of data is of the wrong type or is not initialized
xi. Operator / OS intervention
For some reason the operator or the OS has terminated the process, e.g. if a deadlock exists

Criteria For Performance Evaluation


i. Utilization: The fraction of time a device is in use (ratio of in-use time to total observation time).
ii. Throughput: The number of job completions in a period of time (jobs per second).
iii. Service time: The time required by a device to handle a request (seconds).
iv. Queuing time: The time spent on a queue waiting for service from the device (seconds).
v. Residence time: The time spent by a request at a device; residence time = service time + queuing time (seconds).
vi. Response time: The time used by a system to respond to a user job (seconds).
vii. Think time: The time spent by the user of an interactive system to figure out the next request (seconds).
The goal is to optimize both the average and the amount of variation of these measures (but beware the ogre of predictability). A small worked example follows.
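The worked example below uses made-up numbers purely to show how the measures relate; none of the figures come from a real system.

    /* Worked example of the performance measures above, using made-up numbers. */
    #include <stdio.h>

    int main(void)
    {
        double observation = 100.0;  /* seconds the device was observed        */
        double busy        = 60.0;   /* seconds the device was actually in use */
        int    completions = 30;     /* jobs completed during the observation  */
        double service     = 2.0;    /* seconds of service per request         */
        double queuing     = 3.0;    /* seconds spent waiting in the queue     */

        double utilization = busy / observation;        /* 60 / 100 = 0.60        */
        double throughput  = completions / observation; /* 30 / 100 = 0.30 jobs/s */
        double residence   = service + queuing;         /* 2 + 3 = 5.0 seconds    */

        printf("utilization = %.2f\n", utilization);
        printf("throughput  = %.2f jobs/s\n", throughput);
        printf("residence   = %.1f s\n", residence);
        return 0;
    }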


INTER PROCESS COMMUNICATION (IPC)

A capability supported by some operating systems that allows one process to communicate with
another process. The processes can be running on the same computer or on different computers
connected through a network.

IPC enables one application to control another application, and for several applications to share
the same data without interfering with one another. IPC is required in all multiprocessing
systems.

Definitions of Terms

1. Race Conditions

A race condition is a situation where several threads access (read/write) and manipulate
shared data concurrently, causing wrong output. The final value of the shared data depends upon
which process finishes last. To prevent race conditions, concurrent processes must be
synchronized.

How to minimize race conditions

• Adopt a programming discipline that will ensure an absence of race conditions.


• Require that a lock be held on every access to a shared variable, as in the sketch below.
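The POSIX threads sketch below (an assumption about the platform; compile with -pthread) applies that discipline: every access to the shared counter is made while holding a lock, so two threads incrementing it concurrently cannot interleave their read-modify-write steps and produce a wrong final value.

    /* Protecting a shared counter with a lock so concurrent increments do not race. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                     /* shared data            */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);           /* enter critical section */
            counter++;                           /* the read-modify-write is now
                                                    atomic with respect to the
                                                    other thread            */
            pthread_mutex_unlock(&lock);         /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }

Removing the lock/unlock pair turns this into a demonstration of the race itself: the final count is then usually less than expected.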

2. Critical Sections

These are sections in a process during which the process must not be interrupted, especially when the
resource it requires is shared. It is necessary to protect critical sections with interlocks which
allow only one thread (process) at a time to traverse them.

3. Sleep & Wakeup

These are inter-process communication primitives that block, instead of wasting CPU time, when
processes are not allowed to enter their critical sections. One of the simplest pairs is
SLEEP and WAKEUP.

SLEEP is a system call that causes the caller to block, that is, be suspended until another process
wakes it up. The WAKEUP call has one parameter, the process to be awakened.

E.g. the producer-consumer problem, where the producer puts information into a buffer
and the consumer takes it out. The producer will go to sleep if the buffer is
already full, to be awakened when the consumer has removed one or more items. Similarly, if the
consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep
until the producer puts something in the buffer and wakes it up.
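The sleep/wakeup behaviour can be sketched with POSIX condition variables (an assumption; the original SLEEP and WAKEUP were bare calls, and a one-slot buffer is used here for brevity): the consumer "sleeps" on a condition when the buffer is empty, the producer "wakes" it after inserting an item, and vice versa when the buffer is full.

    /* Producer-consumer with a one-slot buffer: sleeping and waking via
     * POSIX condition variables (used as an analogue of SLEEP/WAKEUP). */
    #include <pthread.h>
    #include <stdio.h>

    static int buffer;                 /* the single slot             */
    static int full = 0;               /* 1 if the slot holds an item */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            pthread_mutex_lock(&m);
            while (full)                          /* buffer full: go to sleep */
                pthread_cond_wait(&not_full, &m);
            buffer = i;
            full = 1;
            pthread_cond_signal(&not_empty);      /* wake up the consumer     */
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            pthread_mutex_lock(&m);
            while (!full)                         /* buffer empty: go to sleep */
                pthread_cond_wait(&not_empty, &m);
            printf("consumed %d\n", buffer);
            full = 0;
            pthread_cond_signal(&not_full);       /* wake up the producer      */
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }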

4. Event counters

An event counter is another data structure that can be used for process synchronization. Like a
semaphore, it has an integer count and a set of waiting process identifications. Unlike
semaphores, the count variable only increases. This mechanism uses a special kind of variable called an
event counter.

An event counter E has the following three operations defined;

i. Read (E): return the count associated with event counter E.


ii. Advance (E): atomically increment the count associated with event counter E.
iii. Await (E,v): if E.count ≥ v, then continue. Otherwise, block until E. count ≥ v.

Before a process can have access to a resource, it first reads E; if the value is acceptable it advances E,
otherwise it awaits until the value v is reached.
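POSIX has no built-in event counter type, so the sketch below builds one from a mutex and a condition variable purely to illustrate the three operations; the type and function names are hypothetical.

    /* Illustrative event counter built on pthreads (names are hypothetical). */
    #include <pthread.h>

    struct event_counter {
        unsigned long   count;        /* only ever increases */
        pthread_mutex_t m;
        pthread_cond_t  cv;
    };

    #define EVENT_COUNTER_INIT { 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }

    /* read(E): return the count associated with E. */
    unsigned long ec_read(struct event_counter *e)
    {
        pthread_mutex_lock(&e->m);
        unsigned long c = e->count;
        pthread_mutex_unlock(&e->m);
        return c;
    }

    /* advance(E): atomically increment the count and wake any waiters. */
    void ec_advance(struct event_counter *e)
    {
        pthread_mutex_lock(&e->m);
        e->count++;
        pthread_cond_broadcast(&e->cv);
        pthread_mutex_unlock(&e->m);
    }

    /* await(E, v): block until E.count >= v. */
    void ec_await(struct event_counter *e, unsigned long v)
    {
        pthread_mutex_lock(&e->m);
        while (e->count < v)
            pthread_cond_wait(&e->cv, &e->m);
        pthread_mutex_unlock(&e->m);
    }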

5. Message Passing

When processes interact with one another, two fundamental requirements must be satisfied:
synchronization and communication. One approach to providing both of these functions is message
passing. Consider the case of processors (a processor here is a combination of a processing element
(PE) and a local main memory, possibly with some external communication (I/O) facilities) whose
processing elements communicate via messages transmitted between their local memories. A process will
transmit a message to other processes to indicate its state and the resources it is using.

In Message Passing two primitives SEND and RECEIVE, which are system calls, are used. The
SEND sends a message to a given destination and RECEIVE receives a message from a given
source.
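As a minimal stand-in for SEND and RECEIVE (assuming a POSIX system; a pipe is used instead of a full message-passing facility), the sketch below has a child process "send" a message that the parent "receives" with a blocking read.

    /* Message passing between two processes over a pipe (POSIX assumed):
     * the child "sends" and the parent blocks in "receive". */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {                  /* child: the sender */
            close(fd[0]);
            const char msg[] = "hello from the child process";
            write(fd[1], msg, sizeof msg);          /* SEND                        */
            close(fd[1]);
            _exit(0);
        }

        /* parent: the receiver */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf);   /* RECEIVE (blocks until data) */
        if (n > 0)
            printf("received: %s\n", buf);
        close(fd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }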

Synchronization

Definition: The coordination of simultaneous threads or processes to complete a task, in order to get the correct runtime order and avoid unexpected race conditions

The communication of a message between two processes demands some level of
synchronization, since there is a need to know what happens after a send or receive primitive is
issued.

The sender and the receiver can be blocking or non-blocking. Three combinations are common
but only one can be applied in any particular system


i. Blocking send, blocking receive. Both the sender and the receiver are blocked until the
message is delivered. This allows for tight synchronization
ii. Non-blocking send, blocking receive. Although the sender may continue on, the
receiver is blocked until the requested message arrives. This method is effective since it
allows a process to send more than one message to a variety of destinations as quickly as
possible.
iii. Non-blocking send, non-blocking receive. Neither party is required to wait. Useful for
concurrent programming.


Addressing

When a message is to be sent, it is necessary to specify in the send primitive which process is to
receive the message. This can be either direct addressing or indirect addressing

Direct addressing

The send primitive includes a specific identifier of the destination process. There are two
ways to handle the receive primitive.

i. Require that the process explicitly designate a sending process, i.e. a process must
know ahead of time from which process a message is expected
ii. Use implicit addressing, where the source parameter of the receive primitive
holds a value returned when the receive operation has been performed.

Indirect addressing

This case instead of sending a message directly to the receiver the message is sent to a shared
data structure consisting of a queue that can temporarily hold messages. Such queues are often
referred to as mailboxes.

General Message format

Header: message type, destination ID, source ID, message length, control information

Body: message content
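This general format could be represented by a C structure like the hypothetical one below; the field widths and the body size are illustrative only and are not taken from any real messaging system.

    /* Hypothetical layout for the general message format described above. */
    #include <stdint.h>

    #define MAX_BODY 256

    struct message_header {
        uint16_t type;            /* message type                      */
        uint32_t destination_id;  /* receiving process or mailbox      */
        uint32_t source_id;       /* sending process                   */
        uint32_t length;          /* number of valid bytes in the body */
        uint32_t control;         /* control information (flags etc.)  */
    };

    struct message {
        struct message_header header;
        uint8_t body[MAX_BODY];   /* message content                   */
    };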

6. Equivalence of primitives

Many new IPC primitives have been proposed, such as sequencers, path expressions and serializers, but they are
similar to the existing ones. One can build new methods or schemes from the four different
inter-process communication primitives: semaphores, monitors, messages and event counters.
The following are the essential equivalences of semaphores, monitors, and messages.

i. Using semaphores to implement monitors and messages


ii. Using monitors to implement semaphores and messages


iii. Using messages to implement semaphores and monitors
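As an illustration of option (ii), a counting semaphore can be sketched from the same ingredients a monitor uses, a lock plus a condition variable; the names below are made up and this is not a production implementation.

    /* Building a counting semaphore from monitor-style primitives
     * (a mutex plus a condition variable); illustrative only. */
    #include <pthread.h>

    struct csem {
        int             count;
        pthread_mutex_t m;
        pthread_cond_t  cv;
    };

    void csem_init(struct csem *s, int initial)
    {
        s->count = initial;
        pthread_mutex_init(&s->m, NULL);
        pthread_cond_init(&s->cv, NULL);
    }

    void csem_wait(struct csem *s)            /* P / down */
    {
        pthread_mutex_lock(&s->m);
        while (s->count == 0)                 /* block while no units are available */
            pthread_cond_wait(&s->cv, &s->m);
        s->count--;
        pthread_mutex_unlock(&s->m);
    }

    void csem_signal(struct csem *s)          /* V / up */
    {
        pthread_mutex_lock(&s->m);
        s->count++;
        pthread_cond_signal(&s->cv);          /* wake one waiter, if any */
        pthread_mutex_unlock(&s->m);
    }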


FUNDAMENTALS OF CONCURRENT PROCESSES

1. Mutual Exclusion

Mutual exclusion is a way of making sure that if one process is using shared modifiable
data, the other processes will be excluded from doing the same thing. It is a way of making sure
that processes are not in their critical sections at the same time

There are essentially three approaches to implementing mutual exclusion.

i. Leave the responsibility with the processes themselves: this is the basis of most
software approaches. These approaches are usually highly error-prone and carry high
overheads.
ii. Allow access to shared resources only through special-purpose machine instructions: i.e.
a hardware approach. These approaches are faster but still do not offer a complete
solution to the problem, e.g. they cannot guarantee the absence of deadlock and starvation.
iii. Provide support through the operating system, or through the programming language. We
shall outline three approaches in this category: semaphores, monitors, and message
passing.

Requirement for mutual exclusion


i. Mutual exclusion must be enforced: only one process at a time is allowed into its critical
section, among all processes that have critical sections for the same resource or
shared object
ii. A process that halts in its noncritical section must do so without interfering with other
processes
iii. It must not be possible for a process requiring access to a critical section to be delayed
indefinitely: no deadlock, no starvation
iv. When no process is in a critical section, any process that requests entry to its critical
section must be permitted to enter without delay
v. No assumptions are made about relative process speeds or number of processors.
vi. A process remains inside its critical section for a finite time only

2. Semaphores

It’s an integer value for controlling access, by multiple processes, to a common resource in a
concurrent system such as a multiprogramming operating system.

Semaphores aren’t provided by hardware but have the following properties:

i. Machine independent.
ii. Simple
iii. Powerful (embody both exclusion and waiting).


iv. Correctness is easy to determine.


v. Work with many processes.
vi. Can have many different critical sections with different semaphores.
vii. Can acquire many resources simultaneously.
viii. Can permit multiple processes into the critical section at once, if that is desirable.
They do a lot more than just mutual exclusion.
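A minimal usage sketch with POSIX unnamed semaphores (an assumption about the platform; sem_init is available on Linux-like systems and the program should be compiled with -pthread) shows a binary semaphore guarding a critical section, mirroring the earlier mutex example but with semaphore operations.

    /* Binary semaphore guarding a critical section (POSIX semaphores assumed). */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;                 /* binary semaphore, initial value 1 */
    static int shared = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);           /* down/P: enter the critical section */
            shared++;
            sem_post(&mutex);           /* up/V: leave the critical section   */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&mutex, 0, 1);         /* 0 = shared between threads, value 1 */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d (expected 200000)\n", shared);
        sem_destroy(&mutex);
        return 0;
    }

Initializing the semaphore with a value greater than 1 would instead allow that many processes into the section at once, which is the counting behaviour mentioned in property viii.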


Problems with Semaphores

i. Semaphores do not completely eliminate race conditions and other problems (like
deadlock).
ii. Incorrect formulation of solutions, even those using semaphores, can result in problems.

3. Monitor

A monitor is a high-level synchronization construct; it uses condition variables to block a thread until a particular condition is true.

It has a collection of procedures, variables and data structures that are all grouped together in a
special kind of module or package. Thus a monitor has shared data, a set of atomic
operations on the data and a set of condition variables. Monitors can be embedded in a
programming language, so the monitor is usually implemented by the compiler.

Typical implementation: each monitor has a lock. The lock is acquired when a monitor operation begins
and released when the operation finishes.

The execution of a monitor obeys the following constraints:

i. Only one process can be active within a monitor at a time


ii. Procedures of a monitor can only access data local to the monitor; they cannot access an
outside variable
iii. The variables or data local to a monitor cannot be directly accessed from outside the
monitor

Advantages:

i. Reduces probability of error, biases programmer to think about the system in a certain
way

Disadvantages:

i. Absence of concurrency: if a monitor encapsulates a resource, only one process can
be active within the monitor at a time; there is also the possibility of deadlock in the case of
nested monitor calls

4. Deadlock

A deadlock is a situation in which two or more processes sharing the same resource are
effectively preventing each other from accessing the resource, resulting in those processes
ceasing to function.


Resources come in two flavors/types

i. A preemptable resource is one that can be taken away from the process with no ill
effects. Memory is an example of a preemptable resource. On the other hand,
ii. A nonpreemptable resource is one that cannot be taken away from a process without
causing ill effects. For example, a CD recorder is not preemptable at an arbitrary
moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.

Conditions Necessary for a Deadlock

i. Mutual Exclusion Condition

The resources involved are non-shareable. At least one resource must be held in
a non-shareable mode, that is, only one process at a time claims exclusive control of the
resource. If another process requests that resource, the requesting process must be
delayed until the resource has been released

ii. Hold and Wait Condition


Requesting processes already hold resources while waiting for additional requested resources. There
must exist a process that is holding a resource already allocated to it while waiting for
additional resources that are currently being held by other processes.
iii. No-Preemption Condition
Resources already allocated to a process cannot be preempted. Resources cannot be
removed from the processes holding them; they are used to completion or released voluntarily
by the process holding them.
iv. Circular Wait Condition

The processes in the system form a circular list or chain where each process in the list is
waiting for a resource held by the next process in the list.
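
All four conditions can be seen at once in the following sketch (illustrative only; the
identifiers lock_a, lock_b, thread_one and thread_two are invented): each thread holds one
mutex while waiting for the other, so with unlucky timing the program hangs forever.

/* Sketch of the classic two-lock deadlock.  Each thread holds one mutex
   (hold and wait) and waits for the other (circular wait); mutexes are
   exclusive (mutual exclusion) and are never taken away (no preemption). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread_one(void *arg)
{
    pthread_mutex_lock(&lock_a);   /* holds A ...                   */
    sleep(1);                      /* widen the window for the race */
    pthread_mutex_lock(&lock_b);   /* ... and waits for B           */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread_two(void *arg)
{
    pthread_mutex_lock(&lock_b);   /* holds B ...                        */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    pthread_join(t1, NULL);        /* with the sleeps in place these joins */
    pthread_join(t2, NULL);        /* are very likely to wait forever      */
    puts("no deadlock this time");
    return 0;
}

The sleep() calls only widen the race window; removing them makes the hang less likely but does
not make the program correct.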

Dealing with Deadlock Problem

In general, there are four strategies of dealing with deadlock problem:

i. The Ostrich Approach
Just ignore the deadlock problem altogether.
ii. Deadlock Detection and Recovery
Detect deadlock and, when it occurs, take steps to recover.
iii. Deadlock Avoidance
Avoid deadlock by careful resource scheduling.
iv. Deadlock Prevention


Prevent deadlock by resource scheduling so as to negate at least one of the four conditions.

Deadlock Prevention

Havender in his pioneering work showed that since all four of the conditions are necessary for
deadlock to occur, it follows that deadlock might be prevented by denying any one of the
conditions.

• Elimination of “Mutual Exclusion” Condition

The mutual exclusion condition must hold for non-shareable resources. That is, several
processes cannot simultaneously share a single resource. This condition is difficult to
eliminate because some resources, such as the tape drive and printer, are inherently non-
shareable. Note that shareable resources, like a read-only file, do not require mutually
exclusive access and thus cannot be involved in a deadlock.

• Elimination of “Hold and Wait” Condition

There are two possibilities for eliminating the second condition. The first alternative is
that a process be granted all of the resources it needs at once, prior to execution. The
second alternative is to disallow a process from requesting resources whenever it already
holds previously allocated resources. This strategy requires that all of the resources a
process will need be requested at once. The system must grant resources on an “all or none”
basis. If the complete set of resources needed by a process is not currently available, then
the process must wait until the complete set is available. While the process waits, however,
it may not hold any resources. Thus the “wait for” condition is denied and deadlocks simply
cannot occur. This strategy can lead to serious waste of resources. For example, a program
requiring ten tape drives must request and receive all ten drives before it begins executing.
If the program needs only one tape drive to begin execution and does not need the remaining
tape drives for several hours, then substantial computer resources (nine tape drives) will
sit idle for several hours. This strategy can also cause indefinite postponement (starvation),
since not all of the required resources may become available at once.

• Elimination of “No-preemption” Condition

The no-preemption condition can be alleviated by forcing a process waiting for a
resource that cannot immediately be allocated to relinquish all of its currently held
resources, so that other processes may use them to finish. Suppose a system does allow
processes to hold resources while requesting additional resources. Consider what happens
when a request cannot be satisfied: one process holds resources a second process may need
in order to proceed, while the second process may hold the resources needed by the first
process. This is a deadlock. This strategy requires that when a process that is holding some
resources is denied a request for additional resources, it must release its held resources
and, if necessary, request them again together with the additional resources.
Implementation of this strategy denies the “no-preemption” condition effectively.


High cost: when a process releases resources, it may lose all of its work up to that point.
One serious consequence of this strategy is the possibility of indefinite postponement
(starvation): a process might be held off indefinitely as it repeatedly requests and releases
the same resources.
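
One hedged way to express this release-and-retry rule with ordinary locks is a trylock loop. The
sketch below assumes the two resources are modelled by mutexes, and the helper names
acquire_both and release_both are invented for the illustration; it shows the strategy, not a
standard API.

/* Sketch: acquire two resources under an "if you cannot get everything,
   give back what you hold" rule, which denies the no-preemption condition. */
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void acquire_both(void)
{
    for (;;) {
        pthread_mutex_lock(&lock_a);               /* take the first resource          */
        if (pthread_mutex_trylock(&lock_b) == 0)   /* try the second without blocking  */
            return;                                /* success: caller now holds both   */
        pthread_mutex_unlock(&lock_a);             /* failure: release what we hold    */
        sched_yield();                             /* back off, then retry the request */
    }
}

void release_both(void)
{
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

As the paragraph above warns, a process can be held off for a long time if it keeps losing this
race and giving back what it holds, which is exactly the starvation risk of the strategy.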

• Elimination of “Circular Wait” Condition

The last condition, the circular wait, can be denied by imposing a total ordering on all of
the resource types and then forcing all processes to request the resources in that order
(increasing or decreasing). This strategy imposes a total ordering of all resource types
and requires that each process requests resources in numerical order (increasing or
decreasing) of enumeration. With this rule, the resource allocation graph can never have a
cycle.
For example, provide a global numbering of all the resources, as shown

1 ≡ Card reader
2 ≡ Printer
3 ≡ Plotter
4 ≡ Tape drive
5 ≡ Card punch

Now the rule is this: processes can request resources whenever they want to, but all
requests must be made in numerical order. A process may request first a printer and then a
tape drive (order: 2, 4), but it may not request first a plotter and then a printer (order: 3,
2). The problem with this strategy is that it may be impossible to find an ordering that
satisfies everyone.
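
A hedged sketch of the numbering rule in code, assuming the numbered devices are modelled by an
array of mutexes (the array and the helper names acquire_pair and release_pair are inventions
for the example): any code path that needs two resources always locks the lower-numbered one
first, so a cycle of waiting processes cannot form.

/* Sketch: total ordering of resources 1..5 (card reader, printer, plotter,
   tape drive, card punch).  All requests are made in increasing numerical
   order, so the circular-wait condition can never arise. */
#include <pthread.h>

#define NUM_RESOURCES 5   /* indices 0..4 stand in for the numbered devices */

static pthread_mutex_t resource[NUM_RESOURCES] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

/* Acquire two resources, always in increasing order of their number. */
void acquire_pair(int first, int second)
{
    int lo = first < second ? first : second;
    int hi = first < second ? second : first;
    pthread_mutex_lock(&resource[lo]);   /* e.g. printer (2) before tape drive (4) */
    pthread_mutex_lock(&resource[hi]);
}

void release_pair(int first, int second)
{
    pthread_mutex_unlock(&resource[first]);   /* release order does not matter */
    pthread_mutex_unlock(&resource[second]);
}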

Deadlock Avoidance
Either: each process declares in advance the maximum number of resources of each type it needs.
With this information, there are algorithms that can ensure the system will never enter a
deadlock state. This is deadlock avoidance.

A sequence of processes <P1, P2, …, Pn> is a safe sequence if for each process Pi in the
sequence, its resource requests can be satisfied by the remaining resources and the sum of all
resources that are being held by P1, P2, …, Pi-1. This means we can suspend Pi and run P1, P2,
…, Pi-1 until they complete. Then, Pi will have all resources to run.

A state is safe if the system can allocate resources to each process (up to its maximum, of course)
in some order and still avoid a deadlock. In other words, a state is safe if there is a safe sequence.
Otherwise, if no safe sequence exists, the system state is unsafe. An unsafe state is not
necessarily a deadlock state; on the other hand, a deadlock state is always an unsafe state.

A system has 12 tapes and three processes A, B, C. At time t0, we have:

Process    Maximum need    Currently holding    Will need
A          10              5                    5
B          4               2                    2
C          9               2                    7

Then, <B, A, C> is a safe sequence (safe state). The system has 12-(5+2+2)=3 free tapes.

Since B needs 2 tapes, it can take 2, run, and return 4. Then, the system has (3-2)+4=5 tapes. A
now can take all 5 tapes and run. Finally, A returns 10 tapes for C to take 7 of them.

A system has 12 tapes and three processes A, B, C. At time t1, C has one more tape:

Process    Maximum need    Currently holding    Will need
A          10              5                    5
B          4               2                    2
C          9               3                    6

The system has 12-(5+2+3)=2 free tapes.

At this point, only B can take these 2 and run. It returns 4, making 4 free tapes available.

But neither A nor C can then run, and a deadlock occurs.

The problem is due to granting C one more tape.

OR

A deadlock avoidance algorithm ensures that the system is always in a safe state. Therefore, no
deadlock can occur. Resource requests are granted only if in doing so the system is still in a safe
state.

Consequently, resource utilization may be lower than in systems that do not use a deadlock
avoidance algorithm.
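
The safety test used in the tape example can be written out directly. The sketch below is an
illustrative single-resource safety check in the style of the Banker's algorithm, using the
numbers from the two tables above; the function name is_safe and the data layout are assumptions
for the example, not a standard interface.

/* Sketch: single-resource safety check for the 12-tape example.  It keeps
   looking for a process whose remaining need fits in the free tapes, lets it
   run to completion and reclaims everything it was holding. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 3

static bool is_safe(int total, const int max[], const int holding[])
{
    int free_units = total;
    bool finished[NPROC] = { false };

    for (int i = 0; i < NPROC; i++)
        free_units -= holding[i];          /* e.g. 12 - (5+2+2) = 3 at time t0 */

    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            int need = max[i] - holding[i];
            if (!finished[i] && need <= free_units) {
                free_units += holding[i];  /* the process completes and returns */
                finished[i] = true;        /* everything it was holding         */
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;                  /* nobody can finish: unsafe state */
    }
    return true;                           /* a safe sequence exists */
}

int main(void)
{
    int max[NPROC]     = { 10, 4, 9 };     /* A, B, C maximum need         */
    int hold_t0[NPROC] = {  5, 2, 2 };     /* state at t0: safe, <B, A, C> */
    int hold_t1[NPROC] = {  5, 2, 3 };     /* state at t1: unsafe          */
    printf("t0 safe? %d\n", is_safe(12, max, hold_t0));   /* prints 1 */
    printf("t1 safe? %d\n", is_safe(12, max, hold_t1));   /* prints 0 */
    return 0;
}

Running it reports the t0 state as safe and the t1 state as unsafe, matching the discussion above.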

Deadlock Detection

Deadlock detection is the process of actually determining that a deadlock exists and identifying
the processes and resources involved in the deadlock. The basic idea is to check allocation
against resource availability for all possible allocation sequences to determine if the system is
in a deadlocked state. Of course, the deadlock detection algorithm is only half of this strategy.
Once a deadlock is detected, there needs to be a way to recover; several alternatives exist:

• Temporarily take away (preempt) resources from deadlocked processes.
• Roll back a process to some checkpoint, allowing preemption of a needed resource, and
restart the process from the checkpoint later.
• Successively kill processes until the system is deadlock free.

These methods are expensive in the sense that each iteration calls the detection algorithm until
the system proves to be deadlock free. The complexity of the algorithm is O(N²), where N is the
number of processes. Another potential problem is starvation: the same process may be killed
repeatedly.

Approaches to recover from deadlock

✓ Process termination:

This is a method in which the processes involved in the deadlock cycle are aborted. It can be
done in two ways.

• The first way is to abort all the processes in the deadlock cycle. This can be very
expensive, because some of these processes may be close to finishing, and aborting them
forces their recomputation from scratch.

• The second way is to abort only a single process from the deadlock cycle and then check
for deadlock again. If a deadlock cycle still exists, another process is aborted and the
check is repeated, continuing until the system recovers from the deadlock. This approach
may also abort a process that is about to complete, which then has to be executed again
from the start.

✓ Resource preemption:

To eliminate the deadlock condition, another approach is preemption. We preempt some
resources from processes and give these resources to other processes until we recover from
the deadlock condition.
