Operating Systems
An operating system is a program that acts as an intermediary between the user of a computer and the computer hardware. Its purpose is to provide an environment in which a user can execute programs in a convenient and efficient manner.
Factors to consider when choosing an operating system include:
i. Compatibility with the available hardware, e.g. hard disk, memory, processor speed etc.
ii. Upgradability – the operating system should be able to accommodate updates, if any
iii. Number of users to share the computer resources
iv. Minimum RAM requirement for the OS
v. Applications to be installed on the computer
vi. Initial cost
i. Program development: - The OS provides a variety of facilities and services, such as editors and debuggers, to assist the programmer in creating programs. These are typically supplied with the OS in the form of utility programs and are referred to as application program development tools.
ii. Program execution: - A number of tasks need to be performed to execute programs.
Instructions and data must be loaded into main memory, I/O devices and files must be
initialized, and other resources must be prepared. The OS handles these scheduling duties
for the user.
iii. Access to I/O devices: - Each I/O device requires its own set of instructions or control
signals for operation. The OS provides a uniform interface that hides these details so
that the programmer can access such devices using simple reads and writes.
iv. Controlled access to files: - In the case of files, control must include a detailed
understanding not only of the nature of the I/O device (disk drive, tape drive) but also of
the structure of the data contained in the files on the storage medium. In the case of a
system with multiple users, the OS may provide protection mechanisms to control access
to files.
v. System access: - For shared or public systems, the OS controls access to the system as a
whole and to specific system resources. The access functions must provide protection of
resources and data from unauthorized users and must resolve conflicts for resource
contention.
vi. Error detection and response: - While a computer is running, a variety of errors can occur:
internal and external hardware errors, e.g. a memory error, a device failure or
malfunction; and software errors such as arithmetic overflow, an attempt to access a
forbidden memory location, or the inability of the OS to grant the request of an application.
In any of these cases, the OS must provide a response that clears the error condition with
the least impact on running applications.
The response may range from ending the program that caused the error to retrying the
operation, to simply reporting the error to the application.
vii. Accounting: - An OS will collect usage statistics for various resources and monitor
performance parameters such as response time. This information is useful in anticipating
the need for future enhancements and in tuning the system to improve performance. On a
multi-user system, the information can be used for billing purposes.
System calls
A system call is a request made by a program to the operating system to perform a task. It is
used whenever a program needs to access a restricted resource. The main categories of system
calls are:
i. Process control (e.g., create and terminate processes, end or abort a program)
ii. File management (e.g., open and close files, create and delete files, read and write)
iii. Device management (e.g., read and write operations, request or release a device)
iv. Information maintenance (e.g., get time or date, set process, file or device attributes,
get system data)
v. Communication (e.g., send and receive messages)
System calls allow user-level processes to request services from the operating system which the
process itself is not allowed to perform. For example, to do I/O a process invokes a system call
telling the OS to read or write a particular area, and this request is satisfied by the operating
system.
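As an illustration (a sketch using Python's os module, whose functions are thin wrappers over the underlying system calls; the filename is made up), the file-management and I/O categories above look like this:

```python
import os
import tempfile

# Each os.* call below traps into the kernel, which performs the
# privileged work on the process's behalf.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # file management: create/open
os.write(fd, b"hello")                        # file I/O: write
os.close(fd)                                  # file management: close

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                       # file I/O: read
os.close(fd)
os.unlink(path)                               # file management: delete
print(data)  # b'hello'
```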
System programs
Provide a convenient environment for program development (editors, compilers) and execution
(shells). Some of them are simply user interfaces to system calls; others are considerably more
complex. They can be divided into these categories:
i. File management: These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.
ii. Status information/management: Some programs simply ask the system for the date,
time, amount of available memory or disk space, number of users, or similar status
information. That information is then formatted and printed to the terminal or other
output device or file.
iii. File modification: Several text editors may be available to create and modify the content
of files stored on disk or tape.
iv. Programming-language support: Compilers, assemblers, and interpreters for common
programming languages (such as C, C++, Java, Visual Basic, and PERL) are often
provided to the user with the operating system, although some of these programs are now
priced and provided separately.
v. Program loading and execution: Once a program is assembled or compiled, it must be
loaded into memory to be executed. The system may provide absolute loaders, re-
locatable loaders, linkage editors, and overlay loaders. Debugging systems for either
higher-level languages or machine language are needed also.
vi. Communications: These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another's screens, to browse web pages, to send electronic mail
messages, to log in remotely, or to transfer files from one machine to another.
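A minimal sketch of a status-information program of the kind category (ii) describes, using only Python's standard library (the report format is invented):

```python
import datetime
import shutil

# Ask the system for the date, time and disk usage, then format a
# one-line status report - the essence of a status-information program.
now = datetime.datetime.now()
usage = shutil.disk_usage("/")

report = (f"{now:%Y-%m-%d %H:%M} | "
          f"disk: {usage.free // 2**30} GiB free of {usage.total // 2**30} GiB")
print(report)
```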
The shell is the outermost part of an operating system that interacts with user commands. After
verifying that the commands are valid, the shell sends them to the command processor to be
executed.
The primary purpose of the shell is to invoke or "launch" another program; however, shells
frequently have additional capabilities such as viewing the contents of directories. The best
choice is often determined by the way in which a computer will be used.
The kernel is the central part of an operating system that directly controls the computer
hardware. It is the only way through which programs (all programs, including the shell) can
access the hardware. Its main functions include:
i. Process management
ii. Device management
iii. Memory management
iv. Interrupt handling
v. I/O communication
vi. File system
Operating system tasks are done differently by different kernels, depending on their design and
implementation. While monolithic kernels will try to achieve these goals by executing all the
operating system code in the same address space to increase the performance of the system,
microkernels run most of the operating system services in user space as servers, aiming to
improve maintainability and modularity of the operating system. A range of possibilities exists
between these two extremes.
i. Cache manager
The cache manager handles file caching for all file systems. It can dynamically increase or
decrease the size of the cache devoted to a particular file as the amount of available
physical memory varies.
ii. File system drivers
The I/O manager treats a file system as just another device driver and routes I/O requests
for file system volumes to the appropriate software driver for that volume. The file
system, in turn, sends I/O requests to the software drivers that manage the hardware device
adapter.
iii. Network drivers
These provide the I/O manager with integrated networking capabilities and support for
remote file systems.
iv. Hardware drive drivers
These are software drivers that access the hardware registers of the peripheral devices
using entry points in the kernel's hardware abstraction layer.
The kernel's primary purpose is to manage the computer's resources and allow other
programs to run and use these resources. Typically, the resources consist of:
i. The central processing unit (CPU, the processor). The kernel takes responsibility
for deciding at any time which of the many running programs should be allocated to
the processor or processors (each of which can usually run only one program at a
time).
ii. The computer's memory. The kernel is responsible for deciding which memory each
process can use, and determining what to do when not enough is available.
iii. Any Input/output (I/O). The kernel allocates requests from applications to perform
I/O to an appropriate device and provides convenient methods for using the device
(typically abstracted to the point where the application does not need to know
implementation details of the device).
A kernel may implement these features itself, or rely on some of the processes it runs
to provide the facilities to other processes, although in this case it must provide some
means of IPC to allow processes to access the facilities provided by each other.
Finally, a kernel must provide running programs with a method to make requests to
access these facilities.
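One such kernel-provided IPC facility is the pipe. A minimal Unix-only sketch in Python (os.pipe and os.fork are direct wrappers over the corresponding system calls; the message content is made up):

```python
import os

# The kernel buffers bytes written at one end of the pipe and delivers
# them at the other, so two processes cooperate without shared memory.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:                        # child process: the sender
    os.close(read_fd)
    os.write(write_fd, b"ping")
    os._exit(0)
else:                               # parent process: the receiver
    os.close(write_fd)
    msg = os.read(read_fd, 4)       # blocks until the child has written
    os.waitpid(pid, 0)
    print(msg)  # b'ping'
```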
The shell is a program which allows the user to access the computer system; it acts as an
interface between the user and the kernel.
The Kernel is the only way through which the programs (all programs including shell)
can access the hardware. It’s a layer between the application programs and hardware. It is
the core of most of the operating systems and manages everything including the
communication between the hardware and software.
A virtual machine (VM) is a separate and independent software instance that includes a full copy
of an operating system and application software. A physical server prepared with a server
virtualization hypervisor such as Microsoft Hyper-V, VMware vSphere or Citrix XenServer can
host multiple VMs while maintaining logical isolation between each machine. Each instance can
then share the server's computing resources -- dramatically increasing physical server hardware
usage.
i. Increased hardware utilization: a typical non-virtualized application server may reach
just 5% to 10% utilization, but a virtual server that hosts multiple VMs can easily reach
50% to 80% utilization.
ii. Decreased capital and operating costs: more virtual machines can be hosted on fewer
physical servers, translating into lower costs for hardware acquisition, maintenance,
energy and cooling.
iii. High availability and security
iv. A VM can be used from anywhere in the intranet
v. You can run programs written for operating systems other than the one running on the machine.
Disadvantages
(e) A process
A process is a program in execution. A process is more than a program, because it is associated
with resources such as registers (program counter, stack pointer), a list of open files, etc.
Moreover, multiple processes may be associated with one program (e.g., running the same
program, a web browser, twice).
Virtual memory is a computer system technique which gives an application program the
impression that it has contiguous working memory (an address space), while in fact it may be
physically fragmented and may even overflow on to disk storage.
Systems that use this technique make programming of large applications easier and use real
physical memory (e.g. RAM) more efficiently than those without virtual memory
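The illusion of contiguous memory can be sketched with a toy page table that maps virtual page numbers to physical frames (the page size and mappings below are made-up values):

```python
PAGE_SIZE = 4096

# Contiguous virtual pages 0, 1, 2 live in scattered physical frames -
# exactly the fragmentation virtual memory hides from the program.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr):
    """Split a virtual address into (page, offset) and map the page."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]        # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```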
Thrashing
It is a phenomenon in virtual memory schemes in which the processor spends most of its time
swapping pages rather than executing instructions.
g) File
The file abstraction hides the peculiarities of disks and other input/output devices and provides
programmers with an easy way to create, retrieve and modify files.
Since operating systems have historically been closely tied to the architecture of the computers
on which they run, we will look at successive generations of computers to see what their
operating systems were like.
1. First generation computers (1945–1955). They used vacuum tubes and plug boards
The earliest electronic digital computers had no operating systems. Machines of the time were so
primitive that programs were often entered one bit at a time on rows of mechanical switches (plug
boards). Programming languages were unknown (not even assembly languages), and operating
systems were unheard of.
In these early days a single group of people (usually engineers) designed, built, programmed,
operated and maintained each machine.
2. Second generation computers (1955–1965). They used transistors and batch systems
This computer had improved with the introduction of punch cards. The General Motors Research
Laboratories implemented the first operating systems in early 1950's for their IBM 701. This
system ran one job at a time. They were called single-stream batch processing systems because
programs and data were submitted in groups or batches.
These computers were mostly used for scientific and engineering calculation, such as partial
differentiation equations that often occur in physics and engineering. They were largely
programmed in FORTRAN and assembly language. Typical operating systems were FMS (the
Fortran Monitor System) and IBSYS, IBM's operating system for the 7094.
3. Third generation computers (1965–1980). They used Integrated Circuits (ICs) and
multiprogramming
The systems of the 1960's were also batch processing systems, but they were able to take better
advantage of the computer's resources by running several jobs at once.
Page 9 of 42 Nachu TVC ICT Department
By phillis
CCIT Module 1 Operating System
i) Multiprogramming
Operating system designers developed the concept of multiprogramming, in which
several jobs are kept in main memory at once; the processor is switched from job to job as
needed to keep several jobs advancing while keeping the peripheral devices in use.
While one job is waiting for I/O to complete, another job can use the CPU.
ii) Spooling (simultaneous peripheral operations on line).
In spooling, a high-speed device like a disk is interposed between a running program
and a low-speed device involved in the program's input/output. Instead of writing
directly to a printer, for example, output is written to the disk. Programs can run to
completion faster, and other programs can be initiated sooner; when the printer
becomes available, the output is printed.
iii) Time-sharing technique
In time-sharing, a variant of multiprogramming, each user has an online terminal and
the processor is switched rapidly among the jobs of several interactive users, giving
each user the impression of having the computer to themselves.
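The spooling idea in (ii) can be sketched with a simple queue standing in for the disk spool (the job names here are invented):

```python
from collections import deque

# Programs "print" by appending to a fast disk-based spool and run to
# completion at once; the slow printer drains the spool when available.
spool = deque()

def program_output(job, text):
    spool.append((job, text))       # fast: write to the spool, not the printer

def printer_drain():
    printed = []
    while spool:                    # printer works through the backlog in order
        job, text = spool.popleft()
        printed.append(f"{job}: {text}")
    return printed

program_output("job1", "report")
program_output("job2", "invoice")
output = printer_drain()
print(output)  # ['job1: report', 'job2: invoice']
```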
4. Fourth generation computers (1980–1989). They used Large Scale Integration
With the development of Large Scale Integrated (LSI) circuits, operating systems entered
the personal computer and workstation age. Microprocessor technology evolved to the
point that it became possible to build desktop computers as powerful as the mainframes of the
1970s.
Two operating systems dominated the personal computer scene: MS-DOS, written by Microsoft,
Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX,
which was dominant on the larger personal computers using the Motorola 68000 CPU family.
As modern operating systems are large and complex, careful engineering is required. Four
different structures are shown in this document in order to give some idea of the spectrum of
possibilities. These are by no means exhaustive, but they give an idea of some designs that have
been tried in practice.
Monolithic systems
In this approach the entire operating system runs as a single program in kernel mode. The
operating system is written as a collection of thousands of procedures, each of which can call any
of the others whenever it needs to, without restriction, making it difficult to understand the
system.
When this approach is used, one compiles all the individual procedures and then binds them all
together into a single executable file using the system linker. In terms of information hiding,
there is essentially none- every procedure is visible to every other one i.e. opposed to a structure
containing modules or packages, in which much of the information is local to module, and only
officially designated entry points can be called from outside the module.
[Figure: monolithic structure – a main procedure calls service procedures, which in turn call utility procedures]
Disadvantages:
i. Difficult to maintain
ii. Difficult to take care of concurrency due to multiple users/jobs
Layered systems
The operating system is broken up into a number of layers (or levels), each built on top of lower
layers. Each layer is an implementation of an abstract object that encapsulates data and the
operations that manipulate those data. The operating system is organized as a hierarchy of
layers, each one constructed upon the one below it.
Layer  Function
5      The operator
4      User programs
3      I/O management
2      Operator-process communication
1      Memory and drum management
0      Processor allocation and multiprogramming
Layer 0 was responsible for the multiprogramming aspects of the operating system. It
decided which process was allocated to the CPU. It dealt with interrupts and performed the
context switches when a process change was required.
Layer 1 was concerned with allocating memory to processes. It allocated space for
processes in main memory and on a 512k word drum used for holding parts of processes (pages)
for which there was no room in main memory. Above layer 1, processes did not have to worry
about whether they were in memory or on the drum; the layer 1 software took care of making
sure pages were brought into memory whenever they were needed.
Layer 2 dealt with inter-process communication and communication between the operating
system and the console.
Layer 3 managed all I/O between the devices attached to the computer. This included
buffering information from the various devices. It also provided abstract I/O devices with nice
properties, instead of real devices with many peculiarities.
Layer 4 was where the user programs were found. They did not have to worry about process,
memory, console, or I/O management.
Layer 5 was the overall control of the system (called the system operator).
The client-server model
This approach divides the OS into several processes, each of which implements a single set of
services.
In the client-server model, all the kernel does is handle the communication between clients and
servers. By splitting the operating system into parts, each of which handles only one facet of
the system, such as file service, process service, terminal service, or memory service, each part
becomes small and manageable. The kernel validates messages, passes them between the
components, and grants access to the hardware. Furthermore, because all the servers run as
user-mode processes, and not in kernel mode, they do not have direct access to the hardware. As
a consequence, if a bug in the file server is triggered, the file service may crash, but this will
not usually bring the whole machine down.
[Figure: client-server model – client processes send requests and servers reply via messages passed through the kernel, which alone accesses the hardware]
Benefits include
NB: in the client-server model the OS is divided into modules instead of layers. Modules are
treated more or less as equals; instead of calling each other like procedures, they communicate
by sending messages via an external message handler.
1. Batch processing
It is the earliest type of OS to be developed. It refers to a single-processor OS that controls a
single microprocessor and is centralized. Such systems allow one job to run at a time, e.g. an OS
of the 2nd generation, whereby jobs are processed serially. Programs and data are submitted to
the computer in the form of a "job". A job has to be completed before the next can be loaded and
processed.
In batch systems several jobs are collected and processed as a group, then the next group is
processed. Jobs are processed sequentially, one after another. Consequently most batch systems
support one user at a time, and there is little or no interaction between the user and the executing
program. Thus the OS is not user friendly and is tedious to use.
2. Multiprocessing - An operating system capable of supporting and utilizing more than one
computer processor at a time.
Advantages
Disadvantages
3. Distributed
A distributed operating system is an operating system which manages a number of computers and
hardware devices which make up a distributed system.
With the advent of computer networks, in which many computers are linked together and are
able to communicate with one another, distributed computing became feasible. A distributed
computation is one that is carried out on more than one machine in a cooperative manner. A
group of linked computers working cooperatively on tasks
NB A good distributed operating system should give the user the impression that they are
interacting with a single computer.
Advantages
Disadvantages
Others
4. Interactive OS
Operating systems such as Windows 95, Windows NT Workstation and Windows 2000
professional are essentially single user operating systems.
5. Multi-user OS
A multi-user operating system lets several users, each working at a terminal, share one computer
system. Today, these terminals are generally personal computers and use a network to send and
receive information to the multi-user computer system. Examples of multi-user operating systems
are UNIX, Linux (a UNIX clone) and mainframes such as the IBM AS/400.
A multi-user operating system must manage and run all user requests, ensuring they do not
interfere with each other. Devices that are serial in nature (devices which can only be used by
one user at a time, like printers and disks) must be shared amongst all those requesting them (so
that all the output documents are not jumbled up).
If each user tried to send their document to the printer at the same time, the end result would be
garbage. Instead, documents are sent to a queue, and each document is printed in its entirety
before the next document to be printed is retrieved from the queue. When you wait in-line at the
cafeteria to be served you are in a queue. Imagine that all the people in the queue are documents
waiting to be printed and the cashier at the end of the queue is the printer.
A network operating system is an operating system that contains components and programs
that allow a computer on a network to serve requests from other computers for data and to
provide access to other resources such as printers and file systems.
Features
• Add, remove and manage users who wish to use resources on the network.
• Allow users to have access to the data on the network. This data commonly resides on the
server.
• Allow users to access data found on other networks such as the internet.
• Allow users to access hardware connected to the network.
• Protect data and services located on the network.
• Enable users to pass documents around the attached network.
JOB CONTROL
Job control is the control of multiple tasks/jobs, ensuring that each has access to adequate
resources to perform correctly, that competition for limited resources does not cause a deadlock
where two or more jobs are unable to complete, resolving such situations where they do occur,
and terminating jobs that, for any reason, are not performing as expected.
Command language interfaces use an artificial language much like a programming language.
They usually permit a user to combine constructs in new and complex ways, and hence are more
powerful for advanced users. For them, a command language provides a strong feeling that they
are in charge and that they are taking the initiative rather than responding to the computer.
Command language users must learn the syntax, but they can often express complex requests
without distracting prompts. Command language interfaces are also the style most amenable to
programming, that is, writing programs or scripts of user input commands.
PROCESS MANAGEMENT
What is a process?
A process is a sequential program in execution. The components of a process are the following:
A process comes into being or is created in response to a user command to the OS. Processes
may also be created by other processes e.g. in response to exception conditions such as errors or
interrupts.
PROCESS STATES
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Process state determines the effect of the instructions i.e. everything that
can affect, or be affected by the process. It usually includes code, particular data values, open
files, registers, memory, signal management information etc. We can characterize the behavior of
an individual process by listing the sequence of instructions that execute for that process. Such a
listing is called the trace of the process.
A transition from one process state to another is triggered by various conditions such as interrupts
and user instructions to the OS. Execution of a program involves creating and running to
completion a set of processes which require varying amounts of CPU, I/O and memory resources.
The OS must know specific information about processes in order to manage and control them.
Also to implement the process model, the OS maintains a table (an array of structures), called the
process table, with one entry per process.
PCB information is usually grouped into two categories: process state information and process
control information. A PCB typically includes:
i. Process state. The state may be new, ready, running, waiting, halted, and so on.
ii. Program counter. The counter indicates the address of the next instruction to be executed
for this process.
iii. CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.
iv. CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
v. Memory-management information. This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on
the memory system used by the OS.
vi. Accounting information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
vii. I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
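The PCB fields above can be sketched as a record; the field names below are illustrative, not any real OS's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"           # i.   process state
    program_counter: int = 0     # ii.  address of the next instruction
    registers: dict = field(default_factory=dict)   # iii. CPU registers
    priority: int = 0            # iv.  CPU-scheduling information
    base: int = 0                # v.   memory management: base register
    limit: int = 0               #      and limit register
    cpu_time_used: float = 0.0   # vi.  accounting information
    open_files: list = field(default_factory=list)  # vii. I/O status

pcb = PCB(pid=42, state="ready", base=0x4000, limit=0x1000)
print(pcb.pid, pcb.state)
```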
OS must make sure that processes don’t interfere with each other, this means
The dispatcher (short-term scheduler) is the innermost portion of the OS that runs processes.
When a process is not running, its state must be saved in its process control block. Items saved
include:
i. Program counter
ii. Process status word (condition codes etc.).
iii. General purpose registers
iv. Floating - point registers etc.
When no longer needed, a process (but not the underlying program) can be deleted via the OS,
which means that all record of the process is obliterated and any resources currently allocated to
it are released.
The principal responsibility of the OS is to control the execution of a process; this includes
determining the interleaving pattern for execution and allocating resources to processes.
1. Two state
We can construct the simplest model by observing that a process is either being executed or
not, i.e. running or not running.
Each process must be represented in some way so that the OS can keep track of it, i.e. by the
process control block. Processes that are not running must be kept in some sort of queue,
awaiting their turn to execute. There is a single queue in which each entry is a pointer to the
PCB of a particular process.
[Figure: two-state model – a process Enters the Not Running state, is Dispatched to Running, may Pause back to Not Running, and Exits from Running]
2. Three state
[Figure: three-state model – Submit → Ready; Ready –Dispatch→ Running (Active); Running –Delay/Suspend→ Blocked; Blocked –Resume (Wake up)→ Ready; Running → Completion]
i. Ready: The process is waiting to be assigned to a processor i.e. It can execute as soon as
CPU is allocated to it.
ii. Running: The process is being executed i.e. actually using the CPU at that instant
iii. Waiting/blocked: The process is waiting for some event to occur (e.g., waiting for I/O
completion), such as completion of another process that provides the first process with
necessary data, a synchronization signal from another process, or an I/O or timer interrupt.
3. Five State
In this model two states have been added to the three-state model, i.e. the new and exit states.
The new state corresponds to a process that has just been defined, e.g. a new user trying to log
onto a time-sharing system. In this instance, any tables needed to manage the process are
allocated and built.
In the new state the OS has performed the necessary action to create the process but has not
committed itself to the execution of the process i.e. the process is not in the main memory.
[Figure: five-state model – New –Admit→ Ready –Dispatch→ Running –Release→ Exit; Running –Time out→ Ready; Running –Event wait→ Blocked; Blocked –Event occurs→ Ready]
i. Running: The process is currently being executed i.e. actually using the CPU at that
instant
ii. Ready: The process is waiting to be assigned to a processor i.e. It can execute as soon as
CPU is allocated to it.
iii. Waiting/blocked: The process is waiting for some event to occur (e.g., waiting for I/O
completion), such as completion of another process that provides the first process with
necessary data, a synchronization signal from another process, or an I/O or timer interrupt.
iv. New: The process has just been created but has not yet been admitted to the pool of
executable processes by the OS, i.e. the new process has not been loaded into main
memory although its PCB has been created.
v. Terminated/exit: The process has finished execution or has been released from the pool
of executable processes by the OS, either because it halted or because it was aborted for
some reason.
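The legal transitions of the five-state model can be sketched as a small table-driven state machine (the event names follow the diagram labels above):

```python
# (state, event) -> next state; anything else is an illegal transition.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "time out"): "ready",
    ("running", "event wait"): "blocked",
    ("blocked", "event occurs"): "ready",
    ("running", "release"): "exit",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} from {state!r}")

# Walk one process through its whole life cycle.
s = "new"
for event in ["admit", "dispatch", "event wait", "event occurs",
              "dispatch", "release"]:
    s = step(s, event)
print(s)  # exit
```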
When a new process is to be added to those currently being managed, the OS builds the data
structures that are used to manage the process and allocates address space to the process.
Reasons for process termination include:
i. Normal completion
The process executes an OS service call to indicate that it has completed running
ii. Time limit exceeded
The process has run longer than the specified total time limit
iii. Memory unavailable: The process requires more memory than the system can provide
iv. Bound variation
The process tries to access memory location that it is not allowed to access
v. Protection error
The process attempts to use a resource or a file that it is not allowed to use, or it tries to
use it in an improper way, such as writing to a read-only file
vi. Arithmetic error
The process tries a prohibited computation, e.g. division by zero, or tries to store a number
larger than the hardware can accommodate
vii. Time overrun
The process has waited longer than a specified maximum time for a certain event to occur
viii. I/O failure
An error occurs during I/O, such as the inability to find a file, or failure to read or write
after a specified number of attempts
ix. Invalid instruction
The process attempts to execute a non-existing instruction
x. Data misuse
A piece of data is of the wrong type or is not initialized
xi. Operator / OS intervention
For some reason the operator or the OS has terminated the process, e.g. if a deadlock exists
INTER-PROCESS COMMUNICATION (IPC)
IPC is a capability supported by some operating systems that allows one process to communicate
with another process. The processes can be running on the same computer or on different
computers connected through a network.
IPC enables one application to control another application, and for several applications to share
the same data without interfering with one another. IPC is required in all multiprocessing
systems.
Definitions of Terms
1. Race Conditions
A race condition is a situation in which several processes or threads access (read/write) and
manipulate shared data concurrently, causing wrong output. The final value of the shared data
depends upon which process finishes last. To prevent race conditions, concurrent processes must
be synchronized.
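A sketch of the cure: guarding the shared read-modify-write with a lock so two threads' updates cannot interleave. Removing the lock makes the final count unpredictable, which is the race condition itself:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    global counter
    for _ in range(n):
        with lock:          # without this, two threads can read the same
            counter += 1    # old value and one update is silently lost

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; often less without it
```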
2. Critical Sections
These are sections in a process during which the process must not be interrupted, especially when
the resource it requires is shared. It is necessary to protect critical sections with interlocks which
allow only one thread (process) at a time to traverse them.
3. Sleep and Wakeup
These are inter-process communication primitives that block instead of wasting CPU time when
the processes are not allowed to enter their critical sections. One of the simplest is the pair
SLEEP and WAKEUP.
SLEEP is a system call that causes the caller to block, that is, be suspended until another process
wakes it up. The WAKEUP call has one parameter, the process to be awakened.
E.g. the producer-consumer problem, where the producer puts information into a buffer and the
consumer takes it out. The producer goes to sleep if the buffer is already full, to be awakened
when the consumer has removed one or more items. Similarly, if the consumer wants to remove an
item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts
something in the buffer and wakes it up.
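A hedged sketch of this producer-consumer pattern, using Python's `threading.Condition` as the sleep/wakeup mechanism (the buffer capacity and item count are arbitrary choices for the demonstration):

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
cond = threading.Condition()

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == CAPACITY:   # buffer full: producer sleeps
                cond.wait()
            buffer.append(item)
            cond.notify_all()                # wake a sleeping consumer

def consumer(n, out):
    for _ in range(n):
        with cond:
            while not buffer:                # buffer empty: consumer sleeps
                cond.wait()
            out.append(buffer.popleft())
            cond.notify_all()                # wake a sleeping producer

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
print(out)  # items arrive in FIFO order: [0, 1, ..., 9]
```

The `while` loops (rather than `if`) guard against the lost-wakeup problem that plagues naive SLEEP/WAKEUP implementations.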
4. Event counters
An event counter is another data structure that can be used for process synchronization. Like a
semaphore, it has an integer count and a set of waiting process identifications. Unlike
semaphores, the count variable only ever increases. The scheme uses a special kind of variable
called an event counter, E, with three operations: read(E) returns its current value, advance(E)
atomically increments it, and await(E, v) blocks the caller until E reaches the value v.
Before a process can access a resource, it first reads E; if the value is acceptable it proceeds
and later advances E, otherwise it awaits until the value v is reached.
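One possible realization of an event counter, built on a Python condition variable (the class and method names are invented here; `await` is a reserved word in modern Python, hence `await_value`):

```python
import threading
import time

class EventCounter:
    """Event counter: the count only ever increases."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def read(self):
        with self._cond:
            return self._value

    def advance(self):
        with self._cond:
            self._value += 1
            self._cond.notify_all()   # wake processes blocked in await_value

    def await_value(self, v):
        with self._cond:
            while self._value < v:    # block until the counter reaches v
                self._cond.wait()

ec = EventCounter()

def worker():
    for _ in range(3):
        time.sleep(0.01)
        ec.advance()

t = threading.Thread(target=worker)
t.start()
ec.await_value(3)   # blocks until the worker has advanced three times
t.join()
print(ec.read())    # 3
```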
5. Message Passing
When processes interact with one another, two fundamental requirements must be satisfied:
synchronization and communication. One approach to providing both of these functions is message
passing. Consider a system in which each processor is a combination of a processing element (PE)
and a local main memory, possibly with some external communication (I/O) facilities: the
processing elements communicate via messages transmitted between their local memories. A process
transmits a message to other processes to indicate its state and the resources it is using.
Message passing uses two primitives, SEND and RECEIVE, which are system calls. SEND sends a
message to a given destination, and RECEIVE receives a message from a given source.
Synchronization
Communicating a message between two processes demands some level of synchronization, since there
is a need to know what happens after a send or receive primitive is issued.
The sender and the receiver can each be blocking or non-blocking. Three combinations are common,
though a particular system usually implements only one:
i. Blocking send, blocking receive. Both the sender and the receiver are blocked until the
message is delivered. This allows for tight synchronization.
ii. Non-blocking send, blocking receive. The sender may continue on, but the receiver is
blocked until the requested message arrives. This method is effective because it allows a
process to send messages to a variety of destinations as quickly as possible.
iii. Non-blocking send, non-blocking receive. Neither party is required to wait. Useful for
concurrent programming.
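A small sketch of SEND and RECEIVE using Python's thread-safe `queue.Queue` as the channel (message contents and names are illustrative); the queue's `get` call demonstrates both blocking and non-blocking receive:

```python
import queue
import threading

mailbox = queue.Queue()   # the channel between sender and receiver

def sender():
    for msg in ["ping", "pong", "done"]:
        mailbox.put(msg)            # enqueue a message (SEND)

def receiver(received):
    while True:
        msg = mailbox.get()         # blocking RECEIVE: waits for a message
        received.append(msg)
        if msg == "done":
            break

received = []
s = threading.Thread(target=sender)
r = threading.Thread(target=receiver, args=(received,))
s.start(); r.start(); s.join(); r.join()
print(received)  # ['ping', 'pong', 'done']

# A non-blocking receive raises queue.Empty instead of waiting:
try:
    mailbox.get(block=False)
except queue.Empty:
    pass                            # no message available; continue on
```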
Addressing
When a message is to be sent, it is necessary to specify in the send primitive which process is to
receive it. This can be done with either direct addressing or indirect addressing.
Direct addressing
The send primitive includes a specific identifier of the destination process. There are two
ways to handle the receive primitive.
i. Require that the process explicitly designate a sending process, i.e. a process must
know ahead of time from which process a message is expected.
ii. Use implicit addressing, where the source parameter of the receive primitive holds a
value returned when the receive operation has been performed.
Indirect addressing
In this case, instead of sending a message directly to the receiver, the message is sent to a
shared data structure consisting of a queue that can temporarily hold messages. Such queues are
often referred to as mailboxes.
A typical message format includes a header carrying the message type, destination ID, source ID,
message length, and control information.
6. Equivalence of primitives
Many new IPC primitives have been proposed, such as sequencers, path expressions, and serializers,
but they are similar to the existing ones. One can build new methods or schemes from the four
basic inter-process communication primitives: semaphores, monitors, messages, and event counters.
The following outlines the essential equivalence of semaphores, monitors, and messages.
1. Mutual Exclusion
Mutual exclusion is a way of making sure that if one process is using shared modifiable data,
the other processes are excluded from doing the same thing; that is, no two processes are in
their critical sections at the same time. There are three broad approaches:
i. Leave the responsibility with the processes themselves: this is the basis of most
software approaches. These approaches are usually highly error-prone and carry high
overheads.
ii. Allow access to shared resources only through special-purpose machine instructions: i.e.
a hardware approach. These approaches are faster but still do not offer a complete
solution to the problem, e.g. they cannot guarantee the absence of deadlock and starvation.
iii. Provide support through the operating system, or through the programming language. We
shall outline three approaches in this category: semaphores, monitors, and message
passing.
2. Semaphores
A semaphore is an integer variable used for controlling access, by multiple processes, to a
common resource in a concurrent system such as a multiprogramming operating system.
Advantages:
i. Machine independent.
ii. Simple.
iii. Powerful (embody both exclusion and waiting).
Weaknesses:
i. Semaphores do not completely eliminate race conditions and other problems (like
deadlock).
ii. Incorrect formulation of solutions, even those using semaphores, can result in problems.
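A short illustration, assuming Python's `threading.Semaphore` (the count of 2 and the bookkeeping variables are invented for the demonstration): the semaphore bounds how many threads are simultaneously inside the critical region:

```python
import threading
import time

# Counting semaphore initialized to 2: at most two threads may hold it at once.
sem = threading.Semaphore(2)
active = 0
peak = 0
guard = threading.Lock()   # protects the bookkeeping counters themselves

def worker():
    global active, peak
    with sem:                      # wait (P): blocks while the count is zero
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)           # simulate using the shared resource
        with guard:
            active -= 1
    # leaving the with-block signals (V), incrementing the count

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2
```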
3. Monitor
A monitor is a collection of procedures, variables, and data structures that are all grouped
together in a special kind of module or package. Thus a monitor has: shared data, a set of atomic
operations on the data, and a set of condition variables, each used to block a thread until a
particular condition is true. Monitors can be embedded in a programming language, so it is
usually the compiler that implements them.
Typical implementation: each monitor has a lock. Acquire the lock when beginning a monitor
operation, and release the lock when the operation finishes.
Advantages:
i. Reduces probability of error, biases programmer to think about the system in a certain
way
Disadvantages:
i. Limited concurrency: if a monitor encapsulates a resource, only one process can be
active within the monitor at a time, and nested monitor calls raise the possibility of
deadlock.
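The monitor pattern above can be sketched as a class whose every public operation takes the same lock, with a condition variable for waiting (the `Account` example and its names are hypothetical):

```python
import threading

class Account:
    """Monitor-style class: every public operation holds the monitor lock,
    and a condition variable blocks withdrawals until funds are available."""
    def __init__(self):
        self._balance = 0
        self._lock = threading.Lock()
        self._funds_available = threading.Condition(self._lock)

    def deposit(self, amount):
        with self._lock:                       # enter the monitor
            self._balance += amount
            self._funds_available.notify_all() # signal waiting withdrawals

    def withdraw(self, amount):
        with self._lock:                       # enter the monitor
            while self._balance < amount:      # wait on the condition variable
                self._funds_available.wait()
            self._balance -= amount

    def balance(self):
        with self._lock:
            return self._balance

acct = Account()
w = threading.Thread(target=acct.withdraw, args=(50,))
w.start()              # blocks: balance is 0
acct.deposit(30)       # not enough yet; withdrawer keeps waiting
acct.deposit(30)       # now 60 >= 50: withdrawer proceeds
w.join()
print(acct.balance())  # 10
```

The compiler support described in the text amounts to generating exactly this lock-acquire/lock-release boilerplate automatically around every monitor procedure.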
4. Deadlock
A deadlock is a situation in which two or more processes sharing the same resource are
effectively preventing each other from accessing the resource, resulting in those processes
ceasing to function.
i. A preemptable resource is one that can be taken away from the process with no ill
effects. Memory is an example of a preemptable resource. On the other hand,
ii. A nonpreemptable resource is one that cannot be taken away from a process without
causing ill effects. For example, a CD recorder is not preemptable at an arbitrary
moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.
Mutual exclusion: the resources involved are non-shareable. At least one resource must be held
in a non-shareable mode, that is, only one process at a time claims exclusive control of the
resource. If another process requests that resource, the requesting process must be delayed
until the resource has been released.
Circular wait: the processes in the system form a circular list or chain where each process in
the list is waiting for a resource held by the next process in the list.
Deadlock Prevention
Havender in his pioneering work showed that since all four of the conditions are necessary for
deadlock to occur, it follows that deadlock might be prevented by denying any one of the
conditions.
High cost: when a process releases resources, it may lose all its work to that point.
One serious consequence of this strategy is the possibility of indefinite
postponement (starvation): a process might be held off indefinitely as it repeatedly
requests and releases the same resources.
1 ≡ Card reader
2 ≡ Printer
3 ≡ Plotter
4 ≡ Tape drive
5 ≡ Card punch
Now the rule is this: processes can request resources whenever they want to, but all
requests must be made in numerical order. A process may request first a printer and then a
tape drive (order: 2, 4), but it may not request first a plotter and then a printer (order: 3,
2). The problem with this strategy is that it may be impossible to find an ordering that
satisfies everyone.
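The numerical-ordering rule can be sketched as follows (Python locks stand in for the numbered devices, following the resource numbers listed above; the job structure is invented for the demonstration):

```python
import threading

# One lock per numbered resource, following the ordering from the text.
resources = {2: threading.Lock(),   # printer
             4: threading.Lock()}   # tape drive

done = []

def use_resources(needed):
    # Acquire locks in ascending numerical order: since every thread climbs
    # the same ladder, a circular wait (and hence deadlock) is impossible.
    held = sorted(needed)
    for rid in held:
        resources[rid].acquire()
    try:
        pass                         # ... use the printer and tape drive ...
    finally:
        for rid in reversed(held):
            resources[rid].release()

def job():
    for _ in range(1000):
        use_resources([4, 2])        # request order is normalized internally
    done.append(True)

threads = [threading.Thread(target=job) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(done))  # 2: both jobs completed without deadlock
```

If one thread instead acquired 4 before 2 while the other acquired 2 before 4, the two could deadlock; the sort removes that possibility.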
Deadlock Avoidance
Either: each process provides in advance the maximum number of resources of each type it needs.
With this information, there are algorithms that can ensure the system will never enter a
deadlock state. This is deadlock avoidance.
A sequence of processes <P1, P2, …, Pn> is a safe sequence if for each process Pi in the
sequence, its resource requests can be satisfied by the remaining resources and the sum of all
resources that are being held by P1, P2, …, Pi-1. This means we can suspend Pi and run P1, P2,
…, Pi-1 until they complete. Then, Pi will have all resources to run.
A state is safe if the system can allocate resources to each process (up to its maximum, of course)
in some order and still avoid a deadlock. In other words, a state is safe if there is a safe
sequence. Otherwise, if no safe sequence exists, the system state is unsafe. An unsafe state is
not necessarily a deadlock state; on the other hand, a deadlock state is always an unsafe state.
Page 38 of 42 Nachu TVC ICT Department
By phillis
CCIT Module 1 Operating System
Example: a system has 12 tapes and three processes A, B, and C. A holds 5 tapes (maximum need
10), B holds 2 (maximum need 4), and C holds 2 (maximum need 9).
Then <B, A, C> is a safe sequence (safe state). The system has 12-(5+2+2)=3 free tapes. Since B
needs 2 more tapes, it can take 2, run, and return its 4, after which the system has (3-2)+4=5
free tapes. A can now take all 5, run, and return its 10 tapes, of which C can take the 7 it
still needs.
Now suppose that at time t1, C takes one more tape, leaving 2 free. At this point only B can take
these 2 and run; it returns 4, making 4 free tapes available. But A still needs 5 more and C
needs 6 more, so neither can finish: the state is unsafe.
OR
A deadlock avoidance algorithm ensures that the system is always in a safe state. Therefore, no
deadlock can occur. Resource requests are granted only if in doing so the system is still in a safe
state.
Consequently, resource utilization may be lower than in systems that do not use a deadlock
avoidance algorithm.
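A sketch of the safety check for a single resource type, following the tape example (the function name and structure are our own; real systems track vectors, one entry per resource type):

```python
def is_safe(available, allocation, max_need):
    """Banker's-style safety check for one resource type: returns a safe
    sequence of process indices, or None if the state is unsafe."""
    n = len(allocation)
    need = [max_need[i] - allocation[i] for i in range(n)]
    work = available
    finished = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and need[i] <= work:
                work += allocation[i]   # process i runs and returns its tapes
                finished[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finished) else None

# The 12-tape example: A holds 5 (max 10), B holds 2 (max 4), C holds 2 (max 9),
# so 3 tapes are free. Processes are indexed A=0, B=1, C=2.
print(is_safe(3, [5, 2, 2], [10, 4, 9]))   # [1, 0, 2]: B, then A, then C
# After C takes one more tape, only 2 are free and the state is unsafe.
print(is_safe(2, [5, 2, 3], [10, 4, 9]))   # None
```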
Deadlock Detection
Deadlock detection is the process of actually determining that a deadlock exists and identifying
the processes and resources involved. The basic idea is to check allocation against resource
availability for all possible allocation sequences to determine whether the system is in a
deadlocked state. Of course, the detection algorithm is only half of this strategy: once a
deadlock is detected, there needs to be a way to recover, and several alternatives exist.
These methods are expensive in the sense that each iteration calls the detection algorithm until
the system proves to be deadlock free. The complexity of the algorithm is O(N²), where N is the
number of processes. Another potential problem is starvation: the same process may be killed
repeatedly.
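When each blocked process waits on exactly one other process, the detection check reduces to finding a cycle in the wait-for graph. A sketch under that simplifying assumption (the dictionary representation and process names are inventions of this example):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: process_it_waits_on},
    assuming each process has at most one outstanding request.
    A cycle means the processes on it are deadlocked."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:      # follow the chain of waits
            if node in seen:
                return True          # revisited a node: circular wait
            seen.add(node)
            node = wait_for[node]
    return False

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a deadlock cycle.
print(has_deadlock({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
# A chain with no cycle is not a deadlock.
print(has_deadlock({"P1": "P2", "P2": "P3"}))              # False
```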
✓ Process termination:
This is a method in which all the processes grouped into the deadlock cycle are aborted. It can
be done in two ways.
• The first method is to abort all the processes in the deadlock cycle. This comes at
great expense, because many of these processes may be about to finish, and it forces
them to be recomputed from scratch.
• The second method aborts only a single process from the deadlock cycle and then checks
for deadlock again. If a deadlock cycle still exists, another process is aborted and the
check is repeated, continuing until the system recovers from the deadlock. This method,
too, may abort a process that is about to complete, which must then be executed again
from the beginning.
✓ Resource preemption: