OS Notes 543322
BSc I SEMESTER
Operating System
UNIT I:
Introduction: Basics of Operating Systems: Definition, types of Operating
Systems, OS Services, System Calls, OS structure: Layered, Monolithic,
Microkernel Operating Systems – Concept of Virtual Machine.
UNIT II:
Process Management: Process Definition, Process Relationship, Process
states, Process State transitions, Process Control Block, Context switching,
Threads, Concept of multithreads, Benefits of threads, Types of threads.
Process Scheduling: Definition, Scheduling objectives, Types of Schedulers,
CPU scheduling algorithms, performance evaluation of the scheduling.
UNIT III:
Inter-process Communication, Race Conditions, Critical Section, Mutual
Exclusion, Hardware Solution, Strict Alternation, Peterson’s Solution, The
Producer Consumer Problem, Semaphores, Event Counters, Monitors,
Message Passing, and Classical IPC Problems.
Deadlocks: Definition, Deadlock characteristics, Deadlock Prevention,
Deadlock Avoidance (concepts only).
UNIT IV:
Memory Management: Logical and Physical address map, Memory
allocation, Internal and External fragmentation and Compaction, Paging.
Virtual Memory: Demand paging, Page Replacement policies.
UNIT V:
I/O Management: Principles of I/O Hardware: Disk structure, Disk
scheduling algorithms.
File Management: Access methods, File types, File operation, Directory
structure, File System structure, Allocation methods, Free-space
management, and directory implementation. Structure of Linux Operating
System, Exploring the Directory Structure, Naming Files and Directories,
Concept of shell, Types of shell, Editors for shell programming (e.g. vi), basics
of Shell programming.
UNIT I
Introduction: Basics of Operating Systems: Definition, types of
Operating Systems, OS Service, System Calls, OS structure:
Layered, Monolithic, Microkernel Operating Systems – Concept of
Virtual Machine.
User Views:-
The user view of the computer depends on the interface used.
∙ Some users may use PCs. In this case the system is designed so that only
one user can utilize the resources, mostly for ease of use, where the
attention is mainly on performance and not on resource utilization.
∙ Some users may use a terminal connected to a mainframe or a
minicomputer.
∙ Other users may access the same computer through other terminals.
These users may share resources and exchange information. In this case
the OS is designed to maximize resource utilization, so that all available
CPU time, memory & I/O are used efficiently.
∙ Other users may sit at workstations connected to networks of other
workstations and servers. In this case the OS is designed to compromise
between individual usability & resource utilization.
System Views:
∙ We can view a system as a resource allocator, i.e. a computer system has
many resources that may be used to solve a problem. The OS acts as the
manager of these resources. The OS must decide how to allocate these
resources to programs and users so that it can operate the computer
system efficiently and fairly.
∙ A different view of an OS is that it needs to control various I/O devices &
user programs, i.e. an OS is a control program used to manage the
execution of user programs and to prevent errors and improper use of
the computer.
∙ Resources include CPU time, memory space, file storage space, I/O
devices and so on. The OS must support the tasks described under OS
Services below.
Drawback of Early (First-Generation) OS
The CPU may remain idle for some time because the speed of mechanical
I/O devices is slower than that of electronic devices.
In order to eliminate this drawback, batch operating systems were used to
perform the task of batching jobs.
Advantages:
⮚ Simple, Sequential job Scheduling.
⮚ Human intervention is minimized.
⮚ Increased performance & System throughput due to batching of jobs.
Disadvantages:
⮚ Turnaround time can be large from the user's point of view due to
batching.
⮚ It is difficult to debug programs.
⮚ A job can enter an infinite loop.
⮚ A job could corrupt the monitor.
⮚ Due to the lack of a protection scheme, one job may affect pending
jobs.
Example: Payroll systems, bank statements, etc.
Note: Turnaround time means the time elapsed between the submission of a
process or job by a user and the completion of that process or job.
Spooling:
⮚ SPOOL(Simultaneous Peripheral Operation On-Line)
⮚ Spooling is a process in which data is temporarily held to be used and
executed by a device, program or the system.
⮚ Data is sent to and stored in memory or other volatile (temporary)
storage until the program or computer requests it for execution.
⮚ Consider a time-sharing system with six users (the referenced figure is
omitted here): user 5 is active while users 1, 2, 3 and 4 are in the waiting
state and user 6 is in the ready state.
⮚ As soon as the time slice of user 5 is completed, control moves on to the
next ready user, i.e. user 6. In this state users 2, 3, 4 and 5 are waiting
and user 1 is ready. The process continues in the same way, and so on.
Disadvantages of RTOS:
∙ Limited tasks: Very few tasks run at the same time, and concentration
is kept on a few applications in order to avoid errors.
∙ Heavy use of system resources: Sometimes the system resources
required are not so good, and they are expensive as well.
∙ Complex algorithms: The algorithms are very complex and difficult for
the designer to write.
∙ Device drivers and interrupt signals: It needs specific device drivers
and interrupt signals to respond to interrupts as early as possible.
∙ Thread priority: It is not good to set thread priority, as these systems
switch tasks only rarely.
❖OS Services
∙ I/O Operation:
∙ Accomplish the task of device allocation and control of I/O devices.
∙ Provide for notifying device errors, device status, etc.
∙ Communication:
∙ Accomplish the task of inter-process communication either on the
same computer system or between different computer systems on a
computer network.
∙ Provide for Message passing and shared memory access in safe mode.
∙ Error detection:
∙ The operating system should take appropriate action on the occurrence
of errors of any type, like arithmetic overflow, divide-by-zero errors,
access to an illegal memory location, or too large a user CPU time.
∙ Accomplish the task of error detection and recovery, if any. Ex: paper
jam or out of paper in a printer.
∙ Keep track of the status of the CPU, memory, I/O devices, storage
devices, file system, networking, etc.
∙ Abort execution in case of fatal errors such as RAM parity errors or
power fluctuations, if any.
∙ Resource Allocation:
∙ Accomplish the task of resource allocation to multiple jobs.
∙ Reclaim the allocated resources after their use, or as and when the job
terminates.
∙ When multiple users are logged on to the system, the resources must
be allocated to each of them.
∙ For the distribution of resources among the various processes, the
operating system uses CPU scheduling at run time to determine which
process will be allocated a given resource.
∙ Accounting:
∙ The operating system keeps track of which users use how many and
which kinds of computer resources.
∙ Maintain logs of system activities for performance analysis and error
recovery.
∙ Protection:
∙ Accomplish the task of protecting the system resources against
malicious use.
∙ Provide for safe computing by employing security scheme against
unauthorized access/users.
∙ Authenticate legitimate users with login passwords and registrations.
∙ The operating system is responsible for both hardware as well as
software protection.
∙ The operating system protects the information stored in a multiuser
computer system.
❖System Calls
⮚ System calls provide the interface between a process & the OS. These
are usually available in the form of assembly-language instructions.
⮚ Some systems allow system calls to be made directly from a high-level
language program like C, BCPL and PERL.
⮚ System calls occur in different ways depending on the computer in
use. System calls can be roughly grouped into 5 major categories:
process control (e.g. create or terminate a process, load, execute, wait),
file management (e.g. create, delete, open, close, read and write files),
device management, information maintenance and communication.
The last three are elaborated below.
3. Device Management:
∙ Request device, release device: If there are multiple users of the system,
we must first request the device. After we have finished with the device,
we must release it.
∙ Read, write, reposition: Once the device has been requested &
allocated to us, we can read, write & reposition the device.
4. Information Maintenance/Management:
∙ Get time or date, set time or date: Most systems have a system call to
return the current date & time or to set the current date & time.
∙ Get system data, set system data: Other system calls may return
information about the system, like the number of current users, the
version number of the OS, the amount of free memory, etc.
∙ Get process attributes, set process attributes: The OS keeps
information about all its processes & there are system calls to access this
information.
5. Communication Management:
∙ Create, Delete Connection.
∙ Send, Receive Message.
∙ Attach, Detach remote device (Mounting/Remote Login)
∙ Transfer status information (byte)
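As an illustration (not part of the original notes), the following minimal C
sketch assumes a POSIX system and shows calls from the first three
categories; the file name demo.txt is hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* process control: create a new process */
    if (pid == 0) {
        /* file management: create and write a file */
        int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644);
        write(fd, "hello\n", 6);         /* device management: write to the file/device */
        close(fd);                       /* release the file/device */
        exit(0);                         /* process control: terminate */
    }
    wait(NULL);                          /* process control: wait for the child */
    return 0;
}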
1. Simple Structure
❖Operating System Structure
1. Simple Structure
⮚ There are several commercial systems that don't have a well-defined
structure. Such operating systems began as small, simple & limited
systems and then grew beyond their original scope.
⮚ MS-DOS is an example of such a system. It was not divided into modules
carefully.
⮚ Another example of limited structuring is the original UNIX operating
system.
⮚ There was no CPU execution mode (user and kernel), so errors in
applications could cause the whole system to crash.
2. Monolithic Approach
⮚ A monolithic design of the operating system architecture makes no
special accommodation for the special nature of the operating system.
Although the design follows the separation of concerns, no attempt is
made to restrict the privileges granted to the individual parts of the
operating system.
⮚ The entire operating system executes with maximum privileges.
⮚ The communication overhead inside the monolithic operating system is
the same as the communication overhead inside any other software,
considered relatively low.
CP/M and DOS are simple examples of monolithic operating systems. Both
CP/M and DOS are operating systems that share a single address space with
the applications. In CP/M, the 16 bit address space starts with system
variables and the application area and ends with three parts of the
operating system, namely CCP (Console Command Processor), BDOS (Basic
Disk Operating System) and BIOS (Basic Input/Output System). In DOS, the
20 bit address space starts with the array of interrupt vectors and the
system variables, followed by the resident part of DOS and the application
area and ending with a memory block used by the video card and BIOS.
3. Layered approach:
⮚ In the layered approach, the OS is broken into a number of layers
(levels), each built on top of lower layers. The bottom layer (layer 0) is
the hardware & the topmost layer (layer N) is the user interface.
⮚ The main advantage of the layered approach is modularity.
▪ The layers are selected such that each uses the functions (or operations)
& services of only lower layers.
▪ This approach simplifies debugging & system verification, i.e. the first
layer can be debugged without concern for the rest of the system. Once
the first layer is debugged, its correct functioning is assumed while the
2nd layer is debugged, & so on.
▪ If an error is found during the debugging of a particular layer, the error
must be on that layer because the layers below it are already debugged.
Thus the design & implementation of the system are simplified when the
system is broken down into layers.
▪ Each layer is implemented using only operations provided by lower
layers. A layer doesn’t need to know how these operations are
implemented; it only needs to know what these operations do.
▪ The layered approach was first used in the THE operating system, which
was defined in six layers.
4. Microkernel approach:
This structures the operating system by removing all nonessential
portions of the kernel and implementing them as system and
user level programs.
∙ Generally they provide minimal process and memory
management, and a communications facility.
∙ Communication between components of the OS is provided by
message passing.
The benefits of the microkernel are as follows:
∙ Extending the operating system becomes much easier.
∙ Any changes to the kernel tend to be fewer, since the kernel is
smaller.
∙ The microkernel also provides more security and reliability.
❖Virtual Machines
⮚ A virtual machine takes the layered approach to its logical conclusion.
⮚ It treats hardware and the operating system kernel as though they
were all hardware.
⮚ A virtual machine provides an interface identical to the underlying
bare hardware.
⮚ The operating system creates the illusion of multiple processes, each
executing on its own processor with its own (virtual) memory.
⮚ The resources of the physical computer are shared to create the virtual
machines.
⮚ CPU scheduling can create the appearance that users have their own
processor.
⮚ Spooling and a file system can provide virtual card readers and virtual
line printers.
⮚ A normal user time-sharing terminal serves as the virtual machine
operator’s console.
Drawback:
∙ Virtual machines incur increased system overhead due to the heavy
simulation of virtual machine operations.
∙ The efficiency of a VM OS depends upon the number of operations that
must be simulated by the VM monitor.
UNIT II
Process Management: Process Definition, Process Relationship, Process
states, Process State transitions, Process Control Block, Context
switching, Threads, Concept of multithreads, Benefits of threads, Types
of threads.
Process Scheduling: Definition, Scheduling objectives, Types of
Schedulers, CPU scheduling algorithms, performance evaluation of the
scheduling.
Process Management:
⮚ A program does nothing unless its instructions are executed by a
CPU. A process is a program in execution. A time-shared user program
such as a compiler is a process. A word-processing program being run by
an individual user on a PC is a process.
⮚ A system task such as sending output to a printer is also a process. A
process needs certain resources including CPU time, memory files & I/O
devices to accomplish its task.
⮚ These resources are either given to the process when it is created or
allocated to it while it is running.
⮚ The OS is responsible for the following activities of process
management.
o Creating & deleting both user & system processes.
o Suspending & resuming processes.
o Providing mechanism for process synchronization.
o Providing mechanism for process communication.
o Providing mechanism for deadlock handling.
Process:
⮚ A process or task is an instance of a program in execution.
⮚ The execution of a process must progress in a sequential manner.
⮚ At any time, at most one instruction is executed on behalf of the process.
⮚ The process includes the current activity as represented by the value of
the program counter and the contents of the processor's registers. It also
includes the process stack, which contains temporary data (such as
method parameters, return addresses and local variables), & a data
section, which contains global variables. A process may also include a
heap, which is memory that is dynamically allocated during process run
time.
Difference between process & program:
⮚ A program by itself is not a process.
⮚ A program in execution is known as a process.
⮚ A program is a passive entity, such as the contents of a file stored on
disk, whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated
resources. Resources may be shared among several processes, with some
scheduling algorithm being used to determine when to stop work on one
process and service a different one.
Process Relationship:
Process States
As a process executes, it changes state. The state of a process is defined by the
current activity of that process. Each process may be in one of the following
states.
∙ New: The process is being created.
∙ Ready: The process is waiting to be assigned to a processor.
∙ Running: Instructions are being executed.
∙ Waiting: The process is waiting for some event to occur.
∙ Terminated: The process has finished execution.
Context Switching
1. When the CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process.
This is known as a context switch.
2. Context-switch time is overhead; the system does no useful work while
switching.
3. Switching speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and the
existence of special instructions. A typical speed is a few milliseconds.
4. Context-switch times are highly dependent on hardware support.
Threads
⮚ A thread is a flow of execution through the process code, with its own
program counter, system registers and stack.
⮚ Threads are a popular way to improve application performance
through parallelism.
⮚ A thread is sometimes called a light weight process.
⮚ Threads represent a software approach to improving the performance of
an operating system by reducing the overhead; a thread is equivalent to
a classical process.
⮚ Each thread belongs to exactly one process and no thread can exist
outside a process.
⮚ Each thread represents a separate flow of control.
In this model (the many-to-many model), developers can create as many
user threads as necessary, and the corresponding kernel threads can run in
parallel on a multiprocessor. It allows another thread to run when a thread
makes a blocking system call. It supports multiple threads executing in
parallel on multiprocessors.
Disadvantages:
▪ Kernel threads are generally slower to create and manage than the user
threads.
▪ Transfer of control from one thread to another within same process
requires a mode switch to the Kernel.
Benefits/Advantages of Thread
⮚ Threads minimize context-switching time.
⮚ Use of threads provides concurrency within a process.
⮚ Efficient communication.
⮚ Economy: it is more economical to create and context-switch threads.
⮚ Utilization of multiprocessor architectures: the benefits of
multithreading can be greatly increased in a multiprocessor
architecture.
Process Scheduling:
Scheduling is a fundamental function of OS. When a computer is
Multiprogrammed, it has multiple processes competing for the
CPU at the same time. If only one CPU is available, then a choice
has to be made regarding which process to execute next. This
decision-making process is known as scheduling, and the part of
the OS that makes this choice is called the scheduler. The
algorithm it uses in making this choice is called the scheduling
algorithm.
General Goals
Fairness
Fairness is important under all circumstances. A scheduler makes sure
that each process gets its fair share of the CPU and no process can suffer
indefinite postponement.
Note that giving equivalent or equal time is not fair. Think of safety
control and payroll at a nuclear plant.
Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For
example, if the local policy is safety then the safety control processes must
be able to run whenever they want to, even if it means delay in payroll
processes.
Efficiency
Scheduler should keep the system (or in particular the CPU) busy one
hundred percent of the time when possible. If the CPU and all the
Input/Output devices can be kept running all the time, more work gets
done per second than if some components are idle.
Response Time
A scheduler should minimize the response time for interactive users.
Turnaround
A scheduler should minimize the time batch users must wait for an
output.
Throughput
A scheduler should maximize the number of jobs processed per unit
time.
A little thought will show that some of these goals are contradictory. It
can be shown that any scheduling algorithm that favors some class of
jobs hurts another class of jobs. The amount of CPU time available is
finite, after all.
Non-preemptive Scheduling
A scheduling discipline is non-preemptive if, once a process has been given
the CPU, the CPU cannot be taken away from that process.
Preemptive Scheduling
∙ A scheduling discipline is preemptive if, once a process has been given
the CPU, the CPU can be taken away from it.
∙ The strategy of allowing processes that are logically runnable to be
temporarily suspended is called preemptive scheduling, and it is in
contrast to the "run to completion" method.
UNIT III
Inter-process Communication, Race Conditions, Critical Section, Mutual
Exclusion, Hardware Solution, Strict Alternation, Peterson's Solution, The
Producer Consumer Problem, Semaphores, Event Counters, Monitors,
Message Passing, and Classical IPC Problems.
Deadlocks: Definition, Deadlock characteristics, Deadlock Prevention,
Deadlock Avoidance (concepts only).
Inter-process Communication:
∙ Inter-process communication (IPC) is a mechanism which allows
processes to communicate with each other and synchronize their actions.
The communication between these processes can be seen as a method of
co-operation between them.
∙ Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
∙ A process is independent if it cannot affect or be affected by the other
processes executing in the system. Any process that does not share data
with any other process is independent.
∙ A process is cooperating if it can affect or be affected by the other
processes executing in the system. Clearly, any process that shares data
with other processes is a cooperating process.
∙ Reasons for co-operating processes:
o Information sharing: Since several users may be interested in the
same piece of information (for instance, a shared file), we must
provide an environment to allow concurrent access to such
information.
Race Conditions:
▪ A situation like this, where several processes access and manipulate the
same data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race condition.
⮚ Suppose that two processes, P1 and P2, share the global variable a. At
some point in its execution, P1 updates a to the value 1, and at some
point in its execution, P2 updates a to the value 2. Thus, the two tasks
are in a race to write variable a.
⮚ In this example the "loser" of the race (the process that updates last)
determines the final value of a.
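A minimal sketch (added here for illustration, assuming POSIX threads)
that reproduces this race: the final value of a depends on which thread
happens to write last.

#include <pthread.h>
#include <stdio.h>

int a = 0;                                    /* shared global variable */

void *p1(void *arg) { a = 1; return NULL; }   /* P1 updates a to 1 */
void *p2(void *arg) { a = 2; return NULL; }   /* P2 updates a to 2 */

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a = %d\n", a);                    /* 1 or 2, set by the "loser" */
    return 0;
}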
Critical Section:
Consider a system consisting of n processes (P0, P1, …, Pn-1). Each process
has a segment of code, known as its critical section, in which the process
may be changing common variables, updating a table, writing a file and so on.
∙ The important feature of the system is that when one process is executing
in its critical section, no other process is to be allowed to execute in its
critical section.
∙ The execution of critical sections by the processes is mutually
exclusive.
∙ The critical-section problem is to design a protocol that the processes can
use to co-operate: each process must request permission to enter its
critical section.
∙ The section of code implementing this request is the entry section. The
critical section is followed by an exit section. The remaining code is the
remainder section.
Example:
While (1)
{
Entry Section;
Critical Section;
Exit Section;
Remainder Section;
}
A solution to the critical section problem must satisfy the following three
conditions.
1. Mutual Exclusion: If process Pi is executing in its critical section,
then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only processes not
executing in their remainder sections can participate in deciding which
will enter next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There is a bound on the number of times other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
Mutual Exclusion:
Requirements for Mutual Exclusion:
1. Mutual exclusion must be enforced: only one process at a time is
allowed into its critical section, among all processes that have critical
sections for the same resource or shared object.
2. A process that halts in its non-critical section must do so without
interfering with other processes.
3. It must not be possible for a process requiring access to a critical
section to be delayed indefinitely: no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry
to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number
of processors.
6. A process remains inside its critical section for a finite time only.
Hardware Solution:
Hardware approaches to mutual exclusion.
∙ Interrupt Disabling:
In a uniprocessor machine, concurrent processes cannot be overlapped; they
can only be interleaved. Furthermore, a process will continue to run until it
invokes an operating system service or until it is interrupted. Therefore, to
guarantee mutual exclusion, it is sufficient to prevent a process from being
interrupted. This capability can be provided in the form of primitives
defined by the system kernel for disabling and enabling interrupts.
Solution to Critical-section Problem Using Locks
do
{
acquire lock
critical section;
release lock
remainder section;
} while (TRUE);
Disadvantages
∙ It works only in a single processor environment.
∙ Interrupts can be lost if not serviced promptly.
∙ A process waiting to enter its critical section could suffer from
starvation.
TestAndSet Instruction:
Definition:
boolean TestAndSet (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution:
do {
while ( TestAndSet (&lock ))
; // do nothing
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Advantages
1. It is simple and easy to verify.
2. It is applicable to any number of processes.
3. It can be used to support multiple critical sections.
Disadvantages
1. Busy waiting is possible.
2. Starvation is also possible.
3. There may be deadlock.
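On modern hardware, the TestAndSet pseudocode above corresponds to an
atomic instruction. As a sketch (added here, not part of the original notes),
C11 exposes it through atomic_flag:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     /* shared lock, initially clear */

void enter_critical(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                                /* spin until the flag was previously clear */
}

void leave_critical(void)
{
    atomic_flag_clear(&lock);            /* release the lock */
}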
Swap Instruction:
Definition:
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key);
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
Peterson’s Solution:
Problem
When one process is updating shared modifiable data in its critical section,
no other process should be allowed to enter its critical section.
Conclusion (disabling interrupts)
Disabling interrupts is sometimes a useful technique within the kernel of an
operating system, but it is not appropriate as a general mutual-exclusion
mechanism for user processes. The reason is that it is unwise to give user
processes the power to turn off interrupts.
Conclusion (lock variables)
The flaw in this proposal can best be explained by example. Suppose process
A sees that the lock is 0. Before it can set the lock to 1, another process B is
scheduled, runs, and sets the lock to 1. When process A runs again, it will
also set the lock to 1, and two processes will be in their critical sections
simultaneously.
In this proposed solution (strict alternation), the integer variable 'turn' keeps
track of whose turn it is to enter the critical section. Initially, process A
inspects turn, finds it to be 0, and enters its critical section. Process B also
finds it to be 0 and sits in a loop, continually testing 'turn' to see when it
becomes 1.
Continuously testing a variable waiting for some value to appear is called the
Busy-Waiting.
Conclusion (strict alternation):
Taking turns is not a good idea when one of the processes is much slower
than the other. Suppose process 0 finishes its critical section quickly, so both
processes are now in their noncritical sections. This situation violates
condition 3 mentioned above.
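The listing for Peterson's solution itself is missing from these notes; a
standard two-process sketch (in the style of Tanenbaum's enter_region and
leave_region) is:

int turn;                             /* whose turn it is to enter */
int interested[2];                    /* interested[i] is 1 if process i wants in */

void enter_region(int process)        /* process is 0 or 1 */
{
    int other = 1 - process;          /* number of the other process */
    interested[process] = 1;          /* show interest */
    turn = process;                   /* yield the tie-break to the other */
    while (turn == process && interested[other])
        ;                             /* busy-wait */
}

void leave_region(int process)
{
    interested[process] = 0;          /* departure from the critical section */
}

Mutual exclusion holds because the last process to set turn is the one that
waits, so both processes can never be past the while loop at the same time.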
The Producer Consumer Problem:
∙ A producer process produces information that is consumed by a
consumer process.
∙ For example, a compiler may produce assembly code that is consumed by
an assembler. The assembler, in turn, may produce object modules that
are consumed by the loader.
∙ The producer–consumer problem also provides a useful metaphor for
the client–server paradigm.
∙ One solution to the producer–consumer problem uses shared memory.
∙ To allow producer and consumer processes to run concurrently, we must
have available a buffer of items that can be filled by the producer and
emptied by the consumer.
∙ This buffer will reside in a region of memory that is shared by the
producer and consumer processes. A producer can produce one item
while the consumer is consuming another item.
∙ The producer and consumer must be synchronized, so that the consumer
does not try to consume an item that has not yet been produced.
∙ Two types of buffers can be used.
o Unbounded buffer
o Bounded buffer
∙ The unbounded buffer places no practical limit on the size of the buffer.
The consumer may have to wait for new items, but the producer can
always produce new items.
∙ The bounded buffer assumes a fixed buffer size. In this case, the consumer
must wait if the buffer is empty, and the producer must wait if the buffer
is full.
∙ Let’s look more closely at how the bounded buffer illustrates inter
process communication using shared memory. The following variables
reside in a region of memory shared by the producer and consumer
processes.
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
∙ The shared buffer is implemented as a circular array with two logical
pointers: in and out.
∙ The variable in points to the next free position in the buffer.
∙ The variable out points to the first full position in the buffer.
∙ The buffer is empty when in == out.
∙ The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
∙ The producer process has a local variable next_produced in which the
new item to be produced is stored.
∙ The consumer process has a local variable next_consumed in which the
item to be consumed is stored.
item next_produced;
while (true) {
/* produce an item in next_produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
The producer process using shared memory.
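The notes describe the consumer but omit its listing; the matching consumer
under the same shared-memory scheme would be:

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing: the buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

The consumer process using shared memory.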
PROGRAM CODING
#include <stdio.h>
#include <stdlib.h>

int mutex = 1, full = 0, empty = 3, x = 0;

/* simulated semaphore operations on plain integers */
int wait(int s)
{
    return --s;
}

int signal(int s)
{
    return ++s;
}

void producer()
{
    mutex = wait(mutex);
    full = signal(full);
    empty = wait(empty);
    x++;
    printf("\n producer produces the item %d", x);
    mutex = signal(mutex);
}

void consumer()
{
    mutex = wait(mutex);
    full = wait(full);
    empty = signal(empty);
    printf("\n consumer consumes the item %d", x);
    x--;
    mutex = signal(mutex);
}

int main()
{
    int n;
    printf("\n 1.producer\n 2.consumer\n 3.exit\n");
    while (1) {
        printf("\n enter your choice ");
        scanf("%d", &n);
        switch (n) {
        case 1:
            if ((mutex == 1) && (empty != 0))
                producer();
            else
                printf("buffer is full\n");
            break;
        case 2:
            if ((mutex == 1) && (full != 0))
                consumer();
            else
                printf("buffer is empty\n");
            break;
        case 3:
            exit(0);
        }
    }
}
Semaphores:
The integer value of the semaphore in the wait and signal operations must be
executed indivisibly. That is, when one process modifies the semaphore value,
no other process can simultaneously modify that same semaphore value.
In addition, in the case of wait(S), the testing of the integer value of S
(S ≤ 0) and its possible modification (S := S – 1) must also be executed
without interruption.
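For reference, the classical busy-waiting definitions that this passage
assumes are:

wait(S) {
    while (S <= 0)
        ;        /* busy-wait until S becomes positive */
    S--;
}

signal(S) {
    S++;
}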
Semaphore Implementation
⮚ Must guarantee that no two processes can execute wait () and signal ()
on the same semaphore at the same time
⮚ Thus, implementation becomes the critical section problem where the
wait and signal code are placed in the critical section.
⮚ Could now have busy waiting in the critical-section implementation,
but the implementation code is short, and there is little busy waiting if
the critical section is rarely occupied.
⮚ Note that applications may spend lots of time in critical sections, and
therefore this is not a good solution.
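The implementation below assumes a semaphore defined as a structure
holding an integer value and a list of waiting processes, along these lines:

typedef struct {
    int value;                 /* the semaphore count */
    struct process *list;      /* queue of processes blocked on this semaphore */
} semaphore;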
Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Semaphores are not provided by hardware. But they have several attractive
properties:
1. Semaphores are machine independent.
2. Semaphores are simple to implement.
3. Correctness is easy to determine.
4. Can have many different critical sections with different semaphores.
5. Semaphores can acquire many resources simultaneously.
Drawback of Semaphore
1. They are essentially shared global variables.
2. Access to semaphores can come from anywhere in a program.
3. There is no control or guarantee of proper usage.
4. There is no linguistic connection between the semaphore and the data
to which the semaphore controls access.
5. They serve two purposes, mutual exclusion and scheduling constraints.
Monitors:
Although semaphores provide a convenient and effective mechanism for
process synchronization, using them incorrectly can result in timing errors
that are difficult to detect, since these errors happen only if particular
execution sequences take place and these sequences do not always occur.
monitor monitor_name
{
/* shared variable declarations */
function P1 ( . . . ) {
...
}
function P2 ( . . . ) {
...
}
.
.
.
function Pn ( . . . ) {
...
}
initialization code ( . . . ) {
...
}
}
Figure: Syntax of a monitor.
∙ Thus, a function defined within a monitor can access only those variables
declared locally within the monitor and its formal parameters.
∙ Similarly, the local variables of a monitor can be accessed by only the
local functions.
∙ The monitor construct ensures that only one process at a time is active
within the monitor
∙ However, the monitor construct, as defined so far, is not sufficiently
powerful for modeling some synchronization schemes.
∙ For this purpose, we need to define additional synchronization
mechanisms.
condition x, y;
∙ The only operations that can be invoked on a condition variable are
wait() and signal().
x.wait();
∙ The operation means that the process invoking this operation is
suspended until another process invokes
x.signal();
∙ The x.signal() operation resumes exactly one suspended process.
∙ If no process is suspended, then the signal() operation has no effect.
∙ On the one hand, since P was already executing in the monitor, the
signal-and-continue method seems more reasonable. On the other hand,
if we allow thread P to continue, then by the time Q is resumed, the
logical condition for which Q was waiting may no longer hold. A
compromise between these two choices is that when thread P executes
the signal operation, it immediately leaves the monitor; hence, Q is
immediately resumed.
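As a short illustration in the same monitor notation (added here, not in the
original notes), a single-resource allocator built from one condition variable:

monitor ResourceAllocator
{
    boolean busy;
    condition x;

    acquire() {
        if (busy)
            x.wait();        /* suspend until the resource is released */
        busy = true;
    }

    release() {
        busy = false;
        x.signal();          /* resume one suspended process, if any */
    }

    initialization code() {
        busy = false;
    }
}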
Message Passing:
∙ Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same
address space.
∙ It is particularly useful in a distributed environment, where the
communicating processes may reside on different computers connected
by a network.
∙ A message-passing facility provides at least two operations:
o send(message)
o receive(message)
∙ If P and Q wish to communicate, they need to establish a communication
link between them and exchange messages via send/receive.
∙ Implementation of the communication link: physical (e.g., shared
memory, hardware bus) or logical (e.g., logical properties).
Direct Communication
∙ Processes must name each other explicitly:
o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q
∙ Properties of the communication link:
∙ Links are established automatically.
∙ A link is associated with exactly one pair of communicating processes.
∙ Between each pair there exists exactly one link.
∙ The link may be unidirectional, but is usually bi-directional.
Indirect Communication
∙ Messages are directed and received from mailboxes (also referred to as
ports)
∙ Each mailbox has a unique id
∙ Processes can communicate only if they share a mailbox
∙ Properties of the communication link:
∙ A link is established only if processes share a common mailbox.
∙ A link may be associated with many processes.
∙ Each pair of processes may share several communication links.
∙ A link may be unidirectional or bi-directional.
Operations
∙ create a new mailbox
∙ send and receive messages through mailbox
∙ destroy a mailbox
∙ Primitives are defined as:
o send(A, message) – send a message to mailbox A
o receive(A, message) – receive a message from mailbox A
∙ Mailbox sharing
Problem: Now suppose that processes P1, P2, and P3 all share mailbox A.
Process P1 sends a message to A, while both P2 and P3 execute a receive()
from A. Which process will receive the message sent by P1?
Solutions
∙ Allow a link to be associated with at most two processes.
∙ Allow only one process at a time to execute a receive operation.
∙ Allow the system to select the receiver arbitrarily. The sender is
notified who the receiver was.
Synchronization
∙ Message passing may be either blocking or non-blocking.
∙ Blocking is considered synchronous:
∙ A blocking send has the sender block until the message is received.
∙ A blocking receive has the receiver block until a message is available.
∙ Non-blocking is considered asynchronous:
∙ A non-blocking send has the sender send the message and continue.
∙ A non-blocking receive has the receiver receive a valid message or null.
Buffering
Queue of messages attached to the link; implemented in one of three ways:
1. Zero capacity – 0 messages; the sender must wait for the receiver
(rendezvous).
2. Bounded capacity – finite length of n messages; the sender must wait if
the link is full.
3. Unbounded capacity – infinite length; the sender never waits.
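As a concrete sketch of blocking send/receive (an illustration assuming a
POSIX system, using a pipe as the communication link):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[32];
    pipe(fd);                          /* create the communication link */
    if (fork() == 0) {                 /* child acts as the sender */
        close(fd[0]);
        write(fd[1], "hello", 6);      /* send(message): blocks if the pipe is full */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                      /* parent acts as the receiver */
    read(fd[0], buf, sizeof buf);      /* receive(message): blocks until data arrives */
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}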
Classical IPC Problems:
These problems are used for testing nearly every newly proposed
synchronization scheme. In our solutions to the problems, we use
semaphores for synchronization, since that is the traditional way to present
such solutions. However, actual implementations of these solutions could use
mutex locks in place of binary semaphores.
1. The Bounded-Buffer Problem
2. The Readers–Writers Problem
3. The Dining-Philosophers Problem
SYSTEM MODEL
• A system may consist of a finite number of resources distributed among a
number of processes. These resources are partitioned into several types,
each consisting of some number of identical instances.
• A process must request a resource before using it, and it must release the
resource after using it. It can request any number of resources to carry out a
designated task. The number of resources requested may not exceed the
total number of resources available.
Deadlock characteristics:
Necessary Conditions: A deadlock situation can occur if the following 4
conditions occur simultaneously in a system:-
1. Mutual Exclusion: Only one process must hold the resource at a time. If
any other process requests for the resource, the requesting process
must be delayed until the resource has been released.
2. Hold and Wait:- A process must be holding at least one resource and
waiting to acquire additional resources that are currently being held by
the other process.
3. No Preemption:- Resources can’t be preempted i.e., only the process
holding the resources must release it after the process has completed its
task.
4. Circular Wait:- A set {P0, P1, …, Pn} of waiting processes must exist such
that P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, …, Pn-1 is waiting for a resource held by Pn,
and Pn is waiting for a resource held by P0. All four conditions
must hold for a deadlock to occur.
Deadlock Prevention:
For a deadlock to occur each of the four necessary conditions must hold. If at
least one of the conditions does not hold then we can prevent occurrence of
deadlock.
4. Circular Wait: The fourth and final condition for deadlock is the
circular-wait condition. One way to ensure that this condition never holds
is to impose an ordering on all resource types and require that each process
requests resources in an increasing order; a short sketch follows.
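A minimal sketch of this ordering idea (added for illustration, using POSIX
mutexes as the resources):

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* resource of lower order */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* resource of higher order */

/* Every process acquires r1 before r2, so no circular wait can arise. */
void use_both(void)
{
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}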
Safe State:
∙ A safe state is one in which there exists at least one order in which all
the processes can run to completion without resulting in a deadlock.
∙ A system is in a safe state if there exists a safe sequence.
∙ A sequence of processes <P1, P2, …, Pn> is a safe sequence for the current
allocation state if, for each Pi, the resources that Pi can still request can
be satisfied by the currently available resources plus the resources held
by all Pj, with j < i.
∙ If the resources that Pi requests are not immediately available, then Pi
can wait until all Pj have finished and can then obtain all of its needed
resources to complete its designated task.
∙ A safe state is not a deadlock state.
∙ Whenever a process requests a resource that is currently available, the
system must decide whether the resource can be allocated immediately
or whether the process must wait. The request is granted only if the
allocation leaves the system in a safe state.
∙ In this approach, even if a process requests a resource that is currently
available, it may still have to wait. Thus resource utilization may be
lower than it would be without a deadlock-avoidance algorithm.
Banker’s Algorithm:
∙ This algorithm is applicable to a system with multiple instances of each
resource type, but it is less efficient than the resource-allocation graph
algorithm.
∙ When a new process enters the system, it must declare the maximum
number of resources that it may need. This number may not exceed the
total number of resources in the system. The system must determine
whether the allocation of these resources will leave the system in a safe
state. If so, the resources are allocated; otherwise the process must wait
until other processes release enough resources.
∙ Several data structures are used to implement the banker’s algorithm. Let
‘n’ be the number of processes in the system and ‘m’ be the number of
resources types.
Safety Algorithm:-
This algorithm is used to find out whether or not a system is in a safe
state.
Step 1: Let Work and Finish be vectors of length m and n, respectively.
Initialize:
Work = Available
For i = 1, 2, …, n,
if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
Step 2: Find an index i such that both:
Finish[i] == false
Requesti <= Work
If no such i exists, go to step 4.
Step 3: Work = Work + Allocationi
Finish[i] = true
Go to step 2.
Step 4: If Finish[i] == false for some i, 1 <= i <= n, then the system is in a
deadlock state. Moreover, if Finish[i] == false, then process Pi is
deadlocked.
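A sketch of this check in C (added for illustration; the sizes N and M are
hypothetical, and the Need matrix, i.e. Max minus Allocation, is used as in
the safety-algorithm reading):

#include <stdbool.h>

#define N 5   /* number of processes (hypothetical) */
#define M 3   /* number of resource types (hypothetical) */

bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)              /* Step 1: Work = Available */
        work[j] = available[j];

    bool progressed = true;
    while (progressed) {                     /* Steps 2 and 3 */
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool fits = true;                /* check Need_i <= Work */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                      /* Pi can finish: reclaim its resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)              /* Step 4: safe iff all can finish */
        if (!finish[i])
            return false;
    return true;
}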