OS Unit 3 Notes
UNIT-3
3.1 Inter Process Communication Mechanisms:
In an operating system, we often have multiple processes running at the same time. Sometimes, these
processes need to communicate with each other — either to share data or to inform one another that
something has happened (like an event or a status change). This communication between processes is
called Interprocess Communication (IPC). Operating systems provide special mechanisms or methods to
make this communication possible.
If these processes don’t communicate or share information, they can’t work together properly.
Processes executing in an operating system are of two types:
1. Independent Process – cannot affect or be affected by other processes and does not share data with any other process.
2. Cooperating Process – can affect or be affected by other processes and shares data with them. IPC is needed only for cooperating processes.
Here are some main reasons why cooperation among processes is important:
1. Information Sharing
Many users or programs may need access to the same file or data.
For example, multiple users may read a file stored in a shared drive.
The system should allow such safe and coordinated access.
2. Computation Speedup
If a large task is broken into smaller subtasks, and those subtasks run at the same time (in parallel),
the job finishes faster.
Example: When a video is rendered using multiple cores — each core does a part of the work.
3. Modularity
A complex system is easier to manage when it’s divided into smaller, separate parts (modules).
Each module can be created as an independent process.
This makes the system easier to develop, test, and maintain.
4. Convenience
Even a single user may work on many tasks at the same time (for example editing, printing, and compiling), and these tasks may need to cooperate.
The IPC mechanisms discussed in this unit are:
1. Message Passing
2. Pipes
3. Shared Memory
1. Message-Passing Systems
In a message-passing system, two or more processes communicate with each other by sending and
receiving messages. This model is useful when processes do not share the same memory and especially
important in distributed systems (like different computers connected over a network).
Key Points:
In this model, processes don’t share memory – they just exchange messages.
It helps synchronize the processes (i.e., keep them working in proper order).
It's mostly used when processes are on different systems or machines.
Two main actions involved:
o Send (sending a message)
o Receive (receiving a message)
Messages can be of fixed size or variable size.
Example: In an internet chat app, users communicate by sending and receiving messages.
Let’s say Process P and Process Q want to talk to each other. They can send and receive messages through
a communication link. Implementing this link raises three design issues: naming, synchronization, and buffering.
1. NAMING
Processes must know where to send or from whom to receive messages. There are two ways of doing this:
a) Direct Communication – the processes name each other explicitly, e.g., send(Q, message) and receive(P, message). A link is established between exactly one pair of processes.
b) Indirect Communication – messages are sent to and received from mailboxes (also called ports), e.g., send(A, message) and receive(A, message), where A is a mailbox shared by the processes.
2. SYNCHRONIZATION
This refers to how sending and receiving happens — whether a process waits or continues immediately.
Blocking Send: The sender waits until the message has been received by the receiving process or mailbox.
Blocking Receive: The receiver waits until a message is available.
Non-blocking Send: The sender sends the message and continues working without waiting.
Non-blocking Receive: The receiver checks — if a message is there, it receives it; if not, it moves on (it may get a null or empty response).
3. BUFFERING
When messages are sent, they are usually kept in a temporary queue until received.
There are three types of buffering based on how this queue behaves:
a) Zero Capacity
No queue at all.
Sender must wait until receiver is ready.
No message is stored in between.
b) Bounded Capacity
The queue has a finite length of n messages. If the queue is not full, the sender continues without waiting; if it is full, the sender must wait until space is available.
c) Unbounded Capacity
The queue length is (conceptually) unlimited, so the sender never waits.
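To make the bounded-capacity case concrete, here is a minimal sketch using POSIX message queues (a different API from the System V queues covered later in this unit). The queue name /demo_q, the capacity of 8 messages, and the message size are illustrative choices; on older glibc versions, link with -lrt.
#include <fcntl.h>      /* O_* constants */
#include <sys/stat.h>   /* mode constants */
#include <mqueue.h>     /* POSIX message queues */
#include <stdio.h>
#include <string.h>
int main() {
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 8;    /* bounded capacity: at most 8 queued messages */
    attr.mq_msgsize = 64;   /* each message up to 64 bytes */
    attr.mq_curmsgs = 0;
    /* Create (or open) the queue; a sender blocks only when 8 messages
       are already waiting, i.e. when the bounded buffer is full. */
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0666, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }
    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);      /* enqueue one message */
    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);    /* dequeue it again */
    printf("Received: %s\n", buf);
    mq_close(q);
    mq_unlink("/demo_q");                     /* remove the queue name */
    return 0;
}
Zero capacity cannot be expressed directly with this API (mq_maxmsg must be at least 1); it corresponds to a pure rendezvous, where the sender blocks until the receiver takes the message.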
2. Pipes
In early UNIX systems, pipes were one of the first IPC (Interprocess Communication) mechanisms used
to allow processes to communicate with each other. Pipes are of two types:
1. Ordinary Pipes
2. Named Pipes
1. Ordinary Pipes
Ordinary pipes are used for one-way communication between two related processes — usually a parent
and its child process. They follow a Producer-Consumer model.
One process writes data into one end of the pipe (called the write-end).
The other process reads data from the other end (called the read-end).
Key Points:
Ordinary pipes are unidirectional; two-way communication requires two separate pipes.
They can be used only between processes that have a parent–child relationship.
Creation in UNIX:
An ordinary pipe is created with the pipe(int fd[]) system call, where fd[0] is the read-end and fd[1] is the write-end.
Ordinary read() and write() system calls are used to communicate through the pipe.
Important:
Once the communicating processes terminate, the ordinary pipe ceases to exist – it cannot be accessed by processes outside the parent–child pair.
2. Named Pipes
Named Pipes, also known as FIFOs (First In, First Out), are used for communication between unrelated
processes. They allow two-way communication, but in a controlled manner.
Key Features:
A FIFO appears as a special file in the file system, so no parent–child relationship is required.
Several processes can open the FIFO for reading or writing.
The FIFO continues to exist until it is explicitly deleted from the file system.
Communication Nature:
Although FIFOs support two-way communication, in practice, each FIFO handles only one
direction at a time.
So if you want data to travel in both directions, you need to create two FIFOs (one for each
direction).
System Support: Both UNIX and Windows support named pipes.
Limitation:
UNIX FIFOs allow communication only between processes running on the same machine; for communication across machines, sockets must be used.
3. Shared Memory:
In the shared-memory model, a specific region of memory is shared between multiple processes that need
to cooperate and exchange data.
How It Works:
A shared memory region (or segment) is created inside a process’s address space.
Other processes that want to communicate attach this segment to their own address space.
Once attached, processes can read from or write to this shared area to exchange data.
Important Notes:
Normally, operating systems don’t allow one process to access another process’s memory for safety.
But in shared memory, this restriction is lifted by mutual agreement of the processes.
The format and location of the data in the shared region is decided by the processes themselves,
not by the OS.
It’s the responsibility of the processes to make sure they don't access or update the same memory
location at the same time, which could cause data issues.
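As a concrete illustration of this model, below is a minimal sketch using POSIX shared memory (shm_open/mmap). The object name /demo_shm and the 4 KB size are arbitrary choices, error handling is omitted, and a real application would run the writer and reader as separate processes; on older glibc versions, link with -lrt.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
int main() {
    const char *name = "/demo_shm";
    const size_t size = 4096;
    /* Producer side: create the shared memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    ftruncate(fd, size);
    /* Attach (map) the region into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    /* Any cooperating process that maps the same name sees this data. */
    strcpy(region, "Hello from shared memory");
    printf("Wrote: %s\n", region);
    /* Detach and remove the object (a real consumer would do this last). */
    munmap(region, size);
    close(fd);
    shm_unlink(name);
    return 0;
}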
IPC (Inter-Process Communication) allows processes to communicate with each other and coordinate their
actions. This is essential for processes that need to share data or synchronize execution.
There are several types of IPC mechanisms provided by UNIX/Linux:
1. Pipes
2. FIFOs (Named Pipes)
3. Semaphores (System V API)
4. Message Queues (System V API)
5. Shared Memory (System V API)
6. IPC Status Commands
1. Pipes
Definition:
A pipe is a unidirectional communication channel used between related processes (such as a parent and its
child process).
System Call:
int pipe(int fd[2]);
Parameters:
fd: An array of two file descriptors filled in by the call – fd[0] is the read end and fd[1] is the write end.
Explanation:
The pipe() system call creates a pair of file descriptors. One end is for reading and the other for writing.
Data written into fd[1] can be read from fd[0].
Example Program:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
int main() {
    int fd[2];
    char buffer[20];
    pipe(fd);                                        // create pipe
    if (fork() == 0) {                               // child process
        close(fd[0]);                                // close read end
        write(fd[1], "Hello", strlen("Hello") + 1);  // write message
    } else {                                         // parent process
        close(fd[1]);                                // close write end
        read(fd[0], buffer, sizeof(buffer));         // read message
        printf("Received: %s\n", buffer);
    }
    return 0;
}
Explanation:
A pipe is created.
The child writes "Hello" to the pipe.
The parent reads from the pipe and prints the message.
Two-way Communication Using Pipes
Definition:
Normal pipes are unidirectional. To achieve two-way communication (like a conversation), we use two
pipes.
Concept:
Two pipes are created: fd1 carries data from parent to child, and fd2 carries data from child to parent. Each process closes the pipe ends it does not use, then writes on one pipe and reads from the other.
Example Program:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
int main() {
    int fd1[2], fd2[2];
    char parent_msg[] = "Hello from Parent";
    char child_msg[] = "Hello from Child";
    char buffer[100];
    pipe(fd1);                                            // Parent to Child
    pipe(fd2);                                            // Child to Parent
    if (fork() == 0) {                                    // Child process
        close(fd1[1]);                                    // Close write end of fd1
        close(fd2[0]);                                    // Close read end of fd2
        read(fd1[0], buffer, sizeof(buffer));
        printf("Child received: %s\n", buffer);
        write(fd2[1], child_msg, strlen(child_msg) + 1);
    } else {                                              // Parent process
        close(fd1[0]);                                    // Close read end of fd1
        close(fd2[1]);                                    // Close write end of fd2
        write(fd1[1], parent_msg, strlen(parent_msg) + 1);
        read(fd2[0], buffer, sizeof(buffer));
        printf("Parent received: %s\n", buffer);
    }
    return 0;
}
2. FIFOs (Named Pipes)
Definition:
FIFOs (named pipes) are special file types that allow communication between unrelated processes. Unlike
regular pipes, FIFOs exist in the filesystem with a name.
System Call:
int mkfifo(const char *pathname, mode_t mode);
mkfifo() creates a named pipe (FIFO) special file in the filesystem. This FIFO can then be opened using
open() and used just like a regular file descriptor for reading or writing.
Parameters:
pathname: The name of the FIFO file to be created. mode: The file permission bits (e.g., 0666 allows
read/write for all users).
Example Programs:
Writer.c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
int main() {
    int fd;
    mkfifo("myfifo", 0666);              // create FIFO file
    fd = open("myfifo", O_WRONLY);       // open FIFO in write-only mode
    write(fd, "Hello FIFO", 11);         // write message including the '\0' terminator
    close(fd);
    return 0;
}
Reader.c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
int main() {
    int fd;
    char buffer[20];
    fd = open("myfifo", O_RDONLY);       // open FIFO in read-only mode
    read(fd, buffer, sizeof(buffer));    // read from FIFO
    printf("Received: %s\n", buffer);
    close(fd);
    return 0;
}
Explanation:
The writer creates the FIFO file myfifo (if it does not already exist), opens it in write-only mode, and writes a message into it. The reader opens the same FIFO in read-only mode and prints what it receives. open() blocks until both a reader and a writer have opened the FIFO, so the two programs are typically run in two separate terminals.
3. Semaphores (System V API)
Definition:
A semaphore is a synchronization tool used to control access to a shared resource by multiple processes in
a concurrent system.
System V semaphores use a semaphore set, which may contain one or more semaphores.
Used For:
Mutual exclusion (allowing only one process at a time into a critical section) and synchronization (making one process wait for an event signalled by another).
3.1 semget()
int semget(key_t key, int nsems, int semflg);
Parameters:
key: A unique key to identify the semaphore set (can be generated using ftok()).
nsems: Number of semaphores in the set.
semflg: Permissions (e.g., 0666) and flags (e.g., IPC_CREAT).
3.2 semop()
int semop(int semid, struct sembuf *sops, size_t nsops);
Parameters:
semid: Semaphore set identifier returned by semget().
sops: Pointer to an array of struct sembuf operations to perform.
nsops: Number of operations in the array.
Each operation is described by:
struct sembuf {
    unsigned short sem_num;  // index of the semaphore within the set
    short sem_op;            // operation: -1 = wait (P), +1 = signal (V), 0 = wait for zero
    short sem_flg;           // operation flags (e.g., IPC_NOWAIT, SEM_UNDO)
};
3.3 semctl()
int semctl(int semid, int semnum, int cmd, ...);
Parameters:
semid: Semaphore set identifier.
semnum: Index of the semaphore within the set.
cmd: Control command, e.g., SETVAL (set a semaphore’s value), GETVAL (read its value), IPC_RMID (remove the set).
For some commands a fourth argument (union semun) supplies or receives the value.
Example Program:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>
int main() {
    int semid = semget(1234, 1, 0666 | IPC_CREAT);   // create a set with one semaphore
    return 0;
}
Explanation:
semget() creates (or accesses) a semaphore set identified by the key 1234 containing one semaphore, with read/write permission for all users. On its own this only creates the set; the semaphore must still be initialised with semctl() and operated on with semop().
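The sketch below shows how the three calls typically fit together when one semaphore is used as a mutex; the key 1234 is reused from the example above, and the printf stands in for a real critical section.
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
/* On Linux the caller must define union semun itself. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};
int main() {
    int semid = semget(1234, 1, 0666 | IPC_CREAT);   // set with one semaphore
    union semun arg;
    arg.val = 1;
    semctl(semid, 0, SETVAL, arg);                   // initialise it to 1 (mutex)
    struct sembuf p = {0, -1, 0};                    // wait (P): decrement, block at 0
    semop(semid, &p, 1);
    printf("Inside critical section\n");             // protected work goes here
    struct sembuf v = {0, +1, 0};                    // signal (V): increment, wake a waiter
    semop(semid, &v, 1);
    semctl(semid, 0, IPC_RMID);                      // remove the set when done
    return 0;
}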
4. Message Queues (System V API)
Definition:
A message queue is an IPC mechanism that allows processes to communicate by sending and receiving
messages. Messages are stored in a queue, and processes can retrieve them in a first-in, first-out (FIFO)
order.
System Calls:
4.1 msgget()
int msgget(key_t key, int msgflg);
Parameters:
key: A unique key to identify the message queue. This can be generated using ftok().
msgflg: Flags and permissions for the queue. Flags include:
o IPC_CREAT: Create a new queue if it doesn’t exist.
o IPC_EXCL: Fail if the queue already exists.
o 0666: Permissions (read/write for all users).
4.2 msgsnd()
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
Parameters:
msqid: Message queue identifier returned by msgget().
msgp: Pointer to the message to send – a structure whose first member is a long mtype followed by the message data.
msgsz: Size of the message data (not counting the mtype field).
msgflg: Flags such as IPC_NOWAIT (return immediately if the queue is full).
4.3 msgrcv()
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);
Parameters:
msqid: Message queue identifier.
msgp: Pointer to a buffer where the received message is stored.
msgsz: Maximum size of the message data that can be received.
msgtyp: Type of message to receive – 0 means the first message in the queue, a positive value means the first message of that type.
msgflg: Flags such as IPC_NOWAIT (return immediately if no message is available).
4.4 msgctl()
int msgctl(int msqid, int cmd, struct msqid_ds *buf);
Parameters:
msqid: Message queue identifier.
cmd: Control command, e.g., IPC_STAT (get status), IPC_SET (set attributes), IPC_RMID (remove the queue).
buf: Pointer to a struct msqid_ds used by IPC_STAT and IPC_SET (may be NULL for IPC_RMID).
Sender.c:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string.h>
#include <unistd.h>
// Message structure
struct msgbuf {
    long mtype;        // Message type
    char mtext[100];   // Message content
};
int main() {
    int msqid = msgget(1234, 0666 | IPC_CREAT);         // create/access message queue
    struct msgbuf message;
    message.mtype = 1;                                  // message type
    strcpy(message.mtext, "Hello, Message Queue!");     // message content
    msgsnd(msqid, &message, sizeof(message.mtext), 0);  // send message
    printf("Message sent: %s\n", message.mtext);
    return 0;
}
Receiver.c:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string.h>
#include <unistd.h>
// Message structure
struct msgbuf {
    long mtype;        // Message type
    char mtext[100];   // Message content
};
int main() {
    int msqid = msgget(1234, 0666 | IPC_CREAT);             // create/access message queue
    struct msgbuf message;
    msgrcv(msqid, &message, sizeof(message.mtext), 1, 0);   // receive message of type 1
    printf("Received message: %s\n", message.mtext);
    msgctl(msqid, IPC_RMID, NULL);                          // remove the message queue
    return 0;
}
Explanation:
Sender Program:
o The message queue is created/accessed using msgget().
o The message structure is populated with a message type (mtype) and the message content
(mtext).
o msgsnd() is used to send the message to the queue.
Receiver Program:
o The receiver uses the same key (1234) and accesses the message queue.
o It receives the message of type 1 using msgrcv().
o After receiving, the message queue is removed using msgctl() with the IPC_RMID
command.
5. Shared Memory (System V API)
Definition:
Shared memory is an IPC mechanism that allows multiple processes to access the same memory space.
This enables efficient communication between processes as they can directly read and write to the shared
memory region.
System Calls:
5.1 shmget()
int shmget(key_t key, size_t size, int shmflg);
Parameters:
key: A unique key to identify the shared memory segment. This can be generated using ftok().
size: The size of the shared memory segment in bytes.
shmflg: Flags and permissions for the shared memory segment. Common flags include:
o IPC_CREAT: Create a new shared memory segment if it doesn't exist.
o IPC_EXCL: Fail if the segment already exists.
o 0666: Permissions (read/write for all users).
5.2 shmat()
void *shmat(int shmid, const void *shmaddr, int shmflg);
Parameters:
shmid: Shared memory identifier returned by shmget().
shmaddr: Address at which to attach the segment; passing NULL (or 0) lets the system choose a suitable address.
shmflg: Flags such as SHM_RDONLY for a read-only attachment (0 for read/write).
5.3 shmdt()
int shmdt(const void *shmaddr);
Parameters:
shmaddr: The address of the shared memory segment that should be detached.
5.4 shmctl()
int shmctl(int shmid, int cmd, struct shmid_ds *buf);
Parameters:
shmid: Shared memory identifier.
cmd: Control command, e.g., IPC_STAT (get status into buf), IPC_SET (set attributes from buf), IPC_RMID (mark the segment for removal).
buf: Pointer to a struct shmid_ds (may be NULL for IPC_RMID).
Structure shmid_ds
struct shmid_ds {
    struct ipc_perm shm_perm;   // Permissions
    size_t shm_segsz;           // Segment size
    time_t shm_atime;           // Last attach time
    time_t shm_dtime;           // Last detach time
    time_t shm_ctime;           // Last change time
    pid_t shm_cpid;             // PID of creator
    pid_t shm_lpid;             // PID of last operation
    shmatt_t shm_nattch;        // Number of attached processes
};
Writer.c:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>
int main() {
    int shmid = shmget(1234, 1024, 0666 | IPC_CREAT);   // create shared memory
    char *str = (char*) shmat(shmid, (void*) 0, 0);     // attach shared memory
    printf("Enter some text: ");
    fgets(str, 1024, stdin);                            // write to shared memory
    shmdt(str);                                         // detach shared memory
    return 0;
}
Reader.c:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int main() {
    int shmid = shmget(1234, 1024, 0666 | IPC_CREAT);   // access shared memory
    char *str = (char*) shmat(shmid, (void*) 0, 0);     // attach shared memory
    printf("Data read from shared memory: %s\n", str);  // read from shared memory
    shmdt(str);                                         // detach shared memory
    shmctl(shmid, IPC_RMID, NULL);                      // remove shared memory
    return 0;
}
Explanation:
Writer Program:
o The shared memory segment is created using shmget().
o The writer attaches the segment to its address space using shmat().
o It then writes user input into the shared memory.
Reader Program:
o The reader uses the same key (1234) and accesses the shared memory segment using
shmget().
o It attaches the segment to its address space using shmat().
o The reader reads the data from the shared memory and prints it.
o Finally, the reader detaches the shared memory using shmdt() and removes the shared
memory segment using shmctl().
6. IPC Status Commands
IPC status commands are used to manage and get information about IPC resources such as message queues,
shared memory, and semaphores. They provide crucial system information such as the state of the resources,
attached processes, etc.
Commands:
1. ipcs: The ipcs command displays information about active IPC resources (message queues, shared memory segments, and semaphore sets).
Syntax:
ipcs [options]
Common Options:
-q : show message queues
-m : show shared memory segments
-s : show semaphore sets
-a : show all resources (the default when no option is given)
Example:
ipcs -m
This lists the shared memory segments currently present in the system.
2. ipcrm: The ipcrm command is used to remove IPC resources, including message queues, shared
memory segments, and semaphores. This is useful for cleaning up unused or orphaned IPC resources.
Syntax:
ipcrm [options]
Common Options:
-m shmid : remove the shared memory segment with identifier shmid
-q msqid : remove the message queue with identifier msqid
-s semid : remove the semaphore set with identifier semid
Example:
ipcrm -m <shmid>
This command removes the shared memory segment with the specified shmid.
DEADLOCKS
A deadlock occurs when a group of processes are stuck waiting for each other to release resources, and
none of them can proceed. In other words, every process in the group is waiting for something that only
another process in the same group can do, like releasing a resource.
System Model
A computer system has a limited number of resources that must be shared among multiple processes.
Types of Resources
Resources are partitioned into several types (CPU cycles, memory space, files, and I/O devices such as printers), each consisting of one or more identical instances.
Example: If a system has 2 CPUs, then the resource type “CPU” has 2 instances.
A process may use a resource only in the following sequence:
1. Request –
The process asks for a resource.
o If the resource is free, the process gets it immediately.
o If not, it has to wait until the resource becomes free.
2. Use –
Once the resource is assigned, the process performs operations using it.
o Example: If it's a printer, the process starts printing.
3. Release –
After using the resource, the process releases it so that other waiting processes can use it.
System Calls for Resource Handling
Request and release of resources are done through system calls — for example request()/release() for a device, open()/close() for a file, and allocate()/free() for memory.
System Table
The operating system keeps a system table that records whether each resource is free or allocated and, if allocated, which process is using it.
So, if a process requests a resource that is already in use, it will be added to the waiting queue for that
resource.
DEADLOCK CHARACTERIZATION
Definition
Deadlock Characterization refers to the set of conditions and structures that help us understand how
and why a deadlock can occur in a system. It identifies the specific features or prerequisites that must
be present for deadlock to happen.
A deadlock is a situation in a multiprogramming system where two or more processes get stuck, each
waiting for resources that are being held by the others. Because of this, none of the processes can proceed,
and they remain blocked forever.
Causes of Deadlock
Deadlock occurs when certain specific conditions are all true at the same time. These are known as the
Deadlock Prerequisites.
i. Deadlock Prerequisites – Four Necessary Conditions
1. Mutual Exclusion
o Only one process can use a resource at any given time.
o If another process requests it, it must wait until it is released.
2. Hold and Wait
o A process is holding at least one resource and is waiting for more that are being used by
other processes.
3. No Preemption
o A resource can’t be forcibly taken away from a process.
o The process must release it willingly.
4. Circular Wait
o There is a circular chain of processes waiting for each other’s resources.
For example: P0 waits for a resource held by P1, P1 waits for a resource held by P2,
…
Pn waits for a resource held by P0.
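All four conditions can be reproduced in a tiny two-thread program. This is a minimal sketch using POSIX threads (compile with -pthread); the two mutexes stand in for two resources, and the sleep() calls only make the unlucky interleaving almost certain.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
pthread_mutex_t res_A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t res_B = PTHREAD_MUTEX_INITIALIZER;
void *task1(void *arg) {
    (void)arg;                    /* unused */
    pthread_mutex_lock(&res_A);   /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&res_B);   /* ... and waits for B (held by task2) */
    printf("task1 got both resources\n");
    pthread_mutex_unlock(&res_B);
    pthread_mutex_unlock(&res_A);
    return NULL;
}
void *task2(void *arg) {
    (void)arg;                    /* unused */
    pthread_mutex_lock(&res_B);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&res_A);   /* ... and waits for A (held by task1) */
    printf("task2 got both resources\n");
    pthread_mutex_unlock(&res_A);
    pthread_mutex_unlock(&res_B);
    return NULL;
}
int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task1, NULL);
    pthread_create(&t2, NULL, task2, NULL);
    pthread_join(t1, NULL);       /* typically never returns – the threads deadlock */
    pthread_join(t2, NULL);
    return 0;
}
Mutual exclusion comes from the mutexes, hold and wait because each thread holds one lock while requesting the other, no preemption because locks are released only voluntarily, and circular wait because task1 waits for res_B while task2 waits for res_A — so the program hangs.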
ii. Resource Allocation Graph (RAG)
The Resource Allocation Graph (RAG) is a visual tool used to identify deadlocks in a system. It shows
how resources are requested and allocated among processes.
Definition
A RAG is a directed graph with a set of vertices V and a set of edges E. The vertices are of two types:
1. Process Nodes (P): Represent all the active processes in the system
Example: P = {P1, P2, ..., Pn}
2. Resource Nodes (R): Represent all the resource types in the system
Example: R = {R1, R2, ..., Rm}
Types of Edges
Request Edge (Pi → Rj): Process Pi has requested an instance of resource Rj and is waiting for it.
Assignment Edge (Rj → Pi): An instance of resource Rj has been allocated to process Pi.
Symbols Used
Process = Circle
Resource = Rectangle
Instance of resource = Small dot inside rectangle
Observations (the example graphs are omitted in these notes):
If the graph contains no cycle, then no process in the system is deadlocked.
If every resource type involved has only a single instance, a cycle implies deadlock. For example, the cycles
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
mean that processes P1, P2, and P3 are deadlocked.
If a cycle involves resource types with multiple instances, a deadlock may or may not exist.
Methods For Handling Deadlocks
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
4. Deadlock Ignorance (Ostrich Method)
Deadlock Prevention
Deadlock is prevented by ensuring at least one of the four necessary conditions (Mutual Exclusion, Hold
and Wait, No Preemption, Circular Wait) never holds.
Deadlock Avoidance
The system is carefully scheduled so it never enters an unsafe state where deadlocks could occur.
Deadlock Detection and Recovery
Let deadlocks happen but detect them using algorithms and recover afterward.
Use detection algorithms to check if a cycle (deadlock) exists in the resource allocation graph.
Once detected, take recovery actions such as:
o Terminate one or more processes
o Preempt resources from some processes
Deadlock Ignorance (Ostrich Method)
This method is called the Ostrich Method, based on the idea that "if you ignore the problem, maybe it
will go away."
Disadvantage:
If a deadlock occurs, the system may become unresponsive and require a manual restart.
Deadlock Prevention:
Deadlock prevention is a technique that ensures at least one of the four necessary conditions for deadlock
never holds true. These four conditions are:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
1. Mutual Exclusion
This condition holds for non-sharable resources: only one process at a time can use such a resource (for example, a printer).
We cannot prevent deadlock by removing mutual exclusion, because some resources are
naturally non-sharable.
Example of a sharable resource: read-only files – multiple processes can access them at the same time.
These resources do not cause deadlocks.
2. Hold and Wait
To prevent this condition, a process must not hold one resource while waiting for others. One protocol requires a process to request and be allocated all of its resources before it begins execution.
Example:
A process needs a DVD drive, disk file, and printer. It must request all three before starting, even if the
printer is needed only at the end. This causes poor resource usage, as the printer stays idle for most of the
time.
An alternative protocol allows a process to request resources only when it currently holds none.
Example:
A process copies data from DVD to hard disk. Then, it releases those resources and later requests the disk
and printer to complete the job.
Disadvantages:
Resource utilization is low, since resources may be allocated long before (or held longer than) they are actually needed.
Starvation is possible – a process that needs several popular resources may have to wait indefinitely.
3. No Preemption
If a process requests a resource that is not immediately available, it must release all currently held
resources.
These resources are added to its waiting list.
The process is restarted only when all required resources are available together.
This ensures no process can "hold" resources while waiting for others.
4. Circular Wait
To prevent circular wait, impose a total ordering on all resource types (assign each type a unique number) and require every process to request resources only in increasing order of that numbering.
Example:
A process that needs a tape drive and a printer must request the tape drive first, then the printer, following
the order.
Alternative Rule:
A process can only request a resource if it has released any resource with a higher or equal
number.
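Applied to the two-thread sketch shown earlier, resource ordering simply means both threads must acquire the lower-numbered resource first. Again a minimal sketch using POSIX threads (compile with -pthread):
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
/* Imposed ordering: res_1 must always be acquired before res_2. */
pthread_mutex_t res_1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t res_2 = PTHREAD_MUTEX_INITIALIZER;
void *worker(void *name) {
    pthread_mutex_lock(&res_1);    /* every thread requests in the same order */
    sleep(1);
    pthread_mutex_lock(&res_2);
    printf("%s got both resources\n", (char *)name);
    pthread_mutex_unlock(&res_2);  /* release in reverse order */
    pthread_mutex_unlock(&res_1);
    return NULL;
}
int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "task1");
    pthread_create(&t2, NULL, worker, "task2");
    pthread_join(t1, NULL);        /* both joins now always return */
    pthread_join(t2, NULL);
    return 0;
}
Because every thread requests res_1 before res_2, no thread can ever hold res_2 while waiting for res_1, so a circular wait cannot form.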
Deadlock Avoidance
Deadlock occurs when a group of processes are each waiting for resources that are held by the others in the
group. This situation leads to a state where none of the processes can continue their execution.
To prevent this situation, deadlock avoidance is used. It works by carefully examining resource requests
before they are granted, to ensure the system does not enter an unsafe or deadlocked state.
Safe State: A system is in a safe state if there exists at least one sequence of all processes (called a safe
sequence) such that each process can finish with the available resources at some point.
Unsafe State: A system is in an unsafe state if no such sequence exists. Unsafe state does not mean
deadlock has occurred, but it means there is a possibility of entering deadlock if processes proceed.
Safe Sequence: A safe sequence is an ordering of processes (e.g., P1, P3, P0, etc.) where each process can
finish using currently available resources plus the resources held by previously completed processes.
Ensuring that the system always remains in a safe state is the main goal of deadlock avoidance.
1. Resource Allocation Graph Algorithm (Used for Single Instance Resources)
This algorithm is used only when each resource type has a single instance.
Graph Components:
Claim edge (Pi ⇢ Rj, drawn as a dashed line): process Pi may request resource Rj at some time in the future.
Request edge (Pi → Rj): Pi has requested Rj.
Assignment edge (Rj → Pi): Rj has been allocated to Pi.
When a process actually requests a resource, its claim edge is converted to a request edge; the request is granted only if converting it further to an assignment edge does not create a cycle.
Example:
Safe if granting the request creates no cycle.
Unsafe if a cycle would be created – the request is then denied and the process must wait.
Limitation: This algorithm does not work when multiple instances of a resource are allowed.
2. Safe State Algorithm (To Check System's Safety)
This algorithm checks if the system is currently in a safe state. It does not handle resource requests directly
but helps us verify if the system is safe.
Let’s consider a system with 12 magnetic tape drives and 3 processes: P0, P1, and P2.
Process   Maximum Needs   Currently Allocated
P0        10              5
P1        4               2
P2        9               2
Currently allocated = 5 + 2 + 2 = 9, so 12 − 9 = 3 tape drives are available.
Step 1: Try P1
P1 needs 4 - 2 = 2 more.
3 are available. So allocate 2 to P1.
P1 finishes and returns 4 tape drives. Total available = 3 + 2 = 5
Step 2: Try P0
P0 needs 10 - 5 = 5 more.
5 are available. So allocate 5 to P0.
P0 finishes and returns 10 tape drives. Total available = 5 + 5 = 10
Step 3: Try P2
P2 needs 9 - 2 = 7 more.
10 are available. So allocate 7 to P2.
P2 finishes and returns 9 tape drives. Total available = 10 + 2 = 12
Since all three processes can finish in the order <P1, P0, P2>, this is a safe sequence and the system is in a safe state.
Problem with this algorithm: Even if a resource is free, a process may not get it immediately to avoid
unsafe state. This leads to low resource utilization.
3. Banker's Algorithm
This is a general solution used when multiple instances of resources are available. It is called "Banker's
Algorithm" because it works similarly to how a bank lends money while ensuring it can meet the needs of
all customers.
Data Structures:
Name          Description
Available     Vector of length m; Available[j] is the number of currently available instances of resource type Rj.
Max           n × m matrix; Max[i][j] is the maximum demand of process Pi for resource Rj.
Allocation    n × m matrix; Allocation[i][j] is the number of instances of Rj currently allocated to Pi.
Need          n × m matrix; Need[i][j] = Max[i][j] − Allocation[i][j] is the remaining need of Pi.
Safety Algorithm:
1. Initialize:
o Work = Available
o Finish[i] = false for all i
2. Find a process i such that:
o Finish[i] == false
o Need[i] ≤ Work
3. If found:
o Work = Work + Allocation[i]
o Finish[i] = true
o Repeat step 2
4. If all Finish[i] = true → Safe state
Example: Banker's Algorithm
System has 5 processes (P0 to P4) and 3 resources (A=10, B=5, C=7).
Current Snapshot:
Process   Allocation (A B C)   Max (A B C)
P0        0 1 0                7 5 3
P1        2 0 0                3 2 2
P2        3 0 2                9 0 2
P3        2 1 1                2 2 2
P4        0 0 2                4 3 3
Available = (3, 3, 2) (total resources minus total allocation).
Need = Max − Allocation:
Process   Need (A B C)
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1
Applying the safety algorithm, the sequence <P1, P3, P4, P0, P2> satisfies the safety criteria (other safe sequences also exist), so the system is in a safe state.
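A compact sketch of the safety check in C, with the Allocation, Max, and Available arrays transcribed from the snapshot above (this is only the safety algorithm; the resource-request part of the Banker's Algorithm is not shown):
#include <stdio.h>
#include <stdbool.h>
#define N 5   /* processes */
#define M 3   /* resource types A, B, C */
int Allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int Max[N][M]        = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
int Available[M]     = {3,3,2};
int main() {
    int Need[N][M], Work[M], safeSeq[N], count = 0;
    bool Finish[N] = {false};
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            Need[i][j] = Max[i][j] - Allocation[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < M; j++)
        Work[j] = Available[j];
    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            int j;
            for (j = 0; j < M; j++)               /* Need[i] <= Work ? */
                if (Need[i][j] > Work[j]) break;
            if (j == M) {                         /* process i can finish */
                for (j = 0; j < M; j++)
                    Work[j] += Allocation[i][j];  /* it returns its resources */
                Finish[i] = true;
                safeSeq[count++] = i;
                found = true;
            }
        }
        if (!found) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("Safe sequence: ");
    for (int i = 0; i < N; i++) printf("P%d ", safeSeq[i]);
    printf("\n");                                 /* prints P1 P3 P4 P0 P2 for this snapshot */
    return 0;
}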
Deadlock Detection:
If a system does not use deadlock prevention or avoidance, it must rely on deadlock detection and recovery.
Wait-For Graph (single instance of each resource type):
A wait-for graph is obtained from the resource-allocation graph by removing the resource nodes and collapsing the edges: an edge Pi → Pj means Pi is waiting for a resource held by Pj. A deadlock exists if and only if the wait-for graph contains a cycle.
Example edges:
P1 → P2
P2 → P5
P2 → P3
P4 → P1
P5 → P4
Here P1 → P2 → P5 → P4 → P1 forms a cycle, so these four processes are deadlocked.
Time Complexity:
Cycle detection in a graph has O(n²) complexity, where n is the number of processes.
Wait-For Graphs do not work if there are multiple instances of a resource type. Instead, we use a matrix-
based Deadlock Detection Algorithm, similar to the Banker's Algorithm.
Data Structures:
Available – vector of length m giving the number of available instances of each resource type.
Allocation – n × m matrix giving the number of instances of each resource type currently allocated to each process.
Request – n × m matrix; Request[i][j] is the number of instances of Rj that process Pi is currently requesting.
Algorithm Steps:
1. Let Work = Available, and Finish[i] = false if Allocation[i] ≠ 0; else Finish[i] = true.
2. Find a process i such that:
o Finish[i] == false
o Request[i] ≤ Work
3. If such a process exists:
o Work = Work + Allocation[i]
o Finish[i] = true
o Repeat step 2
4. If no such i is found and some Finish[i] == false → System is in deadlock.
Example:
Consider 5 processes (P0 to P4) and 3 resource types A, B, and C. All instances are currently allocated, so Available = (0, 0, 0).
Process   Allocation (A B C)   Request (A B C)
P0        0 1 0                0 0 0
P1        2 0 0                2 0 2
P2        3 0 3                0 0 0
P3        2 1 1                1 0 0
P4        0 0 2                0 0 2
Step-by-step Detection:
Step 1: Check P0 – Request (0,0,0) ≤ Work (0,0,0), so P0 can finish. Work = (0,1,0).
Step 2: Check P2 – Request (0,0,0) ≤ Work (0,1,0), so P2 can finish. Work = (3,1,3).
Step 3: Check P3 – Request (1,0,0) ≤ Work (3,1,3), so P3 can finish. Work = (5,2,4).
Step 4: Check P1 – Request (2,0,2) ≤ Work (5,2,4), so P1 can finish. Work = (7,2,4).
Step 5: Check P4 – Request (0,0,2) ≤ Work (7,2,4), so P4 can finish. Work = (7,2,6).
Final Check:
Finish[i] is true for every process, so the system is not in a deadlocked state; the order <P0, P2, P3, P1, P4> shows that all processes can complete.
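The detection algorithm is the same loop driven by Request instead of Need. A minimal sketch in C, with the arrays transcribed from the table above and Available = (0, 0, 0):
#include <stdio.h>
#include <stdbool.h>
#define N 5   /* processes */
#define M 3   /* resource types A, B, C */
int Allocation[N][M] = {{0,1,0},{2,0,0},{3,0,3},{2,1,1},{0,0,2}};
int Request[N][M]    = {{0,0,0},{2,0,2},{0,0,0},{1,0,0},{0,0,2}};
int Available[M]     = {0,0,0};
int main() {
    int Work[M];
    bool Finish[N];
    for (int j = 0; j < M; j++) Work[j] = Available[j];
    for (int i = 0; i < N; i++) {
        Finish[i] = true;                         /* processes holding nothing finish trivially */
        for (int j = 0; j < M; j++)
            if (Allocation[i][j] != 0) Finish[i] = false;
    }
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            int j;
            for (j = 0; j < M; j++)               /* Request[i] <= Work ? */
                if (Request[i][j] > Work[j]) break;
            if (j == M) {
                for (j = 0; j < M; j++)
                    Work[j] += Allocation[i][j];  /* assume it finishes and releases */
                Finish[i] = true;
                progress = true;
            }
        }
    }
    bool deadlock = false;
    for (int i = 0; i < N; i++)
        if (!Finish[i]) { printf("P%d is deadlocked\n", i); deadlock = true; }
    if (!deadlock) printf("No deadlock detected\n");   /* the result for this snapshot */
    return 0;
}
For this snapshot it prints "No deadlock detected"; if P2 additionally requested one instance of C (Request[2] = {0,0,1}), the loop would stall after P0 and report P1, P2, P3, and P4 as deadlocked.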
Deadlock Recovery
When a deadlock is detected, the system must recover. There are two main approaches:
1. Process Termination
To eliminate the deadlock, processes are aborted and their resources are reclaimed.
Options:
Abort all deadlocked processes: Quick and guarantees recovery, but may waste significant work.
Abort one process at a time: More conservative, but requires running detection after each
termination.
When deciding which process to abort, the following factors are considered:
1. Process priority
2. How long the process has executed and how much more is needed
3. Resources the process is using
4. Resources the process still needs
5. Number of processes that must be terminated
6. Whether the process is interactive or batch
2. Resource Preemption
Instead of killing a process, we take away its resources and give them to others.
Issues to handle:
1. Selecting a victim: Choose the process whose preemption costs the least.
2. Rollback: The victim process must be rolled back to a safe state (possibly restarted).
3. Starvation: Prevent the same process from always being chosen; use fairness policies.
Deadlock Ignorance
Some systems choose to ignore deadlocks altogether, assuming they are rare. This strategy is called the
Ostrich Method, inspired by the idea of an ostrich burying its head in the sand to avoid danger.
Key Features:
The system simply assumes deadlocks will not occur and provides no detection, prevention, or recovery mechanism.
Used in systems like UNIX and Windows, where the user can manually restart misbehaving applications.
Works well if the cost of preventing deadlock is higher than the cost of failure recovery.
Drawbacks:
If a deadlock does occur, it goes undetected; the affected processes hang, performance degrades, and a manual restart may eventually be required.