CO-03

Inter-Process Communication (IPC) allows processes to communicate and synchronize their actions, with two main models: shared memory and message passing. Threads, the smallest units of execution, improve performance through parallel execution and resource sharing, while concurrency lets multiple tasks appear to run simultaneously, improving CPU utilization. Process synchronization prevents problems such as race conditions by ensuring safe access to shared resources, using mechanisms such as locks, condition variables, and semaphores.

Inter-Process Communication (IPC)


Inter-Process Communication refers to the mechanism that allows processes to
communicate and synchronize their actions. Processes in a system can either be
independent or cooperating. Independent processes do not affect each other, whereas
cooperating processes can share data and affect one another's execution.
Two main models for IPC are:
1. Shared Memory Model: In this model, processes communicate by sharing a region
of memory. It is fast but requires synchronization.
2. Message Passing Model: Here, processes communicate by sending and receiving
messages. It is easier to implement but can be slower compared to shared memory.

Cooperating vs Independent Processes


An independent process cannot be influenced by other processes running in the system. In
contrast, cooperating processes can be influenced by or influence the execution of other
processes. The advantages of using cooperating processes include better data sharing,
faster computations, easier modular design, and improved user convenience.

Thread
A thread is the smallest unit of execution within a process. Each thread consists of a
program counter (which points to the next instruction), a set of registers (which store current
working variables), and a stack (which keeps the history of function calls).
Threads are also referred to as lightweight processes. They help improve application
performance by enabling parallel execution. Multiple threads within the same process share
the same address space and resources, which allows for efficient communication and data
sharing.

Single-threaded and Multi-threaded Processes


A single-threaded process has only one thread of execution. It performs one task at a time.
In contrast, a multi-threaded process contains multiple threads, which can perform multiple
tasks concurrently within the same application. This leads to better CPU utilization and faster
execution.
Why Do We Need Threads?
• Threads can run in parallel, thus improving application performance.
• Each thread has its own CPU register state and stack, but shares the process's address space and environment.
• Threads can be prioritized and scheduled individually.
• Like processes, threads can be in different states such as ready, running, or blocked.
• Context switching can also occur between threads, similar to processes, and is managed using a structure called the Thread Control Block (TCB).

Difference Between Process and Thread

Aspect            | Process                                  | Thread
Definition        | Independent program in execution         | Segment of a process
Resources         | Uses more resources                      | Uses fewer resources
Creation Time     | Slower to create                         | Faster to create
Weight            | Heavyweight                              | Lightweight
Termination       | Takes more time                          | Takes less time
Memory Sharing    | Separate memory space                    | Shares memory with other threads
Context Switching | Slower                                   | Faster
Communication     | Needs Inter-Process Communication (IPC)  | Direct and faster communication

Types of Threads
There are two main types of threads in operating systems:
1. User-level Threads: Managed entirely in user space, without kernel support.
2. Kernel-level Threads: Managed directly by the operating system kernel, with each
thread visible to the OS.

Context Switch Between Threads


When switching from one thread to another within the same process, the system saves the
current thread's state (registers, program counter) and restores the new thread's state.
Unlike process context switching, thread switching does not involve changing the memory
address space, making it faster and more efficient.
In a multi-threaded process, each thread has its own stack. This means all thread-specific
variables and function call histories are kept in thread-local storage, which is separate for
each thread.

Multithreading Models
User threads need to be mapped to kernel threads using specific models. The three main
models are:
1. Many-to-One Model:
   o Multiple user threads are mapped to a single kernel thread.
   o The thread library exists entirely in user space.
   o If one thread makes a blocking system call, the entire process gets blocked.
   o Examples include Solaris Green Threads and GNU Portable Threads.
2. One-to-One Model:
   o Each user thread maps to a separate kernel thread.
   o Offers better concurrency.
   o However, most systems limit the number of threads that can be created.
   o Implemented in systems such as Linux and the Windows family.
3. Many-to-Many Model:
   o Many user threads are mapped to a smaller or equal number of kernel threads.
   o Combines the advantages of the previous two models.
   o Users can create as many threads as needed.
   o Example: older versions of Solaris (before Solaris 9).

Benefits of Using Threads

• Responsiveness: applications remain responsive even if part of the program is blocked or performing a lengthy operation.
• Resource Sharing: threads share the memory and resources of their process, enabling efficient communication.
• Economy: creating threads is less costly in terms of system resources than creating processes.
• Scalability: multi-threaded applications can run more efficiently on multi-core systems.

Thread API (POSIX)

POSIX threads, commonly referred to as Pthreads, provide a standard API for thread creation and management on UNIX-like systems. Commonly used functions include:
• pthread_create(): creates a new thread.
• pthread_join(): waits for a thread to complete.
• pthread_mutex_lock(): locks a mutex to protect shared resources.
• pthread_mutex_timedlock(): tries to lock a mutex until a specified timeout.
• pthread_cond_wait(): waits on a condition variable.
• pthread_cond_signal(): signals a thread waiting on a condition variable.
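A minimal sketch of the first two calls in use (the worker function and its argument are illustrative, not part of the API):

#include <pthread.h>
#include <stdio.h>

/* Illustrative worker: prints the integer id passed in by the creator. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("hello from thread %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create a new thread */
    pthread_join(tid, NULL);                  /* wait for it to complete */
    return 0;
}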

Thread Implementations in Different Environments


1. POSIX (UNIX-based Systems):
o Provides a standard API for thread programming in C/C++.
o Allows creation of concurrent execution flows in applications.
2. Win32 (Windows Systems):
o Offers kernel-level support for thread management.
o Threads are managed directly by the operating system.
3. Java Threads:
o Platform-independent implementation.
o Thread handling is managed by the Java Virtual Machine (JVM), and it
depends on the underlying OS and hardware.

Definition of Concurrency:
Concurrency means doing multiple tasks at the same time. In Operating Systems,
concurrency is when the system can run more than one process or thread at once. It
doesn't mean everything runs exactly at the same time, but it feels like that to the user.

Working of Concurrency (Step-by-Step Execution):

Imagine you are the CPU. You have to:
• Read a document (Task 1)
• Download a file (Task 2)
• Play music (Task 3)

Now let's break down what happens:

Step 1: Start Task 1 (Read a document)
• The CPU starts reading.
• While reading, it needs to wait for data to load from disk.
Step 2: Switch to Task 2 (Download file)
• Instead of sitting idle, the CPU switches to downloading the file.
• It starts the download and then waits for the network to send data.
Step 3: Switch to Task 3 (Play music)
• While the download is in progress, the CPU starts playing music.
Step 4: Return to Task 1
• The data for Task 1 has now loaded, so the CPU goes back to reading.
Step 5: Return to Task 2
• More network data has arrived, so the CPU resumes the download.
This switching between tasks is called context switching, and it happens very fast. So, it
seems like all three tasks are happening at the same time.

Conclusion:
• Concurrency is all about efficiently switching between tasks.
• It helps make better use of CPU time.
• Even if tasks don't run truly in parallel, they appear to.
Advantages of Concurrency
• Faster execution
• Better CPU and resource usage
• Increased system responsiveness
• Easier to scale
• Improved fault tolerance

Disadvantages of Concurrency
• Shared resources are hard to manage
• Bugs are difficult to find
• Locking may slow down the system
• Race conditions may occur
• Can lead to deadlocks

Process Synchronization
Process Synchronization means managing how different processes work together when they
need to use the same resources, like memory or files. In a computer system where many
processes run at the same time, they may need to use shared data or resources. If these
processes are not controlled properly, they might create problems like incorrect data or
system errors.
The main goal of process synchronization is to make sure that shared resources are used
safely and correctly by only one process at a time. This helps to avoid problems such as
data corruption or unexpected behavior.

Why is Process Synchronization Important?


Without synchronization, two or more processes might try to change the same data at the
same time. This can lead to a situation called a race condition, where the final result
depends on who finishes first. This makes the system unpredictable and can cause serious
problems. Synchronization ensures that processes wait for each other when needed,
keeping the system stable and reliable.

Types of Processes

There are two main types of processes based on how they work with each other:
1. Independent Processes – These processes do not affect each other. They can run
separately without causing any problems for the other.
2. Cooperative Processes – These processes depend on each other. The result of
one process may affect the other, so they need to be carefully synchronized to avoid
errors.

Race Condition
A race condition happens when two processes try to access and change the same resource
at the same time. For example, if two processes try to change the value of a variable called
counter, the final value of counter might be incorrect because both changes can happen at
the same time. The order of execution matters, and this can lead to wrong results.
To avoid race conditions, we must make sure that only one process changes the shared data
at a time. This is done using synchronization methods.
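A minimal sketch of such a race in C with Pthreads (the loop count is arbitrary): two threads increment a shared counter without synchronization, so updates are lost and the final value is usually less than expected.

#include <pthread.h>
#include <stdio.h>

int counter = 0;  /* shared variable */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;  /* read-modify-write: not atomic, updates can be lost */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* expected 200000, often prints less */
    return 0;
}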
Critical Section Problem
A critical section is a part of the program where a process accesses shared resources. If
more than one process enters its critical section at the same time, they may change the data
in an unsafe way.
The challenge is to create a way for processes to take turns and avoid entering their critical
sections at the same time. This is called solving the critical section problem. Each process
must request permission before entering the critical section, and after finishing, it should
allow others to enter.

Structure of a Process with Critical Section


Each process follows a structure like this:

do {
    entry section        // ask permission to enter
    critical section     // access shared resources
    exit section         // exit and give others a chance
    remainder section    // continue normal operations
} while (true);

This structure ensures that the critical section is accessed safely.

Requirements of a Good Solution


To solve the critical section problem properly, any solution must follow these three rules:
1. Mutual Exclusion – Only one process can be in the critical section at a time.
2. Progress – If no process is in the critical section, and some processes want to enter,
the system should not delay their entry unnecessarily.
3. Bounded Waiting – After a process asks to enter, there should be a limit on how
many other processes can enter before it gets a chance.

Peterson’s Solution
Peterson’s solution is a simple algorithm to solve the critical section problem for two
processes. It uses two shared variables:
• flag[2] – tells whether a process wants to enter the critical section.
• turn – tells whose turn it is to enter the critical section.
Each process sets its flag to true and gives the turn to the other process. Then it waits if the
other process also wants to enter and it’s the other process’s turn. Otherwise, it enters the
critical section safely.
This method follows all three rules: mutual exclusion, progress, and bounded waiting.
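A sketch of Peterson's solution in C for processes 0 and 1 (the enter/leave helper names are illustrative; note that on modern out-of-order processors this classical textbook form also needs memory barriers to be safe):

volatile int flag[2] = {0, 0};  /* flag[i]: process i wants to enter */
volatile int turn = 0;          /* whose turn it is to yield */

void enter(int i) {             /* entry section for process i */
    int j = 1 - i;
    flag[i] = 1;                /* announce intent */
    turn = j;                   /* give the turn to the other process */
    while (flag[j] && turn == j)
        ;                       /* wait while the other wants in and has the turn */
}

void leave(int i) {             /* exit section */
    flag[i] = 0;                /* no longer interested */
}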

Producer-Consumer Problem

This is a common example of a synchronization problem. There is a producer that creates data and stores it in a buffer, and a consumer that takes data from the buffer and uses it. The buffer has limited space.
The producer must wait if the buffer is full. The consumer must wait if the buffer is empty. Also, the producer and consumer should not use the buffer at the same time; if they do, it may lead to incorrect data or errors.

Important Terms in Producer-Consumer Problem

• in – the next empty slot in the buffer where the producer will place data.
• out – the next filled slot in the buffer from which the consumer will take data.
• counter – keeps track of how many items are currently in the buffer.
Using proper synchronization, we can ensure that the producer and consumer do not interfere with each other and that the buffer is used safely, as the sketch below shows.
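A sketch of how in, out, and counter are used in the unsynchronized bounded buffer (BUFFER_SIZE and the busy-waits are illustrative); the two counter updates are exactly where the race condition arises.

#define BUFFER_SIZE 5
int buffer[BUFFER_SIZE];
int in = 0, out = 0, counter = 0;

/* Producer: place an item at the next empty slot. */
void produce(int item) {
    while (counter == BUFFER_SIZE)
        ;                              /* buffer full: wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    counter++;                         /* unsafe without synchronization */
}

/* Consumer: take an item from the next filled slot. */
int consume(void) {
    while (counter == 0)
        ;                              /* buffer empty: wait */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;                         /* unsafe without synchronization */
    return item;
}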

LOCKS, LOCKED DATA STRUCTURES, CONDITION VARIABLES, MUTEX.


1. What is a Lock?
A lock is like a “do not enter” sign. It makes sure that only one thread (a small part of a
program) can use some shared data at a time. This is important when more than one thread
wants to change the same thing, like a bank balance. If two threads try to add money at the
same time, we might get wrong results. So, we put a lock around that part of the code to
stop this from happening.
2. States of a Lock
A lock can be in two states. First is "available" — no thread is using it, so anyone can take it.
Second is "acquired" — one thread is using it, and others must wait their turn.
3. What Happens in lock()?
When a thread calls lock(), it tries to take the lock. If the lock is free, the thread takes it and
goes into the critical section (the important code). Other threads that come later will wait until
this thread finishes and calls unlock().
4. Evaluating a Lock
We check three things to know if a lock is good. First: does it stop more than one thread
from entering the critical section (mutual exclusion)? Second: does it give everyone a fair
turn (fairness)? Third: is it fast or does it slow down the program (performance)?
5. Disabling Interrupts
In old single-CPU computers, we could stop all other processes by turning off interrupts. This
worked well back then, but it doesn’t work now because modern systems have more than
one processor.
6. Hardware Support for Locks
Modern CPUs give us special instructions to make safe locks. These instructions — like Test
and Set, Compare and Swap, etc. — help a thread to check and update a value in one step,
so no one else can change it in between.
7. Spinlocks
A spinlock is a simple lock made using those CPU instructions. If a thread finds the lock is
taken, it keeps checking again and again in a loop. This is fine if the lock is released quickly,
but if not, it wastes CPU time.
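A minimal spinlock sketch using C11's atomic test-and-set instruction (atomic_flag):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* atomic_flag_test_and_set sets the flag and returns its previous
     * value in one atomic step, so no other thread can slip in between. */
    while (atomic_flag_test_and_set(&lock))
        ;  /* spin: retry until the previous value was 0 (lock was free) */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);  /* release: set the flag back to 0 */
}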
8. Ticket Locks and Futex
Ticket locks are like taking a number at a store. Each thread gets a number and waits for its
turn — fair and simple. Futex (Fast Userspace Mutex) is a better lock on Linux that makes
waiting threads go to sleep instead of checking again and again, which saves CPU time.
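A ticket lock sketch using C11 atomics (a teaching version of the "take a number" idea):

#include <stdatomic.h>

atomic_uint next_ticket = 0;  /* the number dispenser */
atomic_uint now_serving = 0;  /* the number currently being served */

void ticket_lock(void) {
    unsigned my_ticket = atomic_fetch_add(&next_ticket, 1);  /* take a number */
    while (atomic_load(&now_serving) != my_ticket)
        ;  /* wait for our turn; FIFO order makes this fair */
}

void ticket_unlock(void) {
    atomic_fetch_add(&now_serving, 1);  /* call the next number */
}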
9. Two-Phase Locks
This lock works in two steps. First, the thread spins (waits by checking quickly). If the lock is
still not free after a short time, the thread goes to sleep and waits. This way, we use the best
of both methods.
10. Locks in Data Structures
We use locks to protect shared data like counters, lists, or queues. One big lock for
everything is simple but slow. Using small locks (like one per list item) lets many threads
work at the same time, which is faster.
11. Sloppy Counter
Instead of one shared counter for all threads, we use many small counters — one for each
CPU. Each thread updates its local counter, and later, we add them together. This avoids
fighting over the same counter.
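A sloppy counter sketch with Pthreads (NCPU and THRESHOLD are assumed values; a real implementation would also map each thread to its CPU):

#include <pthread.h>

#define NCPU 4         /* assumed number of CPUs */
#define THRESHOLD 1024 /* how "sloppy" the counter is allowed to be */

int global = 0;
pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

int local[NCPU];
pthread_mutex_t local_lock[NCPU] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Each thread updates the counter for its own CPU; the local value is
 * transferred to the global counter only when it reaches THRESHOLD. */
void sloppy_increment(int cpu) {
    pthread_mutex_lock(&local_lock[cpu]);
    local[cpu]++;
    if (local[cpu] >= THRESHOLD) {
        pthread_mutex_lock(&global_lock);
        global += local[cpu];
        pthread_mutex_unlock(&global_lock);
        local[cpu] = 0;
    }
    pthread_mutex_unlock(&local_lock[cpu]);
}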
12. Concurrent Lists and Queues
In a linked list, we can use one lock for the whole list, or use a separate lock for each item
(called hand-over-hand locking). This lets many threads work at once. A queue can use two
locks — one for adding and one for removing — so they don’t block each other.
13. Concurrent Hash Tables
A hash table has buckets (like boxes). Each bucket has its own lock. So, different threads
can use different buckets at the same time without any problem. This makes things faster.
14. Condition Variables
Sometimes a thread needs to wait for something — like a task to finish. We use condition
variables for that. wait() makes a thread sleep until something changes. signal() wakes it up
when it’s time to continue.
15. Producer-Consumer Problem
One thread (producer) adds items to a buffer. Another thread (consumer) removes them. We
make sure that the producer doesn’t add if the buffer is full, and the consumer doesn’t
remove if it’s empty. Condition variables help manage this.
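A minimal sketch of this with Pthread condition variables (buffer details are left out; MAX and the helper names are illustrative):

#include <pthread.h>

#define MAX 10
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
int count = 0;  /* items currently in the buffer */

void produce_one(void) {
    pthread_mutex_lock(&m);
    while (count == MAX)                   /* buffer full: go to sleep */
        pthread_cond_wait(&not_full, &m);  /* atomically unlocks m and waits */
    count++;                               /* put an item (details omitted) */
    pthread_cond_signal(&not_empty);       /* wake a waiting consumer */
    pthread_mutex_unlock(&m);
}

void consume_one(void) {
    pthread_mutex_lock(&m);
    while (count == 0)                     /* buffer empty: go to sleep */
        pthread_cond_wait(&not_empty, &m);
    count--;                               /* take an item */
    pthread_cond_signal(&not_full);        /* wake a waiting producer */
    pthread_mutex_unlock(&m);
}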
16. Mutex Locks
A mutex is the simplest kind of lock. Only one thread can enter the critical section using it.
The basic steps are: lock it → run the important code → unlock it. This keeps shared data
safe from problems.
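A minimal sketch of the lock → run → unlock pattern with a Pthread mutex, using the bank-balance example from above (the deposit function is illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int balance = 0;  /* shared data */

void *deposit(void *arg) {
    pthread_mutex_lock(&m);    /* lock it */
    balance += 100;            /* run the important code */
    pthread_mutex_unlock(&m);  /* unlock it */
    return NULL;
}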

SEMAPHORE AND CLASSIC CONCURRENCY PROBLEMS


A semaphore is a synchronization tool used to control access to a shared resource in a
concurrent system like a computer program. It helps prevent conflicts when multiple threads
or processes try to access the same resource at the same time. Semaphores are mainly
used to manage the number of threads that can access a resource simultaneously. They are
implemented using a simple counter: when the counter is greater than zero, threads can
proceed; when it is zero, threads must wait until the counter increases. The two main types
of semaphores are binary semaphores and counting semaphores.
• A binary semaphore (often used like a mutex) has only two states: 0 or 1. It acts like a lock; when it is 1, a thread can access the resource, and when it is 0, the resource is locked and other threads must wait. This is useful when only one thread should access a resource at a time.
• A counting semaphore, on the other hand, allows a certain number of threads to access the resource simultaneously. For example, if the semaphore's counter is set to 3, up to 3 threads can access the resource at once. This type is useful when there is a limited pool of resources (such as a limited number of printers or database connections).
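A minimal counting-semaphore sketch using the POSIX semaphore API (the connection-pool scenario and names are illustrative):

#include <semaphore.h>
#include <pthread.h>

sem_t slots;  /* counting semaphore: number of free connections */

void *use_connection(void *arg) {
    sem_wait(&slots);   /* decrement; blocks while the counter is 0 */
    /* ... use one of the limited connections ... */
    sem_post(&slots);   /* increment; wakes a waiting thread if any */
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 3);  /* up to 3 threads may proceed at once */
    /* ... create threads that call use_connection ... */
    sem_destroy(&slots);
    return 0;
}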

1. Bounded Buffer Problem (Producer-Consumer Problem)

This problem involves two processes: a producer, which generates data and puts it into a buffer, and a consumer, which takes data out of the buffer. The buffer has a limited (bounded) size.
• The producer must wait if the buffer is full.
• The consumer must wait if the buffer is empty.
To avoid race conditions, we use semaphores:
• mutex to control access to the buffer,
• empty to count empty slots,
• full to count filled slots.
This ensures that the producer and consumer work in sync, without overwriting or reading invalid data.
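A sketch of this scheme in C (assume the semaphores are initialized elsewhere as mutex=1, empty=N, full=0; N is an illustrative buffer size):

#include <semaphore.h>

#define N 5
int buffer[N];
int in = 0, out = 0;
sem_t mutex, empty, full;  /* init: mutex=1, empty=N, full=0 */

void producer_put(int item) {
    sem_wait(&empty);        /* wait for an empty slot */
    sem_wait(&mutex);        /* enter critical section */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);         /* announce one more filled slot */
}

int consumer_get(void) {
    sem_wait(&full);         /* wait for a filled slot */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);        /* announce one more empty slot */
    return item;
}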
Real-Life Scenario:
Imagine a restaurant where a chef (producer) cooks food and a waiter (consumer) serves it to the customers. The kitchen (buffer) has limited space to hold the food (say, 5 plates).
Classic Problem:
• If the kitchen is full (5 plates), the chef cannot cook more food and must wait until the waiter serves some plates.
• If the kitchen is empty, the waiter must wait for the chef to cook more food.
• The chef and waiter must synchronize their actions: the chef cannot add food if there is no space, and the waiter cannot serve if there is nothing to serve.

2. Readers-Writers Problem
This problem involves a shared resource (such as a database) that is accessed by multiple readers and writers.
• Multiple readers can read at the same time.
• But if a writer is writing, no other reader or writer can access the resource.
There are two versions:
• First readers-writers problem (reader priority): readers are not kept waiting if no writer is active.
• Second readers-writers problem (writer priority): writers are not starved by continuous readers.
We use semaphores to protect readCount (the number of active readers) and to enforce mutual exclusion when needed.
Real-Life Scenario:
Imagine a library with a logbook that many readers want to access, but that writers sometimes need to update.
Classic Problem:
• Multiple readers can read the logbook at the same time, but only one writer can update it.
• If a writer is updating the logbook, no one can read it.
• The challenge: if a reader starts reading and then a writer wants to write, the writer must wait until the logbook is free.
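A sketch of the reader-priority (first) version with POSIX semaphores (assume rw_mutex and mutex are initialized to 1; the reader/writer bodies are placeholders):

#include <semaphore.h>

sem_t rw_mutex;      /* excludes writers; also held by the reader group */
sem_t mutex;         /* protects read_count */
int read_count = 0;  /* number of active readers */

void reader(void) {
    sem_wait(&mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);   /* first reader locks out writers */
    sem_post(&mutex);
    /* ... read the shared data ... */
    sem_wait(&mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);   /* last reader lets writers in */
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&rw_mutex);       /* exclusive access */
    /* ... write the shared data ... */
    sem_post(&rw_mutex);
}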

3. Dining Philosophers Problem

Imagine 5 philosophers sitting around a table. Each philosopher thinks and eats. To eat, a philosopher needs 2 chopsticks (left and right). There are only 5 chopsticks, one between each pair of philosophers.
• If all pick up their left chopstick first, deadlock can happen because everyone waits forever for the right one.
We solve this using semaphores and strategies such as:
• limiting the number of philosophers who can pick up chopsticks at once (4 out of 5),
• reversing the order of picking up chopsticks for one philosopher, or
• using a mutex and a state array to track hungry, eating, and thinking philosophers.
Real-Life Scenario:
Five philosophers sit around a circular table. Each has a plate of noodles in front of them, but needs two chopsticks to eat (one on the left and one on the right). The chopsticks are placed between each pair of philosophers.
Classic Problem:
• If all philosophers simultaneously grab the chopstick to their left, each will be holding one chopstick but not the second one.
• This results in a deadlock: no one can eat!
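A sketch of the "reverse the order for one philosopher" fix with POSIX semaphores (assume each chopstick semaphore is initialized to 1; the eat step is a placeholder):

#include <semaphore.h>

#define N 5
sem_t chopstick[N];  /* one binary semaphore per chopstick, each init to 1 */

void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    /* The last philosopher picks up the right chopstick first.
     * This asymmetry breaks the circular wait, so no deadlock. */
    int first  = (i == N - 1) ? right : left;
    int second = (i == N - 1) ? left  : right;
    sem_wait(&chopstick[first]);
    sem_wait(&chopstick[second]);
    /* ... eat ... */
    sem_post(&chopstick[second]);
    sem_post(&chopstick[first]);
}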

4. Sleeping Barber Problem

There is one barber, one barber chair, and a waiting room with a limited number of chairs.
• If there are no customers, the barber sleeps.
• When a customer arrives:
   o If a chair is free, the customer sits and waits.
   o If no chairs are free, the customer leaves.
• When the barber finishes a haircut, he calls a customer from the waiting room.
This simulates real-world scheduling and is solved using semaphores:
• one for the barber,
• one for customers,
• one mutex for updating the waiting count.
Real-Life Scenario:
A barber runs a shop with one barber chair and several waiting chairs. When no customers are in the shop, the barber sleeps. If a customer comes in:
• If there is a free waiting chair, the customer waits until the barber is done.
• If there is no space to sit, the customer leaves.
• Once the barber finishes with one customer, he calls the next one.
Classic Problem:
• The barber cannot sleep if someone is waiting for a haircut, but if no one is there, he sleeps.
• Each customer must decide whether to wait or leave depending on whether there is space.
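A sketch of this with POSIX semaphores (assume customers=0, barber=0, mutex=1 at initialization; CHAIRS is an illustrative waiting-room size):

#include <semaphore.h>

#define CHAIRS 3
sem_t customers;   /* counts waiting customers; the barber sleeps on it */
sem_t barber;      /* signals barber readiness to the next customer */
sem_t mutex;       /* protects the waiting count */
int waiting = 0;   /* customers currently in waiting chairs */

void customer(void) {
    sem_wait(&mutex);
    if (waiting < CHAIRS) {
        waiting++;
        sem_post(&customers);  /* wake the barber if he is asleep */
        sem_post(&mutex);
        sem_wait(&barber);     /* wait to be called */
        /* ... get haircut ... */
    } else {
        sem_post(&mutex);      /* no free chair: leave */
    }
}

void barber_loop(void) {
    for (;;) {
        sem_wait(&customers);  /* sleep until a customer arrives */
        sem_wait(&mutex);
        waiting--;
        sem_post(&barber);     /* call the next customer */
        sem_post(&mutex);
        /* ... cut hair ... */
    }
}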
Deadlock in Simple Terms
A deadlock occurs when two or more processes in a system are blocked because
they are each holding a resource and waiting for another resource that is held by
another process. In other words, they get stuck waiting for each other to release the
resources, and as a result, none of them can make progress. For instance, imagine
two processes, P1 and P2. If P1 holds a resource R1 and waits for R2, while P2
holds R2 and waits for R1, they are in a deadlock because they are both waiting for
each other’s resources indefinitely.
A deadlock situation can be characterized by four necessary conditions:
1. Mutual Exclusion: Only one process can hold a resource at a time.
2. Hold and Wait: A process holds at least one resource and is waiting for other
resources.
3. No Preemption: A resource cannot be forcibly taken from a process holding it.
4. Circular Wait: A set of processes are waiting for each other in a circular chain.
Example with Resource Allocation
Consider a scenario with 3 processes (P1, P2, P3) and 2 types of resources (R1,
R2). Suppose P1 holds R1, P2 holds R2, and P3 is waiting for both resources. This is
not yet a deadlock because no cycle is formed in the system. However, if P1 waits for
R2 (held by P2), and P2 waits for R1 (held by P1), a cycle forms, creating a
deadlock.
Methods of Handling Deadlocks
1. Deadlock Prevention: This method stops deadlocks before they happen by making
sure at least one of the following doesn't occur:
o Mutual Exclusion: Some resources can only be used by one process at a
time.
o Hold and Wait: Processes must ask for all resources at once or give back the
ones they have when waiting for others.
o No Preemption: Resources can be taken from a process if needed by
another.
o Circular Wait: Processes must ask for resources in a set order to avoid a
waiting loop.
2. Deadlock Avoidance: This method ensures deadlocks never happen by checking whether granting a resource to a process could lead to an unsafe state. The Banker's Algorithm grants resources only if the system stays in a safe state (a sketch of its safety check follows this list).
3. Deadlock Detection: This method allows deadlocks to happen but checks for them
later.
o Resource Allocation Graph (RAG): This graph helps find deadlocks by
showing which processes hold or need which resources.
4. Recovery from Deadlock: If a deadlock is found, it can be fixed by:
o Process Termination: Stop one or more processes to break the deadlock.
o Resource Preemption: Take resources from a process and give them to
another.
o Rollback: Restart processes from a safe point if resources were taken away.
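A sketch of the Banker's Algorithm safety check in C (P, R, and the matrix layout are illustrative; need[i][j] is process i's remaining claim on resource j):

#include <stdbool.h>
#include <string.h>

#define P 3  /* processes */
#define R 2  /* resource types */

/* Returns true if the system is in a safe state: there exists an order in
 * which every process can obtain its remaining need and then finish. */
bool is_safe(int available[R], int allocation[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];  /* i finishes, releases all */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;  /* no process can proceed: unsafe state */
    }
    return true;
}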

Difference Between Deadlock and Starvation

Point           | Deadlock                                                    | Starvation
1. What is it?  | All processes get stuck; no one can move forward.           | A low-priority process keeps waiting while high-priority ones keep working.
2. Waiting Time | Waiting forever (infinite waiting).                         | Waiting for a long time, but not forever.
3. Relationship | Every deadlock is also a starvation.                        | Not all starvation is a deadlock.
4. Why blocked? | One process holds a resource and waits for another that is also being held. | The resource keeps going to high-priority processes again and again.
5. Cause        | Happens when 4 things hold at once: mutual exclusion, hold and wait, no preemption, and circular wait. | Happens when priority rules are not managed properly.

Note: Read the last PPT in CO3 and practise Banker's Algorithm problems (for example, from YouTube tutorials).
