Unit III

Unit-III discusses concurrency control and Inter-Process Communication (IPC) in concurrent systems, emphasizing the importance of synchronization mechanisms to prevent race conditions and manage shared resources. Key topics include process synchronization, critical section problems, classic synchronization problems, and various software and hardware solutions such as semaphores and mutexes. The unit also highlights the applications of synchronization in multi-threading, database management, and networking.

Unit-III: Concurrency Control (IPC)

In concurrent systems, multiple processes or threads execute simultaneously. To ensure correct and efficient execution,
Inter-Process Communication (IPC) and Concurrency Control are essential. This unit focuses on various synchronization
mechanisms that coordinate processes, avoid race conditions, and manage shared resources.

1. Process Synchronization

Process synchronization is a mechanism to ensure that multiple processes or threads can access shared data or
resources without conflicts. The primary goal is to prevent race conditions where multiple processes access shared
resources simultaneously, leading to incorrect results.

• Race Condition: A situation where two or more processes read or write shared data, and the final outcome
depends on the order of execution, which can be unpredictable.
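As a minimal sketch in Python (using threads within one process for simplicity), the following deliberately widens the read-modify-write window with a short sleep so the lost-update race shows up reliably; without synchronization, one thread's write overwrites the other's:

```python
import threading
import time

counter = 0

def unsafe_increment(n):
    """Read-modify-write with a deliberate pause between the read and the
    write, so the two threads interleave and updates are lost."""
    global counter
    for _ in range(n):
        tmp = counter        # read the shared value
        time.sleep(0.0001)   # widen the race window
        counter = tmp + 1    # write back; may overwrite another thread's update

t1 = threading.Thread(target=unsafe_increment, args=(100,))
t2 = threading.Thread(target=unsafe_increment, args=(100,))
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # expected 200, but lost updates typically leave it far lower
```

The final value depends on how the two threads happened to interleave, which is exactly the unpredictability the definition above describes.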

2. Critical Section Problem

A critical section is a segment of code where a process accesses shared resources like variables or files. To ensure that
only one process accesses the critical section at any given time, we need synchronization techniques.

• Critical Section Problem: The problem of designing a protocol that ensures no two processes are in their
critical sections at the same time. Achieving this mutual exclusion is the core of the problem.

A solution to the critical section problem must satisfy the following conditions:

1. Mutual Exclusion: Only one process can execute the critical section at a time.
2. Progress: If no process is in the critical section and some processes wish to enter, one of the waiting
processes must be allowed to enter, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: A limit exists on how many times other processes are allowed to enter the critical
section before a waiting process gets a turn.

3. Classic Problems of Synchronization

These are classical problems used to illustrate synchronization issues and how they can be solved. Common problems
include:

1. Producer-Consumer Problem (Bounded Buffer Problem):
o Producers generate data and place it into a buffer.
o Consumers retrieve data from the buffer.
o Synchronization is needed to prevent race conditions when accessing the shared buffer (e.g., to avoid
adding to a full buffer or removing from an empty one).
2. Dining Philosophers Problem:
o Five philosophers sit at a table with five forks.
o Each philosopher needs two forks to eat, leading to a potential deadlock if all pick up one fork at the
same time.
o The problem focuses on preventing deadlock and ensuring progress.
3. Readers-Writers Problem:
o Multiple processes want to read or write a shared data item.
o The goal is to ensure that readers can read simultaneously but writers need exclusive access.
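The bounded-buffer problem can be sketched with the classic three-semaphore solution (empty counts free slots, full counts filled slots, and a mutex guards the buffer itself); this minimal Python version uses two threads and the standard threading module:

```python
import threading

BUFFER_SIZE = 4
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # guards the buffer itself

consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot (blocks if buffer is full)
        with mutex:
            buffer.append(item)
        full.release()         # signal one filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait for a filled slot (blocks if buffer is empty)
        with mutex:
            item = buffer.pop(0)
        empty.release()        # signal one free slot
        consumed.append(item)

items = list(range(20))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items arrive in FIFO order: [0, 1, ..., 19]
```

Note how the semaphores handle the full/empty conditions while the mutex handles the race on the buffer; neither alone is sufficient.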

4. Software Solutions for Synchronization Problem

Software-based synchronization solutions involve algorithms and techniques that ensure mutual exclusion, progress,
and bounded waiting.
• Peterson’s Algorithm: A well-known algorithm for solving the critical section problem for two processes. It uses
two shared variables (a flag array and a turn variable) to ensure mutual exclusion; a waiting process busy-waits
(spins) until it is safe to enter.
• Bakery Algorithm: This algorithm provides a solution for multiple processes. Each process gets a "number" and
enters the critical section in ascending order of their number, ensuring fairness.
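A minimal sketch of Peterson’s algorithm follows. It relies on CPython’s global interpreter lock making the simple assignments effectively sequentially consistent; a real implementation in C would additionally need atomic operations and memory barriers:

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so spinning resolves quickly

flag = [False, False]  # flag[i] is True while process i wants to enter
turn = 0               # index of the process being given priority
counter = 0

def worker(i, n):
    global turn, counter
    other = 1 - i
    for _ in range(n):
        # --- entry section ---
        flag[i] = True
        turn = other                         # politely yield priority
        while flag[other] and turn == other:
            pass                             # busy-wait (spin) until safe
        # --- critical section ---
        counter += 1
        # --- exit section ---
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: mutual exclusion held, no increments lost
```

A process enters only when the other is not interested (flag[other] is False) or has yielded priority (turn != other), which gives mutual exclusion, progress, and bounded waiting for two processes.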

5. Hardware Solutions for Synchronization Problem

Hardware solutions rely on specific hardware instructions or mechanisms to handle synchronization efficiently without
needing complex software algorithms. Examples include:

1. Test-and-Set Instruction:
o A special atomic operation that checks and modifies a memory word simultaneously.
o It helps in creating mutual exclusion by locking a variable (or flag) when a process is entering a critical
section.
2. Swap Instruction:
o Another atomic instruction that swaps the values of two variables. It can be used to implement locks by
swapping a lock variable with a specific flag.
3. Disable Interrupts:
o A process can disable interrupts to ensure that no context switch happens during the execution of a
critical section. However, this only works on single-processor systems and cannot safely be exposed to
user-level processes, so it is used sparingly and only inside the kernel.

6. Applications of Synchronization

Synchronization mechanisms are widely applied in the following scenarios:

• Multi-threading in Operating Systems: Ensuring threads in a process can work concurrently without conflicts.
• Database Management Systems: Controlling concurrent access to the database by multiple users.
• Networking: Coordinating packet transmission between processes.
• Shared Memory Systems: Managing memory access to avoid inconsistencies when multiple processes read or
write to the same memory area.

7. Semaphore

A semaphore is a synchronization tool that is used to control access to a common resource by multiple processes. There
are two types of semaphores:

• Binary Semaphore (similar to a mutex): Can take values 0 or 1 and is used for mutual exclusion.
• Counting Semaphore: Can take any integer value and is used to manage a resource with a limited number of
instances.

Operations on a semaphore:

• wait() or P(): Decreases the semaphore value. If the semaphore’s value is zero, the process is blocked.
• signal() or V(): Increases the semaphore value and if there are waiting processes, one is unblocked.
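A minimal sketch using Python’s threading.Semaphore as a counting semaphore guarding a resource with three instances; acquire() and release() correspond to wait()/P() and signal()/V():

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore: 3 resource instances
active = 0                      # how many workers currently hold an instance
peak = 0                        # highest concurrency observed
state_lock = threading.Lock()   # guards the two counters above

def worker():
    global active, peak
    pool.acquire()              # wait()/P(): blocks once all 3 instances are taken
    with state_lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)            # hold the resource briefly
    with state_lock:
        active -= 1
    pool.release()              # signal()/V(): free one instance

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```

With the initial value set to 1 instead of 3, the same code behaves as a binary semaphore and admits only one worker at a time.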

8. Mutex (Mutual Exclusion Object)

A mutex is a lock mechanism that ensures only one thread can access a resource at a time. It is simpler than a
semaphore and provides mutual exclusion in a critical section.

• When a thread locks a mutex, no other thread can access the critical section until the mutex is unlocked.
• Mutexes are primarily used for short sections of code that require exclusive access to shared data.
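As a minimal sketch, two threads increment a shared counter with Python’s threading.Lock serving as the mutex; holding the lock around the increment makes the read-modify-write atomic with respect to the other thread:

```python
import threading

counter = 0
mutex = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with mutex:        # lock: only one thread may hold it at a time
            counter += 1   # critical section
        # the mutex is unlocked automatically when the with-block exits

threads = [threading.Thread(target=safe_increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # exactly 200000: no updates lost
```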

9. Monitor
A monitor is a high-level synchronization construct that encapsulates shared variables, procedures, and data structures.
It automatically provides mutual exclusion by allowing only one process to execute any of its procedures at a time.

Monitors also include condition variables that processes can use to wait for certain conditions to become true.

• Condition Variables: Used with two operations:
o wait(): Makes the process wait (releasing the monitor lock) until a condition becomes true.
o signal(): Wakes up a process that is waiting for a condition to become true.
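A monitor can be approximated in Python with a class whose methods all synchronize on a single condition variable; this hypothetical one-slot Mailbox class sketches the idea (threading.Condition bundles the monitor lock with wait/notify):

```python
import threading

class Mailbox:
    """Monitor-style class: one condition variable (and its lock) guards
    every method, so at most one thread runs inside the monitor at a time."""

    def __init__(self):
        self._nonempty = threading.Condition()  # monitor lock + condition
        self._slot = None
        self._has_value = False

    def put(self, value):
        with self._nonempty:            # enter the monitor (mutual exclusion)
            self._slot = value
            self._has_value = True
            self._nonempty.notify()     # signal(): wake one waiting reader

    def get(self):
        with self._nonempty:
            while not self._has_value:
                self._nonempty.wait()   # wait(): release the lock and sleep
            self._has_value = False
            return self._slot

box = Mailbox()
result = []
reader = threading.Thread(target=lambda: result.append(box.get()))
reader.start()          # blocks inside get() until something is put
box.put("hello")
reader.join()
print(result)  # ['hello']
```

The while-loop around wait() is the standard monitor idiom: a woken thread re-checks the condition, because it may have changed again before the thread reacquired the lock.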

10. Event Counters

Event counters are another synchronization primitive used in real-time systems to synchronize based on the occurrence
of events. They maintain a counter of how many times an event has occurred and are often combined with semaphores
or other synchronization mechanisms to manage system events efficiently.
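Event counters are not a standard library primitive, but a minimal version, with advance() to record an event and an await operation (named await_ here, since await is a Python keyword) to block until n events have occurred, can be sketched on top of a condition variable:

```python
import threading

class EventCounter:
    """Minimal event counter: advance() records one occurrence of the event;
    await_(n) blocks until at least n occurrences have been recorded."""

    def __init__(self):
        self._cond = threading.Condition()
        self._count = 0

    def advance(self):
        with self._cond:
            self._count += 1
            self._cond.notify_all()   # wake every thread waiting on a threshold

    def await_(self, n):
        with self._cond:
            while self._count < n:
                self._cond.wait()
            return self._count

ec = EventCounter()
seen = []

def waiter():
    seen.append(ec.await_(3))   # blocks until the 3rd event occurs

t = threading.Thread(target=waiter)
t.start()
for _ in range(3):
    ec.advance()                # record three events
t.join()
print(seen)  # [3]
```

Unlike a semaphore, awaiting does not consume events: the counter only ever increases, so any number of threads can wait on the same threshold.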
