Conditions For Critical Section in OS
The critical section is a code segment in which shared variables or resources are
accessed. Three conditions must be satisfied by any correct solution to the critical
section problem in concurrent programming. Let's explore each of them and provide an
example to illustrate their significance.
1. Mutual Exclusion: Only one process or thread can access the critical section at a
time. This condition ensures that concurrent access to shared resources does not
lead to data corruption or inconsistent results. When a process/thread is
executing in the critical section, all other processes/threads must be excluded
from entering the critical section.
2. Progress: If no process/thread is executing in the critical section and one or
more are waiting to enter, the decision about which one enters next cannot be
postponed indefinitely. This condition ensures that the critical section does
not sit idle while waiting processes/threads stall forever.
3. Bounded Waiting: There is a bound on the number of times other
processes/threads can enter the critical section after a process/thread has
requested entry and before that request is granted. This condition prevents
starvation, ensuring that every process/thread eventually gets its turn to
enter the critical section.
To fulfill these conditions, various synchronization mechanisms can be used, such as
locks, semaphores, and monitors. These mechanisms enforce mutual exclusion, coordinate
process/thread execution, and ensure that the three conditions above are met.
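As a concrete illustration of mutual exclusion, here is a minimal sketch in C using a
POSIX mutex (one common kind of lock); the shared variable, thread count, and function
names are illustrative assumptions rather than part of the original material.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                    /* shared variable */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);          /* enter critical section */
            counter++;                          /* one thread at a time runs this */
            pthread_mutex_unlock(&lock);        /* exit critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);     /* always 200000 with the lock */
        return 0;
    }

Without the lock, the two threads' increments could interleave, and the final count
would be unpredictable; this is exactly the data corruption that mutual exclusion
prevents.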
IPC, or Interprocess Communication, refers to the methods and techniques that allow different
processes or programs to communicate and share data with each other. In simpler terms, it is a
way for separate programs running on the same computer or on different computers to talk to
each other and exchange information.
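As a minimal sketch of one common IPC mechanism on POSIX systems, the example below
creates an anonymous pipe and forks a child that sends a short message back to its
parent; the message text is an illustrative assumption.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                      /* fd[0]: read end, fd[1]: write end */
        char buf[32];

        pipe(fd);
        if (fork() == 0) {              /* child process: writes a message */
            close(fd[0]);
            write(fd[1], "hello", 6);
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                   /* parent process: reads the message */
        read(fd[0], buf, sizeof buf);
        printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }

Other standard IPC mechanisms include message queues, shared memory, signals, and
sockets; pipes are simply the easiest to demonstrate.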
1. Mutual Exclusion:
Mutual exclusion ensures that only one process or thread can access a
shared resource or critical section at a time.
2. Semaphore:
A semaphore is a synchronization object that maintains a counter to
control access to a shared resource.
It allows multiple processes or threads to access the resource
simultaneously up to a certain limit defined by the semaphore's value.
When the limit is reached, subsequent processes/threads must wait until
the resource becomes available.
Semaphores can be used to implement various synchronization patterns,
such as limiting concurrent access to a resource or signaling between
processes/threads.
3. Barrier:
A barrier is a synchronization point that ensures that all participating
processes or threads reach a certain point before any of them can
proceed.
When a process/thread arrives at the barrier, it waits until all other
processes/threads have also arrived. Once all have arrived, they are
released simultaneously, allowing them to continue execution (a sketch
appears after this list).
4. Spinlock:
A spinlock is a synchronization primitive that allows a process or thread to
repeatedly check for the availability of a lock (spin) rather than blocking or
putting itself to sleep.
While spinning, the process/thread continuously polls the lock until it
becomes available, thereby avoiding context switches and potential thread
scheduling overhead.
Spinlocks are typically used in scenarios where it is expected that the lock
will be released quickly, and the waiting time is short. If the lock is held for
an extended period, spinning may waste CPU resources, and an alternative
synchronization mechanism should be used.
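To make the barrier behavior concrete, here is a hedged sketch using the POSIX
pthread_barrier_t API: each thread finishes "phase 1", waits at the barrier, and only
then may any thread begin "phase 2". The phase names and thread count are illustrative
assumptions.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 3

    static pthread_barrier_t barrier;

    static void *phase_worker(void *arg) {
        long id = (long)arg;
        printf("thread %ld: finished phase 1\n", id);
        pthread_barrier_wait(&barrier);    /* block until all NTHREADS arrive */
        printf("thread %ld: starting phase 2\n", id);  /* released together */
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, phase_worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }

For spinlocks, POSIX offers an analogous primitive, pthread_spinlock_t, with
pthread_spin_lock and pthread_spin_unlock operations.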
There are two types of semaphores commonly used in operating systems and
concurrent programming: binary semaphore and counting semaphore.
1. Binary Semaphore:
A binary semaphore is a synchronization primitive with two states: 0 and 1.
It is often used for mutual exclusion, where only one process or thread can
access a shared resource or critical section at a time.
A binary semaphore can be thought of as a simple lock. It is initially set to
1 (unlocked), and when a process/thread acquires the semaphore, it
becomes 0 (locked). If another process/thread tries to acquire the
semaphore while it is locked, it will be blocked or put to sleep until the
semaphore is released.
2. Counting Semaphore:
A counting semaphore is a synchronization primitive with a non-negative
integer value.
It is used to control access to a set of resources or to limit the number of
processes/threads that can access a resource simultaneously.
When a process/thread wants to access a resource, it must acquire the
counting semaphore. If the semaphore value is greater than 0, it is
decremented, allowing access to the resource. If the semaphore value is 0,
the process/thread is blocked or put to sleep until the semaphore value
becomes greater than 0 (signaling availability of the resource).
When a process/thread releases the counting semaphore, the semaphore
value is incremented, indicating that a resource is now available for
another process/thread to use.
Both binary and counting semaphores are used to coordinate and control access to
shared resources, ensuring that conflicts and race conditions are avoided. They provide
a mechanism for processes/threads to synchronize their actions and prevent
simultaneous access to critical sections or limited resources.
The choice between binary and counting semaphore depends on the specific
requirements of the application. Binary semaphores are typically used for mutual
exclusion scenarios, while counting semaphores are suitable for scenarios where
multiple processes/threads can access a shared resource simultaneously, up to a certain
limit defined by the semaphore value.
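The following sketch, assuming a POSIX environment, shows a counting semaphore (sem_t)
initialized to 2 so that at most two of four threads use a resource at once;
initializing it to 1 instead would make it behave as a binary semaphore. The names and
limits are illustrative assumptions.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t slots;                   /* counting semaphore */

    static void *user(void *arg) {
        long id = (long)arg;
        sem_wait(&slots);                 /* decrement; blocks while count is 0 */
        printf("thread %ld: using resource\n", id);
        sleep(1);                         /* simulate work with the resource */
        sem_post(&slots);                 /* increment; wakes one waiter */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&slots, 0, 2);           /* at most 2 concurrent users */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }

Here sem_wait and sem_post correspond to the classical wait (P) and signal (V)
operations on a semaphore.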
Module 4
The threads within a multithreaded process can perform different tasks simultaneously
or work together to accomplish a common goal. Each thread has its own program
counter, stack, and set of registers, allowing each to maintain its own execution
context.
In a single-threaded environment, there is only one thread of execution, often referred to as the
main thread. The main thread starts at the beginning of the program, executes each instruction
sequentially, and completes one task before moving on to the next. In simple terms, it means
that the program performs tasks one at a time, following a linear flow of execution. Single
threading is commonly used in simple applications or situations where concurrency or parallel
execution is not necessary. It provides a straightforward and easy-to-understand programming
model since there are no concerns about synchronization, race conditions, or resource sharing
among threads.
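To contrast the two models, here is a minimal sketch, assuming POSIX threads, in which
two tasks run concurrently rather than one after the other; the task bodies are
illustrative.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread gets its own stack, registers, and program counter. */
    static void *task(void *arg) {
        printf("%s running concurrently\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, task, (void *)"task A");
        pthread_create(&b, NULL, task, (void *)"task B");
        pthread_join(a, NULL);    /* a single-threaded program would instead */
        pthread_join(b, NULL);    /* call the two tasks sequentially */
        return 0;
    }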
1. Many-to-One Model:
In the Many-to-One model, multiple user-level threads are mapped to a
single kernel-level thread.
Thread management and scheduling are handled by a user-level threading
library, and the operating system is unaware of the individual threads.
This model provides low overhead and cheap thread creation and switching,
but parallelism is limited (the threads cannot run on multiple processors
at once), and a single thread making a blocking system call blocks all the
other threads in the process.
2. One-to-One Model:
In the One-to-One model, each user-level thread corresponds to a
separate kernel-level thread.
Thread management and scheduling are handled by the operating system
kernel.
This model offers maximum parallelism and allows blocking operations of
one thread to be handled independently of others.
However, creating a large number of kernel-level threads can be resource-
intensive and may have higher overhead compared to other models. (Linux's
NPTL pthreads implementation follows this model; a sketch appears after
this list.)
3. Many-to-Many Model:
The Many-to-Many model provides a flexible mapping of user-level
threads to kernel-level threads.
Multiple user-level threads are mapped to an equal or smaller number of
kernel-level threads.
Thread management and scheduling are handled by a combination of the
user-level threading library and the operating system kernel.
This model allows for a balanced approach, offering both concurrency and
parallelism, and can handle situations where the number of user-level
threads exceeds the available kernel-level threads.
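As a brief illustration of the One-to-One model, the sketch below assumes Linux with
glibc 2.30 or later (for the gettid() wrapper): Linux's NPTL pthreads implementation is
one-to-one, so each user thread prints a distinct kernel thread ID.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Under NPTL, each pthread is backed by its own kernel thread. */
    static void *show_id(void *arg) {
        printf("user thread %ld -> kernel thread id %d\n",
               (long)arg, (int)gettid());
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, show_id, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }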