Process Synchronization

Process synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other, and to prevent inconsistent data due to concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are used.

In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of modern operating systems and plays a crucial role in ensuring the correct and efficient functioning of multi-process systems.

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: The execution of one process does not affect the execution of other processes.
Cooperative Process: A process that can affect or be affected by other processes executing in the system.
The process synchronization problem arises with cooperative processes because they share resources.

Race Condition:

When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable is wrong; the processes are effectively racing to produce the result, and this condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables whose access must be synchronized to maintain data consistency. The critical section problem, then, is to design a protocol that cooperative processes can use to access shared resources without creating data inconsistencies.
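As an illustrative sketch (not part of the original notes), a critical section can be enforced with a lock. Without the lock, the two threads below could interleave their read-modify-write steps on the shared variable and lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Add to the shared counter, guarding each read-modify-write."""
    global counter
    for _ in range(n):
        with lock:          # entry to the critical section
            counter += 1    # update of the shared variable
        # lock released here: exit from the critical section

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; without it, updates could be lost
```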

Semaphores:

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This differs from a mutex, which can be released only by the thread that acquired it (i.e., the thread that called the wait function).

A semaphore uses two atomic operations, wait() and signal(), for process synchronization. It is an integer variable that can be accessed only through these two operations.
There are two types of semaphores: binary semaphores and counting semaphores.
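The wait()/signal() semantics can be sketched as follows. This is a hypothetical implementation built on a condition variable, for exposition only; real OS semaphores are kernel primitives whose operations are made atomic by the kernel:

```python
import threading

class Semaphore:
    """Illustrative semaphore: an integer accessed only via wait()/signal()."""

    def __init__(self, value: int = 1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        # Atomically: block until the value is positive, then decrement it.
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):
        # Atomically: increment the value and wake one waiting thread.
        with self._cond:
            self._value += 1
            self._cond.notify()
```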

Binary Semaphores: They can take only the values 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process must wait until the semaphore's value is 1; it then sets the value to 0 and enters its critical section. When it completes its critical section, it resets the value to 1 so that another process can enter its critical section.
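As a hedged sketch of a binary semaphore providing mutual exclusion (using Python's built-in threading.Semaphore for illustration), each thread's enter/leave pair ends up contiguous in the log because no other thread can run its critical section in between:

```python
import threading

mutex = threading.Semaphore(1)  # binary semaphore, initialized to 1
log = []

def task(name: str) -> None:
    mutex.acquire()             # wait(): value goes 1 -> 0; other threads block
    log.append(f"{name} enters")
    log.append(f"{name} leaves")
    mutex.release()             # signal(): value goes 0 -> 1

threads = [threading.Thread(target=task, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)  # each thread's enter/leave pair appears back to back
```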
Counting Semaphores: They can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that allows a limited number of simultaneous accesses. The semaphore is initialized to the number of instances of the resource. Whenever a process wants to use the resource, it checks that the number of remaining instances is greater than zero, i.e., that an instance is available. The process then enters its critical section, decreasing the value of the counting semaphore by 1. When the process is finished with the instance, it leaves the critical section, adding 1 to the number of available instances.
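An illustrative sketch (assumptions: 3 resource instances, 10 workers, a short sleep standing in for resource use): a counting semaphore initialized to 3 caps how many workers hold an instance at once.

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore: 3 resource instances
state_lock = threading.Lock()
active = 0
peak = 0

def worker() -> None:
    global active, peak
    with pool:                  # wait(): blocks while all 3 instances are taken
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.02)        # simulate using the resource instance
        with state_lock:
            active -= 1
    # leaving the with-block performs signal(), returning the instance

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 3
```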

Advantages and Disadvantages:

Advantages of Process Synchronization:

1. Ensures data consistency and integrity.
2. Avoids race conditions.
3. Prevents inconsistent data due to concurrent access.
4. Supports efficient and effective use of shared resources.

Disadvantages of Process Synchronization:

1. Adds overhead to the system.
2. Can lead to performance degradation.
3. Increases the complexity of the system.
4. Can cause deadlocks if not implemented properly.

A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads.

Why Do We Need Thread?

Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but threads share the address space of the process and its environment.
Threads can share common data, so they do not need to use interprocess communication. Like processes, threads have states such as ready, executing, and blocked.
Priority can be assigned to threads just as it is to processes, and the highest-priority thread is scheduled first.
Each thread has its own Thread Control Block (TCB). As with a process, a context switch occurs for a thread, and the register contents are saved in the TCB. Because threads share the same address space and resources, synchronization is also required for the various activities of the threads.
Why Multi-Threading?
A thread is also known as a lightweight process. The idea is
to achieve parallelism by dividing a process into multiple
threads. For example, in a browser, multiple tabs can be
different threads. MS Word uses multiple threads: one
thread to format the text, another thread to process inputs,
etc. More advantages of multithreading are discussed
below.

Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.

Difference Between Process and Thread

The primary difference is that threads within the same


process run in a shared memory space, while processes
run in separate memory spaces. Threads are not
independent of one another like processes are, and as a
result, threads share with other threads their code section,
data section, and OS resources (like open files and
signals). But, like a process, a thread has its own program
counter (PC), register set, and stack space.
Components of Threads

These are the basic components of a thread:

Stack Space
Register Set
Program Counter
