Unit 5

Process synchronization is essential in operating systems to prevent conflicts when multiple processes access shared resources, ensuring data consistency and coordination. Semaphores are synchronization primitives that manage access to shared resources, with binary and counting types, using atomic operations like Wait and Signal. Operating systems provide various services, including process management, memory management, and user interfaces, to facilitate efficient computing.


Q.1 What is the need of Process Synchronization? Explain Semaphore in detail.

What is the Need for Process Synchronization?

Process synchronization is required in operating systems to ensure that multiple processes or threads can execute concurrently while accessing shared resources without conflicts or inconsistencies. It is crucial for maintaining the correctness and reliability of the system.

Reasons for Process Synchronization:

1. Avoiding Race Conditions:

o A race condition occurs when multiple processes access and manipulate shared data simultaneously, leading to unpredictable results. Synchronization ensures orderly access.

2. Ensuring Data Consistency:

o Shared resources like variables, files, or databases must be accessed in a way that prevents data corruption.

3. Coordination Between Processes:

o Some processes depend on the completion of others. Synchronization helps manage these dependencies effectively.

4. Preventing Deadlock and Starvation:

o Synchronization prevents multiple processes from getting stuck waiting for resources and ensures that no process is perpetually denied access.

5. Efficient Resource Utilization:

o Synchronization ensures that resources are used optimally and fairly among processes.

What is a Semaphore?

A semaphore is a synchronization primitive used to manage access to shared resources by multiple processes in a concurrent system. It acts as a signaling mechanism to coordinate process execution.

Types of Semaphores:
1. Binary Semaphore:

o Can have only two values: 0 (locked) and 1 (unlocked).

o Used to implement mutual exclusion (mutex).

2. Counting Semaphore:

o Can take any integer value.

o Used to manage access to a resource pool with multiple instances (e.g., printers, database connections).
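A counting semaphore initialized to the pool size can be sketched with Python's built-in threading.Semaphore (the pool size of 3 and the number of tasks here are arbitrary choices for illustration):

```python
import threading

POOL_SIZE = 3                          # e.g., three identical printers (size is arbitrary)
pool = threading.Semaphore(POOL_SIZE)  # counting semaphore initialized to the pool size
in_use = 0
peak = 0
meter = threading.Lock()               # protects the in_use/peak bookkeeping

def use_instance():
    global in_use, peak
    with pool:                         # Wait: blocks once all 3 instances are taken
        with meter:
            in_use += 1
            peak = max(peak, in_use)
        # ... work with the acquired instance here ...
        with meter:
            in_use -= 1
                                       # leaving the 'with pool' block performs Signal

threads = [threading.Thread(target=use_instance) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= POOL_SIZE)  # True: at most 3 tasks held an instance at any moment
```

Even with 10 concurrent tasks, the semaphore caps simultaneous users at the pool size.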

Working of Semaphore:

 A semaphore is represented by an integer variable S.

 It is manipulated using two atomic operations:

1. Wait (P operation): Decrements the semaphore value by 1. If the result is negative, the process is put to sleep.

Wait(S):
    S = S - 1
    if S < 0, block the process

2. Signal (V operation): Increments the semaphore value by 1. If there are blocked processes, one of them is awakened.

Signal(S):
    S = S + 1
    if S ≤ 0, unblock a waiting process
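The Wait/Signal pseudocode above can be turned into a small working sketch (a toy class built on Python's threading.Condition; the class name and structure are our own, illustrative only, not how production semaphores are implemented):

```python
import threading

class ToySemaphore:
    """Mirrors the Wait/Signal pseudocode: a negative S means |S| processes are blocked."""
    def __init__(self, initial=1):
        self.S = initial
        self._cond = threading.Condition()

    def wait(self):                 # P operation: S = S - 1; block if S < 0
        with self._cond:
            self.S -= 1
            if self.S < 0:
                self._cond.wait()

    def signal(self):               # V operation: S = S + 1; wake one waiter if S <= 0
        with self._cond:
            self.S += 1
            if self.S <= 0:
                self._cond.notify()

# Used as a binary semaphore (initial value 1) to protect a shared counter:
sem = ToySemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        sem.wait()
        counter += 1                # critical section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every increment happened under mutual exclusion
```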

Example of Semaphore Usage:

Consider a scenario where multiple processes want to write to a shared file.

1. Initialization: Semaphore S is initialized to 1 (binary semaphore).

2. Process Behavior:

o Before writing: A process performs the Wait(S) operation. If S is 1, it decrements to 0, and the process writes to the file. If S is 0, the process waits.

o After writing: The process performs the Signal(S) operation, incrementing S back to 1, allowing another process to write.

Advantages of Semaphores:

1. Provides a mechanism for mutual exclusion, preventing simultaneous access to shared resources.

2. Can synchronize complex interactions between multiple processes.

3. Supports both binary and counting mechanisms for flexible resource management.

Disadvantages of Semaphores:

1. Risk of Deadlocks: Misuse of Wait and Signal can lead to circular waits, causing deadlocks.

2. Difficult Debugging: Managing semaphores in large systems can be challenging and error-prone.

3. Busy Waiting: In some implementations, a process may stay in a loop checking semaphore values, wasting CPU cycles.

Q.2 What is an Operating System? Explain various operating system services in detail.

What is an Operating System?

An Operating System (OS) is system software that acts as an interface between computer hardware and the user. It manages hardware resources and provides an environment for software applications to run. The OS is essential for efficient and coordinated use of the computer system.

Functions of an Operating System:

1. Resource Management: Controls and allocates hardware resources like CPU, memory, I/O devices, etc.

2. Process Management: Manages process creation, scheduling, execution, and termination.

3. File System Management: Organizes and manages data on storage devices.

4. User Interface: Provides command-line or graphical interfaces for interaction.

Operating System Services

An operating system provides several key services that make computing convenient for users and efficient for the system:

1. Process Management

 Handles creation, scheduling, and termination of processes.

 Functions:

o Multitasking: Running multiple processes simultaneously.

o Process Synchronization: Ensures orderly execution of dependent processes.

o Process Communication: Allows processes to exchange information.

2. Memory Management

 Manages allocation and deallocation of memory space.

 Functions:

o Keeps track of memory usage.

o Allocates memory to processes and releases it when no longer needed.

o Implements virtual memory for efficient use of physical memory.

3. File System Management

 Provides mechanisms for data storage, retrieval, and organization.


 Functions:

o File creation, deletion, and access control.

o Directory structure management.

o Disk management for efficient data access.

4. Device Management

 Controls and monitors I/O devices.

 Functions:

o Manages device drivers.

o Allocates devices to processes.

o Handles interrupts and errors from devices.

5. Security and Protection

 Ensures secure access to system resources.

 Functions:

o Authentication (e.g., usernames and passwords).

o Authorization to control access to files and resources.

o Protects against malware and unauthorized access.

6. User Interface

 Enables interaction between users and the system.

 Types:

o Command-Line Interface (CLI): Text-based interaction.

o Graphical User Interface (GUI): Uses windows, icons, and menus.

7. Networking Services

 Provides communication between systems over a network.

 Functions:
o Manages network connections and protocols.

o Facilitates data sharing and remote resource access.

8. Error Detection and Handling

 Identifies and resolves errors in hardware, processes, or software.

 Functions:

o Logs errors for debugging.

o Takes corrective actions like restarting a process.

Q.3 Explain preemptive and non-preemptive scheduling in detail.

Preemptive vs. Non-Preemptive Scheduling

Process scheduling is a fundamental feature of an operating system that determines how CPU time is allocated to processes. It can be broadly classified into preemptive and non-preemptive scheduling based on whether a running process can be forcibly interrupted or not.

1. Preemptive Scheduling

In preemptive scheduling, the operating system can suspend the execution of a running process and assign the CPU to another process. This ensures that higher-priority or time-critical processes can execute as soon as possible.

Key Characteristics:

 Processes are interrupted before completing their CPU burst.

 The CPU is reallocated to higher-priority or time-sensitive processes.

 A process can lose the CPU even if it hasn't completed its execution.

How it Works:

1. A process is running.

2. A higher-priority process arrives or the current process exceeds its allocated time slice.

3. The OS saves the current process state (context switching) and assigns the CPU to the new process.

Advantages:

1. Better Responsiveness: Crucial for real-time systems as it allows high-priority processes to execute immediately.

2. Efficient CPU Utilization: Prevents a low-priority process from holding the CPU unnecessarily.

3. Fairness: Ensures all processes get CPU time in time-sharing systems.

Disadvantages:

1. Complexity: Requires context switching, which adds overhead and complexity.

2. Overhead: Frequent context switching consumes CPU cycles and may reduce performance.

Examples of Algorithms:

 Round Robin (RR)

 Shortest Remaining Time First (SRTF)

 Priority Scheduling (with preemption)

2. Non-Preemptive Scheduling

In non-preemptive scheduling, once a process starts executing on the CPU, it cannot be interrupted until it finishes its CPU burst or voluntarily yields the CPU (e.g., for I/O operations).

Key Characteristics:

 Processes run to completion or voluntarily release the CPU.

 The OS does not interrupt running processes.

 Context switching occurs only when a process terminates or enters a waiting state.

How it Works:

1. A process is selected to run based on the scheduling algorithm.

2. It continues to run until it completes or requests I/O.


Advantages:

1. Simple to Implement: Requires less complexity as no forced context switching is needed.

2. Less Overhead: No frequent context switches, leading to better CPU utilization for long-running processes.

Disadvantages:

1. Poor Responsiveness: High-priority processes may have to wait for a long-running process to complete.

2. Potential for Starvation: Low-priority processes holding the CPU can block higher-priority ones indefinitely.

Examples of Algorithms:

 First Come First Serve (FCFS)

 Shortest Job Next (SJN)

 Priority Scheduling (without preemption)

Q.4 Explain any two scheduling algorithms with suitable examples.

Scheduling Algorithms

Scheduling algorithms determine how processes are selected for execution on the CPU. Here, we'll explain the First Come First Serve (FCFS) and Round Robin (RR) scheduling algorithms with examples.

1. First Come First Serve (FCFS) Scheduling

Description:

 FCFS is the simplest scheduling algorithm.

 Processes are executed in the order they arrive in the ready queue.

 It works like a queue: the process that arrives first is executed first
(FIFO).
Steps to Implement FCFS:

1. Sort processes based on their arrival time.

2. Allocate CPU to each process in the sorted order.

3. Calculate the metrics (turnaround time, waiting time).

Advantages:

1. Easy to implement and understand.

2. Works well for batch systems.

Disadvantages:

1. Convoy Effect: Long processes can block short processes, leading to inefficiency.

2. Poor utilization of CPU for interactive systems.

Example:

Process   Arrival Time   Burst Time
P1        0              4
P2        1              3
P3        2              1

Step-by-Step Execution:

 Order of execution: P1 → P2 → P3.

 Gantt Chart:

| P1 | P2 | P3 |
0    4    7    8

Calculations:

 Turnaround Time (TAT) = Completion Time - Arrival Time.

 Waiting Time (WT) = Turnaround Time - Burst Time.


Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        4                 4 - 0 = 4               4 - 4 = 0
P2        7                 7 - 1 = 6               6 - 3 = 3
P3        8                 8 - 2 = 6               6 - 1 = 5

Average TAT: (4 + 6 + 6) / 3 = 5.33
Average WT: (0 + 3 + 5) / 3 = 2.67
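The FCFS table above can be reproduced with a short script (the process tuples are copied from the example; the variable names are our own):

```python
# FCFS: processes sorted by arrival time, each runs to completion in order.
processes = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]  # (name, arrival, burst)

time_now = 0
results = {}
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    time_now = max(time_now, arrival) + burst    # completion time (CPU may idle)
    tat = time_now - arrival                     # turnaround = completion - arrival
    wt = tat - burst                             # waiting = turnaround - burst
    results[name] = (time_now, tat, wt)

print(results["P1"])  # (4, 4, 0)
print(results["P2"])  # (7, 6, 3)
print(results["P3"])  # (8, 6, 5)
avg_tat = sum(r[1] for r in results.values()) / len(results)
avg_wt = sum(r[2] for r in results.values()) / len(results)
print(round(avg_tat, 2), round(avg_wt, 2))  # 5.33 2.67
```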

2. Round Robin (RR) Scheduling

Description:

 In RR, each process is assigned a fixed time quantum (or time slice) during which it can execute.

 If a process does not complete within its time quantum, it is preempted and moved to the back of the queue.

 It is ideal for time-sharing systems.

Steps to Implement RR:

1. Use a time quantum to allocate CPU time to each process.

2. If a process completes before its time quantum ends, it is removed from the queue.

3. If not, the remaining execution time is added back to the queue.

Advantages:

1. Fair and equitable to all processes.

2. Reduces response time for interactive processes.

Disadvantages:

1. Frequent context switching leads to overhead.

2. Performance depends on the choice of time quantum.


Example:

Process   Arrival Time   Burst Time
P1        0              4
P2        1              3
P3        2              1

Assume Time Quantum = 2.

Step-by-Step Execution:

 Gantt Chart:

| P1 | P2 | P3 | P1 | P2 |

0 2 4 5 7 8

Calculations:

 Turnaround Time (TAT): Completion Time - Arrival Time.

 Waiting Time (WT): Turnaround Time - Burst Time.

Process Completion Time Turnaround Time (TAT) Waiting Time (WT)

P1 7 7-0=7 7-4=3

P2 8 8-1=7 7-3=4

P3 5 5-2=3 3-1=2

Average TAT: (7 + 7 + 3) / 3 = 5.67
Average WT: (3 + 4 + 2) / 3 = 3.0
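The RR schedule above can be simulated with a small script (process data from the table; it assumes the CPU is never idle, which holds for this data):

```python
from collections import deque

# Round Robin with time quantum 2; (name, arrival, burst) taken from the table above.
processes = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]
quantum = 2

arrival = {n: a for n, a, b in processes}
burst = {n: b for n, a, b in processes}
remaining = dict(burst)
completion = {}
ready = deque()
admitted = set()
time_now = 0

def admit(t):
    # Move processes that have arrived by time t into the ready queue (arrival order).
    for n, a, b in processes:
        if a <= t and n not in admitted:
            ready.append(n)
            admitted.add(n)

admit(0)
while ready:
    n = ready.popleft()
    run = min(quantum, remaining[n])
    time_now += run
    remaining[n] -= run
    admit(time_now)                # new arrivals queue ahead of the preempted process
    if remaining[n] > 0:
        ready.append(n)            # preempted: back of the ready queue
    else:
        completion[n] = time_now

tat = {n: completion[n] - arrival[n] for n in completion}
wt = {n: tat[n] - burst[n] for n in completion}
print(completion)  # P1 completes at 7, P2 at 8, P3 at 5
print(tat, wt)     # TAT: P1=7, P2=7, P3=3; WT: P1=3, P2=4, P3=2
```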

Q.5 Write a short note on the following with examples:
i) Semaphore ii) Monitor iii) Mutex


Short Notes with Examples

i) Semaphore

A semaphore is a synchronization tool used to manage concurrent processes and prevent race conditions. It is an integer variable that can be incremented or decremented using two atomic operations:

1. Wait (P operation): Decrements the semaphore value. If the value is less than 0, the process is blocked.

2. Signal (V operation): Increments the semaphore value and unblocks a waiting process if necessary.

Types of Semaphores:

1. Counting Semaphore: Can have any integer value, used to control access to resources with multiple instances.

2. Binary Semaphore: Works like a mutex (value 0 or 1).

Example:

Semaphore S = 1; // Binary Semaphore

Wait(S);

// Critical Section

Signal(S);

Use Case: Controlling access to a shared printer in a network.

ii) Monitor

A monitor is a high-level synchronization construct that combines mutual exclusion and condition synchronization. It encapsulates shared variables, procedures, and the synchronization mechanisms needed to access those variables.

 Only one process can execute in the monitor at a time.

 Monitors use condition variables for signaling:


o Wait: Suspends the process until a condition is met.

o Signal: Wakes up a waiting process.

Example:

class MonitorExample {

    private int count = 0;

    public synchronized void increment() {
        while (count >= 10) {
            try {
                wait(); // Wait if the count is full
            } catch (InterruptedException e) {}
        }
        count++;
        notify(); // Signal other processes
    }

    public synchronized void decrement() {
        while (count <= 0) {
            try {
                wait(); // Wait if the count is empty
            } catch (InterruptedException e) {}
        }
        count--;
        notify(); // Signal other processes
    }
}
Use Case: Managing bounded buffer (producer-consumer problem).


iii) Mutex

A mutex (Mutual Exclusion) is a locking mechanism that allows only one thread to access a shared resource at a time. Unlike a binary semaphore, a mutex is owned by the thread that locked it and must be unlocked by the same thread.

Key Properties:

1. Ownership: Only the thread that locks the mutex can unlock it.

2. Mutual Exclusion: Ensures only one thread accesses the critical section.

Example:

pthread_mutex_t lock; // Declare mutex

pthread_mutex_lock(&lock); // Lock the mutex

// Critical Section

pthread_mutex_unlock(&lock); // Unlock the mutex

Use Case: Preventing race conditions in multithreaded programs, like updating a shared counter.
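The shared-counter use case can be sketched in Python as an analogue of the pthread example above (threading.RLock records an owner, mirroring the ownership property; the thread and iteration counts are arbitrary):

```python
import threading

counter = 0
mutex = threading.RLock()   # like a pthread mutex: the acquiring thread "owns" it
                            # and only that thread may release it

def worker():
    global counter
    for _ in range(10000):
        with mutex:         # lock ... unlock around the critical section
            counter += 1    # critical section: read-modify-write on shared data

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments lost
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates.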

Q.6 What is deadlock? State and explain the conditions for deadlock with examples.

What is Deadlock?

A deadlock occurs in a system when two or more processes are unable to proceed because each is waiting for a resource held by another process. This situation causes a circular dependency where no process can make progress, effectively halting the system.

Conditions for Deadlock


For a deadlock to occur, the following four conditions (Coffman conditions)
must hold simultaneously:

1. Mutual Exclusion

 At least one resource must be in a non-shareable mode, meaning only one process can use the resource at a time.

 Example: A printer can only handle one printing job at a time.

2. Hold and Wait

 A process holding one or more resources is waiting to acquire additional resources that are currently held by other processes.

 Example: Process P1 is holding a scanner and waiting for access to a printer, which is held by P2.

3. No Preemption

 Resources cannot be forcibly taken away from a process; they must be released voluntarily.

 Example: Process P2 cannot forcefully take the scanner from P1; it must wait until P1 releases it.

4. Circular Wait

 A set of processes is waiting in a circular chain, where each process is waiting for a resource held by the next process in the chain.

 Example:

o P1 is waiting for a resource held by P2.

o P2 is waiting for a resource held by P3.

o P3 is waiting for a resource held by P1.

Example of Deadlock

Scenario:

1. Process P1 is holding Resource R1 and needs Resource R2.

2. Process P2 is holding Resource R2 and needs Resource R1.

Resource Allocation Table:

Process Held Resource Requested Resource

P1 R1 R2

P2 R2 R1

Deadlock Cycle:

 P1 → Holds R1, requests R2.

 P2 → Holds R2, requests R1.

 Circular wait occurs as neither process can proceed until the other
releases its resource.

Prevention and Avoidance

Ways to Prevent Deadlock:

1. Mutual Exclusion: Make resources shareable wherever possible (e.g., read-only files).

2. Hold and Wait: Require processes to request all resources at once.

3. Preemption: Allow processes to forcibly release resources.

4. Circular Wait: Impose a resource allocation order and require processes to request resources in that order.

Avoidance:

 Use algorithms like Banker's Algorithm to ensure safe allocation of resources.

Q.7 Explain the Producer-Consumer problem & Dining Philosophers problem with solutions.
LATER REVISIT

Q.8 Explain Deadlock prevention, deadlock avoidance, deadlock detection,

deadlock recovery with example?

Deadlock in Operating Systems

A deadlock occurs when a set of processes is stuck in a circular waiting situation where each process holds a resource and waits for another resource held by a different process.

To handle deadlocks, we use strategies like prevention, avoidance, detection, and recovery.

1. Deadlock Prevention

Definition:
Deadlock prevention ensures that at least one of the necessary conditions
for deadlock cannot occur.

The four conditions are:

1. Mutual Exclusion: No two processes can access a resource simultaneously.

2. Hold and Wait: A process holds one resource and waits for another.

3. No Preemption: Resources cannot be forcibly taken from processes.

4. Circular Wait: Processes form a circular chain, each waiting for a resource held by the next process.

Strategies to Prevent Deadlocks:

Condition           Prevention Strategy
Mutual Exclusion    Share resources where possible (e.g., read-only resources).
Hold and Wait       Require processes to request all resources at once (but this may lead to resource wastage).
No Preemption       Allow resources to be forcibly taken away from processes.
Circular Wait       Impose a hierarchy on resource acquisition to avoid circular dependency.

Example:

If processes P1 and P2 need resources R1 and R2:

 Avoid circular wait by always acquiring R1 before R2.

2. Deadlock Avoidance

Definition:
Deadlock avoidance dynamically checks whether allocating a resource to
a process will cause a deadlock.

Banker’s Algorithm (Example of Deadlock Avoidance):

 Assumptions:

o Each process declares its maximum resource needs in advance.

o The system keeps track of available, allocated, and maximum resources.

o A process is allocated a resource only if it leaves the system in a safe state.

Steps in Banker’s Algorithm:

1. Check if the process request can be fulfilled with available resources.

2. Simulate the allocation.

3. If the system is still in a safe state (no deadlock), grant the request;
otherwise, deny it.

Example:
Consider a single resource type:

 Total resources: 10

 Allocated: 7 (so 3 units are available)

 Process P1 holds 3 units with a maximum demand of 5, so it may still request up to 2 more.

If granting a pending request would leave so few units free that no process could finish and release what it holds, the resulting state is unsafe and the system denies the request.
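The safety check at the heart of Banker's Algorithm can be sketched for a single resource type (the allocations and maximum demands below are invented for illustration; real implementations track vectors per resource type):

```python
def is_safe(available, allocated, maximum):
    """Safety check for one resource type: can every process still finish
    in some order? A process's remaining need is maximum - allocated."""
    alloc = dict(allocated)
    avail = available
    remaining = set(alloc)
    while remaining:
        # Find any process whose remaining need fits in the available units.
        runnable = next((p for p in remaining
                         if maximum[p] - alloc[p] <= avail), None)
        if runnable is None:
            return False            # nobody can finish -> unsafe state
        avail += alloc[runnable]    # it runs to completion and releases its units
        remaining.discard(runnable)
    return True

# 10 total units, 7 allocated, 3 available (maximum demands are invented):
allocated = {"P1": 3, "P2": 2, "P3": 2}
maximum   = {"P1": 5, "P2": 6, "P3": 6}
print(is_safe(3, allocated, maximum))                    # True: safe, requests may proceed

# Simulate granting P2 two more units (available drops to 1):
print(is_safe(1, {"P1": 3, "P2": 4, "P3": 2}, maximum))  # False: deny that request
```

The algorithm grants a request only if the simulated post-grant state still passes this check.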

3. Deadlock Detection

Definition:
Deadlock detection involves allowing deadlocks to occur and then using
algorithms to detect the circular wait among processes.

Methods for Detection:

 Resource-Allocation Graph: Analyze if there's a circular wait in the graph.

 Matrix-Based Approach: Use a "Wait-For" graph in matrix form to identify deadlocks.

Example:

If P1 holds R1 and waits for R2, and P2 holds R2 and waits for R1, a
circular wait exists, signaling a deadlock.
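Detection on a wait-for graph is a cycle search, which can be sketched with a depth-first traversal (the graph encoding as a dict of sets is our own choice):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: set of processes it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on current DFS path / finished
    color = {}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True             # back edge: circular wait found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and visit(p) for p in wait_for)

# P1 waits for P2 (which holds R2); P2 waits for P1 (which holds R1):
print(has_cycle({"P1": {"P2"}, "P2": {"P1"}}))  # True  -> deadlock detected
print(has_cycle({"P1": {"P2"}, "P2": set()}))   # False -> no deadlock
```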

4. Deadlock Recovery

Definition:
After detecting a deadlock, recovery methods are used to break the
deadlock and resume process execution.

Recovery Methods:

1. Process Termination:

o Terminate one or more processes involved in the deadlock.

o Priority can be based on the process's importance, resource usage, or progress.
2. Resource Preemption:

o Forcefully take resources away from some processes and allocate them to others.

o Requires rollback of processes to a safe state.

Example:

If processes P1 and P2 are deadlocked over resources R1 and R2:

 Terminate P1 to free R1, allowing P2 to proceed.

 Alternatively, preempt R1 from P1 and allocate it to P2.
