
Operating System

CSF204

Unit-3 Part-1

1
Concurrent Processes
Principle of Concurrency
Co-operating Processes
Producer / Consumer Problem
Inter Process Communication

2
Multiple Processes
• Central to the design of modern Operating Systems is managing
multiple processes
• Multiprogramming
• Multiprocessing
• Distributed Processing
• Big Issue is Concurrency
• Managing the interaction of all of these processes

3
Interleaving and
Overlapping Processes
• Processes may be interleaved on uniprocessors

4
Interleaving and
Overlapping Processes
• And not only interleaved but overlapped on multi-processors
• Both interleaving and overlapping present the same problems
in concurrent processing

5
Difficulties of
Concurrency
• Sharing of global resources
  • e.g., two processes that both read and write the same global variable
• Optimally managing the allocation of resources is difficult
• Programming errors are difficult to locate because results are not
  deterministic and reproducible.

6
A Simple Example
Suppose echo is a shared procedure and
P1 echoes ‘x’ and P2 echoes ‘y’:

void echo()
{   /* send a keyboard-input character to the display */
    chin = getchar();   /* <-- what happens if P1 is interrupted here by P2? */
    chout = chin;
    putchar(chout);
}

What would happen if only one process at a time
were permitted to be in the procedure?

7
A Simple Example:
On a Multiprocessor
Process P1                 Process P2
 .                          .
 chin = getchar();          .
 .                          chin = getchar();
 chout = chin;              chout = chin;
 putchar(chout);            .
 .                          putchar(chout);
 .                          .
8
Enforce Single Access
• If we enforce a rule that only one process may enter the function
at a time then:
• P1 & P2 run on separate processors
• P1 enters echo first,
• P2 tries to enter but is blocked
• P1 completes execution
• P2 resumes and executes echo

Solution: control access to the shared resource
9
Race Condition
• A race condition occurs when
  • Multiple processes or threads read and write shared data items, and
  • They do so in a way where the final result depends on the order of execution
    of the processes
• The output depends on which process finishes the race last (the “loser”)
• For the program to be correct, the output must not depend on relative
  processing speeds; races must therefore be eliminated (see the sketch below)

10
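A minimal illustrative sketch (not part of the original slides) of a race in C with POSIX threads: two threads increment a shared counter without mutual exclusion, so the final value depends on how their read-modify-write sequences interleave.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                     /* shared global resource */

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2000000, but lost updates usually make it smaller */
    printf("counter = %ld\n", counter);
    return 0;
}

Compile with gcc -pthread; repeated runs typically print different totals, which is exactly the non-deterministic, hard-to-reproduce behaviour described on slide 6.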
Process Interaction

• Processes unaware of each other (competition for resources)
  Example: multiprogramming of multiple independent processes
• Processes indirectly aware of each other (cooperation by sharing)
  Example: processes share access to some object, such as an I/O buffer
• Processes directly aware of each other (cooperation by communication)
  Example: processes designed to work jointly on some activity

11
Competition among
Processes for Resources
Three main control problems:
• Need for mutual exclusion
  • Critical resource: a non-sharable resource, e.g., a printer
  • Critical section: the portion of the program that uses a critical resource
• Deadlock
• Starvation

12
Requirements for
Mutual Exclusion
• Only one process at a time is allowed in the critical section for a
resource
• A process that halts in its noncritical section must do so without
interfering with other processes
• No deadlock or starvation

13
Requirements for
Mutual Exclusion
• A process must not be delayed access to a critical section when there
is no other process using it
• No assumptions are made about relative process speeds or number of
processes
• A process remains inside its critical section for a finite time only

14
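As an illustration (not from the slides), these requirements are what a lock primitive provides; the sketch below wraps the shared-counter update from the earlier race example in a POSIX mutex so that only one thread at a time is inside the critical section.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                                  /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);                 /* enter critical section  */
        counter++;                                 /* one thread at a time    */
        pthread_mutex_unlock(&lock);               /* leave after finite time */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);            /* now reliably 2000000 */
    return 0;
}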
Cooperating Processes
• An independent process cannot affect or be affected by the execution
  of another process.
• A cooperating process can affect or be affected by the execution of
  another process.
• Advantages of process cooperation
• Information sharing
• Computation speed-up
• Modularity
• Convenience

15
Producer-Consumer Problem
• Paradigm for cooperating processes: a producer process produces
  information that is consumed by a consumer process.
  • unbounded buffer: places no practical limit on the size of the buffer.
  • bounded buffer: assumes that there is a fixed buffer size.
• General situation:
  • One or more producers are generating data and placing it in a buffer
  • A single consumer is taking items out of the buffer one at a time
  • Only one producer or consumer may access the buffer at any one time
• The problem:
  • Ensure that the producer can’t add data to a full buffer and the consumer
    can’t remove data from an empty buffer
16
Bounded-Buffer – Shared-Memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
• Solution is correct, but can only use
BUFFER_SIZE-1 elements
17
Bounded-Buffer – Producer
Process

item nextProduced;

while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* buffer full: do nothing (busy wait) */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
18
Bounded-Buffer – Consumer
Process
item nextConsumed;

while (1) {
    while (in == out)
        ;   /* buffer empty: do nothing (busy wait) */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}

19
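A self-contained sketch (not part of the slides) that puts the shared data, producer, and consumer together as two POSIX threads. It keeps the slides' busy-waiting scheme, which is only safe for exactly one producer and one consumer and wastes one buffer slot; in real code the shared indices would need atomics or locks, and volatile is used here only to keep the compiler from optimising away the polling loops. The item type and loop counts are assumptions for illustration.

#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 10

typedef struct { int value; } item;   /* hypothetical item type */

item buffer[BUFFER_SIZE];
volatile int in = 0;                  /* next free slot (written only by producer) */
volatile int out = 0;                 /* next full slot (written only by consumer) */

void *producer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        item nextProduced = { .value = i };            /* produce an item        */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                                          /* buffer full: busy wait */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        while (in == out)
            ;                                          /* buffer empty: busy wait */
        item nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        printf("consumed %d\n", nextConsumed.value);   /* consume the item        */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}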
Interprocess Communication
(IPC)
• Mechanism for processes to communicate and to synchronize their actions.
• Shared memory system - a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the
shared region.
• Message passing system – processes communicate with each other without resorting to
shared variables.
• IPC facility provides two operations:
• send(message) – message size fixed or variable
• receive(message)
• If P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
• Implementation of communication link
  • physical (e.g., shared memory, hardware bus)
  • logical (e.g., logical properties)
20
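As a concrete illustration (not from the slides), the sketch below implements send/receive-style message passing between a parent process P and a child process Q over a POSIX pipe, one possible physical implementation of a communication link.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                                 /* fd[0] = read end, fd[1] = write end */
    char msg[64];

    pipe(fd);
    if (fork() == 0) {                         /* child: the receiver Q */
        close(fd[1]);
        read(fd[0], msg, sizeof(msg));         /* receive(message)      */
        printf("Q received: %s\n", msg);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                              /* parent: the sender P  */
    write(fd[1], "hello from P", 13);          /* send(message)         */
    close(fd[1]);
    wait(NULL);
    return 0;
}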
Direct Communication
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
• Properties of communication link
• Links are established automatically.
• A link is associated with exactly one pair of communicating
processes.
• Between each pair there exists exactly one link.
• The link may be unidirectional, but is usually bi-directional.

21
Indirect Communication
• Messages are sent to and received from mailboxes
  (also referred to as ports).
  • Each mailbox has a unique id.
  • Processes can communicate only if they share a mailbox.
• Properties of communication link
  • Link established only if processes share a common mailbox
  • A link may be associated with many processes.
  • Each pair of processes may share several communication links.
  • Link may be unidirectional or bi-directional.

22
Indirect Communication
• Operations
• create a new mailbox
• send and receive messages through mailbox
• destroy a mailbox
• Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from
mailbox A

23
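For illustration (not from the slides), a POSIX message queue behaves much like mailbox A above: any process that opens the same named queue can send to it or receive from it. The queue name and sizes below are assumptions; on Linux, link with -lrt.

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* create (or open) mailbox "A"; any cooperating process may open the same name */
    mqd_t mq = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0644, &attr);

    mq_send(mq, "hello", 6, 0);                 /* send(A, message)    */
    mq_receive(mq, buf, sizeof(buf), NULL);     /* receive(A, message) */
    printf("got: %s\n", buf);

    mq_close(mq);
    mq_unlink("/mailbox_A");                    /* destroy the mailbox */
    return 0;
}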
Indirect Communication
• Mailbox sharing
• P1, P2, and P3 share mailbox A.
• P1, sends; P2 and P3 receive.
• Who gets the message?
• Solutions
• Allow a link to be associated with at most two processes.
• Allow only one process at a time to execute a receive operation.
• Allow the system to select arbitrarily the receiver. Sender is notified
who the receiver was.

24
Synchronization

• Message passing may be either blocking or non-blocking.
  • Blocking is considered synchronous
  • Non-blocking is considered asynchronous
• send and receive primitives may be either blocking or non-blocking.

25
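A small illustration (not from the slides) of the difference, using the POSIX message queue from the previous sketch: a descriptor opened normally blocks on mq_receive while the queue is empty, whereas one opened with O_NONBLOCK returns immediately with EAGAIN.

#include <mqueue.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    char buf[64];   /* assumes "/mailbox_A" exists with mq_msgsize <= 64 */

    mqd_t blocking    = mq_open("/mailbox_A", O_RDONLY);
    mqd_t nonblocking = mq_open("/mailbox_A", O_RDONLY | O_NONBLOCK);

    /* non-blocking (asynchronous) receive: fails at once if the queue is empty */
    if (mq_receive(nonblocking, buf, sizeof(buf), NULL) == -1 && errno == EAGAIN)
        printf("no message yet, doing other work\n");

    /* blocking (synchronous) receive: suspends the caller until a message arrives */
    mq_receive(blocking, buf, sizeof(buf), NULL);
    printf("got: %s\n", buf);

    mq_close(blocking);
    mq_close(nonblocking);
    return 0;
}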
Buffering

• Queue of messages attached to the link; implemented in one of three ways:
  1. Zero capacity – 0 messages
     Sender must wait for receiver (rendezvous).
  2. Bounded capacity – finite length of n messages
     Sender must wait if link full.
  3. Unbounded capacity – infinite length
     Sender never waits.

26
