04 Process Con
CS 241
Spring 2021
By
Marwa M. A. Elfattah
Main Reference
Operating System Concepts, Abraham Silberschatz,
10th Edition
Processes
Interprocess Communication
▪ Processes within a system may be independent or
cooperating
• A cooperating process can affect or be affected by
other processes, including sharing data.
▪ Reasons for cooperating processes:
• Information sharing
Several applications may be interested in the same
piece of information (for instance, copying and
pasting).
• Computation speedup
A process may be broken into subtasks, each of which
executes in parallel with the others. This speedup is
possible only if the computer has multiple processing
cores.
• Modularity
We may want to construct the system in a modular
fashion, dividing the system functions into separate
processes or threads.
Interprocess Communication
▪ Cooperating processes need interprocess
communication (IPC)
▪ Two models of IPC
• Shared memory
Can be faster, since message-passing systems are
typically implemented using system calls and thus
require the more time-consuming task of kernel
intervention. In shared-memory systems, system
calls are required only to establish the shared-memory
region; once established, accesses are ordinary
memory accesses (see the sketch after this list).
• Message passing
Useful for exchanging smaller amounts of data,
because no conflicts need be avoided. Also easier
to implement in a distributed system, although some
systems do provide distributed shared memory.
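The following is a minimal POSIX sketch of that point: system calls
(shm_open, ftruncate, mmap) are needed only to set up the region, and
after that, writing to it is a plain memory store. The object name
"/demo_shm" and the size are arbitrary choices for illustration, not
from the slides.

/* Minimal POSIX shared-memory sketch (writer side).
 * System calls establish the region; afterwards no kernel
 * intervention is needed for reads and writes. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t SIZE = 4096;

    /* Create (or open) a named shared-memory object. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Set its size, then map it into this process's address space. */
    if (ftruncate(fd, SIZE) == -1) { perror("ftruncate"); return 1; }
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* From here on, just write to memory. Another process that
     * shm_open()s and mmap()s "/demo_shm" will see this string. */
    strcpy(ptr, "hello from the writer");

    munmap(ptr, SIZE);
    close(fd);
    return 0;
}

On Linux this may need to be linked with -lrt on older systems.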
Communications Models
(Figure: the two IPC models, message passing through the kernel
versus a shared-memory region mapped into both processes.)
Producer and Consumer
▪ A shared-memory solution to the bounded-buffer problem keeps a
counter of filled slots:

Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* while buffer is full, do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer:
while (true) {
    while (counter == 0)
        ; /* while buffer is empty, do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

▪ What if both have attempted to modify counter concurrently?
At the machine level, counter++ and counter-- each expand to
three instructions, and the two sequences may interleave:

counter++ (producer):        counter-- (consumer):
register1 = counter          register1 = counter
register1 = register1 + 1    register1 = register1 - 1
counter = register1          counter = register1
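To make the race concrete, here is a small sketch (an assumed setup,
not from the slides): two pthreads perform counter++ and counter--
one million times each with no synchronization. Because each
operation is a load/modify/store sequence, interleavings lose
updates, so the printed result is usually not 0.

/* Sketch of the counter race. Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

int counter = 0;   /* shared, intentionally unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        counter++;          /* load counter; add 1; store back */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        counter--;          /* load counter; subtract 1; store back */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);
    return 0;
}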
IPC – Message Passing
▪ Processes communicate with each other without
sharing address space
▪ Useful in a distributed environment.
• Ex: an Internet chat program
▪ IPC facility provides two operations:
• send(message)
• receive(message)
▪ If processes P and Q wish to communicate, they
need to:
• Establish a communication link between them
Direct or Indirect
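One concrete way to establish such a link is an anonymous pipe
between a parent and its child. In this minimal sketch (an
illustration, not from the slides), write() plays the role of
send(message) and read() plays the role of receive(message):

/* A parent/child communication link built from a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                 /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {            /* child: the receiver */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* receive */
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        close(fd[0]);
    } else {                   /* parent: the sender */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg));                 /* send */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}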
IPC – Message Passing
▪ Direct Link
• A link is associated with exactly one pair of
communicating processes.
• Processes must name each other explicitly:
send(P, message), receive(Q, message)
▪ Indirect Link
• Messages are sent to and received from mailboxes
(also called ports).
• Each mailbox has a unique id:
send(A, message), receive(A, message)
• Processes can communicate only if they share a
mailbox.
• A link may be associated with many processes.
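POSIX message queues provide one real mailbox mechanism of this
kind: any process that opens the same queue name shares the mailbox.
A minimal sketch follows, with the mailbox name "/A" and the sizes
chosen arbitrarily; one process both sends and receives here just to
keep the example self-contained.

/* An indirect link: a named POSIX message queue acts as mailbox "A".
 * Compile with: gcc mbox.c -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* send(A, message): open mailbox "A" and deposit a message */
    mqd_t mq = mq_open("/A", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }
    mq_send(mq, "hello", 6, 0);

    /* receive(A, message): any process that opens "/A" could do this */
    char buf[64];
    if (mq_receive(mq, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/A");
    return 0;
}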
Message Passing – Message Size
▪ The message size is either fixed or variable.
• If only fixed-sized messages can be sent, the
system-level implementation is straightforward,
but it makes the task of programming more
difficult.
• Variable-sized messages require a more complex
system-level implementation, but the programming
task becomes simpler.
Concurrency vs. Parallelism
▪ Parallelism implies a system can perform more
than one task simultaneously
▪ Concurrency supports more than one task making
progress
• On a single processor, CPU schedulers were
designed to provide the illusion of parallelism by
rapidly switching between processes, thereby
allowing each process to make progress. Such
processes were running concurrently, but not in
parallel.
▪ Thus, it is possible to have concurrency without
parallelism.
Concurrency vs. Parallelism
▪ Concurrent execution on single-core system:
(Figure: multiple tasks interleaved over time on a single core.)
Amdahl’s Law
▪ If S is the serial portion of the application and N is the
number of processing cores, then:

Speedup ≤ 1 / (S + (1 − S) / N)
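▪ Worked example: if an application is 25 percent serial
(S = 0.25) and runs on N = 2 cores:

Speedup ≤ 1 / (0.25 + 0.75 / 2) = 1.6

As N grows toward infinity, the speedup converges to 1/S, so the
serial portion bounds the gain no matter how many cores are added
(here, at most 4 times).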