
Operating Systems 1

CS 241
Spring 2021

By
Marwa M. A. Elfattah

Main Reference
Operating System Concepts, Abraham Silberschatz,
10th Edition
Processes
Interprocess Communication
▪ Processes within a system may be independent or
cooperating
• A cooperating process can affect or be affected
by other processes, including by sharing data.
▪ Reasons for cooperating processes:
• Information sharing
Several applications may be interested in the same
piece of information (for instance, copying and
pasting).
• Computation speedup
A process may be broken into subtasks, each of
which executes in parallel with the others. This
speedup is possible only if the computer has
multiple processing cores.
• Modularity
We may want to construct the system in a modular
fashion, dividing the system functions into
separate processes or threads.
Interprocess Communication
▪ Cooperating processes need interprocess
communication (IPC)
▪ Two models of IPC
• Shared memory
Can be faster, since message-passing systems are
implemented using system calls and thus require
the more time-consuming task of kernel
intervention.
– In shared-memory systems, system calls are
required only to establish the shared memory
region; all later accesses are ordinary memory
accesses.
• Message passing
Useful for exchanging smaller amounts of data,
because no conflicts need be avoided.
– Easier to implement in a distributed system,
although there are systems that provide
distributed shared memory.
Communications Models

▪ Shared memory: an area of memory shared among
the processes.
▪ Message passing: a queue of messages between the
processes.
IPC – Shared Memory
▪ Normally, the OS tries to prevent one process
from accessing another process's memory.
▪ Shared-memory IPC requires that two or more
processes agree to remove this restriction.
• The communication is under the control of the
user processes, not the operating system.
• The code for accessing the shared memory must be
written explicitly by the application programmer.
The form and the location of the data are
determined by these processes and are not under
the OS's control.
The processes are also responsible for ensuring
that they are not accessing the same location
simultaneously (the synchronization problem).
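
To make this concrete, here is a minimal sketch of
establishing a shared-memory region with the POSIX
API; the region name /demo_shm and the size are
hypothetical, and error checking is omitted:

    /* Establishing a POSIX shared-memory region.
       On Linux, compile with -lrt. */
    #include <fcntl.h>      /* O_CREAT, O_RDWR */
    #include <sys/mman.h>   /* shm_open, mmap */
    #include <unistd.h>     /* ftruncate */
    #include <stdio.h>

    int main(void) {
        const char *name = "/demo_shm"; /* hypothetical name */
        const size_t size = 4096;

        /* The only kernel involvement: create and size
           the region... */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
        ftruncate(fd, (off_t)size);

        /* ...and map it into this address space. */
        char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        /* From here on, communication is plain memory
           access, under user control, not the OS's. */
        sprintf(ptr, "hello from the producer");
        return 0;
    }

A consumer would shm_open the same name (without
O_CREAT) and mmap the same region; synchronizing
access remains the processes' own responsibility,
as discussed next.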
The Synchronization Problem
▪ Processes can execute concurrently
• May be interrupted at any time, partially
completing execution
▪ Concurrent access to shared data may result in
data inconsistency
▪ Maintaining data consistency requires mechanisms
to ensure the orderly execution of cooperating
processes
▪ Ex: the producer-consumer problem
Producer-Consumer Problem
▪ A model for cooperating processes:
• A producer process produces information that is
consumed by a consumer process.
• A shared buffer is needed that can be filled by
the producer and emptied by the consumer.
It resides in a region of shared memory.
▪ A producer can produce one item while a consumer
is consuming another item.
• The producer and consumer must be synchronized.
Bounded-Buffer – Shared-Memory
▪ Shared data reside in a region of memory shared by
the producer and consumer processes to implement
the buffer:
#define BUFFER_SIZE 10
typedef struct {
    . . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
▪ The shared buffer is implemented as a circular
queue with two logical pointers, in and out, and a
counter that keeps track of the number of items in
the buffer.
Producer

    while (true) {
        /* produce an item in next_produced */
        while (counter == BUFFER_SIZE)
            ; /* buffer full: do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Consumer

    while (true) {
        while (counter == 0)
            ; /* buffer empty: do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in next_consumed */
    }

▪ What if both processes attempt to modify counter
concurrently? Although counter++ and counter--
each look like a single operation, each is
implemented in machine language as three
instructions:

    counter++:
        register1 = counter
        register1 = register1 + 1
        counter = register1

    counter--:
        register2 = counter
        register2 = register2 - 1
        counter = register2

▪ If the two sequences interleave, an update can
be lost and counter ends up with an incorrect
value.
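
The lost update can be demonstrated directly.
Below is a minimal sketch (ours, not from the
slides) that runs the unsynchronized increments
and decrements from two POSIX threads; the
iteration count is arbitrary:

    /* Race demo: two threads update a shared counter
       without synchronization. Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 1000000
    int counter = 0;  /* shared, deliberately unprotected */

    void *producer(void *arg) {
        for (int i = 0; i < ITERS; i++)
            counter++;          /* load, add, store */
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITERS; i++)
            counter--;          /* load, subtract, store */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        /* Expected 0, but often nonzero: updates are
           lost when the three-instruction sequences
           interleave. */
        printf("counter = %d\n", counter);
        return 0;
    }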
IPC – Message Passing
▪ Processes communicate with each other without
sharing address space
▪ Useful in a distributed environment.
• Ex: an Internet chat program
▪ IPC facility provides two operations:
• send(message)
• receive(message)
▪ If processes P and Q wish to communicate, they
need to:
• Establish a communication link between them
Direct or Indirect
IPC – Message Passing
▪ The communication link between two processes may
be direct or indirect.

Direct Link
• A link is associated with exactly one pair of
communicating processes.
• Processes must name each other explicitly:
send(P, message), receive(Q, message)

Indirect Link
• Messages are sent to and received from mailboxes
(also called ports).
• Each mailbox has a unique id.
• Processes can communicate only if they share a
mailbox: send(A, message), receive(A, message)
• A link may be associated with many processes.
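
As a concrete illustration of the indirect
(mailbox) style, here is a minimal sketch using
POSIX message queues; the queue name /demo_mq and
the attribute values are assumptions, and error
checking is omitted:

    /* Mailbox-style IPC with POSIX message queues.
       On Linux, compile with -lrt. */
    #include <fcntl.h>    /* O_CREAT, O_RDWR */
    #include <mqueue.h>   /* mq_open, mq_send, mq_receive */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                                .mq_msgsize = 64, .mq_curmsgs = 0 };

        /* Open the mailbox; a cooperating process would
           open the same name to share the link. */
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR,
                           0666, &attr);

        /* send(A, message) */
        mq_send(mq, "hello", strlen("hello") + 1, 0);

        /* receive(A, message): the buffer must hold at
           least mq_msgsize bytes. */
        char buf[64];
        mq_receive(mq, buf, sizeof(buf), NULL);
        printf("got: %s\n", buf);

        mq_close(mq);
        return 0;
    }

In real use the send and receive would run in two
different processes that share the mailbox name.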
Message Passing – Message Size
▪ The message size is either fixed or variable.
• If only fixed-sized messages can be sent, the
system-level implementation is straightforward,
but the task of programming becomes more
difficult.
• Variable-sized messages require a more complex
system-level implementation, but the programming
task becomes simpler.
Concurrency vs. Parallelism
▪ Parallelism implies a system can perform more
than one task simultaneously
▪ Concurrency supports more than one task making
progress
• On a single processor, CPU schedulers were
designed to provide the illusion of parallelism by
rapidly switching between processes, thereby
allowing each process to make progress. Such
processes were running concurrently, but not in
parallel.
▪ Thus, it is possible to have concurrency without
parallelism.
Concurrency vs. Parallelism
▪ Concurrent execution on a single-core system
interleaves the tasks over time on one core.
▪ Parallel execution on a multi-core system runs
tasks simultaneously on separate cores.

Multicore Programming
▪ Multicore or multiprocessor systems put pressure
on programmers; challenges include:
• Dividing activities ➔ find tasks that are
independent and thus can run in parallel.
• Balance ➔ tasks should perform nearly equal
work.
• Data splitting ➔ divide the data so that
different tasks can run on different cores.
• Data dependency ➔ when one task depends on data
from another, programmers must ensure that the
execution of the tasks is synchronized.
• Testing and debugging ➔ when a program is
running in parallel on multiple cores, many
different execution paths are possible; testing
and debugging such concurrent programs is
inherently more difficult.
Multicore Programming - Type of parallelism
▪ Data parallelism – distributes subsets of the
same data across multiple cores, performing the
same operation on each.
▪ Task parallelism – distributes threads across
cores, each thread performing a unique operation.
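
As an illustration of data parallelism (a sketch
of ours, not from the slides), the two POSIX
threads below apply the same operation, summation,
to different halves of the same array:

    /* Data parallelism: same operation (sum), different
       subsets of the data. Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    int data[N];

    struct range { int lo, hi; long sum; };

    void *sum_range(void *arg) {
        struct range *r = arg;
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) data[i] = 1;
        struct range a = { 0, N / 2, 0 };
        struct range b = { N / 2, N, 0 };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_range, &a);
        pthread_create(&t2, NULL, sum_range, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("total = %ld\n", a.sum + b.sum); /* 1000 */
        return 0;
    }

Task parallelism would instead give each thread a
different operation, for example one thread
summing while another finds the maximum.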
Amdahl’s Law
▪ Identifies performance gains from adding additional
cores to an application that has both serial and
parallel components
▪ If S is the serial portion and N the number of
processing cores:

speedup ≤ 1 / (S + (1 − S) / N)
Amdahl’s Law
▪ If S is the serial portion and N the number of
processing cores:

speedup ≤ 1 / (S + (1 − S) / N)

▪ That is, if an application is 75% parallel and
25% serial:
• 2 cores give a speedup of
1 / (0.25 + 0.75 / 2) = 1.6 times
• 3 cores give a speedup of
1 / (0.25 + 0.75 / 3) = 2 times
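
These numbers are easy to check; the small sketch
below (the helper name amdahl is ours) evaluates
the bound directly:

    #include <stdio.h>

    /* Amdahl's bound for serial fraction s and n cores. */
    double amdahl(double s, double n) {
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main(void) {
        printf("2 cores: %.2f\n", amdahl(0.25, 2)); /* 1.60 */
        printf("3 cores: %.2f\n", amdahl(0.25, 3)); /* 2.00 */
        return 0;
    }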
Amdahl’s Law
▪ If S is the serial portion and N the number of
processing cores:

speedup ≤ 1 / (S + (1 − S) / N)

▪ As N approaches infinity, the speedup approaches
1 / S.
• The serial portion of an application therefore
has a disproportionate effect on the performance
gained by adding additional cores.
Thank You
