Process Synchronisation - 1

This document discusses parallel programming and process synchronization. It covers interprocess communication using shared memory and message passing. Key topics include the producer-consumer problem, bounded-buffer solutions in shared memory, race conditions that can occur, and motivation for using threads. Thread models like many-to-one, one-to-one, and many-to-many are described. The document also discusses user threads, kernel threads, and implementing threads in Java.

Parallel Programming

Process Synchronization - 1

Dr. Omar Zakaria


Interprocess Communication

▪ Processes within a system may be independent or cooperating
▪ Cooperating processes can affect or be affected by other
processes, including by sharing data
▪ Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
▪ Cooperating processes need interprocess communication (IPC)
▪ Two models of IPC
• Shared memory
• Message passing

Communications Models
(a) Shared memory. (b) Message passing.

Producer-Consumer Problem
▪ Paradigm for cooperating processes:
• producer process produces information that is consumed by a
consumer process
▪ Two variations:
• unbounded-buffer places no practical limit on the size of the
buffer:
 Producer never waits
 Consumer waits if there is no buffer to consume
• bounded-buffer assumes that there is a fixed buffer size
 Producer must wait if all buffers are full
 Consumer waits if there is no buffer to consume
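The bounded-buffer behaviour described above can be sketched with Java's `ArrayBlockingQueue`, whose `put` blocks when all slots are full and whose `take` blocks when the buffer is empty. This is a library illustration, not the slides' shared-memory solution; the class name `BoundedBufferSketch` and the helper `transfer` are illustrative names of this sketch.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBufferSketch {

    // Moves n items from a producer thread to a consumer thread through
    // a 10-slot bounded buffer; returns how many items are left unconsumed.
    static int transfer(int n) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int item = 0; item < n; item++)
                    buffer.put(item);   // producer waits if all 10 slots are full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++)
                    buffer.take();      // consumer waits if there is no item to consume
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return buffer.size();           // 0: everything produced was consumed
    }

    public static void main(String[] args) {
        System.out.println("items left in buffer: " + transfer(100));
    }
}
```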

IPC – Shared Memory

▪ An area of memory shared among the processes that wish to
communicate
▪ Communication is under the control of the user processes,
not the operating system.
▪ The major issue is to provide a mechanism that allows the
user processes to synchronize their actions when they access
shared memory.

Bounded-Buffer – Shared-Memory Solution

▪ Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

▪ Solution is correct, but can only use BUFFER_SIZE-1 elements
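Why one slot goes unused: the buffer is considered empty when in == out, so the producer must stop one slot early; otherwise a completely full buffer would also satisfy in == out and be indistinguishable from an empty one. A small simulation of the in/out index logic (the class name `CircularBuffer` and method `tryProduce` are illustrative, not from the slides):

```java
public class CircularBuffer {
    static final int BUFFER_SIZE = 10;
    int in = 0, out = 0;

    // Mirrors the producer's full test from the slides:
    // the slot at (in + 1) % BUFFER_SIZE must stay free.
    boolean tryProduce() {
        if ((in + 1) % BUFFER_SIZE == out)
            return false;               // "full" with one slot still empty
        in = (in + 1) % BUFFER_SIZE;
        return true;
    }

    public static void main(String[] args) {
        CircularBuffer b = new CircularBuffer();
        int stored = 0;
        while (b.tryProduce())
            stored++;
        System.out.println("items stored before full: " + stored); // 9, not 10
    }
}
```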

Producer Process – Shared Memory

item next_produced;

while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}

Consumer Process – Shared Memory
item next_consumed;

while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;

/* consume the item in next consumed */
}

What about Filling all the Buffers?
▪ Suppose that we wanted to provide a solution to the
consumer-producer problem that fills all the buffers.
▪ We can do so by having an integer counter that keeps track
of the number of full buffers.
▪ Initially, counter is set to 0.
▪ The integer counter is incremented by the producer after it
produces a new buffer.
▪ The integer counter is decremented by the consumer after it
consumes a buffer.

Producer

while (true) {
/* produce an item in next produced */

while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}

Consumer

while (true) {
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next consumed */
}

Race Condition
▪ counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
▪ counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2

▪ Consider this execution interleaving with “counter = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
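The lost-update interleaving above can be reproduced with two unsynchronized Java threads sharing a counter: each ++ is compiled to a load, an add, and a store, so increments from the two threads can overwrite each other. A sketch (the class name `RaceDemo` and method `run` are illustrative):

```java
public class RaceDemo {
    static int counter = 0;

    // Runs two threads that each increment the shared counter
    // `iterations` times with no synchronization, then returns the result.
    static int run(int iterations) {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < iterations; i++) {
                counter++;              // not atomic: load, add 1, store
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;                 // often less than 2 * iterations
    }

    public static void main(String[] args) {
        System.out.println(run(1_000_000) + " (would be 2000000 without the race)");
    }
}
```

The result varies from run to run, which is exactly what makes race conditions hard to debug.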

Threads
Motivation

▪ Most modern applications are multithreaded
▪ Threads run within the application
▪ Multiple tasks within the application can be implemented by
separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
▪ Process creation is heavy-weight while thread creation is
light-weight
▪ Can simplify code, increase efficiency
▪ Kernels are generally multithreaded

Single and Multithreaded Processes

Multithreaded Server Architecture

Benefits

▪ Responsiveness – may allow continued execution if part of
process is blocked, especially important for user interfaces
▪ Resource Sharing – threads share resources of process, easier
than shared memory or message passing
▪ Economy – cheaper than process creation, thread switching
lower overhead than context switching
▪ Scalability – process can take advantage of multicore
architectures

Concurrency vs. Parallelism
▪ Parallelism implies a system can perform more than one task
simultaneously
▪ Concurrency supports more than one task making progress
▪ Single processor / core, scheduler providing concurrency
▪ Concurrent execution on single-core system:

▪ Parallelism on a multi-core system:

User Threads and Kernel Threads

▪ User threads - management done by user-level threads library
▪ Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
▪ Kernel threads - Supported by the Kernel
▪ Examples – virtually all general-purpose operating systems, including:
• Windows
• Linux
• Mac OS X
• iOS
• Android

User and Kernel Threads
Multithreading Models
▪ Many-to-One
▪ One-to-One
▪ Many-to-Many

Many-to-One
▪ Many user-level threads mapped to single kernel thread
▪ One thread blocking causes all to block
▪ Multiple threads may not run in parallel
on a multicore system because only one
may be in the kernel at a time
▪ Few systems currently use this model

One-to-One

▪ Each user-level thread maps to a kernel thread
▪ Creating a user-level thread creates a kernel thread
▪ More concurrency than many-to-one
▪ Number of threads per process sometimes restricted due to overhead
▪ Examples
• Windows
• Linux

Many-to-Many Model
▪ Allows many user-level threads to be mapped to many kernel threads
▪ Allows the operating system to create a sufficient number of kernel
threads
▪ Windows with the ThreadFiber package
▪ Otherwise not very common

Thread Libraries

▪ Thread library provides programmer with API for creating and
managing threads
▪ Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS

Java Threads

▪ Java threads are managed by the JVM
▪ Typically implemented using the threads model provided by underlying
OS
▪ Java threads may be created by:
• Extending Thread class
• Implementing the Runnable interface

• Standard practice is to implement Runnable interface

Java Threads
Implementing Runnable interface:

Creating a thread:

Waiting on a thread:
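The three steps named above can be sketched in one small Java program; the class name `HelloRunnable` and the `finished` flag are illustrative, not from the slides:

```java
// Implementing the Runnable interface, creating/starting a Thread,
// and waiting on it with join().
public class HelloRunnable implements Runnable {
    static volatile boolean finished = false;

    @Override
    public void run() {                 // the work the new thread performs
        finished = true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Creating a thread from a Runnable
        Thread worker = new Thread(new HelloRunnable());
        worker.start();
        // Waiting on the thread to finish
        worker.join();
        System.out.println("worker finished: " + finished);
    }
}
```

The same program could instead extend Thread directly, but as the previous slide notes, implementing Runnable is the standard practice because it leaves the class free to extend something else.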

