
Module 3

Threads
• Most modern applications are multithreaded
• Threads run within an application
• Multiple tasks within the application can be implemented by separate threads
  • Update display
  • Fetch data
  • Spell checking
  • Answer a network request
• Process creation is heavy-weight while thread creation is light-weight
• Threads can simplify code and increase efficiency
• Kernels are generally multithreaded
Benefits
• Responsiveness – may allow continued execution if
part of process is blocked, especially important for user
interfaces
• Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
• Economy – cheaper than process creation, thread
switching lower overhead than context switching
• Scalability – process can take advantage of
multiprocessor architectures
• Takes less time to create a new thread than a process
• Takes less time to terminate a thread than a process
• Switching between two threads takes less time than switching between processes
• Threads enhance efficiency in communication between programs
Single and Multithreaded Processes
Single thread Vs multi thread
Multicore programming
• Concurrent execution on a single-core system
• Parallel execution on a multicore system
Each thread has:
• an execution state (Running, Ready, etc.)
• a saved thread context when not running (TCB)
• an execution stack
• some per-thread static storage for local variables
• access to the shared memory and resources of its process (all threads of a process share this)
Thread Scheduling
• In an OS that supports threads, scheduling and dispatching are done on a per-thread basis
• Most of the state information dealing with execution is maintained in thread-level data structures
• Suspending a process involves suspending all threads of the process
• Termination of a process terminates all threads within the process
Thread Execution States
• The key states for a thread are:
  • Running
  • Ready
  • Blocked
• Thread operations associated with a change in thread state are:
  • Spawn (create)
  • Block
  • Unblock
  • Finish
Thread Execution
• A key issue with threads is whether they can be scheduled independently of the process to which they belong.
• In other words, is it possible to block one thread in a process without blocking the entire process?
• If not, then much of the flexibility of threads is lost.
Types of Threads
• User-Level Threads (ULTs)
• Kernel-Level Threads (KLTs)
User-Level Threads (ULTs)
• Thread management is done by the application
• The kernel is not aware of the existence of threads
Scheduling of user level threads
Relationships Between ULT States and Process States

Possible transitions from Figure 4.6a: 4.6a→4.6b, 4.6a→4.6c, 4.6a→4.6d
Advantages of ULTs
• Thread switching does not require kernel-mode privileges (no mode switches)
• Scheduling can be application specific
• ULTs can run on any OS
Disadvantages of ULTs
• In a typical OS many system calls are blocking
  • as a result, when a ULT executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked
• In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing
Kernel-Level Threads (KLTs)
• Thread management is done by the kernel (these could also be called KMTs)
• No thread management is done by the application
• Windows is an example of this approach
Scheduling of kernel level
threads
Advantages of KLTs
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors
• If one thread in a process is blocked, the kernel can schedule another thread of the same process
Disadvantage of KLTs
• The transfer of control from one thread to another within the same process requires a mode switch to the kernel
Combined Approaches
• Thread creation is done in user space
• The bulk of scheduling and synchronization of threads is done by the application
• Solaris is an example
Multithreading Models
• In general, user-level threads can be implemented using
one of four models.
• Many-to-one
• One-to-one
• Many-to-many
• Two-level
• All models map user-level threads to kernel-level threads. A kernel thread is similar to a process in a non-threaded (single-threaded) system; it is the unit of execution that the kernel schedules to execute on the CPU.
Many-to-One Model
• Many user-level threads are mapped to a single kernel thread
• User-level threads can be concurrent without being parallel; thread switching incurs low overhead, but blocking of one user-level thread leads to blocking of all threads in the process
• Examples:
  • Solaris Green Threads
  • GNU Portable Threads
One-to-One Model
• Each user-level thread maps to a kernel thread
• Threads can operate in parallel on different CPUs of a multiprocessor system; however, switching between threads is performed at the kernel level and incurs high overhead
• Blocking of one user-level thread does not block the other user-level threads of the process, because they are mapped to different kernel-level threads
• Examples:
  • Windows NT/XP/2000
  • Linux
  • Solaris 9 and later
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads
• Provides parallelism between user-level threads that are mapped to different kernel-level threads at the same time, while keeping switching overhead low
• Examples:
  • Solaris prior to version 9
  • Windows NT/2000 with the ThreadFiber package
Two-level Model
• Similar to M:M, except that it also allows a user thread to be bound to a kernel thread
• Examples:
  • IRIX
  • HP-UX
  • Tru64 UNIX
  • Solaris 8 and earlier
Two-level Model
Thread Libraries
• A thread library provides the programmer with an API for creating and managing threads
• Two primary ways of implementing:
  • Library entirely in user space
  • Kernel-level library supported by the OS
Pthreads
• The ANSI/IEEE Portable Operating System Interface (POSIX) standard defines the pthreads application program interface for use by C language programs
• May be provided as either a user-level or a kernel-level library
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• The API specifies the behavior of the thread library; the implementation is up to the developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
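As a concrete illustration, here is a minimal Pthreads sketch (the names `sum_worker` and `threaded_sum` are invented for this example): the application spawns a thread with pthread_create() and waits for it with pthread_join().

```c
#include <pthread.h>

/* Argument/result record passed to the worker thread. */
struct sum_arg { int n; long result; };

/* Worker: sums 1..n and stores the result through its argument. */
static void *sum_worker(void *p) {
    struct sum_arg *a = p;
    long s = 0;
    for (int i = 1; i <= a->n; i++)
        s += i;
    a->result = s;
    return NULL;
}

/* Spawn one thread, join it, and return the computed sum. */
long threaded_sum(int n) {
    struct sum_arg a = { n, 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, sum_worker, &a);  /* spawn */
    pthread_join(tid, NULL);                     /* wait for completion */
    return a.result;
}
```

Compile with `-lpthread` on most UNIX systems; a caller would simply invoke `threaded_sum(100)`.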
Interprocess Communication

• Processes within a system may be independent or cooperating
• A cooperating process can affect or be affected by other processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Convenience
• Cooperating processes need interprocess
communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
(a) Message passing. (b) shared memory.
Producer-Consumer Problem

• Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
• unbounded-buffer: places no practical limit on the size of the buffer
• bounded-buffer: assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution

• Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Bounded-Buffer – Producer

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
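The circular-buffer logic above can be sketched as self-contained C functions (`buf_put` and `buf_get` are invented names for this illustration). Note that the scheme distinguishes "full" from "empty" by sacrificing one slot: `in == out` means empty, so only BUFFER_SIZE-1 elements are usable.

```c
#define BUFFER_SIZE 10

typedef int item;

static item buffer[BUFFER_SIZE];
static int in = 0;    /* next free slot */
static int out = 0;   /* next item to consume */

/* Insert: returns 0 on success, -1 if the buffer is full
   (only BUFFER_SIZE-1 slots are usable). */
int buf_put(item x) {
    if (((in + 1) % BUFFER_SIZE) == out)
        return -1;                    /* full */
    buffer[in] = x;
    in = (in + 1) % BUFFER_SIZE;
    return 0;
}

/* Remove: returns 0 on success, -1 if the buffer is empty. */
int buf_get(item *x) {
    if (in == out)
        return -1;                    /* empty */
    *x = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 0;
}
```

In the slides the producer and consumer busy-wait on these conditions instead of returning an error code.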
Interprocess Communication – Shared Memory

• An area of memory is shared among the processes that wish to communicate
• The communication is under the control of the user processes, not the operating system
• A major issue is providing a mechanism that allows the user processes to synchronize their actions when they access shared memory
• Synchronization is discussed in greater detail later
Interprocess Communication – Message Passing

• Mechanism for processes to communicate and to synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• The IPC facility provides two operations:
  • send(message)
  • receive(message)
• The message size is either fixed or variable
Message Passing (Cont.)

• If processes P and Q wish to communicate, they need to:
• Establish a communication link between them
• Exchange messages via send/receive
• Implementation issues:
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of
communicating processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate
fixed or variable?
• Is a link unidirectional or bi-directional?
Direct Communication
• Processes must name each other
explicitly:
• send (P, message) – send a message to
process P
• receive(Q, message) – receive a message
from process Q
• Properties of communication link
• Links are established automatically
• A link is associated with exactly one pair of
communicating processes
• Between each pair there exists exactly one
link
• The link may be unidirectional, but is
usually bi-directional
Indirect Communication
• Messages are directed and received from
mailboxes (also referred to as ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a
mailbox
• Properties of communication link
• Link established only if processes share a
common mailbox
• A link may be associated with many processes
• Each pair of processes may share several
communication links
• Link may be unidirectional or bi-directional
Indirect Communication
• Operations
• create a new mailbox (port)
• send and receive messages through mailbox
• destroy a mailbox
• Primitives are defined as:
send(A, message) – send a message to
mailbox A
receive(A, message) – receive a
message from mailbox A
Indirect Communication
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?
• Solutions
• Allow a link to be associated with at
most two processes
• Allow only one process at a time to
execute a receive operation
• Allow the system to select arbitrarily
the receiver. Sender is notified who
the receiver was.
Pipes
• Acts as a conduit allowing two processes to
communicate
• Issues:
• Is communication unidirectional or bidirectional?
• In the case of two-way communication, is it half or full-
duplex?
• Must there exist a relationship (i.e., parent-child)
between the communicating processes?
• Ordinary pipes – cannot be accessed from outside the process that created them. Typically, a parent process creates a pipe and uses it to communicate with a child process that it created.
• Named pipes – can be accessed without a parent-
child relationship.
Ordinary Pipes
 Ordinary Pipes allow communication in standard
producer-consumer style
 Producer writes to one end (the write-end of the pipe)
 Consumer reads from the other end (the read-end of
the pipe)
 Ordinary pipes are therefore unidirectional
 Require parent-child relationship between
communicating processes

 Windows calls these anonymous pipes


#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

int main()
{
    char msg[25] = "Greetings";
    char buf[25];
    int fd[2];
    pid_t pid;

    if (pipe(fd) == -1) {
        fprintf(stderr, "Pipe failed");
        return 1;
    }
    pid = fork();
    if (pid < 0) {
        fprintf(stderr, "Fork failed");
        return 1;
    }
    if (pid > 0) {                       /* parent: writer */
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
    } else {                             /* child: reader */
        close(fd[1]);
        read(fd[0], buf, 25);
        close(fd[0]);
    }
    return 0;
}
Process
Synchronization
Producer-Consumer Problem
• Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
• unbounded-buffer: places no practical limit on the size of the buffer
• bounded-buffer: assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution

• Shared data
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
• This solution is correct, but can only use BUFFER_SIZE-1 elements
Bounded-Buffer – Insert() Method
while (true) {
    /* produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers (busy waiting) */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Remove() Method
while (true) {
    while (in == out)
        ; // do nothing -- nothing to consume

    // remove an item from the buffer
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
Background
• Concurrent access to shared data may result
in data inconsistency
• Maintaining data consistency requires
mechanisms to ensure the orderly execution
of cooperating processes
• Suppose that we wanted to provide a solution
to the consumer-producer problem that fills
all the buffers. We can do so by having an
integer count that keeps track of the number
of full buffers. Initially, count is set to 0. It is
incremented by the producer after it
produces a new buffer and is decremented
by the consumer after it consumes a buffer.
Producer
while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer
while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
• count++ could be implemented as

register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as

register2 = count
register2 = register2 - 1
count = register2
• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1
{register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1
{register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
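The lost-update interleaving above can be reproduced with POSIX threads. This sketch (the names `run`, `safe_inc`, and `unsafe_inc` are invented) runs several threads incrementing a shared counter; only the mutex-protected version has a deterministic final count, because `count++` alone compiles to a non-atomic load/add/store sequence.

```c
#include <pthread.h>

#define N_THREADS 4
#define ITERS 100000

static long count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Racy: count++ is load, add, store -- interleavings lose updates. */
static void *unsafe_inc(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        count++;
    return NULL;
}

/* Safe: the increment is a critical section guarded by a mutex. */
static void *safe_inc(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&m);
        count++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Run N_THREADS copies of fn and return the final count. */
long run(void *(*fn)(void *)) {
    pthread_t t[N_THREADS];
    count = 0;
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&t[i], NULL, fn, NULL);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(t[i], NULL);
    return count;
}
```

`run(unsafe_inc)` will usually (but not always) return less than N_THREADS*ITERS, while `run(safe_inc)` always returns exactly that.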
Critical Sections
A section of code, common to n cooperating processes, in which the processes may be accessing common variables.

A Critical Section Environment contains:
• Entry Section – code requesting entry into the critical section
• Critical Section – code in which only one process can execute at any one time
• Exit Section – the end of the critical section, releasing or allowing others in
• Remainder Section – the rest of the code AFTER the critical section
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its
critical section, then no other processes can be
executing in their critical sections
2. Progress - If no process is executing in its critical
section and there exist some processes that wish to
enter their critical section, then the selection of the
processes that will enter the critical section next
cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the
number of times that other processes are allowed to
enter their critical sections after a process has made a
request to enter its critical section and before that
request is granted.
Peterson’s Solution
• Two process solution
• Assume that the LOAD and STORE
instructions are atomic; that is, cannot be
interrupted.
• The two processes share two variables:
• int turn;
• Boolean flag[2]
• The variable turn indicates whose turn it is
to enter the critical section.
• The flag array is used to indicate if a
process is ready to enter the critical
section. flag[i] = true implies that process
Pi is ready!
Algorithm for Process Pi
while (true) {
flag[i] = TRUE;
turn = j;
while ( flag[j] == true && turn == j);

CRITICAL SECTION

flag[i] = FALSE;

REMAINDER SECTION
}
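On modern hardware, plain variables do not give the atomic, ordered LOAD and STORE that Peterson's solution assumes, so a faithful C version needs sequentially consistent atomics. Here is a sketch using C11 `<stdatomic.h>` (the helper names `enter`, `leave`, `run_peterson` are invented for this example):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static atomic_bool flag[2];      /* flag[i]: Pi wants to enter */
static atomic_int turn;          /* whose turn it is to enter */
static long shared_count = 0;    /* protected by Peterson's algorithm */

static void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;  /* busy wait */
}

static void leave(int i) {
    atomic_store(&flag[i], false);
}

static void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        shared_count++;          /* critical section */
        leave(i);
    }
    return NULL;
}

/* Run both processes P0 and P1; returns the final shared count. */
long run_peterson(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_count;
}
```

With the default sequentially consistent atomics, mutual exclusion holds and the two threads' 100000 increments each are never lost.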
Bakery Algorithm
• N process solution
Synchronization Hardware
• Many systems provide hardware support
for critical section code
• Uniprocessors – could disable interrupts
• Currently running code would execute
without preemption
• Generally too inefficient on multiprocessor
systems
• Operating systems using this not broadly scalable
• Modern machines provide special atomic
hardware instructions
• Atomic = non-interruptable
• Either test memory word and set value
• Or swap contents of two memory words
TestAndSet Instruction
• Definition:

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Solution using TestAndSet
• Shared boolean variable lock, initialized to FALSE.

while (true) {
    while (TestAndSet(&lock))
        ; /* do nothing */

    // critical section

    lock = FALSE;

    // remainder section
}
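C11 exposes a TestAndSet-style primitive directly: `atomic_flag_test_and_set` atomically sets a flag and returns its previous value. A minimal spinlock sketch built on it (the names `spin_lock`/`spin_unlock` are invented for this illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* atomic_flag_test_and_set is the C11 analogue of the TestAndSet
   instruction: atomically set the flag, return the old value. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

/* Spin until the old value was "clear", i.e. we acquired the lock. */
void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy wait */
}

/* Release the lock so another spinner's TestAndSet returns false. */
void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}
```

This mirrors the slide's loop: `while (TestAndSet(&lock));` acquires, `lock = FALSE;` releases.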
Bounded-waiting mutual exclusion with TestAndSet:

boolean waiting[n];
boolean lock;
lock = FALSE; waiting[i] = FALSE;
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
Swap Instruction
• Definition:

void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution using Swap
• Shared Boolean variable lock initialized to FALSE; each process has a local Boolean variable key.
• Solution:
while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);

    // critical section

    lock = FALSE;

    // remainder section
}
Semaphore
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal()
• Originally called P() and V()
• Can only be accessed via two indivisible (atomic) operations

wait(S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal(S) {
    S++;
}
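POSIX provides counting semaphores directly: `sem_wait` is the wait() of the slides (decrement, blocking at zero) and `sem_post` is signal() (increment). A small sketch (the `demo` function is invented; note that unnamed semaphores as used here are deprecated on macOS):

```c
#include <semaphore.h>

/* Initialize a semaphore to 1, enter and leave a critical section,
   and return the semaphore's final value (should be back to 1). */
int demo(void) {
    sem_t s;
    int value;
    sem_init(&s, 0, 1);   /* 0 = shared between threads, initial value 1 */
    sem_wait(&s);         /* wait(S): enter critical section */
    /* ... critical section ... */
    sem_post(&s);         /* signal(S): leave critical section */
    sem_getvalue(&s, &value);
    sem_destroy(&s);
    return value;
}
```

Unlike the busy-waiting definition above, the OS implementation blocks the caller instead of spinning.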
Semaphore as General Synchronization Tool

• Counting semaphore – integer value can range over an


unrestricted domain
• Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
• Also known as mutex locks
• Can implement a counting semaphore S as a binary
semaphore
• Provides mutual exclusion
• Semaphore S; // initialized to 1
• wait (S);
Critical Section
signal (S);
Classical Problems of Synchronization

• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• N buffers, each can hold one item
• Semaphore mutex initialized to
the value 1
• Semaphore full initialized to the
value 0
• Semaphore empty initialized to
the value N.
The structure of the producer process
mutex=1, full=0,empty=N
while (true) {

// produce an item

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
}
The structure of the consumer process

while (true) {
wait (full);
wait (mutex);

// remove an item from buffer

signal (mutex);
signal (empty);

// consume the removed item

}
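The producer and consumer structures above combine into a runnable Pthreads sketch. The buffer size N, item count M, and function names are invented for this illustration; a pthread mutex stands in for the binary semaphore mutex, and POSIX semaphores play the roles of empty and full.

```c
#include <pthread.h>
#include <semaphore.h>

#define N 5        /* buffer slots */
#define M 100      /* items to transfer */

static int buf[N], in_pos = 0, out_pos = 0;
static sem_t empty_slots, full_slots;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long consumed_sum = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= M; i++) {
        sem_wait(&empty_slots);          /* wait(empty) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        buf[in_pos] = i;                 /* add the item to the buffer */
        in_pos = (in_pos + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&full_slots);           /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < M; i++) {
        sem_wait(&full_slots);           /* wait(full) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        consumed_sum += buf[out_pos];    /* remove an item from buffer */
        out_pos = (out_pos + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&empty_slots);          /* signal(empty) */
    }
    return NULL;
}

/* Run one producer and one consumer; returns the sum 1+2+...+M. */
long run_pc(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);        /* empty initialized to N */
    sem_init(&full_slots, 0, 0);         /* full initialized to 0 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```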
Readers-Writers Problem
• A data set is shared among a number of
concurrent processes
• Readers – only read the data set; they do not
perform any updates
• Writers – can both read and write.

• Problem – allow multiple readers to read at the


same time. Only one single writer can access
the shared data at the same time.

• Shared Data
• Data set
• Semaphore mutex initialized to 1.
• Semaphore wrt initialized to 1.
• Integer readcount initialized to 0.
Readers-Writers Problem (Cont.)
• The structure of a writer process

while (true) {
wait (wrt) ;

// writing is performed

signal (wrt) ;
}
Readers-Writers Problem (Cont.)
• The structure of a reader process

while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1) wait(wrt);
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt);
    signal(mutex);
}
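The scheme above translates directly to C with POSIX semaphores (the helper names `start_read`, `end_read`, `start_write`, `end_write`, and `rw_demo` are invented for this sketch):

```c
#include <semaphore.h>

/* First readers-writers scheme from the slides: mutex protects
   readcount; wrt gives writers exclusive access to the data set. */
static sem_t mutex, wrt;
static int readcount = 0;
static int shared_data = 0;

void start_read(void) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);      /* first reader locks out writers */
    sem_post(&mutex);
}

void end_read(void) {
    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);      /* last reader lets writers back in */
    sem_post(&mutex);
}

void start_write(void) { sem_wait(&wrt); }
void end_write(void)   { sem_post(&wrt); }

/* Single-threaded exercise of the protocol: write 42, then read it. */
int rw_demo(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    start_write(); shared_data = 42; end_write();
    start_read();
    int v = shared_data;
    end_read();
    return v;
}
```

This variant can starve writers if readers keep arriving, which is why it is called the first (readers-preference) readers-writers problem.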
Dining-Philosophers Problem

• Shared data
  • Bowl of rice (data set)
  • Semaphore chopstick[5] initialized to 1

Philosopher 1:
wait(chopstick[0]);
wait(chopstick[1]);
// eat
signal(chopstick[0]);
signal(chopstick[1]);
Dining-Philosophers Problem (Cont.)
• The structure of philosopher i:

while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think
}
Sleeping Barber Problem
• If there are no customers, the barber falls asleep in the
chair
• A customer must wake the barber if he is asleep
• If a customer arrives while the barber is working, the
customer leaves if all chairs are occupied or sits in an
empty chair if it's available
• When the barber finishes a haircut, he inspects the
waiting room to see if there are any waiting customers
and falls asleep if there are none
Problems with Semaphores
• Incorrect use of semaphore operations:
  • signal(mutex) …. wait(mutex)
  • wait(mutex) … wait(mutex)
  • Omitting wait(mutex) or signal(mutex) (or both)
Semaphore Implementation with no Busy Waiting

• With each semaphore there is an associated waiting queue. Each semaphore has two data items:
  • value (of type integer)
  • pointer to a list of waiting processes
• Two operations:
  • block – place the process invoking the operation on the appropriate waiting queue
  • wakeup – remove one of the processes in the waiting queue and place it in the ready queue
Semaphore Implementation with no Busy Waiting (Cont.)

• Implementation of wait:

wait(S) {
    S--;
    if (S < 0) {
        add this process to waiting queue
        block();
    }
}

• Implementation of signal:

signal(S) {
    S++;
    if (S <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);

• Starvation – indefinite blocking. A process may never be removed


from the semaphore queue in which it is suspended.
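The standard remedy for this kind of deadlock is to impose a single global ordering on lock acquisition, so no circular wait can form. A Pthreads sketch (mutexes stand in for the binary semaphores S and Q; `worker` and `run_workers` are invented names): both threads take S before Q, unlike P0 and P1 above.

```c
#include <pthread.h>

static pthread_mutex_t S = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t Q = PTHREAD_MUTEX_INITIALIZER;
static int done = 0;

/* Both threads acquire S before Q: a fixed global lock order
   removes the circular wait that caused the deadlock. */
static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&S);
    pthread_mutex_lock(&Q);
    done++;               /* critical section using both resources */
    pthread_mutex_unlock(&Q);
    pthread_mutex_unlock(&S);
    return NULL;
}

/* Run two such threads; with ordered acquisition both always finish. */
int run_workers(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return done;
}
```

If one thread instead took Q before S (as P1 does in the slide), the two could block forever, each holding the lock the other needs.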
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization
• Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    initialization code (…) { … }
}
Schematic view of a Monitor
Condition Variables
• condition x, y;
• Two operations on a condition variable:
  • x.wait() – a process that invokes the operation is suspended
  • x.signal() – resumes one of the processes (if any) that invoked x.wait()
Now suppose that, when the x.signal() operation is invoked by a process P, there exists a suspended process Q associated with condition x. Clearly, if the suspended process Q is allowed to resume its execution, the signaling process P must wait. Otherwise, both P and Q would be active simultaneously within the monitor.

Two possibilities exist:
1. Signal and wait: P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue: Q either waits until P leaves the monitor or waits for another condition.
Monitor with Condition Variables
Producer - Consumer
monitor PC
{
    int slots = 0;
    condition full, empty;

    void producer()
    {
        while (slots == N) empty.wait();
        slots++;
        full.signal();
    }

    void consumer()
    {
        while (slots == 0) full.wait();
        slots--;
        empty.signal();
    }
}
Solution to Dining Philosophers

monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Solution to Dining Philosophers (cont)

• Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

dp.pickup(i)

EAT

dp.putdown(i)
