Threads Part 2
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped to single kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
One-to-One
• Each user-level thread maps to kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one
• Number of threads per process sometimes restricted due to overhead
• Examples: Windows, Linux
Many-to-Many Model (hybrid model)
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
Thread Libraries
• Thread library provides the programmer with an API for creating and managing threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
• Two threading strategies:
• Asynchronous threading – the parent resumes execution and runs concurrently with, and independently of, the child
• Synchronous threading (fork-join) – the parent waits for all of its children to terminate
Pthreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
• Specification, not implementation
• API specifies behavior of the thread library; implementation is up to the developers of the library
• Common in UNIX operating systems (Linux & Mac OS X)
Pthreads Example
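The example program itself is not reproduced in these notes, so the following is a minimal sketch of the kind of Pthreads program such a slide typically shows: the parent creates one thread with pthread_create(), the thread sums the integers up to a command-line value, and the parent waits with pthread_join(). The names runner and sum are illustrative, not taken from the slides.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                               /* data shared by the threads */

/* Thread entry point: computes the sum of 1..upper. */
static void *runner(void *param) {
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;                     /* thread identifier */
    pthread_attr_t attr;               /* thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return -1;
    }

    pthread_attr_init(&attr);          /* default attributes */
    pthread_create(&tid, &attr, runner, argv[1]);
    pthread_join(tid, NULL);           /* wait for the child thread to finish */
    printf("sum = %d\n", sum);
    return 0;
}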
Pthreads Code for Joining 10 Threads
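The joining code is likewise missing from the extracted text; below is a hedged sketch of the usual pattern: keep the thread identifiers returned by pthread_create() in an array, then call pthread_join() on each one in a loop. NUM_THREADS, workers and runner are illustrative names.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 10

static void *runner(void *param) {
    printf("thread %ld running\n", (long)param);
    pthread_exit(0);
}

int main(void) {
    pthread_t workers[NUM_THREADS];

    /* Create the ten threads, remembering each identifier. */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&workers[i], NULL, runner, (void *)i);

    /* Wait for every worker to terminate. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(workers[i], NULL);

    return 0;
}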
Implicit Threading
• Growing in popularity: as the number of threads increases, program correctness becomes more difficult to ensure with explicit threads
• Creation and management of threads is done by compilers and run-time libraries rather than by programmers
• Five methods explored
• Thread Pools
• Fork-Join
• OpenMP
• Grand Central Dispatch
• Intel Threading Building Blocks
Thread Pools
• Create a number of threads in a pool where they await work
• Advantages:
• Usually slightly faster to service a request with an existing thread than to create a new thread
• Allows the number of threads in the application(s) to be bound to the size
of the pool
• Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task
• E.g. tasks could be scheduled to run periodically (a thread-pool sketch follows below)
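To make the idea concrete, here is a minimal Pthreads-based thread-pool sketch. It is not from the slides; POOL_SIZE, pool_submit and the fixed-size task queue are assumptions made for illustration. The worker threads are created once, sleep on a condition variable while the queue is empty, and service submitted tasks, so no new thread is created per request.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE  4            /* number of worker threads in the pool */
#define QUEUE_MAX 16            /* capacity of the pending-task queue   */

typedef struct {                /* one unit of work */
    void (*func)(void *);
    void *arg;
} task_t;

static task_t queue[QUEUE_MAX];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Each worker blocks until a task is queued, then runs it. */
static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        task_t t = queue[head];
        head = (head + 1) % QUEUE_MAX;
        count--;
        pthread_mutex_unlock(&lock);
        t.func(t.arg);          /* execute the task outside the lock */
    }
    return NULL;
}

/* Submit work to the pool instead of creating a new thread.
   If the queue is full the task is dropped, to keep the sketch short. */
void pool_submit(void (*func)(void *), void *arg) {
    pthread_mutex_lock(&lock);
    if (count < QUEUE_MAX) {
        queue[tail] = (task_t){ func, arg };
        tail = (tail + 1) % QUEUE_MAX;
        count++;
        pthread_cond_signal(&not_empty);
    }
    pthread_mutex_unlock(&lock);
}

void pool_init(void) {
    for (int i = 0; i < POOL_SIZE; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
    }
}

static void say_hello(void *arg) {
    printf("task %ld handled by an existing pool thread\n", (long)arg);
}

int main(void) {
    pool_init();
    for (long i = 0; i < 8; i++)
        pool_submit(say_hello, (void *)i);
    sleep(1);                   /* crude wait so the queued tasks can finish */
    return 0;
}

A production pool would also need a clean shutdown path and error handling; they are omitted here to keep the sketch short.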
Fork-Join Parallelism
• Multiple threads (tasks) are forked, and then joined
• General algorithm for the fork-join strategy: a sketch follows below
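The general algorithm from the slide is not in the extracted text; the sketch below (an assumed example, not from the slides) illustrates the strategy with plain Pthreads: a task either solves a small enough problem directly, or forks two subtasks for the two halves and joins and combines their results. THRESHOLD, range_t and sum_task are illustrative names.

#include <pthread.h>
#include <stdio.h>

#define THRESHOLD 4                    /* solve directly below this size */

typedef struct { int *data; int lo, hi; long sum; } range_t;

static void *sum_task(void *arg) {
    range_t *r = (range_t *)arg;
    int n = r->hi - r->lo;
    if (n <= THRESHOLD) {              /* small enough: solve directly */
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += r->data[i];
    } else {                           /* otherwise fork two subtasks */
        int mid = r->lo + n / 2;
        range_t left  = { r->data, r->lo, mid,   0 };
        range_t right = { r->data, mid,   r->hi, 0 };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_task, &left);
        pthread_create(&t2, NULL, sum_task, &right);
        pthread_join(t1, NULL);        /* join and combine the results */
        pthread_join(t2, NULL);
        r->sum = left.sum + right.sum;
    }
    return NULL;
}

int main(void) {
    int data[16];
    for (int i = 0; i < 16; i++) data[i] = i + 1;
    range_t whole = { data, 0, 16, 0 };
    sum_task(&whole);                  /* run the root task directly */
    printf("sum = %ld\n", whole.sum);  /* prints 136 */
    return 0;
}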
OpenMP
• Set of compiler directives and an API for C, C++, and FORTRAN
• Provides support for parallel programming in shared-memory environments
• Identifies parallel regions – blocks of code that can run in parallel
• #pragma omp parallel – creates as many threads as there are cores (example sketch below)
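A small, assumed OpenMP example (not from the slides): the code inside the #pragma omp parallel block is executed once by every thread OpenMP creates, and omp_get_thread_num() / omp_get_num_threads() report each thread's id and the team size. Compile with, e.g., gcc -fopenmp.

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* Sequential code here runs in the single initial thread. */
    printf("before the parallel region\n");

    #pragma omp parallel       /* create as many threads as there are cores */
    {
        printf("I am thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* Back to a single thread after the implicit barrier. */
    printf("after the parallel region\n");
    return 0;
}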
Grand Central Dispatch
• Work to be run in parallel is expressed as a block, e.g. ^{ printf("I am a block"); }
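As a sketch only (Apple platforms, compiled with clang; not from the slides): a block is submitted to a GCD dispatch queue with dispatch_async(), and GCD runs it on a thread from its own pool. The semaphore is an assumption added so the program does not exit before the block runs.

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    /* Submit the block asynchronously; GCD runs it on a pool thread. */
    dispatch_async(queue, ^{
        printf("I am a block\n");
        dispatch_semaphore_signal(done);
    });

    /* Wait until the block has finished before exiting. */
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}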
Signal Handling
• E.g. a synchronous signal (illegal memory access, division by zero) is delivered to the specific thread that caused it
• An asynchronous signal (e.g. Ctrl-C) is delivered to all threads in the process
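Because an asynchronous signal such as SIGINT is directed at the process as a whole, a common Pthreads pattern is to block it in every thread via the inherited signal mask and have one dedicated thread receive it synchronously with sigwait(). The sketch below is an assumed illustration of that pattern, not code from the slides.

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg) {
    sigset_t set;
    int sig;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Wait synchronously for SIGINT on behalf of the whole process. */
    sigwait(&set, &sig);
    printf("SIGINT handled by the dedicated signal thread\n");
    return NULL;
}

int main(void) {
    sigset_t set;
    pthread_t tid;

    /* Block SIGINT in main; the mask is inherited by new threads. */
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_thread, NULL);
    pthread_join(tid, NULL);   /* press Ctrl-C to let the thread finish */
    return 0;
}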
Thread Cancellation
• Terminating a thread before it has finished
• E.g. one thread of a multithreaded database search finds the result and the others are no longer needed, or the user stops a web page from loading
• Thread to be canceled is target thread
• Two general approaches:
• Asynchronous cancellation terminates the target thread immediately
• Deferred cancellation allows the target thread to periodically check if it should be cancelled
• Pthread code to create and cancel a thread:
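The slide's code figure is not in the extracted text; the following is a minimal sketch of creating a target thread and then cancelling it (worker is an illustrative name). With the default deferred cancellation type, the request takes effect when the worker reaches a cancellation point such as sleep().

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Worker runs until it is cancelled. */
static void *worker(void *arg) {
    while (1) {
        printf("working...\n");
        sleep(1);               /* sleep() is a cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);  /* create the target thread */
    sleep(3);                                  /* let it run briefly */

    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* returns once the cancellation has completed */

    printf("worker cancelled\n");
    return 0;
}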
Thread Cancellation
• The thread's state and the resources it holds must be checked so that it can be cancelled safely
• Invoking thread cancellation requests cancellation, but actual cancellation depends on thread
state
• If thread has cancellation disabled, cancellation remains pending until thread enables it
• Default type is deferred
• Cancellation only occurs when the thread reaches a cancellation point
• E.g. pthread_testcancel()
• Then cleanup handler is invoked
• On Linux systems, thread cancellation is handled through signals
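A sketch (assumed, not from the slides) of deferred cancellation with a cleanup handler: the handler registered with pthread_cleanup_push() is invoked when the thread is cancelled at the pthread_testcancel() cancellation point, so the resource it holds is released. free_buffer and the 1 KB buffer are illustrative.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Cleanup handler: releases the buffer if the thread is cancelled. */
static void free_buffer(void *p) {
    printf("cleanup handler: releasing buffer\n");
    free(p);
}

static void *worker(void *arg) {
    char *buf = malloc(1024);            /* resource to protect */
    pthread_cleanup_push(free_buffer, buf);
    while (1)
        pthread_testcancel();            /* explicit cancellation point */
    pthread_cleanup_pop(1);              /* not reached; pairs the push */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);                 /* handler runs during cancellation */
    pthread_join(tid, NULL);
    return 0;
}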
Scheduler Activations
• Both M:M and Two-level models require
communication to maintain the appropriate number of
kernel threads allocated to the application
• Typically use an intermediate data structure between
user and kernel threads – lightweight process (LWP)
• Appears to be a virtual processor on which process can
schedule user thread to run
• Each LWP attached to kernel thread
• How many LWPs to create?
• Scheduler activations provide upcalls - a
communication mechanism from the kernel to the
upcall handler in the thread library
• This communication allows an application to maintain the correct number of kernel threads