Ch4 Update Upload

Chapter 4 discusses threads as fundamental units of CPU utilization, emphasizing their role in multithreading and multicore programming. It covers the benefits of multithreaded programming, including responsiveness and resource sharing, and outlines various multithreading models like many-to-one and one-to-one. The chapter also addresses challenges in multicore programming, such as task division and data dependency, and introduces implicit threading methods like thread pools and OpenMP for easier management of threads.

Chapter 4: Threads

 Overview
 Multicore Programming
 Multithreading Models
 Implicit Threading
 Threading Issues

Operating System Concepts – 9th Edition 4.2 Silberschatz, Galvin and Gagne ©2013
Overview
 A thread is a basic unit of CPU utilization; it comprises a
thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same
process its code section, data section, and other
operating-system resources, such as open files and
signals.
 A traditional (or heavyweight) process has a single
thread of control.
 If a process has multiple threads of control, it can
perform more than one task at a time.

Single and Multithreaded Processes

Motivation

 Most software applications that run on modern computers and mobile
devices are multithreaded.
 An application typically is implemented as a separate process
with several threads of control
 Multiple tasks within the application can be implemented by separate
threads
 An application that creates photo thumbnails from a collection of
images may use a separate thread to generate a thumbnail from each
separate image.
 Web browser – updating the display, responding to network requests
 Word processor – spell checking, displaying graphics, responding
to keystrokes
 Process creation is heavy-weight while thread creation is light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded

Multithreaded Server Architecture

 In certain situations, a single application may be required to
perform several similar tasks.
 A web server accepts client requests for web pages, images,
sound, and so forth.
 If the web-server process is multithreaded, the server will create
a separate thread that listens for client requests.
 When a request is made, rather than creating another process,
the server creates a new thread to service the request and
resumes listening for additional requests.

Benefits of Multithreaded Programming

 Responsiveness – may allow continued execution if part of the process is
blocked; especially important for user interfaces
 Resource Sharing – threads share the memory and resources of the
process to which they belong by default; this is easier than shared
memory or message passing.
 The benefit of sharing code and data is that it allows an application to
have several different threads of activity within the same address
space.
 Economy – allocating memory and resources for process creation is
costly; thread creation is cheaper than process creation, and thread
switching has lower overhead than a process context switch.
 Scalability – The benefits of multithreading can be even greater in a
multiprocessor architecture, where threads may be running in parallel
on different processing cores.

 Most operating-system kernels are now multithreaded.
 Several threads operate in the kernel, and each thread
performs a specific task, such as managing devices,
managing memory, or handling interrupts

Multicore Programming

Multicore Programming
 Multicore vs. multiprocessor systems
 Multithreaded programming provides a mechanism for
more efficient use of these multiple computing cores
and improved concurrency.
 On a system with a single computing core, concurrency
merely means that the execution of the threads will be
interleaved over time.
 On a system with multiple cores, however, concurrency
means that the threads can run in parallel, because the
system can assign a separate thread to each core

Concurrency vs. Parallelism
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:

Programming challenges
 Multicore systems continue to place pressure on system designers and
application programmers to make better use of the multiple computing
cores.
 Designers of operating systems must write scheduling algorithms that use
multiple processing cores to allow parallel execution
 For application programmers, the challenge is to modify existing programs
as well as design new programs that are multithreaded.
 In general, five areas present challenges in programming for multicore
systems:

Multicore Programming

 Multicore or multiprocessor systems are putting pressure on
programmers; challenges include:
 Dividing activities
 Balance
 Data splitting
 Data dependency
 Testing and debugging
 Parallelism implies a system can perform more than one task
simultaneously
 Concurrency supports more than one task making progress
 Single processor / core, scheduler providing concurrency

Multicore Programming
 Identifying tasks. This involves examining applications to find areas
that can be divided into separate, concurrent tasks. Ideally, tasks are
independent of one another and thus can run in parallel on individual
cores.
 Balance. While identifying tasks that can run in parallel, programmers
must also ensure that the tasks perform equal work of equal value.
 Data splitting. Just as applications are divided into separate tasks, the
data accessed and manipulated by the tasks must be divided to run on
separate cores.

 Data dependency. The data accessed by the tasks
must be examined for dependencies between two or
more tasks.
 Testing and debugging. When a program is running in
parallel on multiple cores, many different execution paths
are possible.
 Testing and debugging such concurrent programs is
inherently more difficult than testing and debugging
single-threaded applications.

Multicore Programming (Cont.)

 Types of parallelism
 Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
 Task parallelism – distributing threads across cores, each
thread performing unique operation
 As # of threads grows, so does architectural support for threading
 Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core

Amdahl’s Law
 Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
 S is serial portion
 N processing cores

speedup <= 1 / (S + (1 - S) / N)

 That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores results in a speedup of 1.6 times
 As N approaches infinity, speedup approaches 1 / S

 The serial portion of an application has a disproportionate effect on the
performance gained by adding additional cores

Multithreading Models

Multithreading Models
 Support for threads may be provided either at the user
level, for user threads, or by the kernel, for kernel
threads.
 User threads are supported above the kernel and are
managed without kernel support
 Kernel threads are supported and managed directly by
the operating system.

Multithreading Models

Ultimately, a relationship must exist between user threads and kernel
threads.

 Many-to-One

 One-to-One

 Many-to-Many

Many-to-One

 Many user-level threads mapped to a
single kernel thread
 One thread blocking causes all to block
 Multiple threads may not run in parallel
on a multicore system because only one
may be in the kernel at a time
 Few systems currently use this model

One-to-One
 Each user-level thread maps to kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes
restricted due to overhead
 Examples
 Windows
 Linux
 Solaris 9 and later

Many-to-Many Model
 Allows many user level threads to be
mapped to many kernel threads
 Allows the operating system to create
a sufficient number of kernel threads
 Solaris prior to version 9
 Windows with the ThreadFiber
package

Two-level Model
 Similar to M:M, except that it allows a user thread to be
bound to a kernel thread
 Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier

Java Threads

 Java threads are managed by the JVM
 Typically implemented using the threads model provided by
the underlying OS
 Java threads may be created by:
 Extending the Thread class
 Implementing the Runnable interface

Implicit Threading

Implicit Threading

 Growing in popularity: as the number of threads increases, program
correctness becomes more difficult to achieve with explicit threads
 Issues include process synchronization and deadlocks
 Implicit threading: transfer the creation and management of threads
from application developers to compilers and run-time libraries
 Three methods explored
 Thread Pools
 OpenMP
 Grand Central Dispatch
 Other methods include Microsoft Threading Building Blocks (TBB),
java.util.concurrent package

Thread Pools
 The general idea behind a thread pool is to create a number of threads
at start-up and place them into a pool, where they sit and wait for
work.
 When a server receives a request, rather than creating a thread, it
submits the request to the thread pool and resumes waiting for
additional requests.
 If there is an available thread in the pool, it is awakened, and the
request is serviced immediately.
 If the pool contains no available thread, the task is queued until
one becomes free.
 Once a thread completes its service, it returns to the pool and
awaits more work.

Thread Pools
 The solution to this problem is to use a thread pool:
 Create a number of threads in a pool, where they wait for work; after
completing a task they return to the pool.
 Advantages:
 Usually slightly faster to service a request with an existing thread than to
create a new thread.
 Allows the number of threads in the application(s) to be bound by the size of
the pool.
 Separating the task to be performed from the mechanics of creating the task
allows different strategies for running the task
 e.g., tasks could be scheduled to run periodically
 The Windows API supports thread pools,
e.g., through the QueueUserWorkItem() function.
 java.util.concurrent package

OpenMP
 OpenMP is a set of compiler directives as well as an
API for programs written in C, C++, and FORTRAN.
 Provides support for parallel programming in shared-
memory environments
 Identifies parallel regions – blocks of code that can
run in parallel.
 Application developers insert compiler directives into
their code at parallel regions, and these directives
instruct the OpenMP run-time library to execute the
region in parallel.

A compiler directive placed above a parallel region containing a
printf() statement illustrates the idea.

 When OpenMP encounters the directive

#pragma omp parallel

it creates as many threads as there are processing cores in the
system.
 Thus, for a dual-core system, two threads are created; for a
quad-core system, four are created; and so forth.
 All the threads then simultaneously execute the parallel region.
 As each thread exits the parallel region, it is terminated.
 OpenMP provides several additional directives for running code
regions in parallel, including parallelizing loops.

#pragma omp parallel
Create as many threads as there are cores.

#pragma omp parallel for
for (i = 0; i < N; i++) {
c[i] = a[i] + b[i];
}
Run the for loop in parallel.

 In addition to providing directives for parallelization,
OpenMP allows developers to choose among several
levels of parallelism.
 OpenMP is available on several open-source and
commercial compilers for Linux, Windows, and Mac OS X
systems.

Grand Central Dispatch

 GCD is an Apple technology for Mac OS X and iOS
operating systems
 It is a combination of extensions to the C and C++
languages, an API, and a run-time library.
 Allows application programmers to identify sections of
code to run in parallel.
 Like OpenMP, GCD manages most of the details of threading.
 GCD defines extensions to the C and C++ languages
known as blocks.

Grand Central Dispatch contd..
 A block is simply a self-contained unit of work.
 A block is specified by a caret preceding braces: ^{ printf("I am a block"); }
 GCD schedules blocks for run-time execution by placing
them on a dispatch queue.
 Blocks placed in dispatch queue
 Assigned to available thread in thread pool when
removed from queue

Grand Central Dispatch Contd..

 Two types of dispatch queues:
 serial – blocks removed in FIFO order; queue is per
process, called the main queue
 Programmers can create additional serial queues within a
program
 concurrent – blocks removed in FIFO order, but several may be
removed at a time
 Three system-wide queues with priorities low, default,
high

Grand Central Dispatch Contd..
 Internally, GCD’s thread pool is composed of POSIX
threads.
 GCD actively manages the pool, allowing the number of
threads to grow and shrink according to application
demand and system capacity.

Threading Issues

Threading Issues
 Some of the issues to consider in designing multi-threaded programs.
 The fork() and exec() system calls
 Signal handling
– synchronous and asynchronous
 Thread cancellation of target thread
– asynchronous or deferred
 Thread-local storage
 Scheduler Activations

The fork() and exec() System Calls

 The fork() system call is used to create a separate, duplicate process.
The semantics of the fork() and exec() system calls change in a
multithreaded program.
 Does fork() duplicate only the calling thread or all threads?
 Some UNIX systems have two versions of fork()
 exec() usually works as normal – it replaces the running process,
including all threads.
 Which of the two versions of fork() to use depends on the
application:
 If exec() is called immediately after forking, then duplicating all
threads is unnecessary, as the program specified in the parameters to
exec() will replace the process.

Signal Handling
 Signals are used in UNIX systems to notify a process that a particular
event has occurred.
 A signal may be received either synchronously or asynchronously,
depending on the source of and the reason for the event being signaled.
 All signals, whether synchronous or asynchronous, follow the same
pattern:
1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled.
 E.g., synchronous signals: illegal memory access, division by 0.
 If a running program performs either of these actions, a signal is
generated.
 Synchronous signals are delivered to the same process that performed
the operation that caused the signal.

Signal Handling (Cont.)
 When a signal is generated by an event external to a running process, that
process receives the signal asynchronously.
Examples: terminating a process with specific keystrokes (such as
<control><C>) or having a timer expire.
 A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
 Every signal has a default signal handler that the kernel runs when handling
that signal.
 This default action can be overridden by a user-defined signal handler that is
called to handle the signal.
 Signals are handled in different ways. Some signals may be ignored, while others
(for example, an illegal memory access) are handled by terminating the program.

Signal Handling (Cont.)

 For a single-threaded program, a signal is delivered to the process
 Where should a signal be delivered in a multithreaded program?
 Deliver the signal to the thread to which the signal applies
 Deliver the signal to every thread in the process
 Deliver the signal to certain threads in the process
 Assign a specific thread to receive all signals for the process

Thread Cancellation
 Terminating a thread before it has finished.
 Thread to be canceled is target thread
 Two general approaches:
 Asynchronous cancellation terminates the target thread
immediately
 Deferred cancellation allows the target thread to periodically
check if it should be cancelled
 Pthread code to create and cancel a thread:

Thread Cancellation (Cont.)
 Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
 If the thread has cancellation disabled, cancellation remains pending
until the thread enables it
 The default cancellation type is deferred
 Cancellation occurs only when the thread reaches a cancellation
point
 i.e., pthread_testcancel()
 Then the cleanup handler is invoked
 On Linux systems, thread cancellation is handled through signals

Thread-Local Storage

 Threads belonging to a process share the data of the process.
However, in some circumstances, each thread might need its
own copy of certain data.
 Thread-local storage (TLS) allows each thread to have its
own copy of data.
 TLS is more similar to static data than to local variables, but
differs from local variables:
 Local variables are visible only during a single function
invocation
 TLS is visible across function invocations
 TLS is unique to each thread

Scheduler Activations
 Many systems implementing either the many-
to-many (M:M) or two-level models require
communication to maintain the appropriate
number of kernel threads allocated to the
application
 Typically an intermediate data structure
between user and kernel threads is used – the
lightweight process (LWP)
 To the user-thread library, the LWP appears
to be a virtual processor on which the process
can schedule a user thread to run.
 There is a 1:1 correspondence between
LWPs and kernel threads.
 Each LWP is attached to a kernel thread.
 Kernel threads are scheduled onto real
processors by the OS.

 Scheduler activations provide upcalls - a communication
mechanism from the kernel to the upcall handler in the
thread library
This communication allows an application to maintain the
correct number of kernel threads

End of Chapter 4

