Chapter 4 discusses threads as fundamental units of CPU utilization essential for multithreaded systems, highlighting their benefits such as responsiveness, resource sharing, and economy. It covers various threading models, including user-level and kernel-level threads, and their respective advantages and disadvantages. Additionally, the chapter explores multithreaded programming in multicore environments, emphasizing the importance of efficient resource management and concurrency.


Chapter 4: Threads

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 4: Threads
 Overview
 Multicore Programming
 Multithreading Models
 Thread Libraries
 Implicit Threading
 Threading Issues
 Operating System Examples

Objectives
 To introduce the notion of a thread—a fundamental unit of CPU
utilization that forms the basis of multithreaded computer
systems
 To discuss the APIs for the Pthreads, Windows, and Java
thread libraries
 To explore several strategies that provide implicit threading
 To examine issues related to multithreaded programming
 To cover operating system support for threads in Windows and
Linux

Motivation

 Most modern applications are multithreaded


 Threads run within an application
 Multiple tasks within the application can be implemented by
separate threads
 Update display
 Fetch data
 Spell checking
 Answer a network request
 Process creation is heavy-weight while thread creation is light-
weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded

Definitions
• A thread is a basic unit of CPU utilization.
• Alternatively, a thread is a sequence of programmed instructions that the
CPU can execute independently.
• Each thread has its own thread ID, program counter, register set, and
stack.
• Threads belonging to the same process share the same code section,
data section, and other operating-system resources, such as open files
and signals.
• Threads are also termed lightweight processes because they share common
resources.
• A traditional (or heavyweight) process has a single thread of control. If a
process has multiple threads of control, it can perform more than one
task at a time.

Single and Multithreaded Processes

Motivation
• Most software applications that run on modern computers are
multithreaded. An application typically is implemented as a separate
process with several threads of control.

• Examples:
• A web browser might have one thread display images or
text while another thread retrieves data from the network.
• A word processor may have a thread for displaying
graphics, another thread for responding to keystrokes from the user,
and a third thread for performing spelling and grammar checking in the
background.
• Additionally, applications can be written to take advantage of the
processing power of multicore systems. Such programs can execute many CPU-
intensive tasks in parallel across multiple processing cores.

Multithreaded Server Architecture
- A web-server process is usually multithreaded: the server
creates a separate thread that listens for client requests.
- When a request is made, rather than creating another process, the
server creates a new thread to service the request and resumes
listening for additional requests.

Benefits of Using Threads

 Responsiveness – may allow continued execution if part of the process is
blocked; especially important for user interfaces
 Resource Sharing – Processes can only share resources through
techniques such as shared memory and message passing. Such
techniques must be explicitly arranged by the programmer.
Threads, however, share the memory and the resources of the
process to which they belong by default

 Economy – thread creation is cheaper than process creation, and thread
switching has lower overhead than context switching between processes
 Scalability – The benefits of multithreading can be even greater
in a multiprocessor architecture, where threads may be
running in parallel on different processing cores. A single-
threaded process can run on only one processor, regardless
of how many are available.

Multicore Programming
 In response to the need for more computing performance, single-CPU
systems evolved into multi-CPU (multiprocessor) systems.
 A later yet similar trend in system design is to place multiple computing
cores on a single chip (multicore), where each core appears as a separate
processor to the operating system.
 Multithreaded programming provides a mechanism for more efficient use of
these multiple computing cores and improved concurrency.

 Parallelism implies a system can perform more than one task simultaneously
 Concurrency is the ability of a system to deal with multiple tasks or
operations in an overlapping manner. This does not
necessarily mean that tasks are executed at the exact same time, but rather
that the system can manage and switch between tasks efficiently.
Concurrency vs. Parallelism
 Consider an application with four threads. On a system with a
single computing core, concurrency merely means that the
execution of the threads will be interleaved over time, because
the processing core is capable of executing only one thread at a
time.

 Concurrent execution on single-core system:

- On a system with multiple cores, however, concurrency means that
the threads can run in parallel, because the system can assign a
separate thread to each core

Parallelism on a multi-core system:

Parallelism is possible only on multicore or multiprocessor systems

- The trend toward multicore systems continues to put pressure on system
designers and application programmers to make better use of multiple
computing cores. OS designers must write scheduling algorithms that
use multiple processing cores to allow parallel execution.
- For application programmers, the challenge is to modify existing programs
as well as to design new programs that are multithreaded
- In general, five areas present challenges in programming for multicore
systems:
1. Dividing activities
2. Balance
3. Data splitting
4. Data dependency
5. Testing and debugging

Multicore Programming (Cont.)
 Types of parallelism
 Data parallelism – distributes subsets of the same data across multiple
cores, performing the same operation on each core

 Task parallelism – involves distributing not data but tasks (threads)
across cores, each thread performing a unique operation. Different
threads may be operating on the same data, or they may be operating on
different data.
 As # of threads grows, so does architectural support for threading
 CPUs have cores as well as hardware threads
 Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per
core

Amdahl’s Law
 Identifies the performance gain from adding additional cores to an
application that has both serial (non-parallel) and parallel components
 S is the serial portion
 N processing cores

speedup ≤ 1 / (S + (1 − S) / N)

 That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores results in a speedup of 1.6 times
 As N approaches infinity, speedup approaches 1 / S

 Serial portion of an application has a disproportionate effect on
performance gained by adding additional cores

User Threads and Kernel Threads
 Support for threads may be provided either at the user level, for user threads, or by the
kernel, for kernel threads.
 User threads – are supported (created and managed) above the kernel and are managed
by user-level threads library (without kernel support)
 Three primary thread libraries:
 POSIX Pthreads
 Windows threads
 Java threads
 Kernel threads - are supported and managed directly by the operating system
 Kernel threads are supported by virtually all general-purpose operating systems, including:
 Windows
 Solaris
 Linux
 Tru64 UNIX
 Mac OS X

USER Level Thread
• User-level threads are implemented and managed in user space, and the
kernel is not aware of them
• User-level threads are implemented using user-level libraries, and the OS does
not recognize these threads
• A user-level thread is faster to create and manage than a kernel-level
thread
• Context switching between user-level threads is faster
• If one user-level thread performs a blocking operation, the entire
process gets blocked
• The reason is that in a user-level threading model, threads are
managed by a runtime library in user space rather than by the
operating-system kernel, so the OS is aware only of the main
process, not of the individual user-level threads within it, and it cannot
selectively suspend and resume them.

Kernel Level Thread
• Kernel-level threads are implemented and managed by the OS
• Kernel-level threads are created through system calls and are
recognized by the OS
• A kernel-level thread is slower to create and manage than a user-level
thread
• Context switching between kernel-level threads is slower
• Even if one kernel-level thread performs a blocking operation, it does not
affect other threads

- Kernel threads may be created automatically by the OS based on system


demands, such as load balancing across CPUs in a multiprocessor system,
kernel modules, and interrupt handling routines
- Also Kernel threads may be created due to a request of a user-level
thread (ULT) library.

Multithreading Models

 A relationship exists between user threads and kernel threads. The
three common ways of establishing such a relationship are:

 Many-to-One

 One-to-One

 Many-to-Many

Many-to-One

 Many user-level threads mapped to a single kernel thread
 Thread management is done by the thread
library in user space

 However, the entire process will block if a
thread makes a blocking system call.
 Also, because only one thread can access
the kernel at a time, multiple threads are
unable to run in parallel on multicore
systems.
 Few systems currently use this model
 Examples:
 Solaris Green Threads
 GNU Portable Threads

One-to-One
 Maps each user-level thread to a kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 It also allows multiple threads to run in parallel on
multiprocessors.
 The disadvantage is that creating a user thread requires creating a
corresponding kernel thread. Since a large number of kernel threads can
burden the system, there is often a restriction on the number of threads
in the system
 Number of threads per process sometimes restricted due to overhead
 Examples of OS that implement one-to-one
 Windows
 Linux
 Solaris 9 and later

Many-to-Many Model
 Allows many user-level threads to be
mapped to many (a smaller or equal number of) kernel
threads
 Allows the operating system to create
a sufficient number of kernel threads
 developers can create as many user
threads as necessary, and the
corresponding kernel threads can run
in parallel on a multiprocessor. Also,
when a thread performs a blocking
system call, the kernel can schedule
another thread for execution.
 Solaris prior to version 9
 Windows with the ThreadFiber
package

Two-level Model
 Similar to M:M, except that it also allows a user thread to be
bound to a kernel thread
 Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier

Thread Libraries
 Thread library provides programmer with API for creating and managing
threads
 Two primary ways of implementing
 Library entirely in user space:
 All code and data structures for the library exist in user space.
This means that invoking a function in the library results in a
local function call in user space and not a system call.

 Kernel-level library supported by the OS:


 code and data structures for the library exist in kernel space.
Invoking a function in the API for the library typically results in
a system call to the kernel.
 Three main thread libraries are in use today: POSIX Pthreads,
Windows, and Java.

Pthreads
 May be provided either as user-level or kernel-level
 Pthreads refers to the POSIX standard (IEEE 1003.1c) API
for thread creation and synchronization
 Specification, not implementation:
 Different operating systems (such as Linux, macOS,
and various Unix variants) provide their own
implementation of the pthreads specification,
which adheres to the rules and APIs defined by
POSIX but may differ in underlying code.
 API specifies behavior of the thread library, implementation is
up to development of the library
 Common in UNIX operating systems (Solaris, Linux, Mac OS X)

Pthreads Example

Pthreads Example (Cont.)

Pthreads Code for Joining 10 Threads

Windows Multithreaded C Program

Windows Multithreaded C Program (Cont.)

Java Threads

 Java threads are managed by the JVM


 Typically implemented using the threads model provided by
underlying OS
 Java threads may be created by:

 Extending Thread class


 Implementing the Runnable interface

Java Multithreaded Program

Java Multithreaded Program (Cont.)

Implicit Threading

 Implicit threading transfers the creation and management of
threads from application developers (programmers) to compilers
and run-time libraries
 This approach simplifies multithreaded programming because
developers do not need to manage individual threads explicitly, thus
avoiding many common issues related to thread synchronization and
concurrency.
 Three methods explored
 Thread Pools
 OpenMP
 Grand Central Dispatch
 Other methods include Intel Threading Building Blocks (TBB) and the
java.util.concurrent package

Thread Pools
 Create a number of threads in a pool where they await work
 Advantages:
 Usually slightly faster to service a request with an existing thread
than create a new thread
 Allows the number of threads in the application(s) to be bound to the
size of the pool
 Separating task to be performed from mechanics of creating task
allows different strategies for running task
 i.e. in multithreaded programming, a developer can define a task
that needs to be performed, such as processing data or handling
user requests, without having to specify exactly how it will be
scheduled or executed. This task can then be submitted to a
thread pool or other scheduling mechanism
 Windows API supports thread pools:

OpenMP (Open Multi-Processing)
 OpenMP is a Set of compiler directives as
well as an API for C, C++, FORTRAN
 Provides support for parallel programming in
shared-memory environments
 Identifies parallel regions – blocks of code that
can run in parallel
 Application developers insert compiler directives into their code at
parallel regions; these directives instruct the OpenMP run-time library
to execute the region in parallel:

#pragma omp parallel

The directive above creates as many threads as there are processing
cores in the system: for a dual-core system two threads are created, for
a quad-core system four, and so forth. Each thread executes the code
within the parallel block independently and concurrently.

#pragma omp parallel for


Grand Central Dispatch(GCD)

 a technology for Apple’s Mac OS X and iOS operating systems


 is a combination of extensions to the C and C++ languages, an API, and a run-
time library
 Allows identification of parallel sections
 Manages most of the details of threading
 GCD identifies extensions to the C and C++ languages known
as blocks. A block is simply a self-contained unit of work.

 Block is specified by “^{ }” – e.g. ^{ printf("I am a block"); }


 Blocks placed in dispatch queue
 Assigned to available thread in thread pool when removed from
queue

Grand Central Dispatch
 GCD identifies Two types of dispatch queues:
 serial – blocks removed in FIFO order,
 Once a block(task) has been removed from the queue, it
must complete execution before another block is removed.
 Each process has its own serial queue (known as its main
queue). Programmers can create additional serial queues within a program
 concurrent – removed in FIFO order but several blocks may be
removed at a time , thus allowing multiple blocks to execute in
parallel.
 There are three system-wide concurrent dispatch queues,
and they are distinguished according to priority: low,
default, and high.

Threading Issues
 Semantics of fork() and exec() system calls
 Signal handling
 Synchronous and asynchronous
 Thread cancellation of target thread
 Asynchronous or deferred
 Thread-local storage
 Scheduler Activations

Semantics of fork() and exec()
 In Chapter 3, we described how the fork() system call is used to
create a separate, duplicate process. The semantics of the
fork() and exec() system calls change in a multithreaded
program.
 If one thread in a program calls fork(), does the new process
duplicate all threads, or is the new process single-threaded?
 Some UNIX systems have chosen to have two versions of
fork(), one that duplicates all threads and another that
duplicates only the thread that invoked the fork() system
call
 exec() usually works as normal – That is, if a thread invokes the
exec() system call, the program specified in the parameter to
exec() will replace the entire process—including all threads.

Recall from Chapter 3: After a fork() system call, one of the two
processes typically uses the exec() system call to replace the process’s
memory space with a new program. The exec() system call loads a
binary file into memory (destroying the memory image of the program
containing the exec() system call) and starts its execution.
Signal Handling
 Signals are used in UNIX systems to notify a process that a
particular event has occurred.
 A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. A default signal handler
2. A user-defined signal handler

• Every signal has a default handler that the kernel runs when
handling the signal
 A user-defined signal handler can override the default
 In a single-threaded program, signals are delivered to the process
Signal Handling (Cont.)
 Where should a signal be delivered for multi-threaded?
 Deliver the signal to the thread to which the signal
applies
 Deliver the signal to every thread in the process
 Deliver the signal to certain threads in the process
 Assign a specific thread to receive all signals for the
process

Signal Handling (Cont.)

• A signal may be received either synchronously or asynchronously.
• Synchronous signals are delivered to the same process that performed the operation that
caused the signal (that is the reason they are considered synchronous).
• Examples of synchronous signal include illegal memory access and division by 0. If a
running program performs either of these actions, a signal
is generated.
• When a signal is generated by an event external to a running process, that
process receives the signal asynchronously. Examples of such signals include
terminating a process with specific keystrokes (such as <control><C>) and
having a timer expire. Typically, an asynchronous signal is sent to another
process.
• The method for delivering a signal depends on the type of signal generated.
For example, synchronous signals need to be delivered to the thread causing
the signal and not to other threads in the process. However, the situation with
asynchronous signals is not as clear. Some asynchronous signals—such as a
signal that terminates a process (<control><C>, for example)—should be
sent to all threads.

Thread Cancellation
 Thread cancellation is Terminating a thread before it has finished
 Thread to be canceled is called target thread
 Two general approaches for Cancellation of a target thread:
 Asynchronous cancellation terminates the target thread immediately
 Deferred cancellation allows the target thread to periodically check if it
should terminate, allowing it an opportunity to terminate itself in an
orderly fashion.

 Pthread code to create and cancel a thread:

Thread Cancellation (Cont.)
 Invoking thread cancellation requests cancellation, but actual cancellation
depends on thread state.
 Pthreads supports three cancellation modes. Each mode is defined
as a state and a type. As the table illustrates, Pthreads allows
threads to disable or enable cancellation.

 If a thread has cancellation disabled, cancellation remains pending until
the thread enables it
 Default type is deferred cancellation
 Cancellation only occurs when thread reaches cancellation point
 I.e. pthread_testcancel()
 Then cleanup handler is invoked
 On Linux systems, thread cancellation is handled through signals
Thread-Local Storage

 Thread-local storage (TLS) allows each thread to have its own copy of data
 Useful when you do not have control over the thread creation
process (i.e., when using a thread pool)
 Different from local variables
 Local variables visible only during single function invocation
(They are only accessible and valid during the execution of
that specific function call)
 TLS visible across function invocations (the ability of
TLS to store data that is unique to each thread and
remains accessible for the duration of the thread's
execution, across multiple function calls within that thread.)
 Similar to static data
 TLS is unique to each thread

Scheduler Activations
Both the M:M and two-level models require communication to maintain the appropriate
number of kernel threads allocated to the application
Many systems that implement the M:M or two-level model typically place an
intermediate data structure between the user and kernel threads – the lightweight
process (LWP)
The LWP appears to the user-thread library as a virtual processor on which it can
schedule a user thread to run
Each LWP is attached to a kernel thread (which is scheduled by the OS to run on a
physical processor)
If a kernel thread blocks (such as while waiting for an I/O operation to
complete), the LWP blocks as well, and the user-level thread
attached to the LWP also blocks.
An application may require any number of LWPs to run efficiently
How many LWPs to create?
Typically, an LWP is required for each concurrent blocking system call.
Suppose, for example, that an application has 5 blocking operations that can occur
simultaneously; then five LWPs are needed. If the process is assigned
only four LWPs, the fifth request must wait for one of the LWPs to
return from the kernel.
Scheduler Activations-continue
Communication between the user-level threads and the kernel-level threads is often done
through the scheduler-activation mechanism
Scheduler activations work as follows:
The kernel provides an application with a set of virtual processors (LWPs),
and the application can schedule user threads onto an available virtual
processor (LWP).
The kernel must inform the application about certain events. This
procedure is known as an upcall.
Upcalls are handled by the thread library by a procedure called an
upcall handler (invoked, for example, when a thread is about to block), and
upcall handlers must run on a virtual processor.

Scheduler activations provide upcalls – a communication mechanism from the kernel to the
upcall handler in the thread library
This communication allows an application to maintain the correct number of kernel threads

NOTE: With scheduler activations, lightweight processes (LWPs) can be allocated to a
process dynamically. The system adjusts the number of LWPs assigned to a
process based on its workload and threading requirements.
Operating System Examples

 Windows Threads
 Linux Threads

Windows Threads
 Windows implements the Windows API – primary API for Win
98, Win NT, Win 2000, Win XP, and Win 7
 Implements the one-to-one mapping
 Each thread contains
 A thread id
 Register set representing state of processor
 Separate user and kernel stacks for when thread runs in
user mode or kernel mode
 Private data storage area used by run-time libraries and
dynamic link libraries (DLLs)
 The register set, stacks, and private storage area are known as
the context of the thread

Windows Threads (Cont.)

 The primary data structures of a thread include:


 ETHREAD (executive thread block) – includes pointer to
process to which thread belongs and to KTHREAD

 KTHREAD (kernel thread block) – scheduling and
synchronization info, the kernel-mode stack (used when the
thread is running in kernel mode), and a pointer to the TEB
 TEB (thread environment block) – thread id, user-mode
stack, thread-local storage
 The ETHREAD and the KTHREAD exist entirely in
kernel space; this means that only the kernel can
access them.
 The TEB is a user-space data structure that is
accessed when the thread is running in user mode.

Windows Threads Data Structures

Linux Threads
 Linux refers to them as tasks rather than threads
 Thread creation is done through clone() system call
 clone() allows a child task to share the address space of the
parent task (process)
 Flags control behavior

 struct task_struct points to process data structures


(shared or unique)

