Unit 4

Detailed unit on process synchronisation


Threads

A thread is a basic unit of CPU utilization.


It comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it
can perform more than one task at a time.
Threads

▪ Most software applications that run on modern computers are multithreaded. For example, a word processor
may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third
thread for performing spelling and grammar checking in the background.
▪ In certain situations, a single application may be required to perform several similar tasks. For example, a web
server accepts client requests for web pages, images, sound, and so forth.
▪ One solution is to have the server run as a single process that accepts requests. When the server receives a
request, it creates a separate process to service that request. Process creation is time-consuming and resource-intensive,
however, so it is generally more efficient to use one process that contains a separate thread for each request.

Threads also play a vital role in remote procedure call (RPC) systems. RPC servers are multithreaded. When a server
receives a message, it services the message using a separate thread. This allows the server to service several
concurrent requests.
Most operating-system kernels are now multithreaded. Several threads operate in the kernel, and each thread
performs a specific task, such as managing devices, managing memory, or interrupt handling. For example,
Solaris has a set of threads in the kernel specifically for interrupt handling; Linux uses a kernel thread for managing
the amount of free memory in the system.
Threads
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness. Multithreading an interactive application may allow a program to continue running even if
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. This
quality is especially useful in designing user interfaces.
• A single-threaded application would be unresponsive to the user until the operation had completed.
• In contrast, if the time-consuming operation is performed in a separate thread, the application remains
responsive to the user.

2. Resource sharing. Processes can only share resources through techniques such as shared memory and message
passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and
the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows
an application to have several different threads of activity within the same address space.

3. Economy. Allocating memory and resources for process creation is costly. Because threads share the resources
of the process to which they belong, it is more economical to create and context-switch threads.

4. Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads
may be running in parallel on different processing cores. A single-threaded process can run on only one processor,
regardless of how many are available.
Threads

Multithreading Models
Threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads.

User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are
supported and managed directly by the operating system.

Virtually all contemporary operating systems—including Windows, Linux, Mac OS X, and Solaris— support kernel
threads.

Ultimately, a relationship must exist between user threads and kernel threads.

Three common ways of establishing such a relationship: the many-to-one model, the one-to-one model, and the many-to-many model.
Threads
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread.

Thread management is done by the thread library in user space, so it is efficient.

However, the entire process will block if a thread makes a blocking system call. Also, because only one thread can
access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.

Green threads—a thread library available for Solaris systems and adopted in early versions of Java—used the many-to-one model.

However, very few systems continue to use the model because of its inability to take advantage of multiple processing
cores.
Threads

One-to-One Model
The one-to-one model maps each user thread to a kernel thread.

It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes
a blocking system call.

It also allows multiple threads to run in parallel on multiprocessors.

The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of an application, most
implementations of this model restrict the number of threads supported by the system.

Linux, along with the family of Windows operating systems, implements the one-to-one model.
Threads

Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine.

Related to concurrency, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.

One variation on the many-to-many model still multiplexes many user-level threads to a smaller or equal number of kernel threads but also allows a user-level thread to be bound to a kernel thread. This variation is sometimes referred to as the two-level model.

The Solaris operating system supported the two-level model in versions older than Solaris 9. However, beginning with Solaris 9, this system uses the one-to-one model.
Threads
Scheduler Activations
A final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models.

Such coordination allows the number of kernel threads to be dynamically adjusted to help ensure the best performance.

Many systems implementing either the many-to-many or the two-level model place an intermediate data structure between the user and kernel threads. This data structure is typically known as a lightweight process, or LWP.

To the user-thread library, the LWP appears to be a virtual processor on which the application can schedule a user thread to run.

Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors. If a kernel thread blocks (such as while waiting for an I/O operation to complete), the LWP blocks as well. Up the chain, the user-level thread attached to the LWP also blocks.
Threads

An application may require any number of LWPs to run efficiently.

Consider a CPU-bound application running on a single processor. In this scenario, only one thread can run at a time, so one
LWP is sufficient.

An application that is I/O-intensive may require multiple LWPs to execute.

Typically, an LWP is required for each concurrent blocking system call.

Suppose, for example, that five different file-read requests occur simultaneously. Five LWPs are needed, because all
could be waiting for I/O completion in the kernel. If a process has only four LWPs, then the fifth request must wait
for one of the LWPs to return from the kernel.

One scheme for communication between the user-thread library and the kernel is known as scheduler activation.
It works as follows:
• The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user
threads onto an available virtual processor.
• The kernel must inform an application about certain events. This procedure is known as an upcall.
• Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual
processor.
Threads

One event that triggers an upcall occurs when an application thread is about to block. In this scenario, the kernel
makes an upcall to the application informing it that a thread is about to block and identifying the specific thread.
The kernel then allocates a new virtual processor to the application. The application runs an upcall handler on this
new virtual processor, which saves the state of the blocking thread and relinquishes the virtual processor on which
the blocking thread is running. The upcall handler then schedules another thread that is eligible to run on the new
virtual processor. When the event that the blocking thread was waiting for occurs, the kernel makes another upcall
to the thread library informing it that the previously blocked thread is now eligible to run.

The upcall handler for this event also requires a virtual processor, and the kernel may allocate a new virtual
processor or preempt one of the user threads and run the upcall handler on its virtual processor. After marking the
unblocked thread as eligible to run, the application schedules an eligible thread to run on an available virtual
processor.
Threads

Windows Threads
Windows implements the Windows API, which is the primary API for the family of Microsoft operating systems
(Windows 98, NT, 2000, and XP, as well as Windows 7).

A Windows application runs as a separate process, and each process may contain one or more threads where each
user-level thread maps to an associated kernel thread.

The general components of a thread include:


• A thread ID uniquely identifying the thread

• A register set representing the status of the processor

• A user stack, employed when the thread is running in user mode, and a kernel stack, employed when the thread
is running in kernel mode

• A private storage area used by various run-time libraries and dynamic link libraries (DLLs)

The register set, stacks, and private storage area are known as the context of the thread.
Threads

The primary data structures of a thread include:


• ETHREAD—executive thread block
• KTHREAD—kernel thread block
• TEB—thread environment block

The key components of the ETHREAD include a pointer to the process to which the thread belongs and the address of the routine in which the thread starts control. The ETHREAD also contains a pointer to the corresponding KTHREAD.

The KTHREAD includes scheduling and synchronization information for the thread. In addition, the KTHREAD includes the kernel stack (used when the thread is running in kernel mode) and a pointer to the TEB.

The ETHREAD and the KTHREAD exist entirely in kernel space; this
means that only the kernel can access them.

The TEB is a user-space data structure that is accessed when the thread is running in user mode. Among other fields, the TEB contains the thread identifier, a user-mode stack, and an array for thread-local storage.
Threads

Linux Threads
Linux provides the fork() system call with the traditional functionality of duplicating a process.

Linux also provides the ability to create threads using the clone() system call.

Linux does not distinguish between processes and threads.

In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a
program.

When clone() is invoked, it is passed a set of flags that determine how much sharing is to take place between the
parent and child tasks.
For example, suppose that clone() is passed the flags CLONE_FS, CLONE_VM, CLONE_SIGHAND, and CLONE_FILES.
The parent and child tasks will then share the same file-system information (such as the current working
directory), the same memory space, the same signal handlers, and the same set of open files as the parent process.
A new task is also created when the clone() system call is made. However, rather than copying all data structures,
the new task points to the data structures of the parent task, depending on the set of flags passed to clone().
