Unit 4
A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it
can perform more than one task at a time.
Threads
▪ Most software applications that run on modern computers are multithreaded. For example, a word processor
may have one thread for displaying graphics, another thread for responding to keystrokes from the user, and a
third thread for performing spelling and grammar checking in the background.
▪ In certain situations, a single application may be required to perform several similar tasks. For example, a web
server accepts client requests for web pages, images, sound, and so forth.
▪ One solution is to have the server run as a single process that accepts requests. When the server receives a
request, it creates a separate process to service that request. Process creation is time-consuming and
resource-intensive, however, so it is generally more efficient to use one process that contains multiple
threads and have a new thread service each request.
Threads also play a vital role in remote procedure call (RPC) systems. RPC servers are multithreaded. When a server
receives a message, it services the message using a separate thread. This allows the server to service several
concurrent requests.
Most operating-system kernels are now multithreaded. Several threads operate in the kernel, and each thread
performs a specific task, such as managing devices, managing memory, or interrupt handling. For example,
Solaris has a set of threads in the kernel specifically for interrupt handling; Linux uses a kernel thread for managing
the amount of free memory in the system.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness. Multithreading an interactive application may allow a program to continue running even if
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. This
quality is especially useful in designing user interfaces.
• A single-threaded application would be unresponsive to the user until the operation had completed.
• In contrast, if the time-consuming operation is performed in a separate thread, the application remains
responsive to the user.
2. Resource sharing. Processes can share resources only through techniques such as shared memory and message
passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and
the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows
an application to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads share the resources
of the process to which they belong, it is more economical to create and context-switch threads.
4. Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads
may be running in parallel on different processing cores. A single-threaded process can run on only one processor,
regardless of how many are available.
Multithreading Models
Threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads.
User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are
supported and managed directly by the operating system.
Virtually all contemporary operating systems—including Windows, Linux, Mac OS X, and Solaris— support kernel
threads.
Ultimately, a relationship must exist between user threads and kernel threads.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the
thread library in user space, so it is efficient. However, the entire process will block if a thread makes a
blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are
unable to run in parallel on multicore systems.
Green threads, a thread library available for Solaris systems and adopted in early versions of Java, used the
many-to-one model.
However, very few systems continue to use the model because of its inability to take advantage of multiple processing
cores.
One-to-One Model
The one-to-one model maps each user thread to a kernel thread.
It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes
a blocking system call.
The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of an application, most
implementations of this model restrict the number of threads supported by the system.
Linux, along with the family of Windows operating systems, implements the one-to-one model.
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine.
In this model, developers can create as many user threads as necessary, and the corresponding kernel threads
can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can
schedule another thread for execution.
One variation on the many-to-many model still multiplexes many user-level threads to a smaller or equal
number of kernel threads but also allows a user-level thread to be bound to a kernel thread. This variation
is sometimes referred to as the two-level model.
The Solaris operating system supported the two-level model in versions older than Solaris 9. Beginning with
Solaris 9, however, the system uses the one-to-one model.
Scheduler Activations
A final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models.
Many systems implementing the many-to-many or two-level model place an intermediate data structure between
the user and kernel threads, known as a lightweight process (LWP). Each LWP is attached to a kernel thread,
and it is kernel threads that the operating system schedules to run on physical processors. If a kernel
thread blocks, the LWP blocks as well. Up the chain, the user-level thread attached to the LWP also blocks.
Consider, for example, a CPU-bound application running on a single processor. In this scenario, only one
thread can run at a time, so one LWP is sufficient. An I/O-intensive application, however, may require
multiple LWPs to execute, typically one for each concurrent blocking system call.
Suppose, for example, that five different file-read requests occur simultaneously. Five LWPs are needed, because all
could be waiting for I/O completion in the kernel. If a process has only four LWPs, then the fifth request must wait
for one of the LWPs to return from the kernel.
One scheme for communication between the user-thread library and the kernel is known as scheduler activation.
It works as follows:
• The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user
threads onto an available virtual processor.
• The kernel must inform an application about certain events. This procedure is known as an upcall.
• Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual
processor.
One event that triggers an upcall occurs when an application thread is about to block. In this scenario, the kernel
makes an upcall to the application informing it that a thread is about to block and identifying the specific thread.
The kernel then allocates a new virtual processor to the application. The application runs an upcall handler on this
new virtual processor, which saves the state of the blocking thread and relinquishes the virtual processor on which
the blocking thread is running. The upcall handler then schedules another thread that is eligible to run on the new
virtual processor. When the event that the blocking thread was waiting for occurs, the kernel makes another upcall
to the thread library informing it that the previously blocked thread is now eligible to run.
The upcall handler for this event also requires a virtual processor, and the kernel may allocate a new virtual
processor or preempt one of the user threads and run the upcall handler on its virtual processor. After marking the
unblocked thread as eligible to run, the application schedules an eligible thread to run on an available virtual
processor.
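The upcall sequence above can be summarized as pseudocode. This is only a sketch of the logic described in the text: none of these function names correspond to a real kernel or library API.

```
/* Hypothetical upcall handlers in the thread library.
   upcall_block() is run by the kernel on a freshly allocated
   virtual processor when user thread t is about to block. */
upcall_block(thread t, vproc fresh_vp):
    save_state(t)               /* preserve the blocking thread's context   */
    relinquish(vproc_of(t))     /* give back the VP the blocked thread held */
    run(pick_runnable_thread(), fresh_vp)   /* keep the application busy    */

/* Run when the event t was waiting for completes; the kernel may
   allocate a new VP or preempt a user thread to obtain one. */
upcall_unblock(thread t, vproc vp):
    mark_runnable(t)
    run(pick_runnable_thread(), vp)
```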
Windows Threads
Windows implements the Windows API, which is the primary API for the family of Microsoft operating systems
(Windows 98, NT, 2000, and XP, as well as Windows 7).
A Windows application runs as a separate process, and each process may contain one or more threads, where each
user-level thread maps to an associated kernel thread. The general components of a thread include:
• A thread ID uniquely identifying the thread
• A register set representing the status of the processor
• A user stack, employed when the thread is running in user mode, and a kernel stack, employed when the thread
is running in kernel mode
• A private storage area used by various run-time libraries and dynamic link libraries (DLLs)
The register set, stacks, and private storage area are known as the context of the thread.
The primary kernel-level data structures of a Windows thread are the ETHREAD (executive thread block) and the
KTHREAD (kernel thread block). Both the ETHREAD and the KTHREAD exist entirely in kernel space; this means
that only the kernel can access them.
Linux Threads
Linux provides the fork() system call with the traditional functionality of duplicating a process.
Linux also provides the ability to create threads using the clone() system call.
In fact, Linux uses the term task —rather than process or thread— when referring to a flow of control within a
program.
When clone() is invoked, it is passed a set of flags that determine how much sharing is to take place between the
parent and child tasks.
For example, suppose that clone() is passed the flags CLONE_FS, CLONE_VM, CLONE_SIGHAND, and CLONE_FILES.
The parent and child tasks will then share the same file-system information (such as the current working
directory), the same memory space, the same signal handlers, and the same set of open files. A new task is
also created when the clone() system call is made; however, rather than copying all data structures, the new
task points to the data structures of the parent task, depending on the set of flags passed to clone().