Threads and Threading Issues in OS


Introduction to Threads

Program, process, and thread are three basic concepts of operating systems.

 A program is an executable file containing a set of instructions written to perform a specific job on a computer. Programs are not stored in primary memory; they reside on a disk or other secondary storage. They are read into primary memory and executed by the kernel. A program is sometimes referred to as a passive entity because it resides on secondary storage.
 A process is an executing instance of a program. A process is sometimes referred to as an active entity because it resides in primary memory and leaves the memory when the system is rebooted. Several processes may be executing instances of the same program.
 A thread is the smallest executable unit of a process. A thread is often referred to as a lightweight process because it can run a sequence of instructions independently while sharing the memory space and resources of its process. A process can have multiple threads, each with its own task and its own path of execution. Threads are popularly used to improve an application through parallelism. On a single CPU, only one thread actually executes at a time, but the CPU switches rapidly between threads to give the illusion that they are running in parallel.

 For example, in a browser, different tabs can run as different threads, and while a movie plays on a device, separate threads control the audio and the video in the background.
 Another example is a web server: multiple threads allow multiple requests to be satisfied simultaneously, without servicing requests sequentially or forking a separate process for every incoming request.
Components of Threads in Operating System
The Threads in Operating System have the following three components.
 Stack Space
 Register Set
 Program Counter
The figure below shows the working of a single-threaded and a multithreaded process:
A single-threaded process is a process with a single thread. A multithreaded process is a process with multiple threads. As the diagram shows, each thread has its own registers, stack, and program counter, but all threads share the code and data segments.

A process simply means any program in execution, while a thread is a segment of a process. The main differences between a process and a thread are mentioned below:

 A process is any program in execution; a thread is a segment of a process.
 A process consumes more resources; a thread consumes fewer resources.
 A process requires more time for creation; a thread requires comparatively less time.
 A process is known as a heavyweight process; a thread is known as a lightweight process.
 A process takes more time to terminate; a thread takes less time to terminate.
 Processes have independent data and code segments; a thread shares the data segment, code segment, files, etc. with its peer threads.
 A process takes more time for context switching; a thread takes less time.
 Communication between processes needs more time than communication between threads.
 If one process gets blocked, the remaining processes can continue their execution; if a user-level thread gets blocked, all of its peer threads also get blocked.
 E.g., opening two different browsers creates two processes; opening two tabs in the same browser creates two threads.
Benefits

The benefits of multithreaded programming can be broken down into four major categories:

 Resource Sharing

Processes can share resources only through explicit techniques such as message passing or shared memory, which must be organized by the programmer. Threads, however, share the memory and the resources of the process to which they belong by default. A single application can thus have several threads of activity within the same address space.
 Responsiveness
Multithreading allows a program to remain responsive even if part of it is blocked or is performing a lengthy operation. For example, a multithreaded web browser can use one thread for user interaction while another thread loads an image at the same time.
 Utilization of Multiprocessor Architecture
In a multiprocessor architecture, each thread can run on a different processor in parallel using
multithreading. This increases concurrency of the system. This is in direct contrast to a single processor
system, where only one process or thread can run on a processor at a time.
 Economy

It is more economical to use threads as they share the process resources. Comparatively, it is more
expensive and time-consuming to create processes as they require more memory and resources. The
overhead for process creation and management is much higher than thread creation and management.
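Default resource sharing can be sketched as follows: several threads of one process update the same variable directly, something separate processes could only do through shared memory or message passing (a minimal Python sketch; the lock guards the shared counter, since sharing also brings synchronization duties):

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:        # counter is shared by all threads of the process
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```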
Thread Types
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread


User Level Threads


A user-level thread is a thread that is created and managed without system calls; the kernel plays no part in managing user-level threads, which are implemented entirely by a user-space thread library. To the kernel, a process containing only user-level threads appears to be a single-threaded process.
Examples: Java threads, POSIX threads, etc.

Advantages of User-Level Threads


 Implementation of the User-Level Thread is easier than Kernel Level Thread.
 Context Switch Time is less in User Level Thread.
 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack Space, it has a
simple representation.
Disadvantages of User-Level Threads
 There is a lack of coordination between the threads and the kernel.
 In case of a page fault, the whole process gets blocked.

Kernel Level Threads


A kernel-level thread is a thread that the operating system recognizes and manages directly. The kernel maintains a thread table to keep track of all threads in the system, and the kernel itself handles thread management. Kernel-level threads have somewhat longer context-switching times. Examples: Windows, Solaris.

Advantages of Kernel-Level Threads


 The kernel has up-to-date information on all threads.
 Applications that block frequently are better handled by kernel-level threads.
 Whenever a thread requires more processing time, the kernel can provide it.

Disadvantages of Kernel-Level threads


 Kernel-Level Thread is slower than User-Level Thread.
 Implementation of this type of thread is a little more complex than a user-level thread.
User-Level Threads vs Kernel-Level Threads

 User-level threads are implemented by users; kernel-level threads are implemented by the OS.
 The OS does not recognize user-level threads; kernel-level threads are recognized by the OS.
 Implementation of user-level threads is easy; implementation of kernel-level threads is complicated.
 Context switch time is less for user-level threads and more for kernel-level threads.
 User-level context switching needs no hardware support; kernel-level context switching needs hardware support.
 If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel-level thread performs a blocking operation, another thread can continue execution.

There are also hybrid models that combine elements of both user-level and kernel-level threads. For example,
some operating systems use a hybrid model called the “two-level model”, where each process has one or more
user-level threads, which are mapped to kernel-level threads by the operating system.

Advantages:
 Hybrid models combine the advantages of user-level and kernel-level threads, providing greater flexibility and
control while also improving performance.
 Hybrid models can scale to larger numbers of threads and processors, which allows for better use of available
resources.
Disadvantages:
 Hybrid models are more complex than either user-level or kernel-level threading, which can make them more
difficult to implement and maintain.
 Hybrid models require more resources than either user-level or kernel-level threading, as they require both a
thread library and kernel-level support.

User threads are mapped to kernel threads by the thread library, and the way this mapping is done is called the thread model. There are three multithreading models.
Many to Many Model
In this model, many user threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if a user thread is blocked, other user threads can be scheduled onto other kernel threads; the system does not block just because one particular thread is blocked. It is considered the best multithreading model.

Many to One Model


In this model, many user threads are mapped to one kernel thread. When a user thread makes a blocking system call, the entire process blocks. Because there is only one kernel thread and only one user thread can access the kernel at a time, multiple threads cannot run on multiple processors at the same time. Thread management is done at the user level, so it is more efficient.

One to One Model


In this model, there is a one-to-one relationship between user threads and kernel threads, so multiple threads can run on multiple processors in parallel. The drawback of this model is that creating a user thread requires creating the corresponding kernel thread. Because each user thread is connected to a different kernel thread, if one user thread makes a blocking system call, the other user threads are not blocked.
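The one-to-one behaviour can be sketched in Python, whose threads map one-to-one onto kernel threads on common platforms: one thread blocking in a system call does not block its peer.

```python
import threading
import time

done = []

def blocker():
    time.sleep(0.3)            # blocks in a system call
    done.append("blocker")

def worker():
    done.append("worker")      # runs while the other thread is blocked

b = threading.Thread(target=blocker)
w = threading.Thread(target=worker)
b.start(); w.start()
b.join(); w.join()
print(done)  # ['worker', 'blocker']
```

Under a many-to-one model, the sleeping thread would have blocked the whole process instead.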

Threading Issues in OS
In a multithreading environment, there are many threading-related issues, such as:
 System Call
 Thread Cancellation
 Signal Handling
 Thread Specific Data
 Thread Pool
 Scheduler Activation
fork() and exec() System Calls
 Consider the fork() system call. Suppose one thread of a multithreaded program invokes fork(). The new process is a duplicate of the calling process. The question is: will the new process duplicate all the threads of the old process, or will it be single-threaded?
 Some UNIX systems provide two variants of fork(): one duplicates all the threads of the parent process in the child, and one duplicates only the thread that invoked fork(). Which version to use depends on the application.
 The exec() system call, when issued, replaces the entire program, including all its threads, with the program specified in its parameters. Ordinarily, exec() is invoked immediately after fork().
 If exec() is to be called immediately after fork(), duplicating all the threads of the parent in the child is superfluous, since exec() will overwrite the whole process with the program given in its arguments. In such cases, the fork() variant that duplicates only the invoking thread is the right choice.
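A minimal sketch of the usual fork-then-exec sequence on a UNIX-like system (the echo command is just a stand-in for any program):

```python
import os

pid = os.fork()   # duplicate the calling process
if pid == 0:
    # child: exec() replaces the entire process image with a new program
    os.execvp("echo", ["echo", "hello from the child"])
else:
    # parent: wait for the child to terminate
    _, status = os.waitpid(pid, 0)
```

Because exec() immediately discards the duplicated image, a fork() variant that copies only the invoking thread would suffice here.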

Thread Cancellation
 Thread Cancellation is the task of terminating a thread before it has completed.
 For example, if multiple threads are concurrently searching through a database and one
thread returns the result, the remaining threads might be cancelled.
 Another situation might occur when a user presses a button on a Web browser that stops a
Web page from loading any further. Often, a Web page is loaded using several threads-each image
is loaded in a separate thread. When a user presses the stop button on the browser, all threads
loading the page are cancelled.
 A thread that is to be cancelled is often referred to as the target thread. Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: One thread immediately terminates the target thread.
Deferred cancellation: The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
 The difficulty with cancellation occurs in situations where resources have been allocated to a
cancelled thread or where a thread is cancelled while in the midst of updating data it is sharing with
other threads. This becomes especially troublesome with asynchronous cancellation. Often, the
operating system will reclaim system resources from a cancelled thread but will not reclaim all
resources. Therefore, cancelling a thread asynchronously may not free a necessary system-wide
resource.
 With deferred cancellation, in contrast, one thread indicates that a target thread is to be cancelled, but cancellation occurs only after the target thread has checked a flag to determine whether or not it should be cancelled. The thread performs this check at points at which it can be cancelled safely, known as cancellation points.
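Deferred cancellation can be sketched with a shared flag that the target thread polls at its cancellation points (a minimal Python illustration, not the Pthreads cancellation API):

```python
import threading
import time

cancel_requested = threading.Event()   # the cancellation flag

def worker():
    while not cancel_requested.is_set():   # cancellation point
        time.sleep(0.01)                   # simulated unit of work
    # the thread can release resources here before terminating

t = threading.Thread(target=worker)
t.start()
cancel_requested.set()   # another thread requests cancellation
t.join()                 # target thread notices the flag and exits cleanly
```

Unlike asynchronous cancellation, the worker is never killed mid-update: it only exits at a point it chose itself.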

Signal Handling
 Signal Handling is used in UNIX systems to notify a process that a particular event has occurred. A
signal may be received either synchronously or asynchronously, depending on the source of and the
reason for the event being signalled.
 All signals, whether synchronous or asynchronous, follow the same pattern:
A signal is generated by the occurrence of a particular event.
A generated signal is delivered to a process.
Once delivered, the signal must be handled.
 Examples of synchronous signals include illegal memory access and division by 0. If a running
program performs either of these actions, a signal is generated. Synchronous signals are delivered to
the same process that performed the operation that caused the signal (that is the reason they are
considered synchronous).
 When a signal is generated by an event external to a running process, that process receives the signal asynchronously. Examples of such signals include terminating a process with specific keystrokes (such as Ctrl+C) and having a timer expire. Typically, an asynchronous signal is sent to another process.
 A signal may be handled by one of two possible handlers:
A default signal handler
A user-defined signal handler
 Every signal has a default signal handler that is run by the kernel when handling that
signal. This default action can be overridden by a user defined signal handler that is called to
handle the signal.
 Signals are handled in different ways. Some signals (such as changing the size of a window)
are simply ignored; others (such as an illegal memory access) are handled by terminating the
program.
 Handling signals in single-threaded programs is straightforward: signals are always
delivered to a process. However, delivering signals is more complicated in multithreaded
programs, where a process may have several threads.
 In general the following options exist:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process.
Deliver the signal to certain threads in the process.
Assign a specific thread to receive all signals for the process.
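Overriding the default handler with a user-defined one can be sketched as follows (UNIX-only; SIGUSR1 is chosen arbitrarily, and the signal is delivered to the process itself):

```python
import os
import signal

received = []

def handler(signum, frame):
    # user-defined signal handler overriding the default action
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # generate and deliver the signal
print(received)
```

In CPython, regardless of which thread the signal arrives on, the handler runs in the main thread, which is one concrete answer to the delivery options listed above.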

Thread-Specific Data
 Threads belonging to a process share the data of the process. Indeed, this sharing of data provides one of the benefits of multithreaded programming. However, in some circumstances, each thread may need its own copy of certain data. We will call such data thread-specific data.
 For example, in a transaction-processing system, we might service each transaction in a separate
thread. Furthermore, each transaction might be assigned a unique identifier. To associate each thread
with its unique identifier, we could use thread-specific data.
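The transaction example can be sketched with Python's threading.local, where each thread sees only its own copy of the stored attribute:

```python
import threading

tsd = threading.local()   # thread-specific data
seen = {}

def transaction(txn_id):
    tsd.txn_id = txn_id            # each thread stores its own identifier
    seen[txn_id] = tsd.txn_id      # reading back yields this thread's copy

threads = [threading.Thread(target=transaction, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)
```

Even though tsd is one shared object, each thread's assignment to tsd.txn_id is invisible to its peers.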

Thread Pool

 A server creates a separate thread every time a client requests a page from it. However, this approach has problems. With no limit on the number of active threads in the system, creating a new thread for every request can exhaust the available system resources.
 The time needed to create each new thread is also a concern: if a thread is created, handles one request, and is then discarded, the creation time may exceed the time spent servicing the request, wasting CPU resources.
 A thread pool is the remedy for these problems. The idea is to create a number of threads at process startup; this collection of threads is referred to as a thread pool. The threads sit in the pool waiting for a request to service.
 When a request arrives at the server, a thread from the pool is assigned the request and handles it. Having completed its work, the thread returns to the pool and awaits more work.
 If the server receives a request while no thread is available in the pool, it waits until one becomes free. This is better than creating a new thread for every request, because the pool places a bound on the number of threads that exist at any one time, which benefits systems that cannot support a large number of concurrent threads.
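The pool idea can be sketched with Python's concurrent.futures.ThreadPoolExecutor, which creates a fixed number of worker threads up front and reuses them across requests (handle_request is a hypothetical stand-in for servicing one request):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    return n * n     # stand-in for servicing one request

# a bounded pool of four worker threads, created once and reused
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(5)))
print(results)  # [0, 1, 4, 9, 16]
```

Five requests are serviced by at most four threads; the fifth request simply waits for a pool thread to become free.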
Scheduler Activation
 A final issue to be considered with multithreaded programs concerns communication between the kernel and the thread library, which may be required by the many-to-many and two-level models. Many systems implementing either of these models place an intermediate data structure between the user and kernel threads. This data structure, typically known as a lightweight process (LWP), is shown in the figure.

 To the user-thread library, the LWP appears to be a virtual processor on which the application
can schedule a user thread to run. Each LWP is attached to a kernel thread, and it is kernel threads that
the operating system schedules to run on physical processors.
 If a kernel thread blocks (such as while waiting for an I/O operation to complete), the LWP blocks as well. Up the chain, the user-level thread attached to the LWP also blocks.
 An application may require any number of LWPs to run efficiently. Consider a CPU-bound application running on a single processor. In this scenario, only one thread can run at a time, so one LWP is sufficient. An I/O-intensive application, however, may require multiple LWPs, typically one for each concurrent blocking system call. Suppose, for example, that five file-read requests occur simultaneously: five LWPs are needed, because all five could be waiting for I/O completion in the kernel. If the process has only four LWPs, the fifth request must wait for one of the LWPs to return from the kernel.
 Furthermore, the kernel must inform an application about certain events. This procedure is known as an upcall. Upcalls are handled by the thread library with upcall handlers, which must run on a virtual processor. One event that triggers an upcall occurs when an application thread is about to block. In this scenario, the kernel makes an upcall to the application informing it that a thread is about to block and identifying the specific thread.
 The kernel then allocates a new virtual processor to the application. The application runs an
upcall handler on this new virtual processor, which saves the state of the blocking thread and
relinquishes the virtual processor on which the blocking thread is running. The upcall handler then
schedules another thread that is eligible to run on the new virtual processor.
 When the event that the blocking thread was waiting for occurs, the kernel makes another upcall to the thread library informing it that the previously blocked thread is now eligible to run. The upcall handler for this event also requires a virtual processor, and the kernel may allocate a new virtual processor or preempt one of the user threads and run the upcall handler on its virtual processor.
