U2 c1 OS Threads SBM
THREADS
Def: A thread is a flow of execution through the process code, with its own program counter, system registers and stack. Threads are a popular way to improve application performance through parallelism. A thread is sometimes called a lightweight process.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
successfully used in implementing network servers and web servers. They also provide
a suitable foundation for parallel execution of applications on shared memory
multiprocessors.
Many operating system kernels are now multithreaded: several threads operate in
the kernel, and each thread performs a specific task, such as managing services or
interrupt handling.
Threads also play a vital role in remote procedure call (RPC) systems. RPCs allow
interprocess communication by providing a communication mechanism similar to
ordinary function or procedure calls. Typically, RPC servers are multithreaded. When
a server receives a message, it services the message using a separate thread.
Components of Threads
Stack Space: Stores local variables, function calls, and return addresses specific to
the thread.
Register Set: Holds temporary data and intermediate results for the thread’s execution.
Program Counter: Tracks the current instruction being executed by the thread.
Code: a thread is a unit of execution within a program, so a specific piece of the program's code executes as part of each thread; all threads of a process share the same code section.
Files: the threads of a process also share the process's open files, so a file can be processed by multiple threads concurrently. Instead of processing the file sequentially (one step after another), different parts or aspects of the file processing can be handled by different threads running at the same time, potentially improving performance and efficiency.
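The distinction between shared and per-thread components can be seen in a minimal Pthreads sketch (the work() function and variable names are only illustrative): both threads run the same code, each loop counter lives on its own thread's stack, and the global variable is shared by both.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* data section: visible to every thread */

/* Both threads execute this same code, but each call gets its own stack
   frame, so the locals 'id' and 'i' are private to each thread. */
void *work(void *arg)
{
    int id = *(int *)arg;             /* local: lives on this thread's stack */
    for (int i = 0; i < 3; i++)
        printf("thread %d, local i = %d\n", id, i);
    shared++;                         /* global: shared (left unsynchronized here for brevity) */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, work, &id1);
    pthread_create(&t2, NULL, work, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    return 0;
}

Compile with the -pthread flag (for example, gcc -pthread demo.c).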
Advantages of Threading
Responsiveness: An application can remain responsive to the user because other threads continue to run while one thread is blocked or busy.
Resource Sharing: Threads share the code, data and files of their process, so communication between them is cheap.
Economy: Creating and context-switching threads is less expensive than creating and switching whole processes.
Scalability: The threads of a process can run in parallel on the different processors of a multiprocessor machine.
Disadvantages
Complexity: Threading can make programs more complicated to write and
debug because threads need to synchronize their actions to avoid conflicts.
Resource Overhead: Each thread consumes memory and processing power, so
having too many threads can slow down a program and use up system
resources.
Difficulty in Optimization: It can be challenging to optimize threaded
programs for different hardware configurations, as thread performance can vary
based on the number of cores and other factors.
Debugging Challenges: Identifying and fixing issues in threaded programs can
be more difficult compared to single-threaded programs, making
troubleshooting complex.
Types of Thread
1. User Level Thread 2. Kernel Level Thread
1. User Level Thread
In a user thread, all of the work of thread management is done by the application
and the kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts.
The application begins with a single thread and begins running in that thread. User
level threads are generally fast to create and manage.
2. Kernel Level Thread
In a kernel-level thread, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded.
All of the threads within an application are supported within a single process. The Kernel maintains context information for the process as a whole and for individual threads within the process.
Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages of Kernel level thread:
1. The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
2. If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
3. Kernel routines themselves can be multithreaded.
4. The Kernel can distribute threads across CPUs, ensuring optimal load balancing and system performance.
Disadvantages: 1. Kernel threads are generally slower to create and manage than user threads.
2. Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
3. A large number of threads may overload the kernel scheduler, leading to potential performance degradation in systems with many threads.
Multi-Threading
Multitasking is a general term for doing many tasks at the same time. On the other
hand, multi-threading is the ability of a process to execute multiple threads at the
same time. Again, the MS Word example is appropriate in multi-threading scenarios.
The process can check spelling, auto-save, and read files from the hard drive, all
while you are working on a document.
Multithreading makes your computer work better by using its resources more
effectively, leading to quicker and smoother performance for applications like web
browsers, games, and many other programs you use every day.
For example, in the banking system, many users perform day-to-day activities using
bank servers like transfers, payments, deposits, opening a new account, etc. All these
activities are performed instantly without having to wait for another user to finish. In
this, all the activities get executed simultaneously as and when they arise. This is
where multithreading comes into the picture, wherein several threads perform
different activities without interfering with others.
Multithreading vs Multitasking
Feature Multithreading Multitasking
Benefits:
Drawbacks of Multithreading
Multithreading is complex and many times difficult to handle.
If locking mechanisms are not used properly when coordinating data access, there is a chance of problems such as data inconsistency and deadlock arising (a locking sketch is given after this list).
If many threads try to access the same data, there is a chance that thread starvation may arise. Resource contention is another problem that can trouble the user.
Display issues may occur if threads lack coordination when displaying data.
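As a rough illustration of the locking point above, the minimal Pthreads sketch below (the counter and worker() names are only illustrative) uses a mutex so that increments of a shared counter are not lost; removing the lock would allow exactly the data inconsistency described in this list.

#include <pthread.h>
#include <stdio.h>

/* Shared counter protected by a mutex; without the lock, concurrent
   increments could be lost (data inconsistency). */
long counter = 0;
pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;                          /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);     /* 200000 with the lock in place */
    return 0;
}

Note that locks must always be acquired in a consistent order; otherwise the deadlock mentioned above can occur.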
Multithreading Models(impt)
1. Many to Many Multithreading Model
In this model, any number of user threads can be multiplexed onto an equal or smaller number of kernel threads. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
The many-to-many model not only provides a high degree of concurrency; when a thread performs a blocking system call, the kernel can also schedule another thread for execution.
2. Many to One Multithreading Model
This model maps multiple user-level threads to one kernel-level thread. Thread management is carried out by the thread library in user space. When a thread makes a blocking system call, the entire process blocks. Multiple threads cannot run in parallel, since only one thread can access the kernel at a time.
3. One to One Multithreading Model
In this model, the user-level thread and the kernel-level thread share a one-to-one relationship. The concurrency provided by this model is higher than in the many-to-one model. It allows the parallel execution of multiple threads on multiprocessors.
A disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
Thread Libraries
A thread library provides the programmer with an API for creating and managing threads.
If the thread library is implemented in user space, the code and data of the thread library reside in user space.
If the thread library is implemented in kernel space, the code and data of the library reside in kernel space and are supported by the operating system.
Pthreads
The POSIX standard (IEEE 1003.1c) defines the specification for Pthreads, not the implementation.
Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public-domain shareware for Windows.
Global variables are shared amongst all threads.
One thread can wait for the others to rejoin before continuing.
A Pthreads program begins execution in a specified function; in this example, the runner() function:
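The following is a minimal sketch of such a program, modeled on the classic summation example: main() creates a single thread that begins execution in runner(), which sums the integers from 1 up to a value passed on the command line into a shared global, and main() waits for it with pthread_join().

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                          /* shared by the threads */
void *runner(void *param);        /* the thread begins execution here */

int main(int argc, char *argv[])
{
    pthread_t tid;                /* thread identifier */
    pthread_attr_t attr;          /* thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return 1;
    }

    pthread_attr_init(&attr);                      /* default attributes */
    pthread_create(&tid, &attr, runner, argv[1]);  /* create the thread */
    pthread_join(tid, NULL);                       /* wait for it to finish */
    printf("sum = %d\n", sum);
    return 0;
}

void *runner(void *param)
{
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}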
Win32 Thread
A Win32 thread is part of the Windows operating system and is also called a Windows thread. The Win32 thread library is a kernel-space library.
With Win32 threads we can achieve parallelism and concurrency in the same manner as with Pthreads.
Win32 threads are created with the help of the CreateThread() function. Windows threads support Thread Local Storage (TLS), which allows each thread to have its own unique data; threads can also easily share data that is declared globally.
They provide native, low-level support for multithreading, which means they are tightly integrated with the Windows OS and offer efficient thread creation and management.
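A minimal Win32 sketch of the same summation idea (the Summation() name and the fixed upper bound are only illustrative) creates one thread with CreateThread() and waits for it with WaitForSingleObject():

#include <windows.h>
#include <stdio.h>

DWORD Sum;                                   /* data shared by the threads */

/* The thread begins execution in this function. */
DWORD WINAPI Summation(LPVOID Param)
{
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 1; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(void)
{
    DWORD ThreadId;
    DWORD Param = 10;
    HANDLE ThreadHandle;

    /* Default security attributes, stack size and creation flags. */
    ThreadHandle = CreateThread(NULL, 0, Summation, &Param, 0, &ThreadId);
    if (ThreadHandle != NULL) {
        WaitForSingleObject(ThreadHandle, INFINITE);  /* wait for the thread */
        CloseHandle(ThreadHandle);                    /* release the handle  */
        printf("sum = %lu\n", Sum);
    }
    return 0;
}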
Java Threads
Java threads are managed by the JVM and are available on any platform that provides one. A Java thread can be created either by extending the Thread class or by implementing the Runnable interface; the thread's code is placed in the run() method and the thread is started by calling start().
Thread Cancellation
• Cancellation allows one thread to terminate another. One reason to cancel a thread is to save system resources such as CPU time. When your program determines that the thread's activity is no longer necessary, the thread can be terminated.
• Thread cancellation is a task of terminating a thread before it has completed.
• The thread cancellation mechanism allows a thread to terminate the execution
of any other thread in the process in a controlled manner. Each thread maintains
its own cancelability state. Cancellation may only occur at cancellation points or
when the thread is asynchronously cancelable.
• The target thread can keep cancellation requests pending and can perform
application-specific cleanup when it acts upon the cancellation notice.
A thread's initial cancelability state is enabled. Cancelability state determines
whether a thread can receive a cancellation request. If the cancelability state is
disabled, the thread does not receive any cancellation requests.
• Target thread cancellation occurs in two different situations:
1. Asynchronous cancellation
2. Deferred cancellation
• Asynchronous cancellation: The target thread is terminated immediately; another thread cancels the target thread directly. When a thread holds no locks and has no resources allocated, asynchronous cancellation is a valid option.
• Deferred cancellation: When a thread has enabled cancellation and is using deferred cancellation, time can elapse between the moment it is asked to cancel itself and the moment it is actually terminated, because the request is acted upon only at cancellation points (see the sketch below).
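A rough Pthreads sketch of deferred cancellation (the worker() name is illustrative): the worker runs with deferred cancellation, so the pthread_cancel() request issued by main() takes effect only when the worker reaches a cancellation point such as sleep() or an explicit pthread_testcancel().

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The worker runs with deferred cancellation (the POSIX default): a pending
   cancellation request is acted on only at cancellation points. */
void *worker(void *arg)
{
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    while (1) {
        /* ... do a unit of work ... */
        pthread_testcancel();     /* explicit cancellation point */
        sleep(1);                 /* sleep() is also a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);
    sleep(3);                     /* let the worker run briefly */
    pthread_cancel(tid);          /* request cancellation */
    pthread_join(tid, NULL);      /* wait until the worker actually terminates */
    printf("worker cancelled\n");
    return 0;
}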
Signal Handling
Signal is used to notify a process that a particular event has occurred. Signal may
be synchronous or asynchronous. All types of signals are based on the following
pattern:
1. At a specific event, signal is generated.
2. Generated signal is sent to a process/thread.
3. Signal handler is used for handling the delivered signal.
• Synchronous signals: An illegal memory access or a division by zero are examples of synchronous signals. These signals are delivered to the same process that performed the operation which generated the signal.
• Asynchronous signals: A signal generated by terminating a process with certain keystrokes is an example of a signal generated by an event external to the running process.
Signals may be handled in different ways:
1. Some signals may be ignored, for example a signal indicating that the window size has changed.
2. Other signals may be handled by terminating the program, for example an illegal memory access.
• Delivery of signals in multithreaded programs is more complex than in single-threaded programs.
• A signal may be delivered to the threads of a process in one of the following ways:
a. Deliver the signal to the thread to which the signal applies.
b. Deliver the signal to every thread in the process.
c. Deliver the signal to certain (limited) threads in the process.
d. Assign a specific thread to receive all signals for the process.
The method for delivering a signal depends on the type of signal generated:
1. A synchronous signal is sent to the thread that caused the signal.
2. An asynchronous signal is delivered to all threads, or to a designated thread (as sketched below).
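One common way to deliver an asynchronous signal to a particular thread in a Pthreads program is sketched below (the signal_thread() name is illustrative): the signal is blocked in every thread, and one dedicated thread collects it with sigwait().

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Threads inherit the signal mask of their creator, so blocking SIGINT in
   main() before creating threads lets one dedicated thread receive the
   asynchronous signal with sigwait(). */
static sigset_t set;

void *signal_thread(void *arg)
{
    int sig;
    for (;;) {
        sigwait(&set, &sig);                 /* wait for a pending signal */
        printf("received signal %d\n", sig);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL);  /* block SIGINT in all threads */

    pthread_create(&tid, NULL, signal_thread, NULL);
    pthread_join(tid, NULL);                 /* the sketch runs until killed */
    return 0;
}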
Thread Pool
• A thread pool offers a solution to both the problem of thread life-cycle
overhead and the problem of resource thrashing. By reusing threads for multiple
tasks, the thread-creation overhead is spread over many tasks.
• Thread pools group CPU resources and contain threads that are used to execute tasks associated with that thread pool. These threads execute user tasks, run specific jobs such as signal handling, and process requests from a work queue.
• A multithreaded server has potential problems, and these problems are solved by using a thread pool. Problems with a multithreaded server (see the sketch after this list):
1. Time for creating threads
2. Time for discarding threads
3. Excess use of system resources.
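A very small fixed-size pool can be sketched with Pthreads (the submit(), worker() and print_task() names are only illustrative): a few worker threads are created once and then reused to run tasks taken from a shared queue, which avoids the per-task thread creation and discarding costs listed above.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE  3
#define QUEUE_SIZE 16

typedef void (*task_fn)(int);

/* A tiny circular work queue protected by a mutex and condition variable.
   For brevity the sketch has no overflow check and no shutdown path. */
static task_fn queue_fn[QUEUE_SIZE];
static int     queue_arg[QUEUE_SIZE];
static int     head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void submit(task_fn fn, int arg)      /* add a task to the queue */
{
    pthread_mutex_lock(&lock);
    queue_fn[tail]  = fn;
    queue_arg[tail] = arg;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static void *worker(void *unused)            /* each pool thread runs this */
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        task_fn fn = queue_fn[head];
        int arg    = queue_arg[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);
        fn(arg);                             /* run the task outside the lock */
    }
    return NULL;
}

static void print_task(int n) { printf("task %d done\n", n); }

int main(void)
{
    pthread_t pool[POOL_SIZE];

    for (int i = 0; i < POOL_SIZE; i++)      /* create the pool once */
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < 10; i++)             /* reuse the same threads */
        submit(print_task, i);
    pthread_exit(NULL);                      /* let the workers keep running */
}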
Thread Specific Data
A challenge arises when every thread in a process needs its own copy of certain data. Any data uniquely related to a particular thread is referred to as thread-specific data. Thread-specific data are referenced through void pointers, which allows referencing any kind of data, such as dynamically allocated strings or structures.
For example, a transaction processing system can process each transaction in its own thread. Each transaction is assigned a unique identifier, which distinguishes that transaction from every other one.
Because each transaction is serviced by a separate thread, thread-specific data allows every thread to be associated with its particular transaction and transaction ID (a sketch follows).
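A brief Pthreads sketch of this idea (the txn_key and handle_transaction() names are illustrative): each thread stores its own transaction ID under the same key with pthread_setspecific(), and pthread_getspecific() later returns only the value that belongs to the calling thread.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t txn_key;      /* one key, a separate value per thread */

static void *handle_transaction(void *arg)
{
    int *txn_id = malloc(sizeof(int));
    *txn_id = *(int *)arg;
    pthread_setspecific(txn_key, txn_id);     /* set this thread's value */

    int *mine = pthread_getspecific(txn_key); /* returns this thread's value */
    printf("processing transaction %d\n", *mine);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = {101, 102};

    pthread_key_create(&txn_key, free);       /* free() runs at thread exit */
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, handle_transaction, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    pthread_key_delete(txn_key);
    return 0;
}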