U2 c1 OS Threads SBM

This document discusses the concept of multithreading in operating systems, defining threads as lightweight processes that allow concurrent execution within a single process. It outlines the advantages and disadvantages of threading, types of threads (user-level and kernel-level), and various multithreading models. Additionally, it covers thread libraries, their importance, and specific implementations like POSIX Pthreads, Win32 threads, and Java threads.


UNIT 2: CHAPTER 01 - MULTITHREADING

THREADS

In an operating system, a process is a job or a program that can be executed by the
computer. Think of the MS Word application, which is a process that runs on the
computer. But an application can do more than one thing at a time, which means that
a given process in an operating system can have one or more threads. Threads
represent the actual execution of the code.

Def: A thread is a flow of execution through the process code, with its own program
counter, system registers and stack. Threads are a popular way to improve
application performance through parallelism. A thread is sometimes called a
lightweight process.

Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
used successfully in implementing network and web servers. They also provide
a suitable foundation for parallel execution of applications on shared-memory
multiprocessors.
Many operating system kernels are now multithreaded: several threads operate in
the kernel, and each thread performs a specific task, such as managing services or
handling interrupts.
Threads also play a vital role in remote procedure call (RPC) systems. RPCs allow
interprocess communication by providing a communication mechanism similar to
ordinary function or procedure calls. Typically, RPC servers are multithreaded: when
a server receives a message, it services the message using a separate thread.

Why Do We Need Threads?

 Threads run concurrently, which improves application performance. Each thread
has its own CPU state and stack, but all threads share the address space and
environment of the process. For example, when we work in Microsoft Word or
Google Docs, we notice that while we are typing, multiple things happen
together (formatting is applied, the page layout changes, and auto-save runs).
 Threads can share common data, so they do not need to use inter-process
communication. Like processes, threads also have states such as ready, executing,
and blocked.
 A priority can be assigned to each thread, just as for a process, and the
highest-priority thread is scheduled first.
 Each thread has its own Thread Control Block (TCB). As with a process, a context
switch occurs for a thread, and its register contents are saved in the TCB. Because
threads share the same address space and resources, synchronization is also
required between the various activities of the threads.
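The points above - threads sharing the process's address space while needing synchronization - can be sketched with POSIX threads. This is only an illustrative example; the names shared_count, bump, and run_counter_demo are made up for the sketch, and the program must be linked with -lpthread.

```c
/* Two threads share the process's global data; a mutex keeps the
 * shared counter consistent across concurrent increments. */
#include <pthread.h>

static long shared_count;                       /* lives in the shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    long n = *(long *)arg;
    for (long i = 0; i < n; i++) {
        pthread_mutex_lock(&lock);              /* synchronize the shared update */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two threads that each increment the shared counter n times. */
long run_counter_demo(long n)
{
    pthread_t t1, t2;
    shared_count = 0;
    pthread_create(&t1, NULL, bump, &n);        /* both threads see the same globals */
    pthread_create(&t2, NULL, bump, &n);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_count;                        /* 2 * n when properly synchronized */
}
```

Without the mutex, the two unsynchronized increments could interleave and lose updates, which is exactly why synchronization is required.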

Components of Threads

 Stack Space: Stores local variables, function calls, and return addresses specific to
the thread.
 Register Set: Holds temporary data and intermediate results for the thread’s
execution.
 Program Counter: Tracks the current instruction being executed by the thread.
 Code: All threads of a process share the same program code; each thread
executes its own path through that code as a unit of execution within the program.
 Files: Open files are shared by all the threads of a process, so different parts or
aspects of file processing can be handled by different threads running at the same
time, potentially improving performance and efficiency, instead of processing a file
sequentially.

Difference between Process and Thread

Advantages of Threading

 Responsiveness: A multithreaded application is more responsive to the user.
 Resource Sharing: Resources like code and data are shared between threads,
thus allowing a multithreaded application to have several threads of activity
within the same address space.
 Increased Concurrency: Threads may run in parallel on different
processors, increasing concurrency on a multiprocessor machine.
 Lower Cost: It costs less to create and context-switch threads than processes.
 Lower Context-Switch Time: Threads take less context-switch time than
processes.

Disadvantages
 Complexity: Threading can make programs more complicated to write and
debug because threads need to synchronize their actions to avoid conflicts.
 Resource Overhead: Each thread consumes memory and processing power, so
having too many threads can slow down a program and use up system
resources.
 Difficulty in Optimization: It can be challenging to optimize threaded
programs for different hardware configurations, as thread performance can vary
based on the number of cores and other factors.
 Debugging Challenges: Identifying and fixing issues in threaded programs can
be more difficult compared to single-threaded programs, making
troubleshooting complex.

Types of Threads
1. User-Level Threads 2. Kernel-Level Threads

1. User-Level Threads

With user-level threads, all of the work of thread management is done by the
application; the kernel is not aware of the existence of threads.

The thread library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for saving
and restoring thread contexts.

The application begins with a single thread and begins running in that thread.
User-level threads are generally fast to create and manage.

Advantages of user-level threads over kernel-level threads:

1. Thread switching does not require kernel-mode privileges.
2. User-level threads can run on any operating system.
3. Scheduling can be application specific.
4. User-level threads are fast to create and manage.

Disadvantages of user-level threads:

1. In a typical operating system, most system calls are blocking.
2. A multithreaded application cannot take advantage of multiprocessing,
because the kernel schedules the process as a single unit.
3. The operating system is unaware of user-level threads, so kernel-level
optimizations, like load balancing across CPUs, are not utilized.
4. If a user-level thread makes a blocking system call, the entire process
(and all its threads) is blocked, reducing efficiency.
5. User-level thread scheduling is managed by the application, which can
become complex and may not be as optimized as kernel-level
scheduling.

2. Kernel-Level Threads

With kernel-level threads, thread management is done by the kernel; there is no
thread-management code in the application area. Kernel threads are supported
directly by the operating system, and any application can be programmed to be
multithreaded.
All of the threads within an application are supported within a single process. The
kernel maintains context information for the process as a whole and for individual
threads within the process.
Scheduling by the kernel is done on a per-thread basis. The kernel performs thread
creation, scheduling and management in kernel space. Kernel threads are generally
slower to create and manage than user threads.
Advantages of kernel-level threads:

1. The kernel can simultaneously schedule multiple threads from the same process
on multiple processors.

2. If one thread in a process is blocked, the kernel can schedule another thread of
the same process.
3. Kernel routines themselves can be multithreaded.

4. Applications that block frequently are better handled by kernel-level threads.

5. The kernel can distribute threads across CPUs, ensuring optimal load balancing
and system performance.

Disadvantages of kernel-level threads:

1. Kernel threads are generally slower to create and manage than user threads.

2. Transfer of control from one thread to another within the same process requires
a mode switch to the kernel.

3. Context switching between kernel-level threads is slower than between
user-level threads because it requires mode switching between user and kernel
space.

4. Managing kernel-level threads involves frequent system calls and kernel
interactions, leading to increased CPU overhead.

5. A large number of threads may overload the kernel scheduler, leading to
potential performance degradation in systems with many threads.

6. Implementation of this type of thread is a little more complex than a user-level
thread.
Difference between User-Level & Kernel-Level Thread

S.No. | USER-LEVEL THREAD                                            | KERNEL-LEVEL THREAD
1.    | Faster to create and manage                                  | Slower to create and manage
2.    | Implemented with the help of a thread library at user level | The OS helps in the creation of kernel threads
3.    | Generic and can run on any OS                                | Specific to an OS
4.    | Unable to take advantage of multiprocessing                  | Can take advantage of multiprocessing

Multi-Threading

Multitasking is a general term for doing many tasks at the same time. On the other
hand, multi-threading is the ability of a process to execute multiple threads at the
same time. Again, the MS Word example is appropriate in multi-threading scenarios.
The process can check spelling, auto-save, and read files from the hard drive, all
while you are working on a document.

The term multithreading combines two ideas: a process and threads. A process is a
program that is being executed. A process can be further divided into independent
units of execution known as threads; a thread is a small, lightweight unit of
execution residing inside a process.

Multithreading makes your computer work better by using its resources more
effectively, leading to quicker and smoother performance for applications like web
browsers, games, and many other programs you use every day.

How Does Multithreading Work?


Multithreading works by allowing a computer’s processor to handle multiple tasks at
the same time. Even though the processor can only do one thing at a time, it switches
between different threads from various programs so quickly that it looks like
everything is happening all at once.
 Processor Handling : The processor can execute only one instruction at a time,
but it switches between different threads so fast that it gives the illusion of
simultaneous execution.
 Thread Synchronization : Each thread is like a separate task within a program.
They share resources and work together smoothly, ensuring programs run
efficiently.
 Efficient Execution : Threads in a program can run independently or wait for
their turn to process, making programs faster and more responsive.
 Programming Considerations : Programmers need to be careful about
managing threads to avoid problems like conflicts or situations where threads
get stuck waiting for each other.

For example, in a banking system, many users perform day-to-day activities on the
bank's servers: transfers, payments, deposits, opening a new account, and so on. All
these activities are performed instantly, without one user having to wait for another
to finish; the activities are executed concurrently as and when they arise. This is
where multithreading comes into the picture, wherein several threads perform
different activities without interfering with the others.

Multithreading vs Multitasking

Feature      | Multithreading                                                                          | Multitasking
Definition   | Running multiple threads within a single program simultaneously.                       | Running multiple programs or tasks concurrently.
Example      | Web browser loading a page, handling user input, and downloading files simultaneously. | Listening to music, browsing the web, and typing a document at the same time.
Scope        | Within a single program.                                                                | Across multiple programs.
Resource Use | Utilizes CPU resources more efficiently within a program.                               | Manages system resources to allocate time and memory to different programs.
Purpose      | Enhances the performance and responsiveness of a single application.                    | Improves overall system efficiency by allowing concurrent execution of multiple programs.
Switching    | Threads are managed by the program itself.                                              | Programs are managed by the operating system, which switches between them.

Drawbacks of Multithreading
 Multithreading is complex and often difficult to handle.
 If locking mechanisms are not used properly when coordinating access to shared
data, problems such as data inconsistency and deadlock can arise.
 If many threads try to access the same data, thread starvation may arise.
Resource contention is another problem that can trouble the user.
 Display issues may occur if threads lack coordination when displaying data.

Multithreading Models (impt)

1. Many-to-Many Multithreading Model

In this model, any number of user threads can be multiplexed onto an equal or
smaller number of kernel threads. Developers can create as many user threads as
necessary, and the corresponding kernel threads can run in parallel on a
multiprocessor machine.

The many-to-many model provides the best level of concurrency; moreover, when a
thread performs a blocking system call, the kernel can schedule another thread for
execution.

2. Many-to-One Multithreading Model

This model maps multiple user-level threads to one kernel-level thread. Thread
management is carried out in user space by the thread library, so when a thread
makes a blocking system call, the entire process blocks. Multiple threads cannot run
in parallel; only one thread can access the kernel at a time.
3. One-to-One Multithreading Model

In this model, each user-level thread maps to a kernel-level thread in a one-to-one
relationship. The concurrency provided by this model is higher than in the
many-to-one model, and it allows the parallel execution of multiple threads on
multiprocessors. The disadvantage of this model is that creating a user thread
requires creating the corresponding kernel thread. OS/2, Windows NT and Windows
2000 use the one-to-one model.

Thread Libraries

A thread library provides a collection of functions useful for creating, managing and
controlling threads. Programmers access these thread libraries through an
application programming interface (API). A thread library can be a user-level
library or a kernel-level library.

If the thread library is implemented in user space, then the code and data of the
thread library reside in user space.

If the thread library is implemented in kernel space, then the code and data of the
library reside in kernel space and are supported by the operating
system.

Need for a Thread Library

 Even in Android development there is a concept called Kotlin Coroutines, which
works on the same principle of performing multiple tasks at the same time with
the help of a thread library (dependency) for efficient execution of tasks.
 Thread libraries provide a standard way to work with threads across various
operating systems and platforms.
 They provide the ability to create new threads within an application and execute
work on separate threads.

There are three main thread libraries in use today (impt):

1. POSIX Pthreads - may be provided as either a user-level or kernel-level library,
as an extension to the POSIX standard.
2. Win32 threads - provided as a kernel-level library on Windows systems.
3. Java threads - since Java generally runs on a Java Virtual Machine, the
implementation of threads is based upon whatever OS and hardware the JVM
is running on, i.e. either Pthreads or Win32 threads depending on the system.

Pthreads

 The POSIX standard (IEEE 1003.1c) defines the specification for Pthreads, not
the implementation.
 Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public-domain
shareware for Windows.
 Global variables are shared amongst all threads.
 One thread can wait for the others to rejoin before continuing.
 Pthreads begin execution in a specified function, for example a runner( )
function.
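The runner( ) pattern mentioned above can be sketched as follows. This follows the usual textbook summation example; the global sum and the run_sum wrapper are illustrative names, and the program must be linked with -lpthread.

```c
/* The new thread begins execution in runner(), which sums 1..upper
 * into a global shared with the creating thread. */
#include <pthread.h>

static long sum;                                 /* shared among all threads */

static void *runner(void *param)                 /* the thread starts here */
{
    long upper = *(long *)param;
    sum = 0;
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);                             /* terminate this thread */
}

long run_sum(long upper)
{
    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);                    /* default thread attributes */
    pthread_create(&tid, &attr, runner, &upper); /* begin execution in runner() */
    pthread_join(tid, NULL);                     /* wait for the thread to rejoin */
    return sum;                                  /* read the shared global */
}
```

Note how the creating thread reads the result through the shared global sum after pthread_join, illustrating that global variables are shared amongst all threads.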

Win32 Threads
 Win32 threads are part of the Windows operating system and are also called
Windows threads. The Win32 thread library is a kernel-space library.
 With these threads we can also achieve parallelism and concurrency in the same
manner as with Pthreads.
 Win32 threads are created with the CreateThread() function. Windows threads
support Thread Local Storage (TLS), which allows each thread to have its own
unique data; threads can also easily share data declared globally.
 They provide native, low-level support for multithreading: they are tightly
integrated with the Windows OS and offer efficient thread creation and
management.

Java Threads

 ALL Java programs use threads - even "common" single-threaded ones.
 The creation of new threads requires objects that implement the Runnable
interface, which means they contain a method "public void run( )". Any
descendant of the Thread class will naturally contain such a method. (In
practice the run( ) method must be overridden / provided for the thread to
have any practical functionality.)
 Creating a Thread object does not start the thread running - to do that the
program must call the Thread's "start( )" method. start( ) allocates and
initializes memory for the thread, and then calls the run( ) method.
(Programmers do not call run( ) directly.)
 Because Java does not support global variables, threads must be passed a
reference to a shared object in order to share data - for example, a shared
"Sum" object in the classic summation example.
 Java provides built-in support for multithreading through java.lang.Thread
and also provides high-level thread management.
Threading Issues
The issues include the fork and exec system calls, thread cancellation, signal
handling, thread pools, etc.

The fork( ) and exec( ) system calls

In a multithreaded programming environment, the semantics of the fork and exec
system calls change. The UNIX operating system uses two versions of the fork
system call:
1. One version duplicates all threads.
2. In the second version, only the thread that invoked fork is duplicated.
• Which version is used depends on the application. Sometimes, duplicating
all the threads is unnecessary: if we call exec( ) immediately after fork( ), then
there is no use in duplicating the threads.
• Forking provides a way for an existing process to start a new one, but what
about the case where the new process is not part of the same program as the
parent process? This is the case in the shell; when a user starts a command it
needs to run in a new process, but it is unrelated to the shell.
• This is where the exec system call comes into play. An exec replaces the
contents of the currently running process with the information from a program
binary. Thus the procedure the shell follows when launching a new program is to
first fork, creating a new process, and then exec the program binary it is
supposed to run.
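The fork-then-exec procedure described above can be sketched like this. The helper name spawn_and_wait is made up for the sketch; the calls themselves (fork, execv, waitpid) are standard POSIX.

```c
/* Fork a child, replace its image with a program via exec, and wait
 * for it - the same sequence a shell performs to launch a command. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns the child's exit status, or -1 if it did not exit normally. */
int spawn_and_wait(const char *path, char *const argv[])
{
    pid_t pid = fork();              /* duplicate the calling process */
    if (pid == 0) {
        execv(path, argv);           /* replace the child's image with the program */
        _exit(127);                  /* only reached if execv failed */
    }
    int status = 0;
    waitpid(pid, &status, 0);        /* parent waits, as the shell does */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

After execv succeeds, nothing of the child's old program remains, which is why duplicating all the parent's threads in fork would have been wasted work.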

Thread Cancellation
• Cancellation allows one thread to terminate another. One reason to cancel a
thread is to save system resources such as CPU time: when your program
determines that the thread's activity is no longer necessary, the thread is
terminated.
• Thread cancellation is the task of terminating a thread before it has completed.
• The thread-cancellation mechanism allows a thread to terminate the execution
of any other thread in the process in a controlled manner. Each thread maintains
its own cancelability state. Cancellation may only occur at cancellation points or
when the thread is asynchronously cancelable.
• The target thread can keep cancellation requests pending and can perform
application-specific cleanup when it acts upon the cancellation notice.
• A thread's initial cancelability state is enabled. The cancelability state determines
whether a thread can receive a cancellation request; if the cancelability state is
disabled, the thread does not receive any cancellation requests.
• Target-thread cancellation occurs in two different situations:
1. Asynchronous cancellation
2. Deferred cancellation
 Asynchronous cancellation: The target thread is terminated immediately by
another thread. Asynchronous cancellation is a valid option only when the target
thread holds no locks and has no resources allocated.
 Deferred cancellation: When a thread has enabled cancellation and is using
deferred cancellation, time can elapse between the moment it is asked to cancel
itself and the moment it is actually terminated; the thread acts on the request at
defined cancellation points.
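Deferred cancellation can be sketched with Pthreads as follows. The worker loops until it reaches pthread_testcancel(), a cancellation point, where a pending request takes effect. The names worker and cancel_demo are illustrative; link with -lpthread.

```c
/* Deferred cancellation: the request is honored only at a cancellation
 * point, so the target thread controls where it can be terminated. */
#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL); /* the default type */
    for (;;)
        pthread_testcancel();        /* act on a pending cancellation request */
    return NULL;                     /* never reached */
}

/* Returns 1 if the worker was terminated by cancellation. */
int cancel_demo(void)
{
    pthread_t tid;
    void *res;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_cancel(tid);             /* request cancellation ... */
    pthread_join(tid, &res);         /* ... and wait for it to take effect */
    return res == PTHREAD_CANCELED;  /* join reports how the thread ended */
}
```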

 Signal Handling
 A signal is used to notify a process that a particular event has occurred. Signals
may be synchronous or asynchronous. All types of signals follow the same
pattern:
1. A signal is generated by a specific event.
2. The generated signal is delivered to a process/thread.
3. A signal handler handles the delivered signal.
• Synchronous signals: An illegal memory access or division by zero are examples
of synchronous signals. These signals are delivered to the same process that
performed the operation generating the signal.
• Asynchronous signals: Signals generated by an event external to the running
process, such as terminating a process with certain keystrokes, are asynchronous.
Signals may be handled in different ways:
1. Some signals may be ignored, for example a change of window size.
2. Other signals may be handled by terminating the program, for example an
illegal memory access.
• Delivery of signals in multithreaded programs is more complex than in
single-threaded ones.
• The following are the options for how a signal may be delivered to the
threads of a process:
a. Deliver the signal to the thread to which the signal applies.
b. Deliver the signal to every thread in the process.
c. Deliver the signal to certain (a limited set of) threads in the process.
d. Assign a specific thread to receive all signals for the process.
The method for delivering a signal depends on the type of signal generated:
1. A synchronous signal is sent to the thread that caused the signal.
2. Asynchronous signals may be delivered to all threads, or to a designated thread.

 Thread Pool
• A thread pool offers a solution to both the problem of thread life-cycle
overhead and the problem of resource thrashing: by reusing threads for multiple
tasks, the thread-creation overhead is spread over many tasks.
• A thread pool groups CPU resources and contains threads used to execute tasks
associated with that pool. Threads host engines that execute user tasks,
run specific jobs such as signal handling, and process requests from a work
queue.
• A multithreaded server has potential problems, and these problems are solved by
using a thread pool. Problems with a multithreaded server:
1. Time for creating a thread.
2. Time for discarding a thread.
3. Excessive use of system resources.
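A minimal thread pool can be sketched as follows: a fixed set of worker threads is created once and reused, pulling task indices from a shared queue guarded by a mutex. The names pool_worker, run_pool, and NWORKERS are illustrative, not a real API; link with -lpthread.

```c
/* Fixed-size thread pool: NWORKERS threads are created once and reused
 * for many tasks, spreading the thread-creation cost over all of them. */
#include <pthread.h>

#define NWORKERS 4

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static int next_task, total_tasks, done_count;

static void *pool_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        if (next_task >= total_tasks) {          /* work queue drained: exit */
            pthread_mutex_unlock(&qlock);
            return NULL;
        }
        next_task++;                             /* take one task off the queue */
        pthread_mutex_unlock(&qlock);

        pthread_mutex_lock(&qlock);              /* "process" the task */
        done_count++;
        pthread_mutex_unlock(&qlock);
    }
}

/* Run ntasks tasks on the pool; returns the number of tasks completed. */
int run_pool(int ntasks)
{
    pthread_t workers[NWORKERS];
    next_task = done_count = 0;
    total_tasks = ntasks;
    for (int i = 0; i < NWORKERS; i++)           /* threads are created once ... */
        pthread_create(&workers[i], NULL, pool_worker, NULL);
    for (int i = 0; i < NWORKERS; i++)           /* ... and reused for many tasks */
        pthread_join(workers[i], NULL);
    return done_count;
}
```

A production pool would additionally let callers submit work after startup, typically with a condition variable on the queue, but the reuse idea is the same.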

Thread-Specific Data

The challenge arises when every thread in a process must have its own copy of the
same data. Any data uniquely related to a particular thread is referred to as
thread-specific data. Thread-specific data is accessed through void pointers, which
allows referencing any kind of data, such as dynamically allocated strings or
structures.
For example, a transaction-processing system may process each transaction in its
own thread. Each transaction is assigned a unique identifier that distinguishes it
from every other transaction. Because each transaction is serviced on a separate
thread, thread-specific data allows the system to associate each thread with its
particular transaction and transaction ID.
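The transaction-ID idea can be sketched with Pthreads keys: every thread stores its own value under the same key and always reads back its own copy. The names txn_key, txn_thread, and tsd_demo are illustrative; link with -lpthread.

```c
/* Thread-specific data via a pthread key: one key, a distinct value
 * (here, a "transaction ID") per thread, accessed through void pointers. */
#include <pthread.h>
#include <stdint.h>

static pthread_key_t txn_key;

static void *txn_thread(void *arg)
{
    intptr_t id = (intptr_t)arg;
    pthread_setspecific(txn_key, (void *)id);   /* this thread's private value */
    /* ... later, possibly deep inside other functions ... */
    intptr_t seen = (intptr_t)pthread_getspecific(txn_key);
    return (void *)(intptr_t)(seen == id);      /* 1 if we got our own ID back */
}

/* Returns 1 if both threads saw only their own thread-specific value. */
int tsd_demo(void)
{
    pthread_t t1, t2;
    void *r1, *r2;
    pthread_key_create(&txn_key, NULL);         /* no destructor needed here */
    pthread_create(&t1, NULL, txn_thread, (void *)(intptr_t)101);
    pthread_create(&t2, NULL, txn_thread, (void *)(intptr_t)202);
    pthread_join(t1, &r1);
    pthread_join(t2, &r2);
    pthread_key_delete(txn_key);
    return (r1 != NULL) && (r2 != NULL);
}
```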
