
Threads and its types in Operating System

A thread is a single sequence stream within a process. Because threads have many of the same properties as processes, they are called lightweight processes. Threads execute one after another but give the illusion that they are executing in parallel. Each thread has its own state, and each thread has

1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other, as they share the code, data, OS resources, etc.
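As a minimal illustration (not part of the original text), a Python sketch using the standard threading module: each thread runs with its own stack for local variables, while data at process scope is shared by all threads. The names task and results are illustrative.

```python
import threading

results = {}              # shared data: every thread in the process can see it

def task(name):
    local_total = 0       # a local variable, kept on this thread's own stack
    for i in range(5):
        local_total += i
    results[name] = local_total

threads = [threading.Thread(target=task, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each thread computes its own local_total independently, yet all of them write into the one shared results dictionary, reflecting the shared code and data described above.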

Similarities between Threads and Processes –

 Only one thread or process is active at a time on a given processor
 Within a process, both execute sequentially
 Both can create children
 Both can be scheduled by the operating system: Both threads and processes can be
scheduled by the operating system to execute on the CPU. The operating system is
responsible for assigning CPU time to the threads and processes based on various
scheduling algorithms.
 Both have their own execution context: Each thread and process has its own execution
context, which includes its own register set, program counter, and stack. This allows
each thread or process to execute independently and make progress without
interfering with other threads or processes.
 Both can communicate with each other: Threads and processes can communicate with
each other using various inter-process communication (IPC) mechanisms such as
shared memory, message queues, and pipes. This allows threads and processes to
share data and coordinate their activities.

 Both can be preempted: Threads and processes can be preempted by the operating
system, which means that their execution can be interrupted at any time. This allows
the operating system to switch to another thread or process that needs to execute.
 Both can be terminated: Threads and processes can be terminated by the operating
system or by other threads or processes. When a thread or process is terminated, all of
its resources, including its execution context, are freed up and made available to other
threads or processes.
Differences between Threads and Processes –
 Resources: Processes have their own address space and resources, such as memory
and file handles, whereas threads share memory and resources with the program that
created them.
 Scheduling: Processes are scheduled to use the processor by the operating system,
whereas threads are scheduled to use the processor by the operating system or the
program itself.
 Creation: The operating system creates and manages processes, whereas the program
or the operating system creates and manages threads.
 Communication: Because processes are isolated from one another and must rely on
inter-process communication mechanisms, they generally have more difficulty
communicating with one another than threads do. Threads, on the other hand, can
interact with other threads within the same program directly.
Threads, in general, are lighter than processes and are better suited for concurrent
execution within a single program. Processes are commonly used to run separate
programs or to isolate resources between programs.

Types of Threads:
1. User Level Thread (ULT) – Implemented in a user-level library; these threads are not
created using system calls. Thread switching does not need to call the OS or cause an
interrupt to the kernel. The kernel does not know about user-level threads and manages
the process as if it were single-threaded.

 Advantages of ULT –
 Can be implemented on an OS that doesn’t support multithreading.
 Simple representation, since each thread has only a program counter,
register set, and stack space.
 Simple to create, since no kernel intervention is needed.
 Thread switching is fast, since no OS calls need to be made.
 Limitations of ULT –
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead
of a thread table in each process, the kernel itself has a master thread table that keeps
track of all the threads in the system. In addition, the kernel maintains the traditional
process table to keep track of processes. The OS kernel provides system calls to create
and manage threads.
 Advantages of KLT –
 Since the kernel has full knowledge of the threads in the system, the
scheduler may decide to give more time to processes having a large
number of threads.
 Good for applications that frequently block.
 Limitations of KLT –
 Slower and less efficient than ULTs, since thread operations require
system calls.
 Each thread requires a thread control block, which adds overhead.
Summary:
1. For ULTs, each process keeps track of its own threads using a thread table.
2. For KLTs, the kernel maintains both a thread table (TCBs) and the process table (PCBs).

Multithreading in Operating System



A thread is a path that is followed during a program’s execution. The majority of programs
written nowadays run as a single thread. Say, for example, a program is not capable of
reading keystrokes while making drawings; these tasks cannot be executed by the
program at the same time. This problem can be solved through multitasking, so that two
or more tasks can be executed simultaneously. Multitasking is of two types: processor
based and thread based. Processor-based multitasking is totally managed by the OS,
however multitasking through multithreading can be controlled by the programmer to
some extent. The concept of multithreading needs a proper understanding of these two
terms – a process and a thread. A process is a program being executed. A process can
be further divided into independent units known as threads. A thread is like a small
lightweight process within a process; put differently, a process is a collection of threads.

Applications – Threading is used widely in almost every field. It is most widely seen on
the internet nowadays, where transaction processing of every type – recharges, online
transfers, banking, etc. – relies on it. Threading divides the code into small parts that are
very lightweight and place less burden on CPU and memory, so that the work can be
carried out easily and goals achieved in the desired field. As the saying goes, “necessity
is the mother of invention”: with fast and regular changes in technology, this approach
led to the concept of the thread, developed to enhance the capability of programming.

Lifecycle of a thread
There are various stages in the lifecycle of a thread. Following are the stages a thread
goes through in its whole life.
 New: The lifecycle of a born thread (new thread) starts in this state. It remains in this
state until the thread is started.

 Runnable: A thread becomes runnable after it starts. It is considered to be executing
the task given to it.
 Waiting: While waiting for another thread to perform a task, the currently running
thread goes into the waiting state and then transitions back again after receiving a
signal from the other thread.
 Timed Waiting: A runnable thread enters into this state for a specific time interval
and then transitions back when the time interval expires or the event the thread was
waiting for occurs.
 Terminated (Dead): A thread enters into this state after completing its task.
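The stages above can be sketched with Python's threading module (an illustration, not from the original text; the names task and release are made up). is_alive() roughly distinguishes New, Runnable/Waiting, and Terminated:

```python
import threading

release = threading.Event()

def task():
    release.wait()              # the thread sits in the Waiting state

t = threading.Thread(target=task)
state_new = t.is_alive()        # False: created but not started (New)
t.start()
state_running = t.is_alive()    # True: started, now Runnable or Waiting
release.set()                   # signal from another thread; task resumes
t.join()
state_dead = t.is_alive()       # False: the task finished (Terminated)
```

Note that is_alive() only reports whether the thread is between start and completion; the finer Runnable/Waiting distinction is managed by the scheduler and not directly visible.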

Types of execution in OS
There are two types of execution:
1. Concurrent Execution: This occurs when a single processor switches between the
threads of a multithreaded process, so the threads make progress by sharing the
processor.
2. Parallel Execution: This occurs when every thread of a multithreaded process runs on
a separate processor at the same time.
Drawbacks of Multithreading
Multithreading is complex and often difficult to handle. It has a few drawbacks.
These are:
 If locking mechanisms are not used properly while managing data access, problems
such as data inconsistency and deadlock can arise.
 If many threads try to access the same data, then there is a chance that the situation of
thread starvation may arise. Resource contention issues are another problem that can
trouble the user.
 Display issues may occur if threads lack coordination when displaying data.
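One standard way to avoid the deadlock mentioned above is to make every thread acquire locks in the same global order. A Python sketch (illustrative names; not from the original text):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
completed = []

def worker(name):
    # Every thread takes lock_a before lock_b. If one thread took them in
    # the opposite order, two threads could each hold one lock and wait
    # forever for the other -- the classic deadlock described above.
    with lock_a:
        with lock_b:
            completed.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every thread follows the same acquisition order, no cycle of "holds one, waits for the other" can form, and all workers complete.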
Benefits of Multithreading:

 Multithreading can improve the performance and efficiency of a program by utilizing
the available CPU resources more effectively. By executing multiple threads
concurrently, a program can take advantage of parallelism and reduce overall
execution time.
 Multithreading can enhance responsiveness in applications that involve user
interaction. By separating time-consuming tasks from the main thread, the user
interface can remain responsive and not freeze or become unresponsive.
 Multithreading can enable better resource utilization. For example, in a server
application, multiple threads can handle incoming client requests simultaneously,
allowing the server to serve more clients concurrently.
 Multithreading can facilitate better code organization and modularity by dividing
complex tasks into smaller, manageable units of execution. Each thread can handle a
specific part of the task, making the code easier to understand and maintain.
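The server scenario above can be sketched with Python's thread pool (an illustration under assumed details: handle and the 0.1-second sleep stand in for real request handling and I/O):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle(request_id):
    time.sleep(0.1)            # simulated I/O, e.g. waiting on a client socket
    return f"done-{request_id}"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(handle, range(5)))
elapsed = time.perf_counter() - start
# the five 0.1 s "requests" overlap, so elapsed is close to 0.1 s, not 0.5 s
```

Because the threads spend their time blocked on (simulated) I/O, they overlap almost completely, which is the improved resource utilization and responsiveness described above.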

Introduction of Process Synchronization

Introduction:

Process Synchronization is the coordination of execution of multiple processes in a
multi-process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization
issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.

In a multi-process system, synchronization is necessary to ensure data consistency and


integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial
role in ensuring the correct and efficient functioning of multi-process systems.

On the basis of synchronization, processes are categorized as one of the following two
types:

 Independent Process: The execution of one process does not affect the execution of
other processes.
 Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises in the case of cooperative processes,
because resources are shared among cooperative processes.

Race Condition:

When more than one process executes the same code or accesses the same memory or
shared variable, there is a possibility that the output or the value of the shared variable is
wrong; the processes are, in effect, racing to have their output accepted, and this
condition is known as a race condition. When several processes access and manipulate
the same data concurrently, the outcome depends on the particular order in which the
accesses take place. A race condition is a situation that may occur inside a critical
section: the result of multiple threads executing in the critical section differs according
to the order in which the threads execute. Race conditions in critical sections can be
avoided if the critical section is treated as an atomic instruction. Proper thread
synchronization using locks or atomic variables can also prevent race conditions.

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time.
The critical section contains shared variables that need to be synchronized to maintain
the consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

In the entry section, a process requests permission to enter the critical section.
Any solution to the critical section problem must satisfy three requirements:

 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter in the critical
section next, and the selection can not be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:

 boolean flag[i]: Initialized to FALSE; initially, no one is interested in entering the
critical section.
 int turn: Indicates whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions:


 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s solution:
 It involves busy waiting. (In Peterson’s solution, the code statement “while(flag[j]
&& turn == j);” is responsible for this. Busy waiting is not favored because it wastes
CPU cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution is not reliable on modern CPU architectures, since compilers and
out-of-order processors may reorder the memory operations it depends on.
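The algorithm can be sketched in Python (illustrative code, not from the original text). It assumes sequentially consistent memory, which CPython's interpreter lock approximates; on real hardware the reordering noted above would require memory barriers:

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # which thread yields when both want to enter
counter = 0             # shared data guarded by the critical section
N = 10_000

def peterson(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section: announce interest
        turn = j                       # give the other thread priority
        while flag[j] and turn == j:
            pass                       # busy wait (the drawback noted above)
        counter += 1                   # critical section
        flag[i] = False                # exit section

threads = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If mutual exclusion holds, no increment is lost and the final counter equals 2 × N; the spin loop is exactly the busy waiting criticized in the drawbacks above.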

Semaphores:

A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can
be signaled by another thread. This is different from a mutex, as a mutex can be released
only by the thread that called the wait (lock) function.

A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations
wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.

 Binary Semaphores: They can only take the value 0 or 1. They are also known as mutex
locks, as they can provide mutual exclusion. All the processes can share the same
mutex semaphore, which is initialized to 1. A process must wait until the semaphore’s
value is 1; it then sets the value to 0 and enters its critical section. When it completes
its critical section, it resets the value to 1 so that some other process can enter its
critical section.
 Counting Semaphores: They can have any non-negative value and are not restricted to
a certain domain. They can be used to control access to a resource that has a limit on
the number of simultaneous accesses. The semaphore is initialized to the number of
instances of the resource. Whenever a process wants to use the resource, it checks
whether the number of remaining instances is more than zero, i.e., whether an instance
is available. If so, the process enters its critical section, decreasing the value of the
counting semaphore by 1. When the process is done using the instance of the resource,
it leaves the critical section, adding 1 to the number of available instances.
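A counting semaphore sketch in Python (illustrative; worker and the 3-instance limit are made-up details): ten threads compete for a resource with three instances, and the semaphore caps how many hold one at a time.

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore: 3 resource instances
guard = threading.Lock()
in_use = 0
peak = 0

def worker():
    global in_use, peak
    with pool:                  # wait(): blocks once all 3 instances are taken
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.05)        # hold the resource briefly
        with guard:
            in_use -= 1
    # leaving the "with pool" block performs signal(), freeing an instance

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The peak concurrency never exceeds the 3 instances the semaphore was initialized with, which is the counting-semaphore behavior described above.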

Advantages and Disadvantages:


Advantages of Process Synchronization:

 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization:

 Adds overhead to the system
 Can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlocks if not implemented properly.
Last Updated : 01 Feb, 2023

Race Condition Vulnerability



A race condition occurs when multiple threads read and write the same variable, i.e.,
they have access to some shared data and try to change it at the same time. In such a
scenario, the threads are “racing” each other to access or change the data.
