
Exploring the Evolution of Threads and Benefits of Multithreading and Scheduling in Operating Systems

Jomaina Hafiz Ahmed
School of Computer Science and Engineering
Lovely Professional University, Phagwara, Punjab, India
[email protected]

Prince
School of Computer Science and Engineering
Lovely Professional University, Phagwara, Punjab, India
[email protected]

Divya Bharti
School of Computer Science and Engineering
Lovely Professional University, Phagwara, Punjab, India
[email protected]

Abstract— This research paper focuses on the topic of threads and multithreading in operating systems. The paper begins by defining a thread as a single sequence stream within a process, also known as a thread of execution or thread of control. Threads are used to increase the performance of applications by allowing a process to be split into multiple threads, each with its own program counter, stack, and set of registers. The paper also discusses the motivation for the development and implementation of threads and multithreading, the history behind them, and different thread models in operating systems, including user-level and kernel-level models, with their respective advantages and disadvantages.
The paper then delves into the benefits of multithreading, such as responsiveness, resource sharing, economy, scalability, better communication, and utilization of multiprocessor architecture. Multithreading enhances concurrency on a multi-CPU machine and minimizes system resource usage. The paper also highlights the disadvantages of multithreading, such as the complexity of managing multiple threads and the potential for increased context-switching time. The paper describes some of the more important implementations of multithreading and concludes by summarizing the key points and emphasizing the importance of understanding threads and multithreading in operating systems for efficient and effective application development.

Keywords—Threads, Multithreading, Program Counter, Stack, Scheduling

INTRODUCTION TO THREADS IN OPERATING SYSTEM
Threads are a fundamental concept in operating systems, representing a single sequential flow of execution of tasks within a process. They are also known as lightweight processes due to their lower overhead compared to traditional processes. Threads share the same memory and resources within a process, allowing for faster communication and context switching. Threads can be created and managed by the operating system or by the application itself. The operating system can provide a system call to create and manage threads, while the application can use a thread library to manage user-level threads.

Each thread has its own program counter, register set, and stack space, allowing independent execution. Threads share the same code, data, and resources within a process, allowing for efficient communication and resource utilization. Threads can be used to improve application performance through parallelism, allowing multiple tasks to be executed concurrently within a single process. This can lead to faster response times, reduced context-switching time, and more efficient use of resources.

A. Motivation Behind Thread Development
A traditional process can be likened to a container housing all necessary resources and control mechanisms within a singular framework. Historically, a process was conceived as the fundamental unit governing the flow of control within a system, intended to function autonomously. However, this model proved inadequate for applications requiring the collaboration of multiple tasks interacting with each other. Consequently, the following subsections delineate the limitations inherent in the process model, necessitating the emergence of threads.

1) Cost of Process Management: Processes come with a hefty load of data, such as file information, memory maps, and accounting details. In systems like UNIX, creating a process involves copying everything from its parent, resulting in significant effort and costs. Even in systems like the Win32 subsystem of Windows NT, where this copying is avoided, substantial setup is still required, consuming both time and resources. Consequently, the creation and handling of processes incur notable expenses.

Conversely, tasks cooperating within a program do not require all this baggage. They do not need to carry the extensive information and resources that processes do, such as map translations, file details, and working directories associated with every other process. Utilizing processes for these tasks makes them more cumbersome and expensive, adding unnecessary costs to task management.
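To make the contrast concrete, the sketch below (an illustrative example, using Python's threading module purely for brevity — the paper's argument is language-agnostic, and the names here are hypothetical) shows cooperating tasks sharing state directly, with none of the address-space copying that fork()-style process creation entails:

```python
import threading

# Illustrative sketch: cooperating tasks in one process share state
# directly. Nothing is copied when the worker starts, in contrast to
# fork()-style process creation, which duplicates the parent's context.
shared = []  # lives in the single address space all threads see

def worker(n):
    shared.extend(range(n))  # writes land in the caller's memory

t = threading.Thread(target=worker, args=(5,))
t.start()
t.join()  # wait for the worker to finish

print(shared)  # → [0, 1, 2, 3, 4]; the main thread sees the writes directly
```

The worker needed no private copy of file tables, memory maps, or working directories — exactly the "baggage" the paragraph above describes.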



2) Resource Cost: Processes utilize inherently limited kernel resources. The kernel, serving as the core of the operating system, remains permanently resident in physical memory and is never swapped out to disk in most operating systems. Consequently, the memory available to the kernel is constrained by the physical memory present. Each process requires a portion of kernel memory for process management tasks. Furthermore, many data structures employed by the kernel to govern processes, such as the process table, are constructed as fixed-length arrays, lacking the capability for dynamic extension. Consequently, the number of active processes at any given time is restricted. This limitation hampers applications from leveraging a large number of processes as threads of control, even if performance considerations are disregarded.

Processes are associated with a multitude of resources, including a virtual address space, a root directory, a working directory, a user identifier, accounting information, and signal masks. Tasks within the same application can effectively share all of these resources. The separate instantiation of each resource for every task, as facilitated by using distinct processes, leads to unnecessary duplication and resource wastage.

B. Introduction of Threads as a Solution to the Above Problems
The concept of a thread emerged as a solution to these challenges. A thread essentially represents a single path of execution or control flow within a program. In a multithreaded process, multiple threads share the same memory space. This simplifies memory sharing and synchronization because threads operate within the same memory area. Threads can share much of the state associated with a process, making them lightweight and relatively easy to manage.

Each thread also possesses communication capabilities similar to separate processes. Threads can communicate independently with other processes through common mechanisms like semaphores, sockets, and pipes. A thread carries only the necessary state information and resources required for its operation, typically consisting of a thread control block and a user-level stack. The thread control block retains the thread's context when it is not active, along with other management details.

As a result, threads can be created and managed with minimal overhead, and threads within the same process can share data without needing intervention from the operating system kernel. Threads typically share various resources such as file descriptors, kernel process data structures, signal masks, virtual address translation maps, root directories, and working directories. This efficient resource utilization eliminates unnecessary duplication.

Threads offer a natural approach to programming multiple streams of control within a process. While a similar outcome could be achieved with a single-threaded process, the resulting control flow would be complex and challenging to maintain. Threads serve as an effective structuring mechanism for cleanly programming independent tasks within a single application. Operating system-supported threads enable computation and I/O operations to overlap on both single- and multi-processor systems, leading to significant performance enhancements.

A threaded program can enhance its responsiveness through time slicing, even if it primarily engages in computational tasks. Moreover, with priority-based scheduling, threaded programs can promptly react to sporadic events demanding real-time responsiveness. This feature is particularly crucial in graphical user interfaces, where the program's responsiveness to user input is paramount.

Unlike traditional processes confined to a single processor, threaded programs utilizing kernel threads can seamlessly harness multiple processors if available. Consequently, the same executable can operate on both single-processor and multi-processor systems, provided the operating system supports both configurations.

In summary, threaded programming not only enhances responsiveness through efficient time slicing and priority-based scheduling but also offers versatility by enabling the transparent utilization of multiple processors. This flexibility ensures optimal performance across varying hardware configurations, making threaded programming a valuable asset in diverse computing environments.

HISTORY
Despite common misconceptions, the concept of threads is not a recent innovation. Threads have existed since at least 1965, demonstrated by the Berkeley Timesharing System, which featured a mechanism akin to modern threads. This system protected resources, including memory, on a per-user basis. Each user had access to 128K words of memory, with each word comprising 24 bits. Threads in this system were referred to as "processes," with each process capable of addressing 16K words and freely mapping within the user's address space. Multiple threads could be created, and through appropriate memory mappings, they could share memory efficiently. These threads were lightweight, requiring only 15 words for storage, and were managed by the kernel, which scheduled them independently.

By around 1970, a form of multithreading involving multiple stacks within a single process was implemented on systems like Multics, primarily to support background compilations. Threads as we understand them today emerged in the early 1980s, initially appearing in research microkernel-based systems such as the V kernel, Chorus, and RIG. Commercial versions of multithreaded operating systems began to emerge around 1983, with examples like VAX ELN, a real-time operating system from DEC, which supported multithreaded processes.

Today, most modern operating systems incorporate some form of multithreading support. Additionally, user-level thread libraries are available for older systems lacking kernel support for multithreading. While thread packages vary widely in functionality and interface, they generally share common attributes. The subsequent subsection categorizes various thread packages based on different criteria.

CLASSIFICATION
1) User-level Threads: User-Level Threads (ULTs) are implemented within user-level libraries, bypassing the need for system calls. This means that thread switching does not involve calling the operating system or causing interrupts to the kernel. Consequently, ULTs are managed by the user-level library and appear to the kernel as single-threaded processes.
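The idea can be sketched with a toy cooperative scheduler (an illustration only: real ULT libraries switch stacks and register sets, whereas the generator-based design below is an assumed simplification). The point it demonstrates is the one above — every switch happens entirely in user space, with no system call:

```python
from collections import deque

# A toy user-level "thread" scheduler. Every switch is an ordinary
# function return/resume -- no system call and no kernel involvement,
# which is why user-level thread switching is fast.
def task(name, steps, log):
    for i in range(steps):
        log.append((name, i))  # one unit of work
        yield                  # voluntarily yield control to the scheduler

def run(tasks):
    ready = deque(tasks)       # the user-space "ready queue"
    while ready:
        t = ready.popleft()
        try:
            next(t)            # resume the task until its next yield
            ready.append(t)    # still runnable: requeue it
        except StopIteration:
            pass               # the task has finished

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # → [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

Note that the kernel sees only one flow of control here — exactly why, as discussed next, a single blocking call stalls every user-level thread in the process.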

Despite the kernel's lack of awareness, ULTs offer several advantages. They can be implemented on operating systems that lack native support for multithreading, making them versatile in diverse environments. Additionally, ULTs are straightforward to represent and create, since they only require a program counter, register set, and stack space. Moreover, thread switching is fast because no OS calls need to be made.

However, ULTs have limitations. They exhibit limited coordination between threads and the kernel. For instance, if one thread encounters a page fault, the entire process is blocked, affecting the responsiveness of the application.

2) Kernel-level Threads: On the other hand, kernel-level threads (KLTs) are managed directly by the kernel, which maintains a master thread table tracking all threads in the system. Unlike ULTs, the kernel is fully aware of and manages KLTs, providing better coordination and resource allocation. This makes KLTs suitable for applications that require frequent blocking, as the kernel can prioritize processes with a larger number of threads. However, KLTs come with their own set of limitations. They tend to be slower and less efficient than ULTs due to the involvement of the kernel in thread operations. Additionally, KLTs require a thread control block, adding overhead to thread management. Despite these drawbacks, KLTs offer efficient resource management and improved handling of blocking scenarios, making them valuable in contexts where responsiveness and coordination are critical. Several commercial operating systems offer kernel-level threads, including Windows NT, Digital UNIX 3.x, and Linux 2.x.
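On such systems, the key behavioural difference is easy to observe. The sketch below assumes CPython, whose threading module wraps the platform's native kernel-level threads; a thread blocked in a system call (simulated with sleep()) does not stall its siblings, because the kernel schedules each thread independently:

```python
import threading, time

# Illustrative sketch (assumes CPython's threading, i.e. native kernel
# threads). Each thread blocks in a system call independently, so the
# waits overlap instead of serializing.
def blocker():
    time.sleep(0.2)  # stand-in for a blocking system call

start = time.time()
threads = [threading.Thread(target=blocker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The four 0.2 s waits overlap (roughly 0.2 s total); under a pure
# many-to-one user-level package they would serialize toward 0.8 s.
print(f"elapsed: {elapsed:.2f}s")
```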
The comparison between User-Level Threads (ULTs) and kernel-level threads (KLTs) reveals the nuanced trade-offs involved in thread management strategies. ULTs offer flexibility by enabling threads to be managed entirely within user-level libraries, making them suitable for environments with limited resources or where system calls are expensive. In contrast, KLTs provide enhanced coordination and resource allocation by involving the kernel directly, but this comes at the cost of increased overhead. The choice between ULTs and KLTs depends on factors such as application requirements, operating system support, and overall performance considerations. ULTs may be preferable for lightweight and fast thread switching, while KLTs may be more appropriate for applications requiring robust resource management and frequent blocking scenarios. Ultimately, selecting the optimal threading model necessitates a careful evaluation of the specific needs and constraints of the application environment.

3) Hybrid Thread Model: The hybrid thread model represents a synthesis of user-level and kernel-level threading approaches, wherein one or more user-level threads are multiplexed on top of one or more kernel-level threads within a process. This architecture allows for multiple kernel threads per process, each of which can be independently scheduled by the kernel. By leveraging both user-level and kernel-level threads, the hybrid model combines the advantages of both approaches. However, in hybrid systems, scheduling occurs at two distinct levels: user-level threads are managed by a user-level threads library, while kernel-level threads are scheduled by the kernel, with neither scheduler being aware of the decisions made by the other. This lack of synchronization between schedulers can potentially lead to scheduling conflicts, which may degrade performance. To address this issue, a modification known as scheduler activations has been proposed, but as of now, no commercial operating system offers this feature.

C. Comparison of Process and Thread
Threads, representing single sequential streams within a process, exhibit properties akin to processes, thus earning the designation of "lightweight processes." Each thread is endowed with its own:
 Program counter
 Register set
 Stack space
facilitating independent execution. Despite their sequential nature, threads provide the illusion of parallelism. However, they are not entirely isolated entities, as they share code, data, and operating system resources with other threads within the same process.

Similarities between threads and processes encompass the singular activity of one thread or process at a time and the capability to generate child threads or processes. Additionally, both threads and processes are subject to scheduling by the operating system, which allocates CPU time using diverse scheduling algorithms. Each thread and process maintains its unique execution context, enabling autonomous execution and communication with other threads or processes through inter-process communication (IPC) mechanisms. Moreover, both threads and processes can be preempted by the operating system and terminated as deemed necessary.

However, disparities exist between threads and processes. Processes possess individual address spaces and resources, such as memory and file handles, whereas threads share these resources with the program that created them. Processes are subject to scheduling by the operating system, while threads may be scheduled either by the operating system or by the program itself. Additionally, process creation and management are typically within the purview of the operating system, whereas threads can be created and managed by either the program or the operating system. Furthermore, inter-process communication typically necessitates dedicated mechanisms, while threads can communicate directly within the same program.

In conclusion, threads, being lighter than processes, excel in concurrent execution within a single program, whereas processes are conventionally utilized for running separate programs or isolating resources between programs.

D. Advantages of Threading in Operating System
Threading is a programming technique that allows multiple tasks to be executed concurrently within the same process, sharing resources and memory. This technique has several advantages over traditional single-threaded programming, including:
 Concurrent Execution: Threads allow for concurrent execution and enable multitasking in a single application. Threads share the same memory space and resources of the process they belong to, allowing for efficient communication and resource utilization.
 Improved Performance: Multithreading can help increase the overall performance of an application, especially on systems with multiple processors or cores. It allows multiple tasks to run concurrently, utilizing the available CPU resources more efficiently.
 Responsiveness: In a single-threaded environment, if a long-running task blocks the main thread, the entire application becomes unresponsive. Multithreading can prevent this issue by running such tasks in separate threads, ensuring the application remains responsive.
 Better Resource Utilization: Multithreading allows better utilization of system resources by keeping the CPU busy while waiting for I/O operations or other tasks.
 Simplified Modeling: Some problems can be more naturally modeled using multiple threads. This makes the program easier to design, understand, and maintain.
 Parallelism: Multithreading enables parallelism, which can lead to significant performance improvements in applications that can be divided into smaller, independent tasks.
 Faster Context Switching: Context switching between threads is faster than between processes, allowing for more efficient use of CPU resources.
 Faster Communication: Threads within the same process can communicate more efficiently than processes, as they share the same memory space.
 Efficient Use of Multiprocessor Architecture: Threads enable the utilization of multiprocessor architecture to a greater extent, increasing efficiency and throughput.

However, multithreading also introduces complexity and potential issues related to synchronization and concurrency. Developers need to be aware of synchronization, deadlocks, race conditions, and other concurrency-related issues. Synchronization overhead and context switching can also result in additional overhead and reduced performance if not managed efficiently. The behavior of the program can be hard to predict and reproduce, especially when it comes to debugging. The performance benefits of multithreading are limited by the number of available cores or processors in the system. In some cases, excessive use of threads can lead to performance degradation instead of improvement.

POTENTIAL CHALLENGES AND ISSUES IN MULTITHREADED PROGRAMMING
Threading is a programming technique that allows multiple tasks to be executed concurrently within the same process, sharing resources and memory. This technique has several advantages, including increased responsiveness, resource sharing, improved performance, and better code organization. However, it also introduces several issues that can affect the performance and reliability of the system.

One of the issues with multithreaded programming is the use of the fork() and exec() system calls. In a multithreaded program, if one thread calls fork(), it is unclear whether the new process should duplicate all threads or be single-threaded. Some UNIX systems have chosen to have two versions of fork(), one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call. Similarly, if a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all threads.

Signal handling is another issue in multithreaded programming. A signal is used in UNIX systems to notify a process that a particular event has occurred. In a multithreaded program, a signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern: 1) a signal is generated by the occurrence of a particular event, 2) the signal is delivered to a process, and 3) once delivered, the signal must be handled. A signal may be handled by one of two possible handlers: 1) a default signal handler, or 2) a user-defined signal handler. Every signal has a default signal handler that the kernel runs when handling that signal, which can be overridden by a user-defined signal handler that is called to handle the signal.

Thread cancellation is the process of terminating a thread before it has completed. This can occur in two different scenarios: 1) asynchronous cancellation, where one thread immediately terminates the target thread, and 2) deferred cancellation, where the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.

Thread-local storage is a mechanism that allows each thread to have its own copy of certain data. This is useful in situations where each thread needs to maintain its own state or context, such as in a transaction-processing system where each transaction is processed in a separate thread.

Scheduler activations are a scheme for communication between the user-thread library and the kernel. The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. This allows for efficient communication and coordination between the user-thread library and the kernel, improving the performance and responsiveness of the system.

Multithreaded programming is a powerful technique for improving the performance and responsiveness of applications, but it also introduces several issues that can affect the performance and reliability of the system. These issues include the use of the fork() and exec() system calls, signal handling, thread cancellation, thread-local storage, and scheduler activations. Understanding these issues and their implications is essential for developing reliable and efficient multithreaded applications.

MULTITHREADING IN THE OPERATING SYSTEM
Multithreading in operating systems is a technique that allows multiple threads to run concurrently within a single process. A thread is a lightweight sub-process, the smallest unit of processing, and is a separate path of execution that shares the same memory area with other threads in the same process. The concept of multithreading involves understanding two fundamental terms: a process and a thread. A process is a program being executed, and a thread is a small lightweight process within a process. In a multithreaded program, multiple threads can run simultaneously, allowing for improved responsiveness and resource utilization.

There are different multithreading models, including many-to-one, one-to-one, and many-to-many, each with its advantages and disadvantages. For example, the many-to-one model maps many user-level threads to one kernel thread, facilitating an effective context-switching environment but not taking advantage of hardware acceleration in multithreaded processes or multi-processor systems. The one-to-one model maps each user-level thread to a single kernel-level thread, allowing for parallel execution but causing overhead when generating new user threads. The many-to-many model is a compromise between the other two models, allowing for multiple user-level and kernel-level threads with the ability to schedule another thread for execution when one thread blocks.
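The deferred-cancellation pattern from the challenges discussed earlier reduces to a thread polling shared state and exiting at a safe point. A minimal sketch (Python's threading deliberately offers no asynchronous cancellation, which makes the deferred style the natural one here; the names are illustrative):

```python
import threading, time

# Deferred cancellation, sketched with a flag the target thread polls.
stop = threading.Event()
finished = []

def worker():
    while not stop.is_set():       # cancellation point: safe to stop here
        time.sleep(0.01)           # stand-in for one unit of work
    finished.append("clean exit")  # orderly, self-initiated termination

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
stop.set()   # request cancellation
t.join()     # the worker notices the flag at its next check and exits
print(finished)  # → ['clean exit']
```

Because the worker chooses where to observe the flag, it can release locks and finish its current unit of work first — the hazard that asynchronous cancellation leaves unaddressed.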
E. Multithreading Models
There are different multithreading models, including many-to-one, one-to-one, and many-to-many. Each model has its advantages and disadvantages, and the choice of model depends on the specific requirements of the application. Three established multithreading models classify the relationship between user-level threads and kernel-level threads:

 Many-to-One Multithreading Model:
In the many-to-one model, multiple user-level threads are mapped to a single kernel-level thread. This setup facilitates efficient context switching, making it easy to implement even in simple kernels without native thread support. However, a significant drawback of this model is its inability to fully utilize hardware acceleration in multithreaded processes or multi-processor systems. Since only one kernel-level thread is scheduled at any given time, the entire process can be blocked if any thread blocks. Moreover, all thread management is handled in user space, further limiting its scalability.

 One-to-One Multithreading Model:
The one-to-one model assigns each user-level thread to a corresponding kernel-level thread. This approach allows for true parallel execution of multiple threads. However, creating a new user thread necessitates creating a new kernel thread, which introduces overhead and can potentially hinder the performance of the parent process. To mitigate this, operating systems like Windows and Linux impose limits on the number of threads to control resource consumption.

 Many-to-Many Multithreading Model:
In the many-to-many model, there are multiple user-level threads and multiple kernel-level threads, and the number of kernel threads can vary based on the application's requirements. Developers have the flexibility to create threads at both levels independently. This model strikes a balance between the other two models. When a thread in this model makes a blocking system call, the kernel can schedule another thread for execution, thus avoiding system-wide blocking. Although this model offers greater flexibility and avoids the complexities of the other models, it still faces limitations in achieving true concurrency because the kernel can only schedule one process at a time.

PROGRAMMING INTERFACE

i. POSIX STYLE THREADS:
 Defined by the POSIX standard 1003.1c, established in 1995.
 Specifies a portable thread programming interface without defining the underlying implementation.
 Includes user-level thread packages like Provenzano's Pthreads, kernel-level threads like LinuxThreads, and hybrid thread packages like Solaris threads.

ii. MICROSOFT STYLE THREADS:
 Includes Win32 threads and OS/2 threads.
 Win32 threads are available on Windows NT and Windows 95 and are kernel-level threads.
 OS/2 threads, initially developed by Microsoft and later reimplemented by IBM, resemble Win32 threads.

iii. UNIX INTERNATIONAL THREADS:
 Also known as the Solaris thread interface.
 Offered on Solaris 2.x from Sun Microsystems and UnixWare 2 from SCO.
 Closely resembles the POSIX interface.

iv. DCE THREADS:
 Corresponds to Draft 4 of the POSIX standard and is quite similar to the POSIX standard.
 Provided by several operating systems such as Digital UNIX 3.2x, HP/UX from HP, and AIX from IBM.
 Kits are available that implement this interface on top of native Win32 threads on Windows NT.

v. JAVA THREADS:
 Java, a programming language with integrated support for multithreading, provides an interface that is quite different from both the POSIX interface and the Win32 interface.
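Across these packages the basic operations look alike: create a thread, start it, wait for it (join), and query per-thread identity. A minimal sketch using Python's threading module, whose documentation notes its design is loosely based on Java's threading model (the helper names below are illustrative):

```python
import threading

# The operations common to most thread packages: create, start, join,
# and per-thread identity, exposed portably on top of whatever native
# thread package the platform provides.
results = {}

def worker(key):
    # current_thread() returns the identity object of the running thread.
    results[key] = threading.current_thread().name

threads = [threading.Thread(target=worker, args=(i,), name=f"worker-{i}")
           for i in range(3)]
for t in threads:
    t.start()   # create/start the thread of control
for t in threads:
    t.join()    # wait for termination

print(sorted(results.items()))
# → [(0, 'worker-0'), (1, 'worker-1'), (2, 'worker-2')]
```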

The programming interface for multithreaded applications can be broadly categorized into two classes: POSIX-style threads, similar to the POSIX standard interface found in UNIX variants, and Microsoft-style threads, closely related to the Win32 thread interface. POSIX threads adhere to the POSIX standard 1003.1c, offering a portable thread programming interface without specifying the underlying implementation. On the other hand, Win32 threads, available on Windows NT and Windows 95, differ significantly from the POSIX interface and operate as kernel-level threads. Java threads, integrated into the Java programming language, provide a multithreading interface distinct from both POSIX and Win32 threads.

EXPLORING THE BENEFITS AND DETRIMENTS OF MULTITHREADING
The benefits of multithreading in an operating system can be categorized as follows:

 Scalability: Multithreading allows for efficient utilization of multiprocessor architecture, enabling parallel execution of tasks on multiple processors. This enhances concurrency and improves overall system performance, particularly in environments with multiple CPUs.
 Economy: Multithreading reduces the overhead of resource allocation and context switching compared to process creation. Because threads share memory and resources with the process they belong to, the system can manage and switch between threads more efficiently, leading to savings in both time and space.

 Resource Sharing: Threads can share code and data within the same address space, allowing for more efficient communication and synchronization between threads. This eliminates the need for explicit inter-process communication strategies, such as message passing or shared memory, and facilitates seamless collaboration between threads.

 Responsiveness: Multithreading enables a program to continue running while one section is blocked or executing a lengthy operation, improving user responsiveness. By allowing different tasks to be executed concurrently, multithreaded applications can provide a more interactive and responsive user experience.

 Better Communication: Thread synchronization functions can enhance inter-thread communication, allowing for efficient data exchange and coordination between threads. Additionally, sharing large amounts of data across multiple threads within the same address space provides high-bandwidth, low-latency communication, improving overall application performance.

 Utilization of multiprocessor architecture: In a multiprocessor environment, multithreading can maximize the utilization of available processors by allowing multiple threads to execute in parallel. This can significantly improve the performance of multi-CPU machines by increasing the level of concurrency and parallelism.

 Minimized system resource usage: Multithreading has a lower impact on system resources than process creation. The overhead of creating, maintaining, and managing threads is lower than that of processes, making multithreading a more efficient solution for resource-constrained environments.

Some disadvantages stand out as particularly prominent due to their significant impact on multithreaded programming:

 Race Conditions: Race conditions are among the most prominent disadvantages of multithreading. They can lead to unpredictable behavior and bugs that are notoriously difficult to reproduce and debug. Race conditions occur when multiple threads access shared resources simultaneously without proper synchronization, resulting in unexpected outcomes.

 Difficulty in Debugging: Debugging multithreaded programs is notoriously challenging due to the concurrent and non-deterministic nature of thread execution. Identifying the root cause of issues, reproducing bugs, and understanding the interactions between threads require advanced debugging techniques and tools.

 Deadlocks: Deadlocks represent a critical issue in multithreading where two or more threads are blocked indefinitely, each waiting for resources held by the other. Deadlocks can cause the entire program to freeze or crash, leading to significant disruptions in operation.

 Complexity of Synchronization: Synchronizing access to shared resources adds complexity to multithreaded programs. Managing locks, semaphores, and other synchronization mechanisms can introduce overhead and increase the likelihood of bugs if not implemented correctly.

 Increased Memory Consumption: Multithreaded programs often consume more memory than single-threaded ones because of the overhead associated with each thread's stack and program counter. This increased memory consumption can limit the scalability and performance of the program, particularly on systems with limited resources.

These five are particularly prominent due to their significant impact on the reliability, performance, and maintainability of multithreaded applications. Addressing these challenges requires careful design, thorough testing, and adherence to best practices in multithreaded programming.

THREAD SCHEDULING

F. Lifecycle of a thread:

A thread in a computer program undergoes various stages throughout its lifecycle. These stages are as follows:
 New: The lifecycle of a thread commences in the ‘New’ state. A thread remains in this state until the program starts it.
 Runnable: Once started, a thread transitions to the ‘Runnable’ state. In this state, the thread is considered to be executing its assigned task.
 Waiting: A thread enters the ‘Waiting’ state when it is dependent on another thread to complete a specific task. The thread remains in this state until it receives a signal from the other thread indicating that the task has been completed, at which point it transitions back to the ‘Runnable’ state.
 Timed Waiting: A thread in the ‘Runnable’ state can enter the ‘Timed Waiting’ state for a specified time interval. The thread transitions back to the ‘Runnable’ state when either the time interval expires or the event for which the thread was waiting occurs.
 Terminated: Upon completion of its task, a thread transitions to the ‘Terminated’ state, marking the end of its lifecycle. This state is also colloquially referred to as the ‘Dead’ state.

G. Execution of Thread in Operating System

Thread execution refers to the process by which a thread in a computer program performs the instructions it has been assigned. This process begins when the thread is moved from the “new” or “ready” state to the “running” state by the scheduler.

During execution, the thread may cycle through various states, including “running”, “waiting”, and “blocked”, depending on the program’s requirements and the availability of resources. For instance, a thread may enter a “waiting” state if it requires data from another thread or a hardware device, and then return to the “running” state once the data is available.

The execution of a thread continues until the thread has completed its task, at which point it enters the “terminated” state. In a multithreaded environment, multiple threads may be executed concurrently or in parallel, depending on the capabilities of the system.
 Concurrent Execution: Concurrent execution refers to a scenario where a single processor manages resources among multiple threads within a multithreaded process. In this case, although there is only one processor, it gives the illusion of simultaneous execution by rapidly switching between different threads. This is achieved through a process known as context switching, in which the state of a thread is saved and restored, allowing execution to resume from the same point at a later time. Concurrent execution can significantly improve the utilization of computational resources and enhance the overall performance of the system.
 Parallel Execution: Parallel execution, on the other hand, occurs when each thread within a multithreaded process runs simultaneously on a separate processor. This type of execution is possible in multi-processor or multi-core systems. In parallel execution, multiple threads are executed at the same time, leading to a significant reduction in total execution time. This is particularly beneficial for tasks that can be divided into independent subtasks and executed concurrently.

In Java, thread scheduling is a fundamental component that determines which thread should execute or gain access to system resources. This process involves two levels of scheduling:
 User-Level Threads to Kernel-Level Threads: The first level of scheduling involves the mapping of user-level threads (ULTs) to kernel-level threads (KLTs) via a lightweight process (LWP). This scheduling is typically managed by the application developer.
 Kernel-Level Threads Scheduling: The second level of scheduling involves the allocation of kernel-level threads by the system scheduler to perform distinct operating system functions.

Lightweight Process (LWP): LWPs are threads in user space that serve as an interface for ULTs to access physical CPU resources. The thread library determines which thread of a process should run on which LWP and for how long. The number of LWPs created by the thread library depends on the type of application.

In an I/O-bound application, the number of LWPs is equal to the number of ULTs, because when an LWP is blocked on an I/O operation, the thread library needs to create and schedule another LWP to invoke the other ULTs.

In a CPU-bound application, however, the number of LWPs depends solely on the application. Each LWP is associated with a separate kernel-level thread. This dual-level scheduling mechanism allows for efficient utilization of system resources and enhances the overall performance of multithreaded applications.

In real-time systems, the first boundary of thread scheduling extends beyond merely specifying the scheduling policy and priority. It necessitates the specification of two controls for User-Level Threads (ULTs): Contention Scope and Allocation Domain. These are elaborated as follows:

Contention Scope: Contention, in this context, refers to the competition among User-Level Threads (ULTs) for access to kernel resources. This control delineates the extent of such contention and is defined by the application developer using the thread library. Depending on the extent of contention, it is classified into:
 Process Contention Scope (PCS): Here, contention occurs among threads within the same process. The thread library schedules the high-priority PCS thread to access resources via the available Lightweight Processes (LWPs), with the priority specified by the application developer during thread creation.
 System Contention Scope (SCS): In this case, contention occurs among all threads in the system. Each SCS thread is associated with an LWP by the thread library and scheduled by the system scheduler to access kernel resources.

In Linux and UNIX operating systems, the POSIX Pthread library provides a function, `pthread_attr_setscope`, to define the type of contention scope for a thread during its creation.
Allocation Domain: This refers to a set of one or more resources for which a thread is competing. In a multicore system, there may be one or more allocation domains, each consisting of one or more cores. A ULT can be part of one or more allocation domains. Due to the high complexity involved in interfacing with the hardware and software architectural interfaces, this control is not explicitly specified. By default, however, the multicore system provides an interface that influences the allocation domain of a thread.

Consider a scenario where an operating system has three processes (P1, P2, P3) and ten User-Level Threads (T1 to T10) within a single allocation domain. The CPU resources are distributed among all three processes. The amount of CPU allocated to each process and each thread depends on the contention scope, scheduling policy, and priority of each thread as defined by the application developer using the thread library, as well as on the system scheduler. These User-Level Threads have different contention scopes.

H. Thread Scheduling Models

Thread scheduling can be classified based on the scheduling policy, which influences the performance aspects of multithreading. Understanding these models is crucial for designing efficient and responsive multithreaded systems.

 NONPREEMPTIVE SCHEDULING:
 Threads run until they block for a resource or voluntarily yield the processor.
 Also known as coroutines; practical at the user level but not at the kernel level due to potential unfairness.
 Excellent performance with minimal scheduling overhead.
 Reduces dependence on locks, resulting in reduced overhead.
 Disadvantages include the inability to implement preemptive priorities and the lack of time slicing, which limit real-time and GUI applications.
 Example: the Windows 3.x operating system.

 PREEMPTIVE PRIORITY NON-TIME SLICED SCHEDULING:
 The highest-priority thread runs until it voluntarily yields, blocks on a resource, or is pre-empted by a higher-priority thread.
 Priorities are generally fixed; suitable for user-level thread packages.
 Allows real-time and GUI threads to ensure prompt response.
 Inappropriate for kernel-level threads due to potential unfairness and starvation.

 PREEMPTIVE TIME SLICED (ROUND ROBIN) SCHEDULING:
 Threads are allotted time slices and run until they block, voluntarily yield, or exhaust their time slice.
 All threads have equal priority, chosen for execution on a FIFO or LIFO basis.
 The LIFO strategy may lead to starvation.
 Not useful for user-level thread packages due to unnecessary context switching and the absence of priorities.
 Not commonly used in general-purpose operating systems, but may be used in conjunction with other scheduling policies.

 PREEMPTIVE PRIORITY TIME SLICED SCHEDULING:
 Commonly used for kernel-level threads.
 Each process is associated with a priority and has a separate run queue.
 Priorities are adjusted based on the CPU time obtained.
 Fair, and prevents starvation of low-priority processes.
 Complex scheduling algorithm due to constant priority recomputation.
 Generally advisable for kernel-level threads; may impose unnecessary overhead for user-level threads.
 Used in operating systems such as Windows NT, Digital UNIX, and Solaris for scheduling kernel threads.

Each scheduling model offers distinct advantages and disadvantages, influencing its suitability for different types of applications and environments. Choosing the appropriate scheduling policy is essential for achieving optimal performance and responsiveness in multithreaded systems.

CONCLUSION

Citations:
[1] https://www.geeksforgeeks.org/threading-issues/
[2] http://www.nic.uoregon.edu/~khuck/ts/acumem-report/manual_html/multithreading_problems.html
[3] https://www.tutorialspoint.com/major-issues-with-multi-threaded-programs
[4] https://www.geeksforgeeks.org/multithreading-in-operating-system/
[5] https://www.geeksforgeeks.org/lifecycle-and-states-of-a-thread-in-java/