that lack native support for multithreading, making them versatile in diverse environments. Additionally, ULTs are straightforward to represent and create, since they only require a program counter, register set, and stack space. Moreover, thread switching is fast because no OS calls need to be made.

However, ULTs have limitations. They exhibit limited coordination between threads and the kernel. For instance, if one thread encounters a page fault, the entire process is blocked, affecting the responsiveness of the application.

Kernel-level threads (KLTs) are managed directly by the kernel, which maintains a master thread table tracking all threads in the system. Unlike ULTs, the kernel is fully aware of and manages KLTs, providing better coordination and resource allocation. This makes KLTs suitable for applications that require frequent blocking, as the kernel can prioritize processes with a larger number of threads. However, KLTs come with their own set of limitations. They tend to be slower and less efficient than ULTs due to the involvement of the kernel in thread operations. Nevertheless, KLTs offer better handling of blocking scenarios, making them valuable in certain contexts where responsiveness and coordination are critical. Several commercial operating systems offer kernel-level threads, including Windows NT, Digital UNIX 3.x, and Linux 2.x.
The comparison between User-Level Threads (ULTs) and kernel-level threads (KLTs) reveals the nuanced trade-offs involved in thread management strategies. ULTs offer flexibility by enabling threads to be managed entirely within user-level libraries, making them suitable for environments with limited resources or where system calls are expensive. In contrast, KLTs provide enhanced coordination and resource allocation by involving the kernel directly, but this comes at the cost of increased overhead. The choice between ULTs and KLTs depends on factors such as application requirements, operating system support, and overall performance considerations. ULTs may be preferable for lightweight and fast thread switching, while KLTs may be more appropriate for applications requiring robust resource management and frequent blocking scenarios. Ultimately, selecting the optimal threading model necessitates a careful evaluation of the specific needs and constraints of the application environment.
3) Hybrid thread model: The hybrid thread model represents a synthesis of user-level and kernel-level threading approaches, wherein one or more user-level threads are multiplexed on top of one or more kernel-level threads within a process. This architecture allows for multiple kernel threads per process, each of which can be independently scheduled by the kernel. By leveraging both user-level and kernel-level threads, the hybrid model combines the advantages of both approaches. However, in hybrid systems, scheduling occurs at two distinct levels: user-level threads are managed by a user-level threads library, while kernel-level threads are scheduled by the kernel, with neither scheduler being aware of the decisions made by the other. This lack of synchronization between schedulers can potentially lead to scheduling conflicts, which may degrade performance. To address this issue, a modification known as scheduler activations has been proposed, but as of now, no commercial operating system offers this feature.
C. Comparison of Process and Thread

Threads, representing single sequential streams within a process, exhibit properties akin to processes, thus earning the designation of "lightweight processes." Each thread is endowed with its own:
- Program counter
- Register set
- Stack space
facilitating independent execution. Despite their sequential nature, threads provide the illusion of parallelism. However, they are not entirely isolated entities, as they share code, data, and operating system resources with other threads within the same process.
Similarities between threads and processes encompass the singular activity of one thread or process at a time and the capability to generate child threads or processes. Additionally, both threads and processes are subject to scheduling by the operating system, which allocates CPU time using diverse scheduling algorithms. Each thread and process maintains its unique execution context, enabling autonomous execution and communication with other threads or processes through inter-process communication (IPC) mechanisms. Moreover, both threads and processes can be preempted by the operating system and terminated as deemed necessary.
However, disparities exist between threads and processes. Processes possess individual address spaces and resources, such as memory and file handles, whereas threads share these resources with the program originator. Processes are subject to scheduling by the operating system, while threads may be scheduled either by the operating system or the program itself. Additionally, process creation and management are typically within the purview of the operating system, whereas threads can be created and managed by either the program or the operating system. Furthermore, inter-process communication typically necessitates dedicated mechanisms, while threads can communicate directly within the same program.

In conclusion, threads, being lighter than processes, excel in concurrent execution within a single program, whereas processes are conventionally utilized for running separate programs or isolating resources between programs.
D. Advantages of Threading in Operating System

Threading is a programming technique that allows multiple tasks to be executed concurrently within the same process, sharing resources and memory. This technique has several advantages over traditional single-threaded programming, including:
- Concurrent Execution: Threads allow for concurrent execution and enable multitasking in a single application. Threads share the same memory space and resources of the process they belong to, allowing for efficient communication and resource utilization.
- Improved Performance: Multithreading can help increase the overall performance of an application, especially on systems with multiple processors or cores. It allows multiple tasks to run concurrently, utilizing the available CPU resources more efficiently.
- Responsiveness: In a single-threaded environment, if a long-running task blocks the main thread, the entire application becomes unresponsive. Multithreading can prevent this issue by running such tasks in separate threads, ensuring the application remains responsive.
- Better Resource Utilization: Multithreading allows better utilization of system resources by keeping the CPU busy while waiting for I/O operations or other tasks.
- Simplified Modeling: Some problems can be more naturally modeled using multiple threads. This makes the program easier to design, understand, and maintain.
- Parallelism: Multithreading enables parallelism, which can lead to significant performance improvements in applications that can be divided into smaller, independent tasks.
- Faster Context Switching: Context switching between threads is faster than between processes, allowing for more efficient use of CPU resources.
- Faster Communication: Threads within the same process can communicate more efficiently than processes, as they share the same memory space.
- Efficient Use of Multiprocessor Architecture: Threads enable the utilization of the multiprocessor architecture to a greater extent, increasing efficiency and throughput.
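To make the responsiveness advantage concrete, the following minimal C sketch (assuming POSIX threads; `long_task` is an illustrative stand-in for any long-running job, not a function named in the text) moves blocking work onto a worker thread so the main thread stays free:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative stand-in for a long-running or blocking job that would
   freeze a single-threaded application if run on the main thread. */
static void *long_task(void *arg) {
    sleep(5);                        /* pretend to do heavy work or slow I/O */
    puts("background task finished");
    return NULL;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, long_task, NULL);

    /* The main thread remains free to handle input, redraw a UI, etc. */
    puts("main thread is still responsive while the task runs");

    pthread_join(worker, NULL);      /* wait for the worker before exiting */
    return 0;
}
```

Compiled with `gcc -pthread`, the message from the main thread appears immediately, whereas a single-threaded version would print nothing until the task completed.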
However, multithreading also introduces complexity and potential issues related to synchronization and concurrency. Developers need to be aware of synchronization, deadlocks, race conditions, and other concurrency-related issues. Synchronization overhead and context switching can also result in additional overhead and reduced performance if not managed efficiently. The behavior of the program can be hard to predict and reproduce, especially when it comes to debugging. The performance benefits of multithreading are limited by the number of available cores or processors in the system. In some cases, excessive use of threads can lead to performance degradation instead of improvement.

POTENTIAL CHALLENGES AND ISSUES IN MULTITHREADED PROGRAMMING

Threading is a programming technique that allows multiple tasks to be executed concurrently within the same process, sharing resources and memory. This technique has several advantages, including increased responsiveness, resource sharing, improved performance, and better code organization. However, it also introduces several issues that can affect the performance and reliability of the system.

One of the issues with multithreaded programming is the use of the fork() and exec() system calls. In a multithreaded program, if one thread calls fork(), it is unclear whether the new process should duplicate all threads or be single-threaded. Some UNIX systems have chosen to have two versions of fork(), one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call. Similarly, if a thread invokes the exec() system call, the program specified as its parameter will replace the entire process, including all threads.
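As a brief illustration of these semantics (POSIX specifies that the child of fork() contains a copy of only the calling thread), consider this C sketch:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *spin(void *arg) {
    for (;;) pause();    /* a second thread that exists only in the parent */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, spin, NULL);

    pid_t pid = fork();  /* POSIX: the child starts with ONE thread, this one */
    if (pid == 0) {
        /* exec() replaces the entire child process image, so the question
           of what happens to other threads disappears at this point. */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(1);        /* reached only if exec fails */
    }
    printf("parent: child %d now runs ls\n", (int)pid);
    return 0;
}
```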
Signal handling is another issue in multithreaded programming. A signal is used in UNIX systems to notify a process that a particular event has occurred. In a multithreaded program, a signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern: 1) a signal is generated by the occurrence of a particular event, 2) the signal is delivered to a process, and 3) once delivered, the signal must be handled. A signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal handler. Every signal has a default signal handler that the kernel runs when handling that signal, which can be overridden by a user-defined signal handler.
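One widely used pattern for taming asynchronous signals in a multithreaded program, sketched below under the assumption of POSIX threads, is to block the signal in every thread and dedicate one thread to receiving it synchronously with sigwait():

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Dedicated signal-handling thread: receives blocked signals
   synchronously instead of being interrupted asynchronously. */
static void *sig_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    for (;;) {
        sigwait(set, &sig);          /* blocks until a signal arrives */
        printf("signal %d handled synchronously\n", sig);
    }
    return NULL;
}

int main(void) {
    sigset_t set;
    pthread_t t;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Block SIGINT in this thread; threads created afterwards inherit
       the mask, so only the sigwait() thread ever sees the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&t, NULL, sig_thread, &set);
    pthread_join(t, NULL);           /* loops until the process is killed */
    return 0;
}
```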
Thread cancellation is the process of terminating a thread before it has been completed. This can occur in two different scenarios: 1) asynchronous cancellation, where one thread immediately terminates the target thread, and 2) deferred cancellation, where the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
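The following C sketch shows deferred cancellation, the POSIX default: pthread_cancel() only requests termination, and the target thread honors the request at a cancellation point (asynchronous cancellation would instead select PTHREAD_CANCEL_ASYNCHRONOUS):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    /* Deferred cancellation (the POSIX default): the thread is only
       cancelled at cancellation points such as sleep() or an explicit
       pthread_testcancel(). */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();   /* safe place to honor a pending cancel */
        sleep(1);               /* also a cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(2);
    pthread_cancel(t);          /* a request, not immediate termination */
    pthread_join(t, NULL);      /* wait until the thread actually exits */
    puts("worker cancelled in an orderly fashion");
    return 0;
}
```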
Thread-local storage is a mechanism that allows each thread to have its own copy of certain data. This is useful in situations where each thread needs to maintain its state or context, such as in a transaction-processing system where each transaction is processed in a separate thread.
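A minimal sketch of thread-local storage with POSIX keys, in the spirit of the transaction example above (`txn_key` and `handle_transaction` are illustrative names, not part of any standard API):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t txn_key;   /* one key, a distinct value per thread */

static void *handle_transaction(void *arg) {
    int *txn_id = malloc(sizeof *txn_id);
    *txn_id = *(int *)arg;
    pthread_setspecific(txn_key, txn_id);   /* this thread's private copy */

    /* Any function called from this thread can now recover its own state. */
    printf("thread sees transaction %d\n",
           *(int *)pthread_getspecific(txn_key));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_key_create(&txn_key, free);     /* destructor frees each copy */
    pthread_create(&t1, NULL, handle_transaction, &id1);
    pthread_create(&t2, NULL, handle_transaction, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```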
Scheduler activations are a scheme for communication between the user-thread library and the kernel. The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. This allows for efficient communication and coordination between the user-thread library and the kernel, improving the performance and responsiveness of the system.

Multithreaded programming is a powerful technique for improving the performance and responsiveness of applications, but it also introduces several issues that can affect the performance and reliability of the system. These issues include the use of the fork() and exec() system calls, signal handling, thread cancellation, thread-local storage, and scheduler activations. Understanding these issues and their implications is essential for developing reliable and efficient multithreaded applications.
MULTITHREADING IN THE OPERATING SYSTEM

Multithreading in operating systems is a technique that allows multiple threads to run concurrently within a single process. A thread is a lightweight sub-process, the smallest unit of processing, and is a separate path of execution that shares the same memory area with other threads in the same process. The concept of multithreading involves understanding two fundamental terms: a process and a thread. A process is a program being executed, and a thread is a small lightweight process within a process. In a multithreaded program, multiple threads can run simultaneously, allowing for improved responsiveness and resource utilization.
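A minimal C illustration of this idea, assuming POSIX threads: two threads created inside one process run concurrently and share its address space:

```c
#include <pthread.h>
#include <stdio.h>

static const char *shared = "shared data";  /* one copy, visible to all threads */

static void *run(void *arg) {
    /* Both threads read the same global: same memory area, same process. */
    printf("thread %s sees: %s\n", (char *)arg, shared);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, run, "A");
    pthread_create(&b, NULL, run, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```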
There are different multithreading models, including many-to-one, one-to-one, and many-to-many, each with its advantages and disadvantages. For example, the many-to-one model maps many user-level threads to one kernel thread, facilitating an effective context-switching environment but not taking advantage of hardware acceleration in multithreaded processes or multi-processor systems. The one-to-one model maps each user-level thread to a single kernel-level thread, allowing for parallel execution but causing overhead when generating new user threads. The many-to-many model is a compromise between the other two models, allowing for multiple user-level and kernel-level threads with the ability to schedule another thread for execution when one thread blocks.
E. Multithreading Models

There are different multithreading models, including many-to-one, one-to-one, and many-to-many. Each model has its advantages and disadvantages, and the choice of model depends on the specific requirements of the application. Three established multithreading models classify the relationship between user-level threads and kernel-level threads:

Many-to-One Multithreading Model:
In the many-to-one model, multiple user-level threads are mapped to a single kernel-level thread. This setup facilitates efficient context switching, making it easy to implement even in simple kernels without native thread support. However, a significant drawback of this model is its inability to fully utilize hardware acceleration in multithreaded processes or multi-processor systems. Since there is only one kernel-level thread scheduled at any given time, the entire process can be blocked if any thread blocks. Moreover, all thread management is handled in the user space, further limiting its scalability.
One-to-One Multithreading Model:
The one-to-one model assigns a single user-level thread to a corresponding kernel-level thread. This approach allows for true parallel execution of multiple threads. However, creating a new user thread necessitates creating a new kernel thread, which introduces overhead and can potentially hinder the performance of the parent process. To mitigate this, operating systems like Windows and Linux impose limits on the number of threads to control resource consumption.
i. POSIX STYLE THREADS:
- Defined by the POSIX standard 1003.1c, established in 1995.
- Specifies a portable thread programming interface without defining the underlying implementation.
- Includes user-level thread packages like Provenzano's Pthreads, kernel-level threads like LinuxThreads, and hybrid thread packages like Solaris threads.

ii. MICROSOFT STYLE THREADS:
- Includes Win32 threads and OS/2 threads.
- Win32 threads are available on Windows NT and Windows 95 and are kernel-level threads.
- OS/2 threads, initially developed by Microsoft and later reimplemented by IBM, resemble Win32 threads.

iii. UNIX INTERNATIONAL THREADS:
- Also known as the Solaris thread interface.
- Offered on Solaris 2.x from Sun Microsystems and UnixWare 2 from SCO.
- Closely resembles the POSIX interface.

iv. DCE THREADS:
G. Execution of Thread in Operating System

Thread execution refers to the process by which a thread in a computer program performs the instructions it has been assigned. This process begins when the thread is moved from the "new" or "ready" state to the "running" state by the scheduler. During execution, the thread may cycle through various states including "running", "waiting", and "blocked", depending on the program's requirements and the availability of resources. For instance, a thread may enter a "waiting" state if it requires data from another thread or a hardware device, and then return to the "running" state once the data is available.

The execution of a thread continues until the thread has completed its task, at which point it enters the "terminated" state. In a multithreaded environment, multiple threads may be executed concurrently or in parallel, depending on the capabilities of the system.
- Concurrent Execution: Concurrent execution refers to a scenario where a single processor successfully manages resources among multiple threads within a multithreaded process. In this case, although there is only one processor, it gives the illusion of simultaneous execution by rapidly switching between different threads. This is achieved through a process known as context switching, where the state of a thread is saved and restored, allowing execution to resume from the same point at a later time. Concurrent execution can significantly improve the utilization of computational resources and enhance the overall performance of the system.
- Parallel Execution: Parallel execution, on the other hand, occurs when each thread within a multithreaded process runs simultaneously on a separate processor. This type of execution is possible in multi-processor or multi-core systems. In parallel execution, multiple threads are executed at the same time, leading to a significant reduction in the total execution time. This is particularly beneficial for tasks that can be divided into independent subtasks and executed concurrently.
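As a sketch of parallel execution, under the illustrative assumption of a four-core machine, each thread below sums an independent slice of a range, so the slices can genuinely run at the same time on separate cores:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 4                      /* assume four cores, purely for illustration */

static long partial[N];          /* one slot per thread: no locking needed */

static void *sum_range(void *arg) {
    long id = (long)arg;         /* thread index passed via the void* argument */
    long s = 0;
    /* Each thread sums an independent slice, so the work can proceed
       in parallel on separate cores. */
    for (long i = id * 1000000L; i < (id + 1) * 1000000L; i++)
        s += i;
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[N];
    long total = 0;
    printf("cores available: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, sum_range, (void *)i);
    for (long i = 0; i < N; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %ld\n", total);
    return 0;
}
```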
In Java, thread scheduling is a fundamental component that determines which thread should execute or gain access to system resources. This process involves two levels of scheduling:

Lightweight Process (LWP): LWPs are threads in the user space that serve as an interface for ULTs to access physical CPU resources. The thread library determines which thread of a process should run on which LWP and for how long. The number of LWPs created by the thread library is contingent on the type of application.

In an I/O-bound application, the number of LWPs is equivalent to the number of ULTs. This is because when an LWP is blocked on an I/O operation, the thread library needs to create and schedule another LWP to invoke the other ULT.

However, in a CPU-bound application, the number of LWPs depends solely on the application. Each LWP is associated with a separate kernel-level thread. This dual-level scheduling mechanism allows for efficient utilization of system resources and enhances the overall performance of multithreaded applications.
In real-time systems, the first boundary of thread scheduling extends beyond merely specifying the scheduling policy and priority. It necessitates the specification of two controls for User-Level Threads (ULTs): Contention Scope and Allocation Domain. These are elaborated as follows:

Contention Scope: Contention, in this context, refers to the competition among User-Level Threads (ULTs) for access to kernel resources. This control delineates the extent of such contention and is defined by the application developer using the thread library. Depending on the extent of contention, it is classified into:
- Process Contention Scope (PCS): Here, contention occurs among threads within the same process. The thread library schedules the high-priority PCS thread to access resources via available Lightweight Processes (LWPs), with the priority specified by the application developer during thread creation.
- System Contention Scope (SCS): In this case, contention occurs among all threads in the system. Each SCS thread is associated with an LWP by the thread library and scheduled by the system scheduler to access kernel resources.

In Linux and UNIX operating systems, the POSIX Pthread library provides a function, `pthread_attr_setscope`, to define the type of contention scope for a thread during its creation.
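A brief sketch of selecting a contention scope with the POSIX attribute call mentioned above (note that some systems, Linux among them, implement only PTHREAD_SCOPE_SYSTEM):

```c
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) {
    puts("thread running with system contention scope");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t t;
    int scope;

    pthread_attr_init(&attr);
    /* SCS: compete with every thread in the system, scheduled by the kernel.
       PTHREAD_SCOPE_PROCESS would request PCS where the system supports it. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_attr_getscope(&attr, &scope);
    printf("scope is %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system (SCS)" : "process (PCS)");

    pthread_create(&t, &attr, work, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```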
Allocation Domain: This refers to a set of one or more resources for which a thread is competing. In a multicore system, there may be one or more allocation domains, each consisting of one or more cores. A ULT can be part of one or more allocation domains. Due to the high complexity involved in interfacing with hardware and software architectural interfaces, this control is not explicitly specified. However, by default, the multicore system will have an interface that influences the allocation domain of a thread.

Consider a scenario where an operating system has three processes (P1, P2, P3) and ten User-Level Threads (T1 to T10) within a single allocation domain. The CPU resources are distributed among all three processes. The amount of CPU resources allocated to each process and each thread depends on the contention scope, scheduling policy, and priority of each thread as defined by the application developer using the thread library. It also depends on the system scheduler. These User-Level Threads have different contention scopes.
level thread packages. operating-system/
Allows real-time and GUI threads to ensure prompt [5] https://www.geeksforgeeks.org/lifecycle-and-states-of-a-
response. thread-in-java/
Inappropriate for kernel-level threads due to
potential unfairness and starvation. ACKNOWLEDGMENTS
“Acknowledgment(s)” is spelled without an “e” after the
“g” in American English.
PREEMPTIVE TIME SLICED (ROUND ROBIN) SCHEDULING:
- Threads are allotted time slices and run until blockage, voluntary yield, or time-slice exhaustion.
- All threads have equal priority, chosen for execution on a FIFO or LIFO basis.
- The LIFO strategy may lead to starvation.
- Not useful for user-level thread packages due to unnecessary context switching and the absence of priorities.
- Not commonly used in general-purpose operating systems but may be used in conjunction with other scheduling policies.

PREEMPTIVE PRIORITY TIME SLICED SCHEDULING:
- Commonly used for kernel-level threads.
- Each process is associated with a priority and has a separate run queue.
- Priorities are adjusted based on CPU time obtained.
- Fair and prevents low-priority process starvation.
- Complex scheduling algorithm due to constant priority recomputation.
- Generally advisable for kernel-level threads; may impose unnecessary overhead for user-level threads.
- Used in operating systems like Windows NT, Digital UNIX, and Solaris for scheduling kernel threads.

Each scheduling model offers distinct advantages and disadvantages, influencing the suitability for different types of applications and environments. Choosing the appropriate scheduling policy is essential for achieving optimal performance and responsiveness in multithreaded systems.

CONCLUSION

Threads are lightweight units of execution that share the resources of their enclosing process, and the choice among user-level, kernel-level, and hybrid threading models, as well as among scheduling policies, involves trade-offs between switching speed, coordination with the kernel, and resource management. Selecting the model and policy that match an application's requirements is essential for building efficient, responsive, and reliable multithreaded systems.

Citations:
[1] https://www.geeksforgeeks.org/threading-issues/
[2] http://www.nic.uoregon.edu/~khuck/ts/acumem-report/manual_html/multithreading_problems.html
[3] https://www.tutorialspoint.com/major-issues-with-multi-threaded-programs
[4] https://www.geeksforgeeks.org/multithreading-in-operating-system/
[5] https://www.geeksforgeeks.org/lifecycle-and-states-of-a-thread-in-java/