Threads
Table of Contents
1. Introduction to Threads
2. Types of Threads
• User-Level Threads (ULT)
• Kernel-Level Threads (KLT)
• Lightweight Processes (LWPs)
• Hybrid Threads
• Fiber Threads
• Green Threads
3. Thread Models
• Many-to-One Model
• One-to-One Model
• Many-to-Many Model
• Two-Level Model
4. OS-Level Thread Scheduling
• Preemptive vs. Cooperative Scheduling
• Scheduling Policies
– First-Come, First-Served (FCFS)
– Round Robin (RR)
– Priority Scheduling
– Multilevel Queue Scheduling
– Multilevel Feedback Queue Scheduling
– Real-Time Scheduling Policies
• Context Switching
5. Thread Synchronization and Concurrency Control
6. Advantages and Disadvantages of Different Thread Types
7. Conclusion
Introduction to Threads
A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically part of the operating system. Threads allow a process to perform multiple tasks concurrently, improving the efficiency and performance of applications, especially on multi-core or multi-processor systems.
Key characteristics (illustrated by the sketch following this list):
• Shared Resources: Threads within the same process share resources
like memory, file handles, and other process-specific data.
• Independent Execution: Each thread has its own execution stack and
program counter, enabling independent execution paths.
• Lightweight: Creating and managing threads typically incurs less overhead compared to full-fledged processes.
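As a concrete illustration (not part of the original text), here is a minimal POSIX threads (pthreads) sketch in C: two threads update a shared global counter while each also keeps a private counter on its own stack. The names and loop counts are illustrative; compile with a -pthread flag.

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                        /* shared by all threads in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int local = 0;                                    /* lives on this thread's private stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);                    /* shared data requires synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("worker %s: local = %d\n", (const char *)arg, local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");           /* each thread gets its own stack and program counter */
    pthread_create(&t2, NULL, worker, "B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);  /* 200000: both threads updated the same global */
    return 0;
}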
Types of Threads
Threads can be categorized based on their management and implementation at
different levels within the OS and runtime environments.
Kernel-Level Threads (KLT)
Kernel-Level Threads are created, scheduled, and managed directly by the operating system kernel.
Characteristics:
• Resource Intensive: Generally more resource-heavy compared to user-level threads.
Examples:
• Windows threads
• Linux Native POSIX Threads (NPTL)
Hybrid Threads
Hybrid Threads combine aspects of both user-level and kernel-level threading. Multiple user threads are multiplexed onto fewer kernel threads, providing flexibility and efficiency.
Characteristics:
• Combination: Supports both ULT and KLT benefits.
• Scheduling: User-level scheduling combined with kernel-level support.
• Scalability and Efficiency: Improved over purely ULT or KLT models.
Examples:
• Older Solaris releases (prior to Solaris 9) used a hybrid two-level model; modern Windows and Linux, by contrast, use one-to-one kernel threads.
Fiber Threads
Fibers are lightweight threads that are scheduled by the application rather
than the OS. They are similar to user-level threads but require explicit yielding
by the programmer.
Characteristics:
• Cooperative Scheduling: Requires explicit context switching.
• Execution: Runs within a single OS thread.
• Use Cases: Useful in environments where the programmer has fine-
grained control over scheduling.
Examples:
• Fibers in Windows
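A minimal Win32 fiber sketch follows (assuming a Windows build environment; the fiber names, step counts, and output are illustrative). It shows the cooperative hand-off: nothing switches until the running fiber calls SwitchToFiber explicitly.

#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;                     /* fiber identity of the original thread */

static VOID CALLBACK worker_fiber(LPVOID param) {
    (void)param;
    for (int i = 0; i < 3; i++) {
        printf("worker step %d\n", i);
        SwitchToFiber(main_fiber);            /* explicit yield back to the main fiber */
    }
    SwitchToFiber(main_fiber);                /* a fiber procedure must never simply return */
}

int main(void) {
    main_fiber = ConvertThreadToFiber(NULL);  /* the current thread becomes a fiber */
    LPVOID worker = CreateFiber(0, worker_fiber, NULL);

    for (int i = 0; i < 3; i++) {
        printf("main step %d\n", i);
        SwitchToFiber(worker);                /* cooperative hand-off; no preemption occurs */
    }
    DeleteFiber(worker);
    return 0;
}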
Green Threads
Green Threads are threads that are scheduled by a runtime library or virtual
machine (e.g., JVM) instead of natively by the underlying OS.
Characteristics:
• User-Space Implementation: Threads are managed in user space.
• Portability: Highly portable as they don’t rely on OS-specific thread
features.
• Limitations: Less efficient on multi-core systems, since the OS schedules the entire set of green threads as a single kernel thread.
Examples:
• Early versions of Java before native thread support
• Go’s goroutines (runtime-scheduled, though multiplexed across multiple OS threads rather than a single one)
Thread Models
Thread models define how user-level threads are mapped to kernel-level threads.
The principal models include Many-to-One, One-to-One, Many-to-Many, and
Two-Level.
Many-to-One Model
Description:
• Multiple user-level threads are mapped to a single kernel-level thread.
Advantages:
• Efficient thread management since operations are handled in user space.
• Context switching is fast.
Disadvantages:
• Only one thread can access the kernel at a time, limiting concurrency on
multi-core systems.
• If one thread blocks, the entire process blocks.
Use Cases:
• Environments where threading is not heavily used or parallelism is not
critical.
One-to-One Model
Description:
• Each user-level thread maps to a distinct kernel-level thread.
Advantages:
• Allows true concurrency on multi-processor systems.
• If one thread blocks, others can continue execution.
Disadvantages:
• Higher overhead due to more kernel threads.
• System limits on the number of kernel threads can constrain applications.
Use Cases:
• Systems requiring high concurrency and scalability (e.g., most modern
operating systems like Linux and Windows).
Many-to-Many Model
Description:
• Multiple user-level threads are multiplexed over a smaller or equal number
of kernel-level threads.
Advantages:
• Balances between efficient user-level thread management and concurrent
execution.
• Greater flexibility in scaling threads.
Disadvantages:
• Complex to implement.
• Might not scale as well on highly parallel hardware compared to One-to-
One.
Use Cases:
• Systems needing a balance of scalability and performance.
Two-Level Model
Description:
• Combines aspects of both Many-to-Many and One-to-One by allowing
user-level threads to be bound to kernel-level threads (for some user
threads) or multiplexed (for others).
Advantages:
• Offers flexibility to choose between multiplexing and binding.
• Can optimize for both concurrency and resource management.
Disadvantages:
• Increased complexity in implementation.
• Potential for uneven load distribution.
Use Cases:
• Environments requiring both shared and dedicated thread execution.
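In POSIX this mapping choice is visible to programmers through the thread contention-scope attribute. The hedged sketch below requests a one-to-one binding with PTHREAD_SCOPE_SYSTEM; swapping in PTHREAD_SCOPE_PROCESS requests multiplexed (many-to-many or two-level) behavior instead. Which scopes are honored is platform-specific (Linux, for example, supports only system scope), so the call is treated as a request that may fail.

#include <pthread.h>
#include <stdio.h>

static void *task(void *arg) {
    (void)arg;
    puts("running on the requested scope");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    /* Request a dedicated kernel thread (one-to-one). Use PTHREAD_SCOPE_PROCESS
       to request multiplexed scheduling where the platform supports it. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "requested contention scope not supported\n");

    pthread_t t;
    pthread_create(&t, &attr, task, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}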
OS-Level Thread Scheduling
Preemptive vs. Cooperative Scheduling
Under preemptive scheduling, the OS may interrupt a running thread at any time to dispatch another; under cooperative scheduling, a thread keeps the CPU until it explicitly yields.
Cooperative Scheduling
Advantages:
• Lower overhead since context switches occur less frequently.
• Easier coordination between threads.
Disadvantages:
• Less responsive; one thread can block the entire system by not yielding.
• Not suitable for preemptive multitasking environments.
Scheduling Policies
OS-level thread schedulers use various policies to determine the order and duration of thread execution. Some common scheduling policies include:
Priority Scheduling
Description:
• Threads are assigned priorities, and the scheduler selects the thread with
the highest priority.
Characteristics:
• Can be preemptive or non-preemptive.
• May involve dynamic or static priority assignments.
Advantages:
• Critical tasks can be prioritized over less important ones.
• Flexible in managing different types of workloads.
Disadvantages:
• Risk of starvation for low-priority threads.
• May require mechanisms like aging to prevent indefinite postponement.
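As a hedged illustration of priority scheduling on a POSIX system, the sketch below creates a thread under the fixed-priority SCHED_FIFO policy with an explicitly chosen priority. The priority value is illustrative, and on most systems setting a real-time priority requires elevated privileges, so the failure path is shown.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *urgent_task(void *arg) {
    (void)arg;
    puts("high-priority work");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param sp;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* do not inherit the caller's policy */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);              /* fixed-priority, run-until-block policy */

    sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;  /* near-maximum priority (illustrative) */
    pthread_attr_setschedparam(&attr, &sp);

    pthread_t t;
    int rc = pthread_create(&t, &attr, urgent_task, NULL);
    if (rc != 0) {
        /* Commonly fails without sufficient privileges (e.g., EPERM). */
        fprintf(stderr, "pthread_create failed: error %d\n", rc);
        pthread_attr_destroy(&attr);
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}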
Multilevel Feedback Queue Scheduling
Description:
• Threads move between several ready queues based on their observed behavior (for example, how much CPU time they consume), allowing the scheduler to adjust priorities dynamically.
Disadvantages:
• More complex to implement.
• Requires fine-tuning of queue parameters and movement criteria.
Context Switching
Definition: Context switching is the process of storing the state of a currently
running thread or process so that it can be resumed later, allowing another
thread or process to run.
Components of Context Switching:
1. Saving State: The current thread’s CPU registers, program counter, and
other state information are saved.
2. Loading State: The next thread’s saved state is loaded into the CPU
registers.
3. Updating Queues: The scheduler updates the ready and waiting queues
accordingly.
Considerations:
• Time Overhead: Context switching incurs overhead due to the need to
save and load state information.
• Frequency: High-frequency context switching can degrade system performance.
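One simple way to observe this overhead indirectly (a sketch, not from the original text) is to read the voluntary and involuntary context-switch counters that getrusage() maintains for the calling process; the sched_yield() loop here is only a placeholder workload that generates voluntary switches.

#include <sys/resource.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    /* Placeholder workload: repeatedly give up the CPU to provoke switches. */
    for (int i = 0; i < 1000; i++)
        sched_yield();

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);   /* thread blocked or yielded */
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);  /* scheduler preempted it */
    }
    return 0;
}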
Advantages and Disadvantages of Different Thread Types
User-Level Threads (ULT)
Advantages:
• Fast thread creation and context switching.
• No need for kernel support, enhancing portability.
• Can implement custom scheduling policies.
Disadvantages:
• Limited to single-processor execution; can’t utilize multiple cores effectively.
• Blocking system calls by one thread can block the entire process.
Hybrid Threads
Advantages:
• Flexible and scalable, leveraging the strengths of both ULT and KLT.
• Improved performance across different workloads.
Disadvantages:
• Complex to implement and manage.
• Can inherit synchronization overheads from hybrid designs.
Fiber Threads
Advantages:
• Low overhead compared to full threads.
• Provides fine-grained control over execution.
Disadvantages:
• Requires explicit scheduling by the application.
• Not suitable for applications needing concurrent execution handled by the
OS.
Green Threads
Advantages:
• Highly portable across different platforms.
• Lightweight and efficient for certain use-cases.
Disadvantages:
• Limited concurrency on multi-core systems.
• Less integrated with OS scheduling and resource management.
Conclusion
Threads are indispensable for achieving concurrency and improving the performance of applications, especially in modern multi-core and multi-processor systems. Understanding the various types of threads (user-level, kernel-level, lightweight processes, hybrid threads, fibers, and green threads) and their respective advantages and disadvantages is crucial for selecting the right threading model for a given application.
Operating systems employ sophisticated scheduling algorithms and policies to manage thread execution efficiently, balancing factors like responsiveness, fairness, and resource utilization. Preemptive and cooperative scheduling, coupled with diverse scheduling policies like Round Robin, Priority Scheduling, and real-time algorithms, allow OS schedulers to cater to a wide range of application requirements.
Effective use of threads, combined with proper synchronization and concurrency
control mechanisms, enables developers to build robust, efficient, and scalable
applications capable of leveraging the full potential of contemporary hardware
architectures.