Threads

Detailed working of threads on Operating Systems

Uploaded by Revanth Shalon

Threads are fundamental units of CPU utilization within a process, enabling concurrent execution of multiple sequences of instructions. Understanding threads involves exploring their types, models, and the mechanisms through which operating systems (OS) schedule them. This overview covers the

Table of Contents
1. Introduction to Threads
2. Types of Threads
• User-Level Threads (ULT)
• Kernel-Level Threads (KLT)
• Lightweight Processes (LWPs)
• Hybrid Threads
• Fiber Threads
• Green Threads
3. Thread Models
• Many-to-One Model
• One-to-One Model
• Many-to-Many Model
• Two-Level Model
4. OS-Level Thread Scheduling
• Preemptive vs. Cooperative Scheduling
• Scheduling Policies
– First-Come, First-Served (FCFS)
– Round Robin (RR)
– Priority Scheduling
– Multilevel Queue Scheduling
– Multilevel Feedback Queue Scheduling
– Real-Time Scheduling Policies
• Context Switching
5. Thread Synchronization and Concurrency Control
6. Advantages and Disadvantages of Different Thread Types
7. Conclusion

Introduction to Threads
A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. Threads allow a process to perform multiple tasks concurrently, improving the efficiency and performance of applications, especially on multi-core or multi-processor systems.
Key characteristics:

• Shared Resources: Threads within the same process share resources
like memory, file handles, and other process-specific data.
• Independent Execution: Each thread has its own execution stack and
program counter, enabling independent execution paths.
• Lightweight: Creating and managing threads typically incurs less overhead compared to full-fledged processes.
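As a minimal illustration of these characteristics, the following Python sketch (Python's `threading` module; all names here are illustrative) shows several threads sharing one process-wide dictionary while each runs its function on its own stack:

```python
import threading

results = {}  # shared by all threads in the process

def worker(name, n):
    # Each thread executes this function on its own stack and with its
    # own program counter, but writes into the shared 'results' dict.
    results[name] = sum(range(n))

threads = [threading.Thread(target=worker, args=(f"t{i}", 10_000))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread to finish

print(len(results), results["t0"])  # 4 49995000
```

Because the dictionary lives in the process's shared memory, no copying is needed between threads; this is also exactly why synchronization (discussed later) becomes necessary.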

Types of Threads
Threads can be categorized based on their management and implementation at
different levels within the OS and runtime environments.

User-Level Threads (ULT)


User-Level Threads are managed entirely in user space, without kernel awareness. The thread library in the application handles thread creation, scheduling, and synchronization.
Characteristics:
• Management: Managed by user-level libraries (e.g., POSIX Threads in
user space).
• Scheduling: Entirely handled by the application, not the OS.
• Context Switching: Faster as it doesn’t require kernel mode transitions.
• Blocking Issues: If one thread performs a blocking system call, the
entire process can block.
• Portability: Highly portable across different operating systems.
Examples:
• Java Green Threads (pre-Java 1.2)
• Many scripting language thread implementations before adopting native
threads

Kernel-Level Threads (KLT)


Kernel-Level Threads are managed directly by the operating system kernel.
The kernel is aware of each thread and schedules them independently.
Characteristics:
• Management: Managed by the OS kernel.
• Scheduling: The OS schedules threads individually, leveraging multiple
processors effectively.
• Context Switching: Slower due to the need for mode switching between
user and kernel space.
• Blocking: One thread’s blocking does not block other threads in the same
process.

• Resource Intensive: Generally more resource-heavy compared to user-level threads.
Examples:
• Windows threads
• Linux Native POSIX Threads (NPTL)

Lightweight Processes (LWPs)


Lightweight Processes are an abstraction that sits between user-level threads
and kernel-level threads, often used in hybrid thread models.
Characteristics:
• Hybrid Approach: Combines user-level and kernel-level features.
• Mapping: Multiple user threads can be mapped to one or more LWPs.
• Scalability: Offers better scalability and flexibility in thread management.
Examples:
• Solaris LWPs
• HP-UX Threads

Hybrid Threads
Hybrid Threads combine aspects of both user-level and kernel-level threading. Multiple user threads are multiplexed onto fewer kernel threads, providing flexibility and efficiency.
Characteristics:
• Combination: Supports both ULT and KLT benefits.
• Scheduling: User-level scheduling combined with kernel-level support.
• Scalability and Efficiency: Improved over purely ULT or KLT models.
Examples:
• Solaris (prior to Solaris 9) used a hybrid model; today, hybrid (M:N) scheduling survives mainly in language runtimes such as Go's scheduler, while Windows and Linux map threads one-to-one.

Fiber Threads
Fibers are lightweight threads that are scheduled by the application rather
than the OS. They are similar to user-level threads but require explicit yielding
by the programmer.
Characteristics:
• Cooperative Scheduling: Requires explicit context switching.
• Execution: Runs within a single OS thread.

• Use Cases: Useful in environments where the programmer has fine-grained control over scheduling.
Examples:
• Fibers in Windows
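A rough analogue of fiber-style cooperative switching can be sketched with Python generators, where `yield` plays the role of an explicit switch call (this illustrates the idea only; it is not the Windows Fiber API):

```python
trace = []  # records the interleaving chosen by the application

def fiber(name):
    trace.append(f"{name}:1")
    yield  # explicit yield point: control returns to the caller
    trace.append(f"{name}:2")

a, b = fiber("A"), fiber("B")
# The application, not the OS, decides when each fiber runs:
for f in (a, b, a, b):
    try:
        next(f)  # resume the fiber until its next yield (or its end)
    except StopIteration:
        pass  # fiber ran to completion

print(trace)  # ['A:1', 'B:1', 'A:2', 'B:2']
```

The interleaving is fully deterministic because every switch point is written out explicitly, which is precisely the trade-off fibers make against preemptive threads.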

Green Threads
Green Threads are threads that are scheduled by a runtime library or virtual
machine (e.g., JVM) instead of natively by the underlying OS.
Characteristics:
• User-Space Implementation: Threads are managed in user space.
• Portability: Highly portable as they don’t rely on OS-specific thread
features.
• Limitations: Less efficient on multi-core systems since the OS sees them
as a single thread.
Examples:
• Early versions of Java before native thread support
• Go’s goroutines (a more advanced M:N design scheduled by the Go runtime)

Thread Models
Thread models define how user-level threads are mapped to kernel-level threads.
The principal models include Many-to-One, One-to-One, Many-to-Many, and
Two-Level.

Many-to-One Model
Description:
• Multiple user-level threads are mapped to a single kernel-level thread.
Advantages:
• Efficient thread management since operations are handled in user space.
• Context switching is fast.
Disadvantages:
• Only one thread can access the kernel at a time, limiting concurrency on
multi-core systems.
• If one thread blocks, the entire process blocks.
Use Cases:

• Environments where threading is not heavily used or parallelism is not
critical.

One-to-One Model
Description:
• Each user-level thread maps to a distinct kernel-level thread.
Advantages:
• Allows true concurrency on multi-processor systems.
• If one thread blocks, others can continue execution.
Disadvantages:
• Higher overhead due to more kernel threads.
• System limits on the number of kernel threads can constrain applications.
Use Cases:
• Systems requiring high concurrency and scalability (e.g., most modern
operating systems like Linux and Windows).

Many-to-Many Model
Description:
• Multiple user-level threads are multiplexed over a smaller or equal number
of kernel-level threads.
Advantages:
• Balances between efficient user-level thread management and concurrent
execution.
• Greater flexibility in scaling threads.
Disadvantages:
• Complex to implement.
• Might not scale as well on highly parallel hardware compared to One-to-One.
Use Cases:
• Systems needing a balance of scalability and performance.

Two-Level Model
Description:
• Combines aspects of both Many-to-Many and One-to-One by allowing
user-level threads to be bound to kernel-level threads (for some user
threads) or multiplexed (for others).

Advantages:
• Offers flexibility to choose between multiplexing and binding.
• Can optimize for both concurrency and resource management.
Disadvantages:
• Increased complexity in implementation.
• Potential for uneven load distribution.
Use Cases:
• Environments requiring both shared and dedicated thread execution.

OS-Level Thread Scheduling


Thread scheduling at the OS level is pivotal in determining how threads are
allocated CPU time and managed for execution. Understanding the scheduling
mechanisms involves recognizing scheduling types, policies, and the constraints
that influence them.

Preemptive vs. Cooperative Scheduling


Preemptive Scheduling Definition: The OS can interrupt and suspend
threads to allocate CPU time to other threads, based on scheduling policies and
priorities.
Characteristics:
• Ensures responsiveness and fairness.
• Prevents scenarios where a single thread monopolizes the CPU.
• More complex to implement due to the need for synchronization mechanisms.
Advantages:
• Better responsiveness and CPU utilization.
• Suitable for general-purpose and multi-user systems.
Disadvantages:
• Requires careful management to avoid race conditions and ensure thread
safety.

Cooperative Scheduling Definition: Threads voluntarily yield control of the CPU, allowing other threads to execute.
Characteristics:
• Simpler to implement.
• Can lead to issues if threads do not yield appropriately.

Advantages:
• Lower overhead since context switches occur less frequently.
• Easier coordination between threads.
Disadvantages:
• Less responsive; one thread can block the entire system by not yielding.
• Not suitable for preemptive multitasking environments.
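The cooperative model can be sketched as a simple user-space loop over Python generators (an illustrative sketch, not an OS interface): each task must `yield` voluntarily, and a task that never yielded would stall every other task.

```python
from collections import deque

def task(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield  # voluntarily give up the CPU

log = []
ready = deque([task("A", 2, log), task("B", 3, log)])
# Cooperative loop: run each task until it yields, then requeue it.
while ready:
    t = ready.popleft()
    try:
        next(t)
        ready.append(t)  # task yielded; it will run again later
    except StopIteration:
        pass  # task finished; drop it from the ready queue

print(log)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1), ('B', 2)]
```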

Scheduling Policies
OS-level thread schedulers use various policies to determine the order and duration of thread execution. Some common scheduling policies include:

First-Come, First-Served (FCFS) Description:


• Threads are scheduled in the order they arrive in the ready queue.
Characteristics:
• Simple and easy to implement.
• Non-preemptive.
Advantages:
• Fairness in terms of arrival order.
Disadvantages:
• Can lead to the “convoy effect,” where short tasks wait behind longer ones,
degrading performance.
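A small simulation (an illustrative helper, not an OS API) makes the convoy effect concrete: under FCFS, each thread's waiting time is simply the sum of all earlier bursts, so a long job at the head of the queue inflates everyone's wait.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each thread under FCFS: sum of all earlier bursts."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # this thread waits for everything before it
        elapsed += b
    return waits

# A long job arriving first makes every short job wait (convoy effect):
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average wait 17
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]   -> average wait 3
```

The same three bursts yield an average wait of 17 or 3 time units depending only on arrival order, which is why FCFS performs poorly for interactive workloads.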

Round Robin (RR) Description:


• Each thread is assigned a fixed time slice (quantum) in a cyclic order.
Characteristics:
• Preemptive.
• Time slices ensure that all threads get CPU time.
Advantages:
• Good responsiveness for interactive systems.
• Simple scheduling algorithm.
Disadvantages:
• Performance dependent on the size of the time quantum.
• Overhead due to frequent context switching with small quanta.
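The quantum's effect can be seen in a small simulation of RR completion times (again an illustrative sketch, not an OS interface):

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Return each thread's completion time under Round Robin."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)  # preempted: go to the back of the queue
        else:
            done[i] = clock  # finished within this time slice
    return done

print(round_robin_completion([5, 3, 1], quantum=2))  # [9, 8, 5]
```

Note how the short 1-unit thread finishes at time 5 instead of waiting behind both longer threads as it would under FCFS; a smaller quantum improves this further at the cost of more context switches.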

Priority Scheduling Description:
• Threads are assigned priorities, and the scheduler selects the thread with
the highest priority.
Characteristics:
• Can be preemptive or non-preemptive.
• May involve dynamic or static priority assignments.
Advantages:
• Critical tasks can be prioritized over less important ones.
• Flexible in managing different types of workloads.
Disadvantages:
• Risk of starvation for low-priority threads.
• May require mechanisms like aging to prevent indefinite postponement.
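A simplified sketch of priority selection with aging follows (the function name and the per-pass aging step are illustrative; real schedulers age priorities over time, which matters most when high-priority work keeps arriving):

```python
import heapq

def priority_schedule_with_aging(jobs, aging=1):
    """jobs: list of (priority, name); lower number = higher priority.
    Each time a job is passed over, its priority improves by 'aging',
    so low-priority jobs cannot be postponed indefinitely."""
    heap = [(p, i, name) for i, (p, name) in enumerate(jobs)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)  # run the highest-priority job
        order.append(name)
        # Age every job still waiting in the queue:
        heap = [(p - aging, i, n) for p, i, n in heap]
        heapq.heapify(heap)
    return order

print(priority_schedule_with_aging([(1, "high"), (5, "low"), (2, "mid")]))
# ['high', 'mid', 'low']
```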

Multilevel Queue Scheduling Description:


• Ready queue is divided into multiple queues based on priority or type of
thread.
• Each queue can have its own scheduling algorithm.
Characteristics:
• Scheduling between queues is commonly fixed-priority and often preemptive.
• Fixed priority levels for queues.
Advantages:
• Organized management of different thread types.
• Efficient for systems with distinct thread categories.
Disadvantages:
• Inflexible; fixed queue assignments can lead to underutilization.
• Complex to manage multiple queues with varying policies.

Multilevel Feedback Queue Scheduling Description:


• Similar to Multilevel Queue but allows threads to move between queues
based on behavior and CPU usage.
Characteristics:
• Dynamic adjustment of priorities.
• Aims to combine responsiveness with fairness.
Advantages:
• Adapts to varying thread behaviors.
• Reduces chances of starvation.

Disadvantages:
• More complex to implement.
• Requires fine-tuning of queue parameters and movement criteria.

Real-Time Scheduling Policies Description:


• Designed for real-time systems where deadlines must be met.
Types:
1. Rate Monotonic Scheduling (RMS):
• Fixed-priority algorithm based on the periodicity of tasks.
• Shorter period implies higher priority.
2. Earliest Deadline First (EDF):
• Dynamic priority assignment based on impending deadlines.
• The thread with the nearest deadline is scheduled first.
Characteristics:
• Guarantees that deadlines are met if the task set is schedulable.
• Preemptive to ensure real-time tasks can meet time constraints.
Advantages:
• Essential for time-critical applications like embedded systems, avionics,
and medical devices.
Disadvantages:
• Less efficient for general-purpose computing.
• Complexity in ensuring all deadlines are met.
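EDF can be sketched as repeatedly running the unfinished task with the nearest deadline; this toy simulation (illustrative only, one time unit per step) also flags any missed deadline:

```python
def edf_schedule(tasks):
    """tasks: list of (name, burst, deadline). Run earliest-deadline-first,
    one time unit per step; return the execution order and whether any
    task completed after its deadline."""
    tasks = [[name, burst, deadline] for name, burst, deadline in tasks]
    clock, order, missed = 0, [], False
    while any(t[1] > 0 for t in tasks):
        # Pick the unfinished task with the nearest deadline:
        t = min((t for t in tasks if t[1] > 0), key=lambda t: t[2])
        t[1] -= 1
        clock += 1
        order.append(t[0])
        if t[1] == 0 and clock > t[2]:
            missed = True  # task finished past its deadline
    return order, missed

order, missed = edf_schedule([("A", 2, 4), ("B", 1, 2), ("C", 2, 10)])
print(order, missed)  # ['B', 'A', 'A', 'C', 'C'] False
```

Task B runs first despite arriving alongside A and C, purely because its deadline is nearest; with total utilization under 100%, EDF meets every deadline here.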

Context Switching
Definition: Context switching is the process of storing the state of a currently
running thread or process so that it can be resumed later, allowing another
thread or process to run.
Components of Context Switching:
1. Saving State: The current thread’s CPU registers, program counter, and
other state information are saved.
2. Loading State: The next thread’s saved state is loaded into the CPU
registers.
3. Updating Queues: The scheduler updates the ready and waiting queues
accordingly.
Considerations:
• Time Overhead: Context switching incurs overhead due to the need to
save and load state information.

• Frequency: High-frequency context switching can degrade system performance.

Thread Synchronization and Concurrency Control


While threads enable concurrent execution, they also introduce challenges related to shared resource access and data consistency. Proper synchronization mechanisms are essential to prevent issues like race conditions, deadlocks, and starvation.
Common Synchronization Mechanisms:
1. Mutexes (Mutual Exclusion):
• Ensure that only one thread can access a critical section at a time.
2. Semaphores:
• Signaling mechanisms that can control access based on counts, useful
for managing limited resources.
3. Monitors:
• Higher-level constructs that combine mutual exclusion and condition
variables, encapsulating synchronization logic.
4. Condition Variables:
• Allow threads to wait for certain conditions to be met before proceeding.
5. Read-Write Locks:
• Enable multiple threads to read concurrently while providing exclusive access for writing.
6. Barriers:
• Synchronize groups of threads at certain points in their execution,
ensuring all reach a barrier before any proceed.
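Several of these mechanisms appear together in the classic bounded-buffer pattern. The sketch below uses Python's `threading.Condition` (a mutex paired with a condition variable); assuming a single producer and a single consumer, the consumption order is deterministic:

```python
import threading
from collections import deque

buffer, MAX = deque(), 2
cond = threading.Condition()  # one mutex plus one condition variable

def producer(items):
    for item in items:
        with cond:                    # acquire the mutex
            while len(buffer) >= MAX: # wait until there is room
                cond.wait()
            buffer.append(item)
            cond.notify_all()         # wake any waiting consumer

consumed = []

def consumer(n):
    for _ in range(n):
        with cond:
            while not buffer:         # wait until data is available
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()         # wake any waiting producer

p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The `while` loops around `cond.wait()` (rather than `if`) guard against spurious wakeups, a convention shared with POSIX condition variables.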
Concurrency Control Challenges:
• Deadlocks: Occur when two or more threads are waiting indefinitely for
each other to release resources.
• Race Conditions: Happen when threads access shared data concurrently without proper synchronization, leading to inconsistent or incorrect outcomes.
• Starvation: Some threads may be perpetually denied necessary resources
to proceed.
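A race condition of this kind, and its mutex-based fix, can be shown in a few lines. In the sketch below the shared counter reaches the correct total only because each read-modify-write is protected by the lock:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, the read-modify-write
            counter += 1  # could interleave and lose increments

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, guaranteed by the mutex
```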

Advantages and Disadvantages of Different Thread Types


User-Level Threads (ULT)
Advantages:

• Fast thread creation and context switching.
• No need for kernel support, enhancing portability.
• Can implement custom scheduling policies.
Disadvantages:
• Limited to single-processor execution; can't utilize multiple cores effectively.
• Blocking system calls by one thread can block the entire process.

Kernel-Level Threads (KLT)


Advantages:
• True concurrent execution on multi-core systems.
• Superior handling of blocking operations; one thread’s block doesn’t affect
others.
• Better integration with OS-level scheduling and management.
Disadvantages:
• Higher overhead for managing threads.
• Potential limits on the number of threads due to kernel constraints.

Lightweight Processes (LWPs)


Advantages:
• Combines benefits of ULT and KLT.
• Improved scalability and performance through flexible mappings.
Disadvantages:
• Increased complexity in implementation.
• May inherit disadvantages from both ULT and KLT depending on usage.

Hybrid Threads
Advantages:
• Flexible and scalable, leveraging the strengths of both ULT and KLT.
• Improved performance across different workloads.
Disadvantages:
• Complex to implement and manage.
• Can inherit synchronization overheads from hybrid designs.

Fiber Threads
Advantages:
• Low overhead compared to full threads.

• Provides fine-grained control over execution.
Disadvantages:
• Requires explicit scheduling by the application.
• Not suitable for applications needing concurrent execution handled by the
OS.

Green Threads
Advantages:
• Highly portable across different platforms.
• Lightweight and efficient for certain use-cases.
Disadvantages:
• Limited concurrency on multi-core systems.
• Less integrated with OS scheduling and resource management.

Conclusion
Threads are indispensable for achieving concurrency and improving the performance of applications, especially in modern multi-core and multi-processor systems. Understanding the various types of threads—such as user-level, kernel-level, lightweight processes, hybrid threads, fibers, and green threads—and their respective advantages and disadvantages is crucial for selecting the right threading model for a given application.
Operating systems employ sophisticated scheduling algorithms and policies to manage thread execution efficiently, balancing factors like responsiveness, fairness, and resource utilization. Preemptive and cooperative scheduling, coupled with diverse scheduling policies like Round Robin, Priority Scheduling, and real-time algorithms, allow OS schedulers to cater to a wide range of application requirements.
Effective use of threads, combined with proper synchronization and concurrency
control mechanisms, enables developers to build robust, efficient, and scalable
applications capable of leveraging the full potential of contemporary hardware
architectures.
