
Week 6: Threads
Operating Systems
Threads vs Processes

• A thread is the smallest unit of CPU execution within a process.

• Unlike a process, threads within the same process share resources such as code, data, and files, but each thread has its own registers and stack.

• Threads improve efficiency, especially in parallel applications.


Components of a Thread
A thread has the following three components:

1. Program Counter

2. Register Set

3. Stack Space
Benefits of a thread

• Responsiveness: A program remains responsive even if part of it is blocked.

• Resource Sharing: Threads in a process share memory, reducing overhead (illustrated in the sketch below).

• Efficiency: Creating and switching between threads is faster than creating and switching between processes.

• Scalability: Multi-threading benefits from multi-core processors.
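Since the slides later name POSIX Pthreads as a primary thread library, here is a minimal Pthreads sketch of these ideas (not from the slides; function and variable names are illustrative): two threads share one global variable, while each gets its own stack and arguments.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;              /* lives in the shared data segment */

void *worker(void *arg) {
    int id = *(int *)arg;            /* copied onto this thread's own stack */
    shared_counter++;                /* every thread sees the same variable
                                        (unsynchronized: a data race in general;
                                        see the mutex sketch later) */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    int ids[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);  /* wait for both threads to finish */

    printf("shared_counter = %d\n", shared_counter);
    return 0;
}
```

Compile with `gcc demo.c -pthread`.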


Concurrency vs Parallelism

• Concurrency: Multiple threads make progress seemingly at the same time but may not execute simultaneously. Achieved via time-sharing on a single-core CPU.

• Parallelism: Multiple threads execute simultaneously on multiple cores. True parallel execution requires multi-core processors.
Concurrency in a single-core system
• The CPU switches between threads rapidly using time-slicing.

• Only one thread executes at a given moment, but the switching creates the illusion of simultaneous execution.

• Requires scheduling algorithms to manage which thread runs next (see the sketch below).
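A tiny illustrative sketch (assumed, not from the slides): even when these two threads run on a single core, their printed lines can interleave, because the scheduler time-slices between them. The exact order varies from run to run.

```c
#include <pthread.h>
#include <stdio.h>

/* Two "concurrent" threads: on a single core they never run at the same
   instant, but time-slicing can interleave their output. */
void *chatter(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 5; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, chatter, "thread-A");
    pthread_create(&b, NULL, chatter, "thread-B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```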


Parallelism in multi-core systems
• Threads can run truly simultaneously on separate cores.

• Increases performance for CPU-bound tasks such as computation-heavy applications.

• Requires proper synchronization (mutexes, semaphores) to avoid data inconsistency, as sketched below.
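A minimal sketch of that synchronization point, assuming Pthreads and an illustrative shared counter: without the mutex, the two threads' increments can interleave and lose updates.

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* only one thread may enter at a time */
        counter++;                   /* the protected critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* 200000 with the lock;
                                            unpredictable without it */
    return 0;
}
```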


Types of Parallelism
• Data Parallelism: The same task runs on different parts of the data (e.g., matrix operations).

• Task Parallelism: Different tasks run in parallel (e.g., a web browser rendering a page while downloading files).
Data Parallelism
• In data parallelism, the same operation is performed on different chunks of data simultaneously.

• The workload is divided across multiple processors, but each processor performs the same task on a different subset of the data.

• This is useful when a large dataset can be processed independently in parallel.
Data Parallelism Example
Consider an image processing task where we want to apply a filter to an image.

• The image is divided into four equal parts.

• Each processor applies the filter to its assigned portion at the same time.
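A sketch of this pattern in Pthreads, with a raw byte array standing in for a real image and a brightness halving standing in for the filter (all names and sizes are illustrative assumptions):

```c
#include <pthread.h>
#include <stdio.h>

#define PIXELS   16
#define NTHREADS 4

unsigned char image[PIXELS];         /* stand-in for real image data */

typedef struct { int start, end; } chunk_t;

void *apply_filter(void *arg) {
    chunk_t *c = arg;
    for (int i = c->start; i < c->end; i++)
        image[i] /= 2;               /* the SAME operation on every pixel */
    return NULL;
}

int main(void) {
    for (int i = 0; i < PIXELS; i++)
        image[i] = (unsigned char)(i * 10);

    pthread_t tid[NTHREADS];
    chunk_t  chunk[NTHREADS];
    int per = PIXELS / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        chunk[t].start = t * per;    /* each thread gets one quarter */
        chunk[t].end   = (t + 1) * per;
        pthread_create(&tid[t], NULL, apply_filter, &chunk[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("first pixel after filter: %d\n", image[0]);
    return 0;
}
```

Because the chunks are disjoint, no locking is needed: each thread touches only its own quarter of the data.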
Task Parallelism
• In task parallelism, different tasks (or functions) are executed simultaneously on the same or different data.

• Each processor performs a different part of the task.

• This is useful when a problem can be broken down into multiple subtasks that can be performed independently.
Task Parallelism Example
Consider a web browser where different tasks are handled in parallel:

• CPU 1: Renders the webpage layout
• CPU 2: Downloads images
• CPU 3: Executes JavaScript code
• CPU 4: Handles user input

Here, different processors are performing different tasks at the same time.
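The same idea as a Pthreads sketch: four threads run four different functions. The bodies only print a message standing in for the real browser subsystems; everything here is illustrative.

```c
#include <pthread.h>
#include <stdio.h>

/* Each task is a DIFFERENT function, unlike the data-parallel example. */
void *render_layout(void *arg)   { printf("rendering layout\n");   return NULL; }
void *download_images(void *arg) { printf("downloading images\n"); return NULL; }
void *run_javascript(void *arg)  { printf("running JavaScript\n"); return NULL; }
void *handle_input(void *arg)    { printf("handling user input\n"); return NULL; }

int main(void) {
    void *(*tasks[4])(void *) = { render_layout, download_images,
                                  run_javascript, handle_input };
    pthread_t tid[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, tasks[i], NULL);  /* one task per thread */
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```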
User threads and Kernel threads

User-Level Threads (ULT)
• Managed by the user, not the kernel.
• Treated as single-threaded processes by the OS.
• Faster and smaller than kernel threads.
• Represented by a PC, stack, registers, and a small PCB.
• No kernel involvement in synchronization.

Kernel-Level Threads (KLT)
• Managed by the operating system kernel.
• Slower than user-level threads due to kernel overhead.
• The kernel handles context switching and thread management.

Advantages of User-Level Threads
✅ Faster and easier to create/manage than kernel threads.
✅ Portable across different operating systems.
✅ No need for kernel-mode switching.

Advantages of Kernel-Level Threads
✅ Can run on multiple processors (better performance).
✅ Kernel routines can be multithreaded.
✅ If one thread blocks, other threads in the process can still run.

Disadvantages of User-Level Threads
❌ Cannot take advantage of multiprocessing.
❌ If one thread blocks, the entire process blocks.

Disadvantages of Kernel-Level Threads
❌ Slower to create/manage than user threads.
❌ Requires mode switching, adding overhead.
User threads and Kernel threads
User threads - management is done by a user-level threads library.
Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads

Kernel threads - supported by the kernel.
Examples - virtually all general-purpose operating systems, including:
• Windows
• Linux
• Mac OS X
• iOS
• Android
Multi-threading models

• Many-to-One: Multiple user threads map to one kernel thread (not scalable).

• One-to-One: Each user thread maps to one kernel thread (better concurrency but more resource-intensive).

• Many-to-Many: Multiple user threads map to multiple kernel threads (the best balance).
Many to One
• Many user-level threads mapped to a single kernel thread

• One thread blocking causes all to block

• Multiple threads may not run in parallel on a multicore system, because only one may be in the kernel at a time

• Few systems currently use this model

Examples:
• Solaris Green Threads
• GNU Portable Threads

One to One
• Each user-level thread maps to a kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one
• Number of threads per process sometimes restricted due to overhead

Examples:
• Windows
• Linux
Many to Many
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Example: Windows with the ThreadFiber package
• Otherwise not very common
Multiplexing in threading
In threading, multiplexing refers to multiple user-level threads sharing a limited number of kernel threads. The OS schedules user threads onto available kernel threads dynamically.

📌 Example in the Many-to-Many Model:
• If you have 10 user threads but only 4 kernel threads, the OS will map and schedule these user threads onto the available kernel threads.
• When one user thread finishes or blocks, another user thread takes its place on the same kernel thread (see the sketch below).
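Portable C code cannot drive the kernel's user-to-kernel mapping directly, so the following is a rough user-space analogy only, not the OS mechanism itself: 10 tasks are multiplexed onto 4 worker threads, and each worker claims the next task when its current one finishes (all names are illustrative).

```c
#include <pthread.h>
#include <stdio.h>

#define NTASKS   10
#define NWORKERS 4

int next_task = 0;                   /* index of the next unclaimed task */
pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int task = (next_task < NTASKS) ? next_task++ : -1;  /* claim one */
        pthread_mutex_unlock(&qlock);
        if (task < 0)
            return NULL;             /* queue drained: worker exits */
        printf("worker %ld runs task %d\n", id, task);
    }
}

int main(void) {
    pthread_t tid[NWORKERS];
    for (long w = 0; w < NWORKERS; w++)
        pthread_create(&tid[w], NULL, worker, (void *)w);
    for (int w = 0; w < NWORKERS; w++)
        pthread_join(tid[w], NULL);
    return 0;
}
```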
Multiplexing in different threading models

• Multiplexing happens in Many-to-One, Many-to-Many, and partially in Two-Level models.

• One-to-One does not use multiplexing, since each user thread has its own kernel thread.
Two-level model

Similar to M:M, except that it also allows a user thread to be bound to a kernel thread.
Amdahl’s law
Amdahl's Law helps us understand the maximum speedup we can achieve in a system
when we parallelize a task. It tells us that no matter how many processors we add, some
parts of a program will always be sequential, which limits the overall speedup.

Formula:

$$S(N) = \frac{1}{(1 - P) + \frac{P}{N}}$$

where:
• P = fraction of the program that can be parallelized
• (1 - P) = fraction that remains sequential
• N = number of processors
• S(N) = maximum speedup with N processors
Amdahl’s law Example

The serial portion of an application has a disproportionate effect on the performance gained by adding additional cores.
Amdahl’s law Example
Imagine a highway where:
• 90% of the road (parallel portion) has multiple lanes, allowing many cars to travel at once.
• 10% of the road (serial portion) has only one lane (like a toll booth or narrow bridge), where cars must pass one at a time.

If we add more lanes:

• At 4 lanes, traffic moves faster, but the toll booth still slows everyone down.
• At 64 lanes, most of the highway is ultra-fast, but cars still queue up at the bottleneck.
• With infinite lanes, the maximum speedup is only 10×, because 10% of the road (the serial part) still limits the overall speed.
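Plugging the highway numbers into the formula (P = 0.9, so 1 - P = 0.1) confirms the story:

$$S(4) = \frac{1}{0.1 + 0.9/4} = \frac{1}{0.325} \approx 3.08$$

$$S(64) = \frac{1}{0.1 + 0.9/64} \approx \frac{1}{0.114} \approx 8.77$$

$$\lim_{N \to \infty} S(N) = \frac{1}{0.1} = 10$$

So even with unlimited cores, the 10% serial portion caps the speedup at 10×.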
