Threads in OS

Unit 2 - Chapter 1
Threads in OS

• A thread is a flow of execution through the process code: a line of control coursing through the code.
• Also called a lightweight process.
• A thread is the smallest executable unit of a process.
Threads in OS
• A process can have multiple threads.
• Each thread has its own task and its own path of execution within the process.
• All threads of the same process share the memory of that process.
• Because threads of the same process share the same memory, communication between the threads is fast.
• Example: MS Word handles concurrent tasks with separate threads, such as a user interface thread, a document handling thread, a spell check thread, a formatting thread, a printing thread, an autosaving thread, etc.
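
Below is a minimal sketch, assuming POSIX threads on Linux (compile with gcc file.c -pthread), of one process running two threads that share the process's memory; the thread names echo the MS Word example and are purely illustrative.

#include <stdio.h>
#include <pthread.h>

static int word_count = 42;            /* memory shared by all threads of the process */

static void *task(void *arg) {
    /* Each thread has its own stack and path of execution, but sees the
       same globals as every other thread in the process. */
    printf("%s: word_count = %d\n", (const char *)arg, word_count);
    return NULL;
}

int main(void) {
    pthread_t spell, autosave;
    pthread_create(&spell, NULL, task, "spell check thread");
    pthread_create(&autosave, NULL, task, "autosave thread");
    pthread_join(spell, NULL);         /* wait for both threads to finish */
    pthread_join(autosave, NULL);
    return 0;
}

Because both threads read the same variable directly, no copying or inter-process communication is needed; this is why communication between threads is fast.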
Process and Thread

Process:
• An independent whole program in execution.
• Has its own memory space and resources; isolated from other processes.
• Runs independently (isolated from others).
• Communicates with other processes via IPC.

Thread:
• A lightweight unit of execution within a process.
• Shares memory and resources with other threads in the same process.
• Runs concurrently with other threads.
• Communicates with other threads via shared memory, which is simpler and faster than inter-process communication.
Single-threaded vs Multi-threaded
• Threads reside in the same address space as that of the process.
• Every thread has its own thread ID, a program counter, a register set, and a stack.
• Threads share the code section, data section, and other operating system resources, such as open files.
Multithreading

One Process, One Thread
• Definition: Each process has a single thread of execution.
• Example: MS-DOS
• Explanation:
  o A single application runs at a time.
  o Each application runs as a single-threaded process.
  o There is no support for concurrent execution within the application.

Multi-Process, One Thread
• Definition: The OS supports multiple processes, but each process has only one thread of execution.
• Example: Traditional UNIX systems
• Explanation:
  o Multiple processes run concurrently.
  o Each process is independent and has its own address space and resources.
  o Each process contains only one thread, meaning tasks within the process execute sequentially.
Multithreading

One Process, Multi-Thread
• Definition: A single process contains multiple threads that execute concurrently.
• Example: Java Run-time Environment
• Explanation:
  o A single Java process can create multiple threads.
  o A Java program can have a main thread and several other threads performing tasks like handling user input, performing computations, and managing background tasks simultaneously.

Multi-Process, Multi-Thread
• Definition: The OS supports multiple processes, each of which can contain multiple threads.
• Example: Modern OSes such as Windows, Linux, and Solaris
• Explanation:
  o Multiple processes run at the same time.
  o Each application can create multiple threads.
  o A web server can have multiple processes (one for each service or application), and each process can have multiple threads handling different client requests concurrently.
Key benefits of threads

Improved Performance Through Parallelism:
• Threads can run concurrently on multiple CPU cores, leading to better utilization of multi-core processors and improved application performance.

Enhanced Responsiveness:
• Applications remain responsive to user interactions even while performing background tasks. E.g., a word processor can check spelling in the background while the user continues to type.

Resource Sharing:
• Threads within the same process share memory and resources, which facilitates efficient communication.

Simplified Design:
• An application can be structured into modular units where each thread performs a specific task. E.g., a server application can use separate threads to handle different client requests.

Lower Overhead:
• Creating and managing threads is generally cheaper in terms of CPU and memory usage compared to creating and managing processes. This results in lower overhead and faster context switching.

Scalability:
• Threads enable applications to scale effectively on systems with multiple processors. As the number of available CPU cores increases, the application can create more threads to handle additional work.

Better Utilization of I/O Operations:
• While one thread waits for I/O operations to complete, other threads can continue executing.
Thread States

• Key states: Ready, Running, Blocked.
• There is no Suspend state for threads; Suspend is a process-level concept.
• If a process is swapped out, all of its threads are also swapped out.
Thread States - Operations associated with thread state

Spawn
• When a new process is spawned, an initial thread for that process is also spawned.
• A thread within a process may spawn another thread within the same process, providing an instruction pointer and arguments for the new thread.

Block
• When a thread needs to wait for an event, it enters the Blocked state.
• The processor may now turn to the execution of another ready thread in the same or a different process.

Unblock
• When the event for which a thread is blocked occurs, the thread is moved to the Ready queue.

Finish
• When a thread completes, its register context and stacks are deallocated.
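
The four operations map naturally onto POSIX thread calls. Below is a minimal sketch, assuming POSIX threads (compile with -pthread): spawn is pthread_create, block is pthread_cond_wait, unblock is pthread_cond_signal, and finish is the thread returning and being reaped by pthread_join.

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int event_happened = 0;

static void *worker(void *arg) {
    pthread_mutex_lock(&m);
    while (!event_happened)            /* Block: wait for an event */
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
    puts("worker: unblocked, finishing");
    return NULL;                       /* Finish: context reclaimed at join */
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);   /* Spawn */

    pthread_mutex_lock(&m);
    event_happened = 1;                /* the awaited event occurs */
    pthread_cond_signal(&cv);          /* Unblock: worker moves to Ready */
    pthread_mutex_unlock(&m);

    pthread_join(t, NULL);             /* reap the finished thread */
    return 0;
}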
Windows Thread states
Types of Thread
• User-Level Threads
• Kernel-Level Threads
• Hybrid/Combined

User Level Thread
• Thread management is done by the application, using a threads library.
Implementation of User Level Threads
By default, an application begins with a single thread.

This application and its thread are allocated to a single process managed by the kernel.

The application may spawn a new thread to run within the same process; it does this by
invoking the spawn utility in the threads library.

The threads library creates a data structure for the new thread and then passes control to one of
the threads within this process that is in the Ready state, using some scheduling algorithm.

Operations such as thread creation, context switching, and scheduling are handled by the
threads library without involving the kernel.
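
A minimal sketch of this idea, using the ucontext(3) API (obsolescent in POSIX but still available on Linux/glibc): "spawning" builds a new execution context with its own stack, and switching threads is a pure user-mode swapcontext call with no kernel involvement. A real threads library would add a Ready queue and a scheduler.

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, worker_ctx;

static void worker(void) {
    puts("worker: running in a user-level thread");
    swapcontext(&worker_ctx, &main_ctx);   /* yield: user-mode context switch */
    puts("worker: resumed, finishing");
}

int main(void) {
    /* Spawn: create a context with its own stack and entry point. */
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp   = malloc(STACK_SIZE);
    worker_ctx.uc_stack.ss_size = STACK_SIZE;
    worker_ctx.uc_link = &main_ctx;        /* where control goes when worker returns */
    makecontext(&worker_ctx, worker, 0);

    puts("main: dispatching worker");
    swapcontext(&main_ctx, &worker_ctx);   /* run the user-level thread */
    puts("main: worker yielded, resuming it");
    swapcontext(&main_ctx, &worker_ctx);   /* let it run to completion */
    puts("main: worker finished");
    free(worker_ctx.uc_stack.ss_sp);
    return 0;
}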
Advantages of User level Thread
Efficiency

• Switching between user-level threads does not require transitioning to kernel mode.
• This eliminates the overhead associated with mode switching between user mode and
kernel mode.
• As a result, thread switching is faster and more efficient since it avoids the two-mode
switches (user-to-kernel and kernel-to-user)

Compatibility:

• ULTs can run on any operating system as they do not require any special support from the
kernel.
• The threads library, which provides the ULT functionality, is implemented as a set of
application-level functions. This library is shared across applications, enabling thread
management without modifying the underlying OS kernel.
Disadvantages of User Level Thread
Single Processor Utilization:

• A multithreaded application can't use multiple processors.
• Each process is assigned to only one processor by the kernel.
• Therefore, only one thread within a process can execute at any time, even if there are multiple processors available.
• This means ULTs can't achieve true parallel execution across processors.

Processor Utilization:

• Applications needing simultaneous execution across multiple processors won't benefit from ULTs.
• These applications are better suited to Kernel-Level Threads (KLTs) or a mix of ULTs and KLTs (hybrid threading).
Kernel Level Threads
• Kernel-Level Threads (KLT) are threads managed entirely by
the operating system kernel.
• Unlike ULTs, where thread management is handled at the
user-space level by a threads library, KLTs operate with direct
kernel support.
• Each thread is treated as a separate entity with its own
thread control block (TCB) maintained by the kernel.
• Thread creation, scheduling, and synchronization are all
handled by the kernel, providing a robust and efficient
threading model.
• Example: Windows, Linux
Implementation of Kernel Level Threads

Thread Creation
• Applications create threads through system calls provided by the operating system's API.

Thread Control Block (TCB)
• Each thread has its own TCB.
• The TCB contains information such as thread ID, program counter, stack pointer, register values, and scheduling information.
• The kernel manages these TCBs to track the state and execution context of each thread.

Context Switching
• When the kernel switches between threads, it saves the current thread's state (registers, program counter, etc.) into its TCB and loads the context of the next thread to be executed.
• This ensures that threads can run concurrently and efficiently utilize processor resources.
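
As a small illustration that each thread is a separate kernel-managed entity, the sketch below (assuming Linux; compile with -pthread) prints the kernel thread ID of each thread via the gettid system call: every thread reports the same process ID but a distinct thread ID.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <pthread.h>

static void *worker(void *arg) {
    /* The kernel schedules this thread itself, so it has its own TID. */
    printf("worker: pid=%ld tid=%ld\n",
           (long)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    printf("main:   pid=%ld tid=%ld\n",
           (long)getpid(), (long)syscall(SYS_gettid));
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}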
Advantages of Kernel Level Thread
True Parallelism

• KLTs allow multiple threads from the same process to run simultaneously on different
processors. This means tasks can be executed in parallel, speeding up performance.

Improved Responsiveness:

• If one thread in a process is waiting (blocked), the kernel can switch to another thread in
the same process that is ready to run. This keeps applications responsive and efficient.

Support for Multitasking:

• Operating systems can support multiple applications running multiple threads simultaneously, enabling multitasking and efficient use of computing resources.
Disadvantages of Kernel Level Thread
Overhead:

• Switching between threads managed by the kernel requires saving and restoring thread contexts, which adds
processing time and can reduce overall performance.

Complexity:

• KLT management is more intricate than ULT. It involves careful synchronization and resource management to
prevent issues like deadlock

Limited Flexibility:

• Applications that rely solely on KLTs may have fewer options for customizing thread management compared to
user-level thread libraries. This limits how threads can be controlled and optimized for specific tasks.

OS Dependency:

• KLTs heavily rely on specific features and APIs provided by the operating system. This reliance can restrict
portability, meaning applications may need adjustments to run on different operating systems that have different
thread management capabilities.
Combined Approach
• Threads are created and managed in user space and mapped onto kernel-level threads.
• Implemented by a few operating systems (e.g., Solaris).
Summary

• Thread: the smallest executable unit of a process.
• Processes can be single-threaded or multithreaded.
• Thread states: Ready, Running, and Blocked.
• Thread types: User-Level and Kernel-Level.
• User-Level Thread: application managed; a threads library is used to manage threads.
• Kernel-Level Thread: OS managed; TCB maintained by the kernel.
• Combined Approach: threads created in user space and mapped to kernel threads; implemented by a few OSs.
Concurrency
Mutual Exclusion and Synchronization

Concurrency and Shared Resources
• Concurrency: executing multiple processes simultaneously.
• Shared resources are any hardware or software components that are accessed and used by multiple processes or threads concurrently:
  o Memory: a common portion of memory, common global variables
  o Files: reading/writing the same file
  o I/O devices: printers, disks, keyboard, monitor, etc.
  o CPU
Challenges in Concurrency

• Race Condition: when shared resources are accessed simultaneously without proper synchronization.
• Deadlock: when processes are stuck waiting for each other's resources to become available, leading to a standstill.
• Starvation: when some processes or threads are unfairly denied resources they need, despite being ready to use them.
Race Condition
• A race condition is a situation in concurrent programming where the final outcome depends on the sequence or timing of execution of multiple processes.
• Several processes access and manipulate shared data simultaneously.
• The final value of the shared data depends upon which process runs precisely when.
• Example (print spooler directory): in = variable pointing to the next free slot; out = variable pointing to the next file to be printed.
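
A minimal sketch of a race condition with POSIX threads (compile with -pthread): two threads increment a shared counter without synchronization, so read-modify-write updates can be lost and the final value is usually less than the expected 2,000,000 (results vary with timing and optimization level).

#include <stdio.h>
#include <pthread.h>

static long counter = 0;               /* shared data */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 2000000, got %ld\n", counter);
    return 0;
}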
Critical Region
• A critical region is a specific section of code, or a segment of a program, where shared resources are accessed or manipulated.
• Mutual Exclusion: only one process or thread can access a critical section of code at any given time.
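
The race above disappears once the increment is made a critical region. A minimal sketch using a POSIX mutex to enforce mutual exclusion (compile with -pthread):

#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical region */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical region */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("always 2000000: %ld\n", counter);
    return 0;
}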
Achieving Mutual Exclusion
• Disabling Interrupts
  o Temporarily preventing the CPU from responding to hardware interrupts while executing a critical section of code.
  o Interrupts are signals sent by hardware devices (like the keyboard, mouse, or timer) to the CPU, which can preempt the current execution flow to handle the interrupt service routine (ISR).
  o Works only in single-processor systems.
Spinlock / Busy Waiting
• Lock variables: busy waiting / spin waiting.
• Continuously testing a variable until some value appears is called busy waiting.
• It should usually be avoided, since it wastes CPU time.
• Busy waiting is used only when there is a reasonable expectation that the wait will be short.
• A lock that uses busy waiting is called a spin lock.
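
A minimal spin lock sketch, assuming a C11 compiler (compile with -pthread): each thread busy-waits on an atomic test-and-set flag until it acquires the lock, so the critical section must be kept short.

#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy waiting: burns CPU time */
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);
}

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        spin_lock();
        counter++;                     /* short critical section */
        spin_unlock();
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}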
Summary

• Concurrency: executing multiple processes simultaneously.
• Shared Resources: to achieve concurrency, hardware and software resources are shared.
• Challenges: Race Condition, Deadlock, Starvation.
• Critical Region: where shared resources are accessed.
• Mutual Exclusion: at any time, only a single process can enter the critical section.
Achieve Mutual Exclusion

Hardware Solution: Disabling Interrupts
• Low-level technique
• Works well in single-processor systems

Software Solution: Spinlock
• Spin in a loop (busy waiting)
• CPU overhead
• Can cause deadlock
Achieve Mutual Exclusion

• Software Solution
o Spinlocks
o Semaphore
o Mutex
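
For completeness, a minimal sketch of mutual exclusion with a POSIX semaphore initialized to 1 (a binary semaphore); this assumes Linux, where unnamed semaphores are supported, and compiles with -pthread.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;
static long counter = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&sem);                /* down / P: enter critical section */
        counter++;
        sem_post(&sem);                /* up / V: leave critical section */
    }
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);              /* 0 = shared between threads; initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);
    sem_destroy(&sem);
    return 0;
}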
