OS_Module2_Unit2
Course Code: BCS303
Course Title: Operating Systems
Semester / Year: III / II
Academic Year: 2024-25
Module 2
Java Threads
When a class implements Runnable, it must define a run() method. The code implementing the run()
method is what runs as a separate thread.
Creating a Thread object does not specifically create the new thread; rather, it is the start() method that
creates the new thread. Calling the start() method for the new object does two things:
It allocates memory and initializes a new thread in the JVM.
It calls the run() method, making the thread eligible to be run by the JVM.
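For comparison, a minimal POSIX sketch of the same idea (later parts of this unit use Pthreads). Here pthread_create() combines what Java splits between constructing the Thread object and calling start(), and the start routine plays the role of run():

#include <pthread.h>
#include <stdio.h>

/* The start routine plays the role of Java's run(): this code is
   what executes in the separate thread. */
static void *run(void *arg) {
    (void)arg;
    printf("hello from the new thread\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    /* pthread_create() allocates and initializes the new thread and
       makes it eligible to run in one step. */
    pthread_create(&tid, NULL, run, NULL);
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}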
Threading Issues
The fork() and exec() system calls
Cancellation
Signal Handling
Thread pool
Thread-Specific Data
Scheduler Activations
The fork() and exec() System Calls
The fork() system call is used to create a separate, duplicate process.
The semantics of the fork() and exec() system calls change in a multithreaded program:
If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process
single-threaded?
Some UNIX systems have two versions of fork():
One that duplicates all threads
One that duplicates only the thread that invoked the fork() system call
exec() usually works as normal: the program specified in its parameters replaces the entire running process, including all threads.
Which of the two versions of fork() to use depends on the application.
If exec() is called immediately after forking, then duplicating all threads is unnecessary, as the
program specified in the parameters to exec() will replace the process.
In this instance, duplicating only the calling thread is appropriate.
If, however, the separate process does not call exec() after forking, the separate process should
duplicate all threads.
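A minimal C sketch of the fork-then-exec case described above. On systems where fork() duplicates only the calling thread, the immediate exec() makes the missing duplicate threads irrelevant, because the new program image replaces the entire process. The worker thread and the choice of ls are illustrative only.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* A worker thread that exists only in the parent. */
static void *worker(void *arg) {
    (void)arg;
    for (;;)
        pause();                /* sleep until a signal arrives */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pid_t pid = fork();         /* new process: only the calling thread */
    if (pid == 0) {
        /* Duplicating the worker thread would have been wasted effort:
           exec() replaces the whole process, threads included. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");       /* reached only if exec() fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);      /* parent: wait for the child */
    return 0;
}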
Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
For example, if multiple threads are concurrently searching through a database and one thread returns
the result, the remaining threads might be cancelled.
A thread that is to be cancelled is often referred to as the target thread.
Cancellation of a target thread may occur in two general ways:
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread periodically checks whether it should be
cancelled, allowing it an opportunity to terminate itself in an orderly fashion.
The difficulty with cancellation occurs in situations where:
Resources have been allocated to a cancelled thread, or
A thread is cancelled while in the middle of updating data it is sharing with other threads.
This becomes especially troublesome with asynchronous cancellation.
Often, the operating system will reclaim system resources from a cancelled thread but will not
reclaim all resources.
Therefore, cancelling a thread asynchronously may not free a necessary system-wide resource.
With deferred cancellation, in contrast,
one thread indicates that a target thread is to be cancelled, but cancellation occurs only after the
target thread has checked a flag to determine whether or not it should be cancelled.
The thread can perform this check at a point at which it can be cancelled safely.
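A minimal Pthreads sketch of deferred cancellation, which is the Pthreads default (PTHREAD_CANCEL_DEFERRED): the cancellation request is acted on only at a cancellation point such as pthread_testcancel(), where the thread can terminate safely. The searching loop is a placeholder for real work.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *searcher(void *arg) {
    (void)arg;
    for (;;) {
        /* ... examine the next database record ... */
        pthread_testcancel();   /* explicit safe cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t target;
    pthread_create(&target, NULL, searcher, NULL);

    sleep(1);                   /* let the search run briefly */
    pthread_cancel(target);     /* request cancellation of the target */
    pthread_join(target, NULL); /* wait until it has actually terminated */
    printf("target thread cancelled\n");
    return 0;
}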
Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred.
A signal may be received either synchronously or asynchronously, depending on the source of and the
reason for the event being signaled.
All signals, whether synchronous or asynchronous, follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
Examples of synchronous signals include illegal memory access and division by 0.
If a running program performs either of these actions, a signal is generated.
Synchronous signals are delivered to the same process that performed the operation that caused the signal.
When a signal is generated by an event external to a running process, that process receives the signal asynchronously.
Examples of such signals include terminating a process with specific keystrokes (such as <control>
<C>) and having a timer expire.
Typically, an asynchronous signal is sent to another process.
A signal is handled by one of two possible signal handlers:
1. A default signal handler
2. A user-defined signal handler
Every signal has a default signal handler that is run by the kernel when handling that signal.
This default action can be overridden by a user-defined signal handler that is called to handle the signal.
Signals are handled in different ways.
Some signals (such as changing the size of a window) are simply ignored;
others (such as an illegal memory access) are handled by terminating the program.
Handling signals in single-threaded programs is straightforward:
Signals are always delivered to a process.
However, delivering signals is more complicated in multithreaded programs, where a process may have
several threads. Where, then, should a signal be delivered in a multithreaded program? The options are:
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
The method for delivering a signal depends on the type of signal generated. One common strategy, corresponding to the last option above, is sketched below.
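A minimal POSIX sketch of assigning a specific thread to receive a signal for the whole process: SIGINT is blocked in every thread, and one dedicated thread waits for it synchronously with sigwait().

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_catcher(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                 /* wait synchronously for a signal */
    printf("caught signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);

    /* Block SIGINT here; threads created afterwards inherit the mask. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, signal_catcher, &set);
    pthread_join(tid, NULL);            /* press <control><C> to deliver SIGINT */
    return 0;
}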
Thread Pools
Consider a server that creates a new thread to service each incoming request. Creating a separate thread is certainly superior to creating a separate process, but a multithreaded server nonetheless has two potential problems.
The first issue concerns the amount of time required to create the thread prior to servicing the request,
together with the fact that this thread will be discarded once it has completed its work.
The second issue is more troublesome: if we allow all concurrent requests to be serviced in a new
thread, we have not placed a bound on the number of threads concurrently active in the system.
Unlimited threads could exhaust system resources, such as CPU time or memory.
One solution is to use a Thread Pool.
The general idea behind a thread pool is to create a number of threads at process startup and place them
into a pool, where they sit and wait for work.
When a server receives a request, it awakens a thread from this pool, if one is available, and passes it the
request to service.
Once the thread completes its service, it returns to the pool and awaits more work.
If the pool contains no available thread, the server waits until one becomes free.
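A minimal illustrative Pthreads sketch of this scheme. The fixed pool size, the request type (a bare int), and the queue bound are hypothetical simplifications, and the sketch does not handle a full queue or pool shutdown.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4
#define QUEUE_LEN 16

static int queue[QUEUE_LEN];            /* pending requests */
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

/* Pool threads sit and wait for work on the condition variable. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int request = queue[head];
        head = (head + 1) % QUEUE_LEN;
        count--;
        pthread_mutex_unlock(&lock);
        printf("thread %lu serviced request %d\n",
               (unsigned long)pthread_self(), request);
    }
    return NULL;
}

/* The "server" queues a request and awakens one pool thread. */
static void submit(int request) {
    pthread_mutex_lock(&lock);
    queue[tail] = request;
    tail = (tail + 1) % QUEUE_LEN;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_t tids[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)  /* create the pool at startup */
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int r = 0; r < 8; r++)
        submit(r);
    sleep(1);                            /* let the workers drain the queue */
    return 0;
}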
Thread pools offer these advantages:
Servicing a request with an existing thread is usually faster than waiting to create a thread.
A thread pool limits the number of threads that exist at any one point. This is particularly important
on systems that cannot support a large number of concurrent threads.
The number of threads in the pool can be set heuristically based on factors such as
the number of CPUs in the system,
the amount of physical memory,
and the expected number of concurrent client requests.
More sophisticated thread-pool architectures can dynamically adjust the number of threads in the pool
according to usage patterns. Such architectures provide the further benefit of having a smaller pool,
thereby consuming less memory, when the load on the system is low.
Thread-Specific Data
Threads belonging to a process share the data of the process.
Indeed, this sharing of data provides one of the benefits of multithreaded programming.
However, in some circumstances, each thread might need its own copy of certain data.
We will call such data thread-specific data.
For example, in a transaction-processing system,
we might service each transaction in a separate thread.
Furthermore, each transaction might be assigned a unique identifier.
To associate each thread with its unique identifier, we could use thread-specific data.
Most thread libraries, including Win32 and Pthreads, provide some form of support for thread-specific
data. Java provides support as well.
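A minimal Pthreads sketch of the transaction example above: each thread stores its own identifier under a shared key with pthread_setspecific(), so the same lookup returns a different value in each thread. The transaction IDs are hypothetical.

#include <pthread.h>
#include <stdio.h>

static pthread_key_t id_key;            /* one key, a private value per thread */

static void *transaction(void *arg) {
    pthread_setspecific(id_key, arg);   /* this thread's own copy */
    int *id = pthread_getspecific(id_key);
    printf("servicing transaction %d\n", *id);
    return NULL;
}

int main(void) {
    pthread_key_create(&id_key, NULL);  /* no destructor in this sketch */

    pthread_t tids[3];
    int ids[3] = {101, 102, 103};       /* hypothetical transaction IDs */
    for (int i = 0; i < 3; i++)
        pthread_create(&tids[i], NULL, transaction, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(tids[i], NULL);

    pthread_key_delete(id_key);
    return 0;
}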
Scheduler Activations
The final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models.
Such coordination allows the number of kernel threads to be dynamically
adjusted to help ensure the best performance.
Many systems implementing either the many-to-many or the two-level model
place an intermediate data structure between the user and kernel threads. This
data structure is typically known as a lightweight process, or LWP.
To the user-thread library, the LWP appears to be a virtual processor on
which the application can schedule a user thread to run.
Each LWP is attached to a kernel thread, and it is kernel threads that the
operating system schedules to run on physical processors.
If a kernel thread blocks (such as while waiting for an I/O operation to
complete), the LWP blocks as well.
Up the chain, the user-level thread attached to the LWP also blocks.
An application may require any number of LWPs to run efficiently.
Consider a CPU-bound application running on a single processor.
In this scenario, only one thread can run at once, so one LWP is sufficient.
An application that is I/O intensive may require multiple LWPs to execute, however.
Typically, an LWP is required for each concurrent blocking system call.
Suppose, for example, that five different file-read requests occur simultaneously.
Five LWPs are needed, because all could be waiting for I/O completion in the kernel.
If a process has only four LWPs, then the fifth request must wait for one of the LWPs to return from
the kernel.
One scheme for communication between the user-thread library and the kernel is known as scheduler
activation.
It works as follows:
The kernel provides an application with a set of virtual processors (LWPs), and the application can
schedule user threads onto an available virtual processor.
Furthermore, the kernel must inform an application about certain events.
This procedure is known as an upcall.
Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a
virtual processor.
One event that triggers an upcall occurs when an application thread is about to block.
In this scenario, the kernel makes an upcall to the application informing it that a thread is about to
block and identifying the specific thread.
The kernel then allocates a new virtual processor to the application.
The application runs an upcall handler on this new virtual processor, which saves the state of the
blocking thread and relinquishes the virtual processor on which the blocking thread is running.
The upcall handler then schedules another thread that is eligible to run on the new virtual processor.
When the event that the blocking thread was waiting for occurs, the kernel makes another upcall to
the thread library informing it that the previously blocked thread is now eligible to run.
The upcall handler for this event also requires a virtual processor, and the kernel may allocate a
new virtual processor or preempt one of the user threads and run the upcall handler on its virtual
processor.
After marking the unblocked thread as eligible to run, the application schedules an eligible thread to
run on an available virtual processor.