
Threads & Concurrency

Threading Issues
Thread Cancellation
Windows Threads
Book Question: 4.4
23-05-2024
7(a)-------------------

7(b)------------------
7(c)-----------------

7(d)-----------------------

8(b)------------------------
8(c)-------------------
26-05-2024--------
1(a)--------------

1(c)---------------------
In Linux, signals are categorized into two types: signals that can be caught, blocked, or ignored
using a user-defined signal handler, and signals that cannot be caught, blocked, or ignored.
These signals are used to notify processes of various events, including errors, interrupts, and
external control.
1. Signals That Can Be Caught (User-Defined Signal Handlers Allowed)
These signals can be caught by user-defined signal handlers using the signal() or
sigaction() system calls.
The process can define how to respond to these signals by installing a custom signal handler
function.
List of signals that can be caught:
1. SIGHUP – Hangup detected on controlling terminal or death of controlling process
2. SIGINT – Interrupt from keyboard (Ctrl+C)
3. SIGQUIT – Quit from keyboard (Ctrl+\)
4. SIGILL – Illegal instruction
5. SIGTRAP – Trace/breakpoint trap
6. SIGABRT – Abort signal from abort()
7. SIGBUS – Bus error (bad memory access)
8. SIGFPE – Floating point exception
9. SIGUSR1 – User-defined signal 1
10. SIGSEGV – Segmentation fault
11. SIGUSR2 – User-defined signal 2
12. SIGPIPE – Broken pipe: write to a pipe with no readers
13. SIGALRM – Timer signal from alarm()
14. SIGTERM – Termination signal
15. SIGCHLD – Child stopped or terminated
16. SIGCONT – Continue if stopped
17. SIGTSTP – Stop typed at terminal (Ctrl+Z)
18. SIGTTIN – Background process attempting read
19. SIGTTOU – Background process attempting write
20. SIGPOLL – Pollable event (System V)
21. SIGPROF – Profiling timer expired
22. SIGSYS – Bad system call
23. SIGURG – Urgent condition on socket
24. SIGVTALRM – Virtual alarm clock
25. SIGXCPU – CPU time limit exceeded
26. SIGXFSZ – File size limit exceeded
27. SIGWINCH – Window resize signal
28. SIGIO – I/O now possible (same as SIGPOLL)
29. SIGPWR – Power failure
30. SIGSTKFLT – Stack fault on coprocessor (unused on most architectures)
All of the signals above can be caught, blocked, or ignored. SIGKILL and SIGSTOP are special and cannot be caught, blocked, or ignored; they are covered in the next category.
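For example, a minimal sketch in C (the handler name and message are illustrative) of installing a user-defined handler for one of these catchable signals, SIGINT, using sigaction():

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative handler: keep it async-signal-safe (write() and flag-setting only). */
static void handle_sigint(int signo) {
    (void)signo;
    const char msg[] = "Caught SIGINT (Ctrl+C)\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handle_sigint;   /* custom handler instead of the default action */
    sigemptyset(&sa.sa_mask);        /* block no extra signals while the handler runs */
    sa.sa_flags = 0;

    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    for (;;)
        pause();                     /* wait for signals; press Ctrl+C to trigger the handler */
}

The same sigaction() call made with SIGKILL or SIGSTOP fails with EINVAL, which is how the kernel enforces the second category below.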
2. Signals That Cannot Be Caught or Ignored (Cannot Have User-Defined
Handlers)
Some signals cannot be blocked, caught, or ignored, which means they can never be handled by user-defined signal handlers.
These signals are critical to correct system behavior.
List of signals that cannot be caught or ignored:
1. SIGKILL – Kill signal
o This signal immediately terminates a process. It is the strongest signal, and no
process can block or handle it.
o It is often used by administrators or the operating system to forcefully terminate
unresponsive or rogue processes.
o Example: kill -9 PID
2. SIGSTOP – Stop signal
o This signal stops (pauses) a process's execution. It is often used to suspend a process so that it can later be resumed with the SIGCONT signal.
o Like SIGKILL, it cannot be blocked, caught, or ignored.
o Example: Issued with kill -STOP PID (Ctrl+Z at the terminal sends the related, catchable SIGTSTP instead).

Summary:
 Signals that can be caught: All signals except SIGKILL and SIGSTOP can be caught,
blocked, or ignored.
 Signals that cannot be caught or ignored: SIGKILL and SIGSTOP are the only two signals that cannot have user-defined handlers and cannot be blocked.
By categorizing signals in this way, Linux ensures that certain critical signals (like SIGKILL and SIGSTOP) are always honored by the system, allowing administrators and the OS to reliably manage process control.

2(b)--------------------
3(a)------------
3(b)--------------------
4(b)---------------
4(c)--------------------
When multiple threads attempt to update local and global variables, they can encounter various issues, particularly concerning data consistency and thread safety. Here is a detailed explanation of the implications for both local and global variables in a multithreaded environment.

Local Variables
1. Isolation:
 Each thread has its own stack, which means that local variables are unique to each
thread.
 When a thread modifies a local variable, that change is not visible to other threads.
 Implication: There are no direct problems arising from multiple threads updating their own local variables, since these updates do not interfere with each other. Each thread operates on its own copy of the variable.
2. Example:
 If Thread A has a local variable count initialized to 0 and increments it to 1, this change does not affect Thread B's count, which remains at 0. A short sketch of this isolation follows below.
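A minimal sketch of this isolation in C with POSIX threads (the worker function and count variable are illustrative):

#include <pthread.h>
#include <stdio.h>

/* Each thread gets its own 'count' on its own stack, so the increments never interfere. */
static void *worker(void *arg) {
    int count = 0;            /* local variable: private to this thread */
    count++;                  /* one thread's increment is invisible to the other */
    printf("thread %ld sees count = %d\n", (long)arg, count);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)1L);
    pthread_create(&b, NULL, worker, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;                 /* both threads report count = 1: each had its own copy */
}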
Global Variables
1. Shared Access:
 Global variables are shared among all threads within the same process.
 This means that if one thread updates a global variable, other threads can see this
change immediately.
 Implication: This shared access can lead to race conditions if multiple threads attempt to read and write the same global variable simultaneously without proper synchronization.
2. Race Conditions:
 A race condition occurs when two or more threads access shared data and try to change it at the same time. The final value of the global variable may depend on the timing of how the threads are scheduled.
 Example: If two threads increment a global variable total simultaneously, they might both read the same initial value before either has written back the incremented value, leading to lost updates.
3. Synchronization Mechanisms:
 To prevent issues with global variables, synchronization mechanisms such as
mutexes or locks should be used.
 These ensure that only one thread can modify the global variable at a time.
 Example: Using a lock around the code that modifies the global variable ensures that if Thread A is updating total, Thread B must wait until Thread A releases the lock before it can access total (see the sketch after this list).
4. Atomic Operations:
 In some cases, atomic operations can be used for simple updates (like increments)
on global variables.
 These operations ensure that the read-modify-write sequence is completed without
interruption.
 Example: Using atomic increment functions provided by libraries or languages
ensures that increments are handled safely across threads.
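As referenced above, a minimal sketch of both approaches in C with POSIX threads and C11 atomics (the names total, NTHREADS, and NITERS are illustrative): the mutex serializes updates to one shared counter, while atomic_fetch_add updates another without a lock.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

static long total = 0;                        /* shared global, guarded by the mutex */
static pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long atomic_total = 0;          /* shared global, updated atomically */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&total_lock);      /* only one thread may modify total at a time */
        total++;
        pthread_mutex_unlock(&total_lock);

        atomic_fetch_add(&atomic_total, 1);   /* atomic read-modify-write, no lock needed */
    }
    return NULL;
}

int main(void) {
    pthread_t tids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    /* Both counters end at NTHREADS * NITERS; an unprotected total++ usually would not. */
    printf("mutex total = %ld, atomic total = %ld\n", total, atomic_load(&atomic_total));
    return 0;
}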
Summary of Issues
 Local Variables:
 No issues arise from multiple threads updating their own local variables since each
thread has its own copy.
 Global Variables:
 Shared access can lead to race conditions, where one thread’s update may interfere
with another’s.
 Proper synchronization mechanisms (like locks or atomic operations) are necessary
to ensure data integrity and consistency.
Conclusion
In conclusion, while multiple threads can safely update their own local variables
without conflict,
they face significant challenges when dealing with global variables due to shared
access.
Race conditions and data inconsistency can arise if proper synchronization is not
implemented.
Therefore, careful management of global variables is essential in multithreaded
programming
to maintain data integrity and avoid unexpected behavior.

5(a)----------------------------
When the main thread terminates before its child processes or subordinate threads, the behavior and implications vary based on the programming language and environment. Here is a detailed explanation of what happens in each case:

A. When the Main Thread Terminates Before Child Processes


1. Behavior:
 In C/C++ on UNIX/Linux, when the main thread returns from main() or calls exit(), the entire process terminates, including all of its other threads. Child processes created with fork(), however, are separate processes: they continue running as orphans and are reparented to init (PID 1). (Some operating systems instead enforce cascading termination and kill all children when the parent exits.)
 In contrast, in languages like Java, if the main thread completes its execution but there are still non-daemon threads running, the JVM keeps the application alive until all non-daemon threads have finished executing.
2. Implications:
 Resource Cleanup: When the main thread terminates, it may lead to abrupt termination or orphaning of child processes without proper cleanup, potentially causing resource leaks or incomplete operations.
 Data Integrity: If child processes are performing critical tasks (like writing to a file or processing data), their premature termination can lead to data corruption or loss.
3. Example:
 In a C/C++ program on Linux where the parent creates child processes using fork(), if the parent exits before those children finish their tasks, the children keep running as orphans adopted by init; if the parent never calls wait(), already-terminated children linger as zombies until they are reaped. A minimal sketch follows below.
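A minimal sketch of the fork() case on Linux (the sleep duration and messages are illustrative): the parent exits first, yet the child keeps running as an orphan.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: outlives the parent; getppid() reports init (or a subreaper) once orphaned. */
        sleep(2);
        printf("child %d still running, parent is now %d\n", getpid(), getppid());
        return 0;
    }
    /* Parent: exits immediately without wait(); on Linux the child is not killed by this. */
    printf("parent %d exiting before child %d finishes\n", getpid(), pid);
    return 0;
}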
B. When the Main Thread Terminates Before Subordinate Threads
1. Behavior:
 In POSIX threads, the outcome depends on how the main thread exits: if it calls pthread_exit(), the remaining subordinate threads continue to execute until they complete their tasks; if it returns from main() or calls exit(), the whole process terminates and every thread is killed. If the main thread exits without joining the other threads, their results may be lost or their work left incomplete (a minimal sketch follows after this list).
 In Python, non-daemon threads keep the interpreter alive after the main thread finishes, while daemon threads are stopped abruptly at interpreter shutdown, which can lead to unpredictable behavior or incomplete execution.
2. Implications:
 Thread Completion: Subordinate threads may finish their execution independently of the main thread's state. However, if they rely on resources that are cleaned up when the main thread exits, they may encounter errors.
 Zombie Threads: If subordinate threads are not joined or managed properly, they can become "zombie" threads that consume resources without being cleaned up.
3. Example:
 In Java, if the main method finishes executing while other non-daemon threads are still running, those threads will continue to run until they complete their tasks. However, if they access resources that were cleaned up by the main thread (like shared variables), this can lead to issues.
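As mentioned in item 1, a minimal POSIX-threads sketch of the difference (the worker's sleep and messages are illustrative): calling pthread_exit() in main lets the subordinate thread finish, whereas returning from main() would terminate the whole process, and the worker with it.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    sleep(2);                              /* simulate work that outlives the main thread */
    printf("worker finished after the main thread exited\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    printf("main thread exiting first\n");
    pthread_exit(NULL);   /* the main thread ends, but the process (and worker) lives on; */
                          /* 'return 0;' here instead would kill the whole process and    */
                          /* the worker would never print its message                     */
}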
Summary
 Child Processes:
 On UNIX/Linux, child processes created with fork() are not killed when the parent exits; they continue running as orphans reparented to init (some operating systems instead enforce cascading termination).
 In Java and similar environments, spawned child processes likewise continue running independently of the main thread.
 Subordinate Threads:
 These may continue executing even if the main thread terminates (in languages like
Java).
 If not properly managed (e.g., not joined), they can lead to resource leaks or zombie
states.
Conclusion
In conclusion, when the main thread terminates before its child processes or
subordinate threads,
it can lead to various issues such as abrupt termination of processes, incomplete
execution of tasks by threads,
and potential resource leaks. Proper management through joining threads and
ensuring that critical operations are
completed before exiting is essential for maintaining data integrity and system
stability.

5(b)---------------------------
Here are five examples of when multithreading is beneficial:

1. Improved Performance
Multithreading allows applications to utilize available CPU resources more efficiently by executing multiple threads concurrently. This is particularly advantageous for tasks that can be parallelized, such as processing large datasets or performing complex calculations. For instance, scientific simulations can benefit significantly from multithreading, as different threads can handle different parts of the simulation simultaneously, leading to faster execution times and improved overall performance.

2. Enhanced Responsiveness
In user interface applications, multithreading improves responsiveness by allowing the main thread to remain active while other threads handle time-consuming tasks. For example, a web browser can continue to respond to user interactions (like scrolling or clicking) while a video is loading in another thread. This ensures that the application does not appear sluggish or unresponsive to the user.
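A minimal sketch of this pattern in C with POSIX threads (the "download" is simulated with sleep and the function names are illustrative): a worker thread handles the slow task while the main thread keeps servicing user input.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Simulated long-running task (e.g., loading a video) done off the main thread. */
static void *load_video(void *arg) {
    (void)arg;
    sleep(3);                                /* stands in for slow network/disk I/O */
    printf("video loaded\n");
    return NULL;
}

int main(void) {
    pthread_t loader;
    pthread_create(&loader, NULL, load_video, NULL);

    /* Main thread stays responsive: it keeps handling "user events" while loading runs. */
    for (int i = 0; i < 3; i++) {
        printf("handling user input (scroll/click) %d\n", i + 1);
        sleep(1);
    }

    pthread_join(loader, NULL);
    return 0;
}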

3. Resource Utilization
Multithreading enables better utilization of system resources, particularly on multicore processors. By dividing tasks into smaller threads, applications can take full advantage of multiple CPU cores, leading to more efficient resource utilization and improved overall system performance. For example, a server handling multiple client requests can process these requests concurrently using separate threads for each client, significantly enhancing throughput.

4. Scalability
Multithreading enhances the scalability of applications by allowing them to handle increasing workloads more effectively. As the number of users or tasks grows, additional threads can be spawned to manage the load without requiring significant changes to the application architecture. This is especially important for server applications that need to support a large number of simultaneous connections or transactions.

5. Asynchronous Programming
Multithreading is essential for asynchronous programming, where tasks can run independently of the main program flow. This is useful for operations like network communication or file I/O, where waiting for a response could lead to delays in program execution. By using multithreading, these tasks can be performed asynchronously, allowing the main program to continue executing other tasks without interruption.

These examples illustrate how multithreading can significantly enhance application performance, responsiveness, resource utilization, scalability, and the ability to perform asynchronous operations effectively.

30-05-2024

1(b)---------------------
