Thread Important
Threading Issues
Thread Cancellation
Windows Threads
Book Questions:
4.4
23-05-2024
7(a)-------------------
7(b)------------------
7(c)-----------------
7(d)-----------------------
8(b)------------------------
8(c)-------------------
26-05-2024--------
1(a)--------------
1(c)---------------------
In Linux, signals are categorized into two types: signals that can be caught, blocked, or ignored
using a user-defined signal handler, and signals that cannot be caught, blocked, or ignored.
These signals are used to notify processes of various events, including errors, interrupts, and
external control.
1. Signals That Can Be Caught (User-Defined Signal Handlers Allowed)
These signals can be caught by user-defined signal handlers using the signal() or
sigaction() system calls.
The process can define how to respond to these signals by installing a custom signal handler
function.
List of signals that can be caught:
1. SIGHUP – Hangup detected on controlling terminal or death of controlling process
2. SIGINT – Interrupt from keyboard (Ctrl+C)
3. SIGQUIT – Quit from keyboard (Ctrl+\)
4. SIGILL – Illegal instruction
5. SIGTRAP – Trace/breakpoint trap
6. SIGABRT – Abort signal from abort()
7. SIGBUS – Bus error (bad memory access)
8. SIGFPE – Floating point exception
9. SIGUSR1 – User-defined signal 1
10. SIGSEGV – Segmentation fault
11. SIGUSR2 – User-defined signal 2
12. SIGPIPE – Broken pipe: write to a pipe with no readers
13. SIGALRM – Timer signal from alarm()
14. SIGTERM – Termination signal
15. SIGCHLD – Child stopped or terminated
16. SIGCONT – Continue if stopped
17. SIGSTOP – Stop process (listed here for completeness; in fact it cannot be caught,
blocked, or ignored)
18. SIGTSTP – Stop typed at terminal (Ctrl+Z)
19. SIGTTIN – Background process attempting read
20. SIGTTOU – Background process attempting write
21. SIGPOLL – Pollable event (System V)
22. SIGPROF – Profiling timer expired
23. SIGSYS – Bad system call
24. SIGURG – Urgent condition on socket
25. SIGVTALRM – Virtual alarm clock
26. SIGXCPU – CPU time limit exceeded
27. SIGXFSZ – File size limit exceeded
28. SIGWINCH – Window resize signal
29. SIGIO – I/O now possible (same as SIGPOLL)
30. SIGPWR – Power failure
31. SIGSTKFLT – Stack fault on coprocessor (unused on most architectures)
These signals can be caught, blocked, or ignored, except for SIGKILL and SIGSTOP, which are
special and cannot be caught or ignored.
2. Signals That Cannot Be Caught or Ignored (Cannot Have User-Defined
Handlers)
Some signals are unblockable, meaning that they cannot be caught or handled by user-defined
signal handlers.
These signals are usually critical to system behavior.
List of signals that cannot be caught or ignored:
1. SIGKILL – Kill signal
o This signal immediately terminates a process. It is the strongest signal, and no
process can block or handle it.
o It is often used by administrators or the operating system to forcefully terminate
unresponsive or rogue processes.
o Example: kill -9 PID
2. SIGSTOP – Stop signal
o This signal stops (pauses) a process’s execution. It is often used to suspend a
process so that it can later be resumed with the SIGCONT signal.
o Like SIGKILL, it cannot be blocked, caught, or ignored.
o Example: Issued with kill -STOP PID. (Pressing Ctrl+Z in a shell sends the
related, catchable SIGTSTP instead.)
Summary:
Signals that can be caught: All signals except SIGKILL and SIGSTOP can be caught,
blocked, or ignored.
Signals that cannot be caught or ignored: SIGKILL and SIGSTOP are the only two
signals
that cannot have user-defined handlers and cannot be blocked.
By categorizing signals in this way, Linux ensures that certain critical signals (like SIGKILL and
SIGSTOP)
are always honored by the system, allowing administrators and the OS to reliably manage
process control.
2(b)--------------------
3(a)------------
3(b)--------------------
4(b)---------------
4(c)--------------------
When multiple threads attempt to update local and global variables, they can
encounter various issues, particularly concerning data consistency and thread
safety. Here is a detailed explanation of the implications:
Local Variables
1. Isolation:
Each thread has its own stack, which means that local variables are unique to each
thread.
When a thread modifies a local variable, that change is not visible to other threads.
Implication: There are no direct problems arising from multiple threads updating
their own
local variables since these updates do not interfere with each other. Each thread
operates on its own copy of the variable.
2. Example:
If Thread A has a local variable count initialized to 0 and increments it to 1,
this change does not affect Thread B's count, which remains at 0.
Global Variables
1. Shared Access:
Global variables are shared among all threads within the same process.
This means that if one thread updates a global variable, other threads can see this
change immediately.
Implication: This shared access can lead to race conditions if
multiple threads attempt to read and write to the same global variable
simultaneously without proper synchronization.
2. Race Conditions:
A race condition occurs when two or more threads access shared data and try to
change it
at the same time. The final value of the global variable may depend on the timing
of how the threads are scheduled.
Example: If two threads increment a global variable total simultaneously,
they might both read the same initial value before either has written back the
incremented value, leading to lost updates.
3. Synchronization Mechanisms:
To prevent issues with global variables, synchronization mechanisms such as
mutexes or locks should be used.
These ensure that only one thread can modify the global variable at a time.
Example: Using a lock around the code that modifies the global variable ensures
that if
Thread A is updating total, Thread B must wait until Thread A releases the lock
before it can access total.
4. Atomic Operations:
In some cases, atomic operations can be used for simple updates (like increments)
on global variables.
These operations ensure that the read-modify-write sequence is completed without
interruption.
Example: Using atomic increment functions provided by libraries or languages
ensures that increments are handled safely across threads.
Summary of Issues
Local Variables:
No issues arise from multiple threads updating their own local variables since each
thread has its own copy.
Global Variables:
Shared access can lead to race conditions, where one thread’s update may interfere
with another’s.
Proper synchronization mechanisms (like locks or atomic operations) are necessary
to ensure data integrity and consistency.
Conclusion
In conclusion, while multiple threads can safely update their own local variables
without conflict,
they face significant challenges when dealing with global variables due to shared
access.
Race conditions and data inconsistency can arise if proper synchronization is not
implemented.
Therefore, careful management of global variables is essential in multithreaded
programming
to maintain data integrity and avoid unexpected behavior.
5(a)----------------------------
When the main thread terminates before its child processes or subordinate threads,
the outcome depends on how it terminates and on the programming environment.
In C with POSIX threads, returning from main() or calling exit() terminates the
entire process, immediately killing all other threads regardless of their state.
If the main thread instead calls pthread_exit(), only the main thread ends, and
the process keeps running until the remaining threads finish. Child processes
created with fork() are not killed when the parent exits; they continue running
and are reparented (typically to init).
5(b)---------------------------
Here are five examples of when multithreading is beneficial:
1. Improved Performance
Multithreading allows applications to utilize available CPU resources more
efficiently, since independent tasks can run in parallel on separate cores.
2. Enhanced Responsiveness
In user interface applications, multithreading improves responsiveness by allowing
the main thread to remain active while other threads handle time-consuming tasks.
For example, a web browser can continue to respond to user interactions (like
scrolling or clicking)
while a video is loading in another thread. This ensures that the application does
not appear sluggish or unresponsive to the user
3. Resource Utilization
Multithreading enables better utilization of system resources, particularly on
multicore processors. By dividing work into smaller threads, applications can
take full advantage of multiple CPU cores; for example, a server handling
multiple client requests can process those requests concurrently using a
separate thread per request.
4. Scalability
Multithreading enhances the scalability of applications by allowing them to
handle growing workloads with additional threads. This is especially important
for server applications that need to support a large number of simultaneous
connections or transactions.
5. Asynchronous Programming
Multithreading is essential for asynchronous programming, where tasks can run
independently of the main program flow. This is useful for operations like
network communication or file I/O, where waiting for a response could lead to
delays in program execution. By using multithreading, these tasks can be
performed in the background while the main flow continues.
30-05-2024
1(b)---------------------