OS Multithreaded Programming
Prepared by: Engr. Marlon Peter G. Balingit
Multithreaded Programming in
Operating Systems (OS)
refers to the ability of an OS to support multiple threads
of execution within a single process. Threads are the
smallest unit of CPU execution and allow programs to
perform multiple tasks concurrently, improving
performance and responsiveness.
NOTE:
A program in execution is called a process.
A thread is the basic unit of execution (the basic unit of CPU utilization).
What is a program?
A process is the actual execution of a program, and several processes can be associated with the same program.
Threads
Definition: A thread is a lightweight process that shares resources like memory, files, and data with
other threads in the same process.
Benefits:
• Faster context switching compared to processes.
• Shared memory space enables easier data sharing.
• Efficient use of the CPU on multicore systems.
Each thread has its own thread ID, program counter, register set, and stack.
A thread shares with the other threads of the same process its code section, data section, and other operating-system resources such as open files and signals.
• Hybrid Model: Combines user-level and kernel-level threads for flexibility and performance.
• Thread States:
New: Thread created but not yet started.
Runnable: Ready to run when the CPU becomes available.
Running: Currently executing on a CPU.
Blocked/Waiting: Waiting for I/O or a resource.
Terminated: Thread has completed execution.
Advantages of Multithreading
• Responsiveness: Improves application
responsiveness, especially in user interfaces.
• Resource Sharing: Threads in a process share
memory and resources, reducing overhead.
• Scalability: Exploits the capabilities of multicore
processors.
• Improved Throughput: Multiple threads perform tasks
concurrently, increasing efficiency.
Challenges in Multithreaded
Programming
• Race Conditions: Multiple threads accessing shared resources simultaneously can lead to inconsistent results.
• Deadlocks: Threads waiting indefinitely for resources held by each other.
• Synchronization: Locks, mutexes, and semaphores must be used to coordinate access to shared resources.
• Context Switching Overhead: Frequent switching between threads may reduce performance.
Common Synchronization
Mechanisms
1. Mutexes: Ensure only one thread accesses a critical section at a time.
2. Semaphores: Allow a fixed number of threads to access a resource.
3. Monitors: High-level abstraction for managing thread synchronization.
4. Atomic Operations: Provide hardware-level guarantees that an operation completes indivisibly.