OS - CO3
Thread
A thread is the smallest unit of execution within a process. Each thread consists of a
program counter (which points to the next instruction), a set of registers (which store current
working variables), and a stack (which keeps the history of function calls).
Threads are also referred to as lightweight processes. They help improve application
performance by enabling parallel execution. Multiple threads within the same process share
the same address space and resources, which allows for efficient communication and data
sharing.
Process vs Thread
Definition: A process is an independent program in execution, while a thread is a segment (part) of a process.
Context switching: Slower for processes, faster for threads.
Types of Threads
There are two main types of threads in operating systems:
1. User-level Threads: Managed entirely in user space, without kernel support.
2. Kernel-level Threads: Managed directly by the operating system kernel, with each
thread visible to the OS.
Multithreading Models
User threads need to be mapped to kernel threads using specific models. The three main
models are:
1. Many-to-One Model: Many user-level threads are mapped to a single kernel thread. Thread management is fast because it happens in user space, but if one thread makes a blocking system call, the whole process blocks.
2. One-to-One Model: Each user thread is mapped to its own kernel thread, giving more concurrency at the cost of creating a kernel thread for every user thread.
3. Many-to-Many Model: Many user threads are multiplexed onto a smaller or equal number of kernel threads, combining the flexibility of the first model with the concurrency of the second.
POSIX Threads (Pthreads)
POSIX threads, commonly referred to as Pthreads, provide a standard API for thread
creation and management in UNIX-like systems. Some commonly used functions include:
pthread_create(): Creates a new thread.
pthread_join(): Waits for a thread to complete.
pthread_mutex_lock(): Locks a mutex to protect shared resources.
pthread_mutex_timedlock(): Tries to lock a mutex, waiting until a specified timeout if it is already held.
pthread_cond_wait(): Waits for a condition variable.
pthread_cond_signal(): Signals a thread waiting on a condition variable.
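A minimal sketch of how the first two calls fit together (the worker function and the value it prints are illustrative, not from a particular system):

#include <pthread.h>
#include <stdio.h>

/* Illustrative thread function: receives a number, prints it, and exits. */
static void *worker(void *arg) {
    printf("Hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    /* pthread_create(): start a new thread running worker() */
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    /* pthread_join(): wait for the thread to complete */
    pthread_join(tid, NULL);
    return 0;
}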
Definition of Concurrency:
Concurrency means making progress on multiple tasks during overlapping periods of time. In
operating systems, concurrency is the ability to handle more than one process or thread at
once. It does not mean that everything executes at exactly the same instant; rather, the CPU
switches between tasks quickly enough that they appear simultaneous to the user.
Conclusion:
Concurrency is all about efficiently switching between tasks.
It helps in better use of CPU time.
Even if tasks don’t run truly in parallel, they appear to.
Advantages of Concurrency
Faster execution
Better CPU and resource usage
Increased system responsiveness
Easy to scale
Improved fault tolerance
Disadvantages of Concurrency
Hard to manage shared resources
Bugs are difficult to find
Locking may slow down the system
Race conditions may occur
Can lead to deadlocks
Process Synchronization
Process Synchronization means managing how different processes work together when they
need to use the same resources, like memory or files. In a computer system where many
processes run at the same time, they may need to use shared data or resources. If these
processes are not controlled properly, they might create problems like incorrect data or
system errors.
The main goal of process synchronization is to make sure that shared resources are used
safely and correctly by only one process at a time. This helps to avoid problems such as
data corruption or unexpected behavior.
Types of Processes
There are two main types of processes based on how they work with each other:
1. Independent Processes – These processes do not affect each other. They can run
separately without causing any problems for the other.
2. Cooperative Processes – These processes depend on each other. The result of
one process may affect the other, so they need to be carefully synchronized to avoid
errors.
Race Condition
A race condition happens when two processes try to access and change the same resource
at the same time. For example, if two processes try to change the value of a variable called
counter, the final value of counter might be incorrect because both changes can happen at
the same time. The order of execution matters, and this can lead to wrong results.
To avoid race conditions, we must make sure that only one process changes the shared data
at a time. This is done using synchronization methods.
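As a rough sketch of the counter example above, two threads each increment a shared counter; the mutex lines show the fix, and removing them usually produces a wrong final value (names such as increment are illustrative):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared data */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);    /* without this lock, a race condition occurs */
        counter++;                 /* read-modify-write of shared data */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 with the lock; often less without it */
    return 0;
}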
Critical Section Problem
A critical section is a part of the program where a process accesses shared resources. If
more than one process enters its critical section at the same time, they may change the data
in an unsafe way.
The challenge is to create a way for processes to take turns and avoid entering their critical
sections at the same time. This is called solving the critical section problem. Each process
must request permission before entering the critical section, and after finishing, it should
allow others to enter.
Peterson’s Solution
Peterson’s solution is a simple algorithm to solve the critical section problem for two
processes. It uses two shared variables:
flag[2] – tells whether a process wants to enter the critical section.
turn – tells whose turn it is to enter the critical section.
Each process sets its flag to true and gives the turn to the other process. Then it waits if the
other process also wants to enter and it’s the other process’s turn. Otherwise, it enters the
critical section safely.
This method follows all three rules: mutual exclusion, progress, and bounded waiting.
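A sketch of Peterson’s algorithm in C for two processes numbered 0 and 1. This is the textbook form; on modern hardware the shared variables would also need atomic/volatile handling and memory barriers, so treat it as illustrative rather than production code:

/* Shared variables for Peterson's solution (two processes: 0 and 1). */
int flag[2] = {0, 0};   /* flag[i] == 1 means process i wants to enter      */
int turn = 0;           /* whose turn it is when both want to enter at once */

void enter_critical_section(int i) {    /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = 1;                        /* I want to enter            */
    turn = other;                       /* but let the other go first */
    while (flag[other] && turn == other)
        ;                               /* busy-wait while the other is interested
                                           and it is the other's turn */
}

void exit_critical_section(int i) {
    flag[i] = 0;                        /* I am no longer interested */
}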
1. Producer-Consumer Problem
In this problem, a producer process creates items and places them into a shared, fixed-size
buffer, while a consumer process removes and uses them. A common analogy is a chef
(producer) and a waiter (consumer): the two must synchronize their actions, because the chef
cannot put out food if there is no space, and the waiter cannot serve if there is nothing to serve.
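A sketch of the bounded-buffer version using POSIX semaphores; the buffer size, item values, and loop counts are illustrative assumptions:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                       /* illustrative buffer size */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                /* counts free slots (starts at N)   */
sem_t full_slots;                 /* counts filled slots (starts at 0) */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);            /* wait if there is no space (the chef) */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;                 /* place the item in the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);             /* signal that an item is available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);             /* wait if there is nothing to serve (the waiter) */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];            /* remove the item */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}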
2. Readers-Writers Problem
This problem involves a shared resource (like a database) that is accessed by
multiple readers and writers.
Multiple readers can read at the same time.
But if a writer is writing, no other reader or writer can access it.
There are two versions:
First readers-writers problem (Reader priority): Readers are not kept waiting if no
writer is active.
Second readers-writers problem (Writer priority): Writers are not starved by
continuous readers.
We use semaphores to protect the readCount (number of readers) and to allow
mutual exclusion when needed.
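A sketch of the reader-priority version, using a semaphore for exclusive access and a mutex to protect readCount; the read and write bodies are placeholders:

#include <pthread.h>
#include <semaphore.h>

sem_t rw_mutex;                     /* held by a writer, or by readers as a group;
                                       initialise once with sem_init(&rw_mutex, 0, 1) */
pthread_mutex_t read_count_lock = PTHREAD_MUTEX_INITIALIZER;
int readCount = 0;                  /* number of readers currently reading */

void reader(void) {
    pthread_mutex_lock(&read_count_lock);
    readCount++;
    if (readCount == 1)
        sem_wait(&rw_mutex);        /* first reader locks out writers */
    pthread_mutex_unlock(&read_count_lock);

    /* ... read the shared data (many readers may be here at once) ... */

    pthread_mutex_lock(&read_count_lock);
    readCount--;
    if (readCount == 0)
        sem_post(&rw_mutex);        /* last reader lets writers in again */
    pthread_mutex_unlock(&read_count_lock);
}

void writer(void) {
    sem_wait(&rw_mutex);            /* exclusive access: no readers or writers */
    /* ... update the shared data ... */
    sem_post(&rw_mutex);
}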
Real-Life Scenario:
Imagine a library with a logbook that many readers want to access, but sometimes
writers need to update it.
Classic Problem:
Multiple readers can read the logbook at the same time, but only one writer can
update it.
If a writer is updating the logbook, no one can read it.
The challenge: if a reader starts reading and a writer then wants to write, the writer must
wait until the logbook is free.