Operating System 2
Process Definition
● A process is a program in execution. It is an active entity that performs a
specific task as instructed by the program's code. Unlike a program (which is
static), a process is dynamic and involves execution progress.
● Each process has its own memory and resources. This includes its code, data, and stack segments. It also maintains file handles, open devices, and other system resources allocated by the operating system.
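To make the program-versus-process distinction concrete, the following minimal C sketch (POSIX; variable names and output format are illustrative) creates a second process with fork(). After the call, parent and child are separate processes with separate memory, so the child's update to the variable is not visible to the parent.

/* Minimal sketch: fork() creates a new process with its own copy of the data. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int counter = 0;                 /* each process gets its own copy of this variable */
    pid_t pid = fork();              /* create a new process */

    if (pid == 0) {                  /* child process */
        counter += 10;
        printf("child:  counter = %d (pid %d)\n", counter, getpid());
    } else if (pid > 0) {            /* parent process */
        wait(NULL);                  /* wait for the child to finish */
        printf("parent: counter = %d (pid %d)\n", counter, getpid());
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}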
Process States
● A process has a lifecycle defined by different states: New, Ready, Running, Waiting, and Terminated. The process transitions between these states based on events such as I/O requests or CPU availability.
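A minimal C sketch of the five-state model described above. The event names ("admit", "dispatch", "io_request", and so on) are illustrative labels for this example, not an operating system API.

#include <stdio.h>
#include <string.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

static const char *state_name[] = { "New", "Ready", "Running", "Waiting", "Terminated" };

/* Map (current state, event) to the next state; unknown combinations keep the state. */
static proc_state next_state(proc_state s, const char *event) {
    if (s == NEW     && !strcmp(event, "admit"))      return READY;
    if (s == READY   && !strcmp(event, "dispatch"))   return RUNNING;
    if (s == RUNNING && !strcmp(event, "io_request")) return WAITING;
    if (s == WAITING && !strcmp(event, "io_done"))    return READY;
    if (s == RUNNING && !strcmp(event, "exit"))       return TERMINATED;
    return s;
}

int main(void) {
    const char *events[] = { "admit", "dispatch", "io_request", "io_done", "dispatch", "exit" };
    proc_state s = NEW;
    for (int i = 0; i < 6; i++) {
        proc_state n = next_state(s, events[i]);
        printf("%-10s --%-11s--> %s\n", state_name[s], events[i], state_name[n]);
        s = n;
    }
    return 0;
}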
Threads
● A thread is the smallest unit of CPU execution. It represents a single
sequence of instructions within a process. Multiple threads can exist within
the same process and run independently.
● Threads share the same process resources. This includes memory space,
open files, and global variables. However, each thread has its own program
counter, registers, and stack.
● Creating threads is more efficient than creating processes. Since threads
share resources, the overhead for context switching and communication is
lower. This makes thread-based programs more responsive and lightweight.
● There are two types of threads: user-level and kernel-level. User-level threads
are managed by libraries, while kernel-level threads are managed directly by
the OS. Each type has its own benefits and limitations in terms of
performance and control.
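One place this distinction surfaces in a POSIX program is the thread contention-scope attribute: system scope corresponds to kernel-level scheduling (the thread competes with all threads on the system), while process scope corresponds to user-level scheduling within the process. The sketch below is illustrative only; many systems, including Linux, accept only PTHREAD_SCOPE_SYSTEM, so the code falls back to it.

/* Minimal sketch (compile with: cc file.c -pthread). */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) {
    (void)arg;
    puts("thread running");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    /* Ask for process (user-level) scope; fall back to system scope if the OS refuses. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    int scope;
    pthread_attr_getscope(&attr, &scope);
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system (kernel-level)" : "process (user-level)");

    pthread_t t;
    pthread_create(&t, &attr, work, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}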
● Threads within the same process share code, data segments, and open files.
Each thread has its own program counter, stack, and set of registers.
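The following minimal POSIX-threads sketch illustrates this sharing: both threads update the same global counter in the shared data segment, while each keeps its own local variable on its own private stack. The loop count and thread count are arbitrary.

/* Minimal sketch (compile with: cc file.c -pthread). */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                  /* data segment: shared by all threads in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = *(long *)arg;
    int local = 0;                       /* lives on this thread's private stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;                /* shared variable, so access is synchronized */
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld: local = %d\n", id, local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    long id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d (both threads updated the same variable)\n", shared_counter);
    return 0;
}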
Benefits of Threads:
● Improved Application Responsiveness: Threads allow parts of a program (such as the UI) to run independently of long background tasks, so the application remains responsive to user actions while heavy computations run in the background.
● Faster Execution through Parallelism: On multi-core systems, threads can execute concurrently on different processors, enabling faster completion of tasks and better CPU utilization (see the sketch after this list).
● Efficient Resource Sharing: Threads within the same process share memory, files, and other resources. This leads to lower overhead compared to inter-process communication (IPC).
● Reduced Context Switching Overhead: Switching between threads is quicker than switching between processes, because threads share the same memory space and less state needs to be saved and restored.
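A minimal POSIX-threads sketch of the parallelism and resource-sharing benefits: two threads sum the halves of one shared array with no copying and no IPC, and on a multi-core CPU the halves can run concurrently. The array size and thread count are illustrative.

/* Minimal sketch (compile with: cc file.c -pthread). */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static int data[N];                      /* shared by all threads in the process */

struct range { int lo, hi; long sum; };

static void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];               /* reads the shared array directly */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;

    struct range halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);        /* on a multi-core CPU the halves run in parallel */

    printf("total = %ld\n", halves[0].sum + halves[1].sum);
    return 0;
}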
Scheduling Algorithm
● Scheduling algorithms determine how processes are assigned to the CPU. The
operating system uses these algorithms to decide which process runs next when the
CPU becomes available. This ensures efficient CPU usage and system
responsiveness.
● Scheduling can be preemptive or non-preemptive. In preemptive scheduling,
the OS can interrupt a running process to switch to another, while
non-preemptive scheduling allows a process to run until it finishes or blocks.
Preemptive methods are better for responsiveness; non-preemptive ones are
simpler to implement.
● First-Come, First-Served (FCFS): Processes are executed in the order in which they arrive. Drawback: it can lead to the convoy effect, where short processes wait for a long process to complete.
● Shortest Job First (SJF): Selects the process with the smallest execution time next.
● Priority Scheduling: Each process is assigned a priority, and the scheduler picks the highest-priority process.
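As an illustration, the sketch below simulates non-preemptive Shortest Job First on a fixed ready queue: the scheduler repeatedly picks the unfinished process with the smallest burst time and lets it run to completion. The process names and burst times are made up for the example.

#include <stdio.h>

struct proc { const char *name; int burst; int done; };

/* Return the index of the shortest unfinished job, or -1 if all are done. */
static int pick_sjf(struct proc p[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!p[i].done && (best < 0 || p[i].burst < p[best].burst))
            best = i;
    return best;
}

int main(void) {
    struct proc ready[] = { { "P1", 7, 0 }, { "P2", 3, 0 }, { "P3", 1, 0 } };
    int n = 3, time = 0, next;

    while ((next = pick_sjf(ready, n)) >= 0) {
        printf("t=%2d: run %s for %d units\n", time, ready[next].name, ready[next].burst);
        time += ready[next].burst;       /* non-preemptive: the process runs to completion */
        ready[next].done = 1;
    }
    return 0;
}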
Real-time Scheduling
● Real-time scheduling is a method used in operating systems to manage tasks
that must be executed within strict timing constraints. It ensures that critical
processes meet their deadlines to maintain system reliability.
● There are two main types: hard real-time systems, where missing a deadline
can lead to failure, and soft real-time systems, where occasional deadline
misses are tolerable. The scheduling strategy used depends on which type of
system is in use.
● In real-time scheduling, tasks are assigned priorities based on their urgency
and importance. Higher priority tasks are typically executed before lower
priority ones to meet critical timing requirements.
● Popular real-time scheduling algorithms include Rate Monotonic Scheduling
(RMS) and Earliest Deadline First (EDF). RMS assigns priority based on task
frequency, while EDF prioritizes tasks with the closest deadlines.
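A minimal sketch of the EDF idea: at every time unit the scheduler runs the ready task whose absolute deadline is closest, then re-evaluates. The task set is invented for illustration; an RMS scheduler would instead fix priorities up front by period (shortest period = highest priority).

#include <stdio.h>

struct task { const char *name; int deadline; int remaining; };

/* Return the index of the unfinished task with the earliest deadline, or -1. */
static int pick_edf(struct task t[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (t[i].remaining > 0 && (best < 0 || t[i].deadline < t[best].deadline))
            best = i;
    return best;
}

int main(void) {
    struct task tasks[] = { { "T1", 10, 3 }, { "T2", 4, 2 }, { "T3", 7, 1 } };
    int n = 3, now = 0, next;

    while ((next = pick_edf(tasks, n)) >= 0) {
        printf("t=%d: run %s (deadline %d)\n", now, tasks[next].name, tasks[next].deadline);
        tasks[next].remaining--;         /* run for one time unit, then pick again */
        now++;
    }
    return 0;
}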