Operating System 2

The document provides an overview of processes, threads, and scheduling in operating systems. It defines a process as a program in execution with a lifecycle consisting of various states, managed by the OS through a Process Control Block (PCB). Additionally, it discusses the benefits of threads for multitasking and performance, as well as various scheduling algorithms used to manage CPU allocation among processes.


OPERATING SYSTEM MODULE

Process, Threads and Scheduling

Process Definition
● A process is a program in execution. It is an active entity that performs a
specific task as instructed by the program's code. Unlike a program (which is
static), a process is dynamic and involves execution progress.

● Each process has its own memory and resources. This includes code, data,
stack. It also maintains file handles, open devices, and system resources
allocated by the operating system.

● Processes are managed by the operating system. The OS handles the creation, execution, suspension, and termination of processes. It uses structures like the Process Control Block (PCB) to keep track of process details.
● A process has a lifecycle defined by different states. These states include
New, Ready, Running, Waiting, and Terminated. The process transitions
between these states based on events like I/O requests or CPU availability.

● Each process operates independently and is isolated from others. One process cannot directly access the memory or data of another process. This isolation enhances system stability and security.

● Multiple processes can run concurrently in a system. Through multitasking and CPU scheduling, the operating system creates the illusion of parallel execution. This improves CPU utilization and system responsiveness.

Process States

A process goes through various states during its lifetime:


1. New: The process is being created.
2. Ready: The process is loaded into main memory and waiting to be
assigned to the CPU for execution.
3. Running: The process is currently being executed by the CPU.
4. Waiting (Blocked): The process is waiting for some event (like I/O
completion or resource availability).
5. Terminated: The process has finished execution and is removed from
the system.
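As an illustrative sketch (not real kernel code), the lifecycle above can be modeled as a small transition table; the event names (`admit`, `dispatch`, `io_request`, and so on) are chosen here for illustration:

```python
# Sketch of the five-state process lifecycle as a transition table.
TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "io_request"): "Waiting",
    ("Waiting", "io_complete"): "Ready",
    ("Running", "timeout"): "Ready",       # preemption returns the process to Ready
    ("Running", "exit"): "Terminated",
}

def step(state, event):
    """Apply one event; invalid events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "New"
for event in ["admit", "dispatch", "io_request", "io_complete"]:
    s = step(s, event)
print(s)  # Ready
```

Note that a process in Waiting cannot go directly to Running: after its I/O completes it must re-enter Ready and be dispatched again.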

Process Control Block (PCB)


● The Process Control Block (PCB) is a data structure used by the operating
system to store all the information about a process. It acts as the identity card
of a process and allows the OS to manage and control multiple processes
efficiently.
● When a process is created, the OS generates a PCB for it. When a process is
switched out (due to a context switch), the OS saves its current state in the
PCB, so that it can resume later from the exact same point.

● Each process is represented in the operating system by a Process Control Block (PCB). The PCB stores all the information about the process, such as:
○ Process ID (PID)
○ Process state
○ Program counter (PC)
○ CPU registers
○ Memory management information
○ Scheduling information (priority, pointers to scheduling queues)
○ I/O status information
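A minimal sketch of these PCB fields as a Python dataclass (illustrative only; the field names are chosen here and are not taken from any real kernel):

```python
from dataclasses import dataclass, field

# Illustrative model of the PCB fields listed above.
@dataclass
class PCB:
    pid: int                                        # process ID (PID)
    state: str = "New"                              # New/Ready/Running/Waiting/Terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register values
    priority: int = 0                               # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "Ready"   # the OS updates the state field on each transition
print(pcb.pid, pcb.state)  # 42 Ready
```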

Pointer: the stack pointer, which must be saved when the process moves from one state to another so that the process can later resume from its current position.
Process state: the current state of the process (New, Ready, Running, Waiting, or Terminated).
Process number: every process is assigned a unique identifier, the process ID (PID), which is stored in this field.

Program counter: holds the address of the next instruction to be executed for the process.
Registers: when a running process's time slice expires, the current values of its CPU registers are saved into the PCB and the process is swapped out. When the process is next scheduled to run, the saved values are copied back into the CPU registers. Preserving this execution context is the main purpose of the register field in the PCB.
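The save/restore step described above can be sketched as follows (purely illustrative: real context switches happen in kernel code, often assembly, and the register names here are invented):

```python
# Sketch of what "saving registers in the PCB" means during a context switch.
def context_switch(cpu, old_pcb, new_pcb):
    """cpu: dict of live register values; each PCB is a dict with a 'registers' key."""
    old_pcb["registers"] = dict(cpu)   # save the outgoing process's state
    cpu.clear()
    cpu.update(new_pcb["registers"])   # restore the incoming process's state

cpu = {"pc": 100, "ax": 7}             # invented register names for illustration
p1 = {"registers": {}}
p2 = {"registers": {"pc": 200, "ax": 0}}
context_switch(cpu, p1, p2)
print(cpu)                 # {'pc': 200, 'ax': 0}
print(p1["registers"])     # {'pc': 100, 'ax': 7}
```

When p1 is later scheduled again, the same routine restores `{'pc': 100, 'ax': 7}`, so it resumes exactly where it stopped.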

Memory limits: contains information about the memory-management scheme the operating system uses for this process, such as page tables or segment tables.

List of open files: the files the process currently has open.

Threads
● A thread is the smallest unit of CPU execution. It represents a single
sequence of instructions within a process. Multiple threads can exist within
the same process and run independently.

● Threads share the same process resources. This includes memory space,
open files, and global variables. However, each thread has its own program
counter, registers, and stack.

● Threads enable multitasking within a process. They allow concurrent execution of different parts of a program. This is especially useful in applications like web servers or GUIs.

● Creating threads is more efficient than creating processes. Since threads
share resources, the overhead for context switching and communication is
lower. This makes thread-based programs more responsive and lightweight.

● Threads can improve performance on multi-core processors. Different threads can run in parallel on separate CPU cores. This parallelism increases throughput and application speed.

● There are two types of threads: user-level and kernel-level. User-level threads
are managed by libraries, while kernel-level threads are managed directly by
the OS. Each type has its own benefits and limitations in terms of
performance and control.


● Threads within the same process share code, data segments, and open files.
Each thread has its own program counter, stack, and set of registers.
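A small sketch of this sharing, using Python's standard `threading` module: the threads share one global counter (shared data segment) while each runs on its own stack, and a lock shows why shared data needs synchronization:

```python
import threading

counter = 0                      # shared by all threads in the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # protect the shared counter from races
            counter += 1

# Four threads in the same process, each with its own stack and registers.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

Without the lock, the increments could interleave and the final count could come out below 4000; with it, the result is deterministic.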

Benefits of Threads
● Improved Application Responsiveness: Threads allow parts of a program (like the UI) to run independently of long background tasks, so the application remains responsive to user actions while performing heavy computation in the background.
● Faster Execution through Parallelism: On multi-core systems, threads can execute concurrently on different processors, enabling faster completion of tasks and better CPU utilization.
● Efficient Resource Sharing: Threads within the same process share memory, files, and other resources, which gives lower overhead than inter-process communication (IPC).


● Reduced Context Switching Overhead: Switching between threads is quicker than switching between processes, because threads share the same memory space and less information needs to be saved and restored.

● Simplified Program Structure for Certain Tasks: Multithreading makes it easier to structure programs that perform multiple tasks simultaneously, such as downloading and processing data at the same time. It enables clean separation of concerns within a program.

● Enhanced Scalability in Server Applications: Servers can handle multiple client requests using threads. Each thread can serve one client, improving scalability and responsiveness.

Scheduling Algorithm
● Scheduling algorithms determine how processes are assigned to the CPU. The
operating system uses these algorithms to decide which process runs next when the
CPU becomes available. This ensures efficient CPU usage and system
responsiveness.

● They help manage multiple processes in a multitasking environment. Since many processes may be ready at the same time, scheduling algorithms prioritize and organize their execution. This avoids conflicts and ensures fairness among processes.

● Different algorithms use different strategies to improve performance. Some focus on minimizing waiting time, others on fairness or meeting deadlines. The choice of algorithm affects system behavior significantly.

● Scheduling can be preemptive or non-preemptive. In preemptive scheduling,
the OS can interrupt a running process to switch to another, while
non-preemptive scheduling allows a process to run until it finishes or blocks.
Preemptive methods are better for responsiveness; non-preemptive ones are
simpler to implement.

● Common scheduling algorithms include FCFS, SJF, RR, and Priority scheduling. Each has its own strengths and weaknesses and may be suited to different types of systems. Some are ideal for batch processing, while others work better in real-time or interactive environments.

Scheduling Algorithm: First-Come, First-Served (FCFS)

● Description: The simplest scheduling algorithm. Processes are scheduled in the order they arrive in the ready queue.

● Characteristics: Non-preemptive, easy to implement.

● Drawbacks: Can lead to the convoy effect, where short processes wait for a long process to complete.
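A minimal sketch of FCFS waiting times, assuming for simplicity that all processes arrive at time 0; the example burst times show the convoy effect:

```python
# FCFS: each process waits for everything that arrived before it.
def fcfs_waiting_times(burst_times):
    """Return each process's waiting time, in arrival order."""
    waits = []
    elapsed = 0
    for burst in burst_times:
        waits.append(elapsed)   # wait = total time of all earlier processes
        elapsed += burst
    return waits

# Convoy effect: one long job (24) makes both short jobs wait behind it.
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27], average wait = 17
```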

Scheduling Algorithm: Shortest Job First (SJF)

● Description: Selects the process with the smallest execution time next.

● Characteristics: Can be preemptive (Shortest Remaining Time First) or non-preemptive.

● Benefits: Minimizes average waiting time.

● Drawbacks: Requires knowledge or estimation of process burst times; may lead to starvation of long processes.
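A sketch of non-preemptive SJF under the same simplifying assumption (all processes arrive at time 0), using the same burst times as the FCFS example for comparison:

```python
# SJF: run jobs in order of increasing burst time.
def sjf_waiting_times(burst_times):
    """Return waiting times keyed to the original submission order."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits = [0] * len(burst_times)
    elapsed = 0
    for i in order:
        waits[i] = elapsed      # this job waited for all shorter jobs
        elapsed += burst_times[i]
    return waits

# Same workload as the FCFS example: average wait drops from 17 to 3.
print(sjf_waiting_times([24, 3, 3]))  # [6, 0, 3]
```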

Scheduling Algorithm: Round Robin (RR)

● Description: Each process is assigned a fixed time slice (quantum), and processes are cycled through the ready queue.

● Characteristics: Preemptive, fair to all processes.

● Benefits: Good for time-sharing systems.

● Drawbacks: Performance depends on the quantum size; too large → FCFS behavior, too small → high context-switching overhead.
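A sketch of the RR cycle, assuming all processes arrive at time 0 and ignoring context-switch cost; unfinished processes go to the back of the queue:

```python
from collections import deque

# Round Robin: run each process for at most one quantum, then rotate.
def rr_completion_times(burst_times, quantum):
    remaining = list(burst_times)
    done = [0] * len(burst_times)
    queue = deque(range(len(burst_times)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # not finished: back of the queue
        else:
            done[i] = clock      # record completion time
    return done

print(rr_completion_times([5, 3, 1], quantum=2))  # [9, 8, 5]
```

Setting the quantum larger than every burst time makes this degenerate into FCFS, matching the drawback noted above.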

Scheduling Algorithm: Priority Scheduling

● Description: Each process is assigned a priority, and the scheduler picks the highest-priority process.

● Characteristics: Can be preemptive or non-preemptive.

● Drawbacks: Risk of starvation for low-priority processes.

● Solution to Starvation: Aging, which gradually increases the priority of waiting processes.
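A sketch of non-preemptive priority selection with a simple aging rule; the rule (one unit of priority boost per scheduling pass) and the convention that a lower number means higher priority are assumptions for illustration:

```python
# Priority scheduling with aging (lower number = higher priority).
def priority_schedule(priorities):
    """Return the order in which processes run, given initial priorities."""
    prio = dict(enumerate(priorities))
    order = []
    while prio:
        pick = min(prio, key=lambda i: (prio[i], i))  # best priority, PID breaks ties
        order.append(pick)
        del prio[pick]
        for i in prio:
            prio[i] -= 1        # aging: every waiting process climbs one level
    return order

print(priority_schedule([3, 1, 2]))  # [1, 2, 0]
```

Without the aging loop, a constant stream of high-priority arrivals could postpone process 0 forever; aging bounds how long any process can wait.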

Scheduling Algorithm: Multilevel Queue Scheduling

● Description: Processes are divided into different queues based on priority.
● Each queue has its own scheduling algorithm.
● Scheduling between queues is done with fixed priorities or time slices.
● Benefits: Can separate processes with different requirements and treat them
accordingly.
● Drawbacks: Rigid; processes cannot move between queues.
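A sketch of two fixed-priority queues (the queue names and processes are invented for illustration): the lower queue runs only when the higher one is empty, and processes never migrate between queues:

```python
from collections import deque

# Multilevel queue with fixed priority between queues.
def multilevel_pick(system_q, batch_q):
    """Always serve the system queue first; batch runs only when it is empty."""
    if system_q:
        return system_q.popleft()
    if batch_q:
        return batch_q.popleft()
    return None

system_q = deque(["init"])
batch_q = deque(["report", "backup"])
order = []
while True:
    p = multilevel_pick(system_q, batch_q)
    if p is None:
        break
    order.append(p)
print(order)  # ['init', 'report', 'backup']
```

The rigidity drawback is visible here: if the system queue stayed busy, the batch jobs would never run, because nothing promotes them.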

Real-time Scheduling
● Real-time scheduling is a method used in operating systems to manage tasks
that must be executed within strict timing constraints. It ensures that critical
processes meet their deadlines to maintain system reliability.
● There are two main types: hard real-time systems, where missing a deadline
can lead to failure, and soft real-time systems, where occasional deadline
misses are tolerable. The scheduling strategy used depends on which type of
system is in use.
● In real-time scheduling, tasks are assigned priorities based on their urgency
and importance. Higher priority tasks are typically executed before lower
priority ones to meet critical timing requirements.

● Popular real-time scheduling algorithms include Rate Monotonic Scheduling
(RMS) and Earliest Deadline First (EDF). RMS assigns priority based on task
frequency, while EDF prioritizes tasks with the closest deadlines.

● Preemptive scheduling allows a high-priority task to interrupt a currently running lower-priority task. Non-preemptive scheduling completes the current task before switching, which may risk deadline misses for urgent tasks.

● Real-time scheduling is used in systems like embedded devices, automotive control units, and medical equipment. These systems require predictable and timely task execution to function safely and effectively.
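A deliberately simplified sketch of the EDF idea mentioned above: real EDF re-evaluates deadlines at every scheduling point as tasks arrive and complete, whereas here a fixed task set is simply ordered by absolute deadline. The task names and deadline values are invented:

```python
# EDF (simplified): the task with the closest deadline runs first.
def edf_order(tasks):
    """tasks: list of (name, absolute_deadline). Returns execution order."""
    return [name for name, _ in sorted(tasks, key=lambda t: t[1])]

print(edf_order([("brake", 5), ("log", 50), ("sensor", 10)]))
# ['brake', 'sensor', 'log']
```

RMS would instead assign static priorities by task frequency (shorter period → higher priority), so its ordering would not change as deadlines approach.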

Real-time Scheduling Characteristics


● Real-time scheduling ensures tasks are completed within strict deadlines.
● The system must behave consistently and predictably under all operating
conditions.
● Higher-priority tasks can interrupt lower-priority ones to meet urgent
deadlines.
● Tasks are scheduled based on fixed or dynamic priority levels.
● The maximum response time of a task must be known and guaranteed.
● Variations in task execution timing should be minimized.
● Real-time systems manage CPU, memory, and I/O to avoid conflicts and
ensure timely execution.

● The scheduler must support both regularly occurring and event-triggered
tasks.
● The system must continue to function correctly even if some components fail.
● Real-time scheduling is typically implemented within RTOS environments to
ensure timing constraints.

THANK YOU !!!
