Operating System: CPU Scheduling
UNIT 2: Part 1
CPU Scheduling
1. CPU Scheduling:
CPU scheduling is the process of keeping the CPU fully utilized by
allowing another process to use it when the currently running process
is halted (put in a waiting state) because a resource, such as I/O, is
unavailable. CPU scheduling seeks to improve system speed, fairness,
and efficiency.
2. Throughput
A measure of the work done by the CPU is the number of
processes completed per unit of time. This is called throughput.
3. Turnaround Time
The turnaround time is the interval from the time a process is
submitted for execution to the time it completes. It is the sum of the
time spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.
Turn Around Time = Completion Time – Arrival Time
4. Waiting Time
A scheduling algorithm does not affect the amount of time a process
spends executing or doing I/O; it affects only the amount of time a
process spends waiting in the ready queue. Waiting time is therefore
the total time a process spends waiting in the ready queue.
Waiting Time = Turnaround Time – Burst Time.
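For example, suppose a process arrives at time 2, needs a CPU burst of
4 units, and completes at time 10. Then Turn Around Time = 10 – 2 = 8
and Waiting Time = 8 – 4 = 4 (the numbers are only illustrative).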
5. Response Time
A process can start producing results early and keep producing new
results while earlier output is being shown to the user. Response time
is therefore the amount of time that elapses from when a process
arrives until it starts responding, that is, until the CPU is first
allocated to it. It measures the time taken to start responding, not
the time taken to output the complete response.
Response Time = CPU Allocation Time (when the CPU was allocated to the
process for the first time) – Arrival Time
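For instance, if a process arrives at time 1 and is first allocated the
CPU at time 5, its response time is 5 – 1 = 4, no matter how long it
runs afterwards (again, illustrative numbers).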
6. Completion Time
The completion time is the time when the process stops
executing and exits the CPU, which means that the process has
completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the
scheduling mechanism should favor the higher-priority
processes.
8. Predictability
A given process should always run in about the same amount
of time under a similar system load.
2. Multiple-Processor Scheduling:
The goal of multiple-processor scheduling, also known as
multiprocessor scheduling, is to design the system's scheduling
function so that it makes use of several processors.
In multiprocessor scheduling, multiple CPUs split the
workload (load sharing) to enable concurrent execution of
multiple processes.
The system's multiple CPUs communicate frequently and share a
common bus, memory, and other peripherals; as a result, the
system is said to be tightly coupled. These systems are
employed whenever large amounts of data need to be
processed.
1. fork()
- Purpose: To create a new process.
- How it works: When a process calls `fork()`, the operating
system makes a copy of the current process. The new process is
called the child process, and the original process is the parent
process.
- Example: Imagine you have a program that needs to do two
tasks at once. You can use `fork()` to create a child process that
does one task while the parent process does the other.
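A minimal C sketch of this idea; the two "tasks" are just print statements standing in for real work:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* create a child process */

    if (pid < 0) {
        perror("fork failed");   /* fork() returns -1 on error */
        return 1;
    } else if (pid == 0) {
        /* child process: fork() returned 0 */
        printf("Child (PID %d) doing task one\n", getpid());
    } else {
        /* parent process: fork() returned the child's PID */
        printf("Parent (PID %d) doing task two, child is %d\n", getpid(), pid);
    }
    return 0;
}
```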
2. exit()
- Purpose: To terminate a process.
- How it works: When a process is done with its work, it calls
`exit()` to end. This tells the operating system that the process
has finished and can be cleaned up.
- Example: After a process finishes its task, it calls `exit()` to
close itself.
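A minimal C sketch; the status value 0 is only the usual convention for success:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("Work finished, terminating\n");
    exit(0);   /* tell the OS this process is done; 0 conventionally means success */
}
```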
3. wait()
- Purpose: For a parent process to wait for its child process to
finish.
- How it works: When a parent process calls `wait()`, it pauses
until one of its child processes has finished. This ensures the
parent process knows when the child process is done.
- Example: A parent process might need to wait for a child
process to finish a task before continuing.
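A minimal C sketch combining `fork()` and `wait()`; the child's exit status of 7 is arbitrary:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        /* child: do some work, then terminate with a status */
        exit(7);
    } else if (pid > 0) {
        int status;
        wait(&status);               /* parent blocks until a child finishes */
        if (WIFEXITED(status))
            printf("Child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}
```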
4. waitpid()
- Purpose: Similar to `wait()`, but more flexible.
- How it works: `waitpid()` allows a parent process to wait for
a specific child process to finish, rather than any child process.
It can also be used with options to control the wait behavior.
- Example: If a parent process has multiple children and
needs to wait for a specific one to finish, it uses `waitpid()`.
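A minimal C sketch with two children, where the parent waits specifically for the second one (both children exit immediately; this is only for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t first = fork();
    if (first == 0) exit(1);        /* first child terminates immediately */

    pid_t second = fork();
    if (second == 0) exit(2);       /* second child terminates immediately */

    int status;
    waitpid(second, &status, 0);    /* wait only for the second child */
    printf("Second child (%d) finished with status %d\n",
           second, WEXITSTATUS(status));

    wait(NULL);                     /* then reap the remaining child */
    return 0;
}
```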
5. exec()
- Purpose: To replace the current process with a new
program.
- How it works: When a process calls `exec()`, it stops running
its current program and starts running a new one. The process
ID (PID) stays the same, but everything else about the process
changes to the new program.
- Example: If a process needs to run a different program, it
can use `exec()` to replace its current code with the new
program's code.
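A minimal C sketch using `execlp()`, one member of the `exec()` family; the program being run (`ls`) is just an example:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("Before exec: PID %d\n", getpid());

    /* replace this process image with the ls program; the PID does not change */
    execlp("ls", "ls", "-l", (char *)NULL);

    /* reached only if exec fails, because a successful exec never returns */
    perror("exec failed");
    return 1;
}
```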
Easy-to-Memorize Summary:
- fork(): Create a new process (child process).
- exit(): End a process.
- wait(): Parent waits for a child process to finish.
- waitpid(): Parent waits for a specific child process.
- exec(): Replace current process with a new program.
Important Stuff for CPU Scheduling
1. Process ID: A unique identifier assigned to each process.
2. Arrival Time (AT): The time when a process enters the ready queue or
is ready to be executed by the CPU.
3. Burst Time (BT): The amount of CPU time a process requires to complete
its execution.
4. Completion Time (CT): The time at which the CPU finishes executing a
process.
5. Turn Around Time (TAT): The total time from the arrival of the process
to its completion.
Formula: TAT = CT - AT
6. Waiting Time (WT): The amount of time a process waits in the ready
queue before getting CPU time.
Formula: WT = TAT - BT
7. Ready Queue: A queue where processes wait for their turn to be
executed by the CPU.
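A small C sketch showing how the formulas above are applied; the three processes and their arrival, burst, and completion times are made up for illustration:

```c
#include <stdio.h>

int main(void) {
    /* hypothetical per-process data: arrival, burst, and completion times */
    int at[] = {0, 1, 2};
    int bt[] = {4, 3, 1};
    int ct[] = {4, 7, 8};
    int n = 3;

    for (int i = 0; i < n; i++) {
        int tat = ct[i] - at[i];   /* TAT = CT - AT */
        int wt  = tat - bt[i];     /* WT  = TAT - BT */
        printf("P%d: TAT = %d, WT = %d\n", i + 1, tat, wt);
    }
    return 0;
}
```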
1. System Model:
- Resources: A system has a limited number of resources (e.g.,
memory, CPU, files, I/O devices) shared among multiple
processes. Each resource type can have multiple identical
instances (e.g., 2 CPUs and 5 printers).
- Example of Deadlock:
- Same Resource Type: Three processes each hold one CD
drive and request another. All wait indefinitely.
- Different Resource Types: Process P1 holds a DVD drive and
requests a printer, while P2 holds a printer and requests a DVD
drive. Both wait indefinitely.
- Multithreading: Multithreaded applications are prone to
deadlocks as multiple threads compete for shared resources.
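A minimal C sketch of the second situation, using two pthread mutexes to stand in for the DVD drive and the printer. Each thread locks one resource and then requests the other, so with this interleaving both threads can block forever (the `sleep()` calls only make that interleaving likely):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t dvd     = PTHREAD_MUTEX_INITIALIZER;  /* resource 1 */
pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;  /* resource 2 */

void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&dvd);       /* P1 holds the DVD drive        */
    sleep(1);
    pthread_mutex_lock(&printer);   /* ... and requests the printer  */
    puts("P1 got both resources");
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&dvd);
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&printer);   /* P2 holds the printer           */
    sleep(1);
    pthread_mutex_lock(&dvd);       /* ... and requests the DVD drive */
    puts("P2 got both resources");
    pthread_mutex_unlock(&dvd);
    pthread_mutex_unlock(&printer);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);   /* because the two threads lock in opposite */
    pthread_join(t2, NULL);   /* order, they typically deadlock here      */
    return 0;
}
```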
Deadlock Characterization