
Operating System

UNIT 2: Part 1
CPU Scheduling
1. CPU Scheduling:
CPU scheduling is the process of deciding which process gets the
CPU so that the CPU is kept as busy as possible: when the
currently running process is halted (put in a waiting state)
because a resource such as I/O is unavailable, the scheduler
allocates the CPU to another process. CPU scheduling seeks to
improve system speed, fairness, and efficiency.

1.1 CPU Scheduling Criteria:

CPU scheduling has several criteria. Some of them are
mentioned below.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep
the CPU as busy as possible. Theoretically, CPU utilization can
range from 0 to 100 percent, but in a real system it typically
varies from 40 to 90 percent depending on the system load.

2. Throughput
A measure of the work done by the CPU is the number of
processes being executed and completed per unit of time. This
is called throughput.

3. Turnaround Time
The turnaround time is the time that elapses between the
submission of a process and its completion. It is the sum of the
periods spent waiting to enter memory, waiting in the ready
queue, executing on the CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time

4. Waiting Time
A scheduling algorithm does not affect the amount of time
during which a process executes or performs I/O; it affects only
the time a process spends waiting in the ready queue.
Waiting Time = Turnaround Time – Burst Time.

5. Response Time
A process can start producing results early and continue
producing new results while earlier output is presented to the
user. The time that elapses between the submission of a process
and the production of its first response is therefore another
useful measure, called the response time. Response time is the
time it takes to start responding, not the time it takes to
output the full response.
Response Time = Time of first CPU allocation – Arrival Time

6. Completion Time
The completion time is the time when the process stops
executing and exits the CPU, which means that the process has
completed its burst time and is completely executed.

7. Priority
If the operating system assigns priorities to processes, the
scheduling mechanism should favor the higher-priority
processes.

8. Predictability
A given process should always run in about the same amount
of time under a similar system load.
2. Multiple-Processor Scheduling:
The goal of multiple-processor scheduling, also known as
multiprocessor scheduling, is to design a scheduling function
for systems with more than one processor.
In multiprocessor scheduling, multiple CPUs split the
workload (load sharing) to enable concurrent execution of
multiple processes.
The system's CPUs communicate frequently and share a
common bus, memory, and other peripherals; as a result, the
system is said to be tightly coupled. Such systems are
employed whenever large amounts of data need to be
processed.

2.2 Approaches to Multiple-Processor Scheduling:


2.2.1 Asymmetric Multiprocessing (AMP)
One approach to CPU scheduling in multiprocessor systems is
asymmetric multiprocessing, in which multiple processors are
controlled by a single processor called the master processor.
The processors in an asymmetric multiprocessing system
operate in a master-slave relationship.
According to this relationship, the master processor performs
the task of the operating system, manages the entire data
structure, programs the slave processors, and assigns tasks to
the slave processors as well.
Even though all the processors of an asymmetric
multiprocessing system are interconnected, the slave
processors have no direct communication link among
themselves; all of them are controlled by the master processor.
Another important characteristic of asymmetric
multiprocessors is that they do not have any shared memory
and operate independently of one another.
2.2.2 Symmetric Multiprocessing (SMP)

The second approach is Symmetric Multiprocessing (SMP),
where each processor is self-scheduling. In SMP, all processes
may be in a common ready queue, or each processor may have
its private queue of ready processes. Regardless of the queue
system used, each processor's scheduler examines the ready
queue and selects a process to execute.
Symmetric multiprocessors are defined as the type of
processors that are identical to each other and can perform
shared tasks due to the presence of shared memory. This
shared memory among the processors results in efficient task
completion as the tasks can be divided among the processors
and can be accomplished at a faster rate.
Unlike asymmetric multiprocessors, symmetric
multiprocessors, being identical in architecture, do not have a
master-slave relationship; the work of the operating system is
shared equally by all the processors.
Symmetric processors exchange certain kinds of
communication with one another via shared memory.
Symmetric multiprocessors also have an added advantage over
asymmetric multiprocessors: because no single master
processor schedules all the work, the master cannot become a
bottleneck or a single point of failure.

2.2.3 Processor Affinity


Processor Affinity means a process has an affinity for the
processor on which it is currently running. When a process
runs on a specific processor, the data it uses gets stored in that
processor's cache memory. This makes it faster for the process
to access the data it needs because it is already in the cache. If
the process moves to a different processor, the cache contents
of the first processor must be invalidated, and the cache of the
second processor must be repopulated with the process's data.
Because this invalidation and repopulation take time, most
SMP (symmetric multiprocessing) systems try to keep a process
running on the same processor. This practice is called
processor affinity.
There are two types of processor affinity:
- Soft Affinity: The operating system tries to keep a process on
the same processor but doesn't guarantee it.
- Hard Affinity: A process can specify the set of processors on
which it is allowed to run. For example, Linux supports soft
affinity by default and also provides system calls such as
`sched_setaffinity()` for hard affinity.
2.2.4 Load Balancing
Load balancing means distributing the work evenly across all
processors in an SMP system. It is necessary only if each
processor has its own private queue of tasks. If there's a common queue,
load balancing isn't needed because an idle processor can just
take the next task from the common queue. In SMP systems,
balancing the workload is important to make sure all
processors are used efficiently. If not balanced, some processors
may be idle while others are overloaded with tasks. There are
two general approaches to load balancing:
- Push Migration: A task regularly checks the load on each
processor and moves tasks from busy processors to less busy or
idle ones to balance the load.
- Pull Migration: An idle processor takes a waiting task from a
busy processor to balance the load.
2.2.5 Multicore Processors (Extra Topic; this is not in the
textbook)
Multicore processors have multiple cores on a single chip. Each
core can run its own tasks, making it appear to the operating
system as a separate processor. SMP systems with multicore
processors are faster and use less power than systems with
separate physical chips for each processor. However,
scheduling tasks can be more complex. When a processor
accesses memory, it might have to wait a long time for the data.
This waiting is called a Memory Stall and can happen if the
data isn't in the cache. To handle this, multicore processors
often have multiple hardware threads per core. If one thread is
waiting for data, the core can switch to another thread.
There are two ways to handle multiple threads:
- Coarse-Grained Multithreading: A thread runs until it has to
wait for data, then the processor switches to another thread.
This switch takes time because the processor has to stop the
current thread before starting the next one.
- Fine-Grained Multithreading: The processor switches
between threads at a finer level, like between instruction cycles.
This switch is quicker because the processor is designed to
handle frequent thread changes.

3. System Call Interface for Process Management


These system calls are used by an operating system to manage
processes. Here’s a simple explanation of each:

1. fork()
- Purpose: To create a new process.
- How it works: When a process calls `fork()`, the operating
system makes a copy of the current process. The new process is
called the child process, and the original process is the parent
process.
- Example: Imagine you have a program that needs to do two
tasks at once. You can use `fork()` to create a child process that
does one task while the parent process does the other.

2. exit()
- Purpose: To terminate a process.
- How it works: When a process is done with its work, it calls
`exit()` to end. This tells the operating system that the process
has finished and can be cleaned up.
- Example: After a process finishes its task, it calls `exit()` to
close itself.

3. wait()
- Purpose: For a parent process to wait for its child process to
finish.
- How it works: When a parent process calls `wait()`, it pauses
until one of its child processes has finished. This ensures the
parent process knows when the child process is done.
- Example: A parent process might need to wait for a child
process to finish a task before continuing.

4. waitpid()
- Purpose: Similar to `wait()`, but more flexible.
- How it works: `waitpid()` allows a parent process to wait for
a specific child process to finish, rather than any child process.
It can also be used with options to control the wait behavior.
- Example: If a parent process has multiple children and
needs to wait for a specific one to finish, it uses `waitpid()`.

5. exec()
- Purpose: To replace the current process with a new
program.
- How it works: When a process calls `exec()`, it stops running
its current program and starts running a new one. The process
ID (PID) stays the same, but everything else about the process
changes to the new program.
- Example: If a process needs to run a different program, it
can use `exec()` to replace its current code with the new
program's code.

Easy-to-Memorize Summary:
- fork(): Create a new process (child process).
- exit(): End a process.
- wait(): Parent waits for a child process to finish.
- waitpid(): Parent waits for a specific child process.
- exec(): Replace current process with a new program.
Important Stuff for CPU Scheduling
1. Process ID: A unique identifier assigned to each process.

2. Arrival Time (AT): The time when a process enters the ready queue or
is ready to be executed by the CPU.

3. Burst Time (BT): The amount of CPU time a process requires to complete
its execution.

4. Completion Time (CT): The time at which a process finishes
executing and leaves the CPU.

5. Turn Around Time (TAT): The total time from the arrival of a
process to its completion.

Formula: TAT = CT - AT

6. Waiting Time (WT): The amount of time a process waits in the ready
queue before getting CPU time.

Formula: WT = TAT - BT
7. Ready Queue: A queue where processes wait for their turn to be
executed by the CPU.

8. Gantt Chart: A visual representation of executed processes over time,
used to analyze performance metrics like waiting time, completion time,
and turnaround time.
UNIT 2: Part 2
Deadlocks
Introduction:
Multiple processes can compete for a limited supply of
resources in a multiprogramming environment. When a
process requests resources that are not immediately available,
it goes into a waiting state. Sometimes a waiting process can
never change state again, because the resources it has
requested are held by other waiting processes. This situation is
called a deadlock.

1 System Model:
- Resources: A system has a limited number of resources (e.g.,
memory, CPU, files, I/O devices) shared among multiple
processes. Each resource type can have multiple identical
instances (e.g., 2 CPUs and 5 printers).

- Resource Requests: When a process requests a resource, any
available instance of that resource type will satisfy the request.
A process may request as many resources as it requires to carry
out its designated task, but the number of resources requested
may not exceed the total number of resources available in the
system. In other words, a process cannot request three printers
if the system has only two.
- Resource Usage Sequence: A process may utilize a resource
only in the following sequence:
1. Request: The process requests the resource. If it is
unavailable, the process waits.
2. Use: The process uses the resource (e.g., printing).
3. Release: The process releases the resource after use.
Requests and releases of resources that are not managed by the
operating system can be accomplished through the wait() and
signal() operations on semaphores.
For all kernel-managed resources, the kernel keeps track of
which resources are free and which are allocated, which
process each allocated resource belongs to, and a queue of
processes waiting for each resource to become available.
Deadlocks:

- Definition: When every process in a set is waiting for an
event that can be caused only by another process in the same
set, the set of processes is deadlocked.

- Resources Involved: Can be physical (e.g., printers, memory)
or logical (e.g., files, semaphores).

- Example of Deadlock:
- Same Resource Type: Three processes each hold one CD
drive and request another. All wait indefinitely.
- Different Resource Types: Process P1 holds a DVD drive and
requests a printer, while P2 holds a printer and requests a DVD
drive. Both wait indefinitely.
- Multithreading: Multithreaded applications are prone to
deadlocks as multiple threads compete for shared resources.

Deadlock Characterization
