
VIETNAM NATIONAL UNIVERSITY

INTERNATIONAL SCHOOL

OPERATING SYSTEM FINAL REPORT

GROUP 9

RESEARCH: Study on CPU scheduling algorithms


Subject lecturer: Pham Dinh Tan

Members: Nguyen Thanh Nam 23070949 – K23 ICE

Dau Nguyen Khanh Tran 22071180 – K22 AIT

Hoang Minh Quang 23070225 – K23 AIT

Ha Noi, 5/2024
Comments, evaluation and scoring

Table of Contents

INTRODUCTION…………………………………………………………….4
LIST OF SYMBOLS AND ABBREVIATIONS…………………………….6

CHAPTER 1: DEFINITION OF OPERATING SYSTEM………………......7


1.1. Introduction ……………………………………………………………7
1.2. Functions of the Operating System ………………………………….7
1.3. Responsibilities of the Operating System…………………………….8
1.4. Classification of Operating Systems ………………………………..8

CHAPTER 2: PROCESS MANAGEMENT ………………………………9


2.1. Definition of process management ……………………………………..9
2.2. Process States: ………………………………………………………...10
2.3. Characteristics of a Process …………………………………………10
2.4. Process Control Block (PCB) ……………………………………….11

CHAPTER 3: PROCESS SCHEDULING ……………………………….12


3.1. CPU and I/O burst cycle ……………………………………………12
3.2. CPU Scheduler ……………………………………………………….13
3.3. Dispatcher in CPU Scheduling ……………………………………13
3.4. Scheduling criteria ……………………………………………………14
3.5. What is the significance of scheduling and how is it achieved?..…15
3.6. Types of CPU Scheduling ………………………………………………16

CHAPTER 4. CPU SCHEDULING ALGORITHM ………………………20


4.1. First Come First Serve (FCFS) ………………………………………22
4.2. Shortest Job First (SJF) ………………………………………….......22
4.3. Shortest Remaining Time First (SRTF) algorithm …………………….24
4.4 Priority Scheduling Algorithm ……………………………………....26
4.5. Round Robin algorithm (RR) ……………………………………….27

CHAPTER 5. CONCLUSION ………………………………………………28


REFERENCE ………………………………………………………………...29

Division of work

Nguyen Thanh Nam: Report drafting, explaining the scheduling algorithms, process management

Hoang Minh Quang: Report drafting, preparing examples, making algorithm diagrams

Dau Nguyen Khanh Tran: Report drafting, preparing the presentation, collecting documents and slides

Introduction.

An operating system (OS) is frequently regarded as the brain of a computer system, even though its physical components remain vital. It provides an environment, built on top of the hardware, in which applications can run. The OS provides an intuitive user interface while effectively allocating and managing resources. In a multitasking environment, several processes might be ready for execution at the same time. To handle this, the OS must select and control processes from the queue in an intelligent manner. Process scheduling is therefore an essential OS feature.

The OS selects processes from the queue using appropriate scheduling algorithms to
guarantee efficient processing and short wait times. These algorithms minimize
waiting times and maximize CPU utilization, enabling processes to complete their
duties quickly. Our team will investigate the subject of "Study on CPU scheduling
algorithms" in light of the special qualities that process scheduling algorithms provide.

LIST OF SYMBOLS AND ABBREVIATIONS

CPU Central Processing Unit

I/O Input/Output

FCFS First Come First Serve

SJF Shortest Job First

SRTF Shortest Remaining Time First

RR Round Robin

PCB Process Control Block

CHAPTER 1: DEFINITION OF OPERATING SYSTEM.

1.1. Introduction

Computers and mobile devices employ operating systems, which are software programs designed to control and manage the physical components and the software and data.

The operating system occupies a central position in the communication between the user and the computer hardware. When a computer is booted up, the operating system is launched next, after which the user can utilize other application programs through its interactive interface.

Figure 1. Operating system

1.2. Functions of the Operating System:

The operating system has the following four main functions:

- Process management

- Memory management

- Storage system management

- User interaction

1.3. Responsibilities of the Operating System:

- Direct administration and control of hardware.

- Use the computer to carry out fundamental tasks including reading, writing,
organizing files, and storing data.

- Provide programs with a basic interface system.

- Provide a simple command system for operating the machine; these commands are referred to as system commands.

1.4. Classification of Operating Systems:

Operating systems may be categorized based on how many programs are running at once and how many users they serve. The simplest category is an operating system designed for a single user and a single task: it permits only one program to execute at a time, so multiple programs must be run one after another, and only one user may log into the system for each work session.

CHAPTER 2: PROCESS MANAGEMENT

2.1. Definition of process management:

 A process is considered an executing program. A process consists of the following main components:

- The register values of the processor.


- The values of data areas in memory, such as:
+ Text: Contains the executable code.
+ Data: Holds global variables.
+ Heap: Stores dynamically allocated memory.
+ Stack: Stores local variables, function parameters, and return addresses.

Figure 2. Process in memory
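To make these memory regions concrete, here is a minimal C sketch marking where typical program objects are placed; the variable names are chosen only for illustration.

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                 /* data: global variable              */

int square(int x)                       /* text: compiled executable code     */
{
    return x * x;                       /* x and the return address: stack    */
}

int main(void)
{
    int local = square(4);              /* stack: local variable              */
    int *buffer = malloc(8 * sizeof *buffer);   /* heap: dynamic allocation   */
    if (buffer == NULL)
        return 1;
    buffer[0] = local + global_counter;
    printf("%d\n", buffer[0]);
    free(buffer);                       /* dynamically allocated memory must be released */
    return 0;
}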

A process may also ask for system resources like memory, devices, and CPU
time in order to do its work. The operating system employs a scheduler to
choose which process to run next and when to suspend the execution of a given
process.

The purpose of having multiple processes running concurrently includes:


 Increasing CPU utilization.
 Enhancing multitasking capabilities.
 Improving processing speed.

2.2 Process States:

 New: The process is being created.


 Running: The instructions of the process are being executed.
 Ready: The process is waiting to be allocated CPU time.
 Waiting: The process is paused, waiting for resource allocation, I/O
operations, or specific events.
 Terminated: The process has completed its execution.

Figure 3. Process state transition diagram

A process is in the new state when it is created. The long-term scheduler moves it from the new state to the ready state. The short-term scheduler then moves it from the ready state to the running state, the state in which the CPU is actually being used. From the running state, a process can move to one of three states:

 Waiting: when the process is waiting for I/O (e.g., when calling printf() or scanf() in C).
 Ready: when the process is interrupted by the short-term scheduler; reasons for interruption include a clock interrupt, an I/O interrupt, or an operating system call.
 Terminated: when the application has finished execution, e.g., when it reaches the exit command.
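These states can be represented directly in code. The following C sketch is only an illustration of how an operating system might tag each process with its current state; the identifiers are ours, not taken from any particular kernel.

/* Possible states of a process, mirroring Figure 3 */
enum process_state {
    STATE_NEW,          /* being created                                  */
    STATE_READY,        /* waiting in the ready queue for CPU time        */
    STATE_RUNNING,      /* instructions currently executing on the CPU    */
    STATE_WAITING,      /* blocked on I/O or another event                */
    STATE_TERMINATED    /* finished execution                             */
};

/* Typical transitions:
 *   NEW     -> READY       admitted by the long-term scheduler
 *   READY   -> RUNNING     dispatched by the short-term scheduler
 *   RUNNING -> READY       clock interrupt, I/O interrupt, OS call
 *   RUNNING -> WAITING     waiting for I/O, e.g. printf()/scanf()
 *   WAITING -> READY       the awaited event or I/O completes
 *   RUNNING -> TERMINATED  the process exits
 */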

2.3. Characteristics of a Process:

a/ I/O-boundedness: when the process is running, its main activity is input/output operations, with few processing instructions involved.

b/ CPU-boundedness: A process's primary tasks while it is operating are
processing and computing; few input/output operations take place during this
time.
c/ Interactive vs. Batch Processing: When both interactive and batch processes
are involved, batch operations could be suspended in favor of interactive
processes, which must be finished as soon as possible.
d/ Process CPU Time Utilization: The processes that have received the least
CPU time are the ones that are waiting the longest.

2.4. Process Control Block (PCB):


The operating system manages processes within the system through the Process
Control Block (PCB).

Figure 4. Process Control Block


The PCB is a data structure containing information about the process, including:
 Process state: new, ready, waiting, etc.
 Program counter register contains the address of the next instruction to be
executed in this process.
 Set of CPU registers
 CPU scheduling information
 Memory information: manages the memory allocated to the process.
 I/O status: list of input/output devices, list of open files
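As an illustration, a PCB can be sketched as a C structure like the one below; the field names and sizes are only indicative, and real kernels store considerably more information.

#include <stdint.h>

#define MAX_OPEN_FILES 16

struct pcb {
    int      pid;                        /* process identifier                        */
    int      state;                      /* new, ready, running, waiting, terminated  */
    uint64_t program_counter;            /* address of the next instruction           */
    uint64_t registers[16];              /* saved CPU register set                    */
    int      priority;                   /* CPU scheduling information                */
    uint64_t memory_base, memory_limit;  /* memory allocated to the process           */
    int      open_files[MAX_OPEN_FILES]; /* I/O status: open files                    */
    struct pcb *next;                    /* link to the next PCB in a queue           */
};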

CHAPTER 3: PROCESS SCHEDULING

In a single-tasking system, only one process is running at any given moment; the other ready processes must wait for the running process to finish before receiving CPU time. For optimal CPU utilization, multiple processes should always be kept ready to run.
3.1. CPU and I/O burst cycle

Process execution consists of a cycle of CPU execution and I/O waiting; processes alternate between these two states. Execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, another I/O burst, and so on. The final CPU burst ends with a system request to terminate execution.

Figure 5. Alternating cycle of CPU and I/O bursts

The duration of CPU bursts has been widely measured. Although they vary
greatly from one process to another and from one computer to another, they
tend to have a frequency curve like the one shown below. The curve is typically
characterized as exponential or hyper-exponential, with many short CPU bursts
and a small number of long CPU bursts.
Figure 6. Histogram of CPU burst durations

3.2. CPU Scheduler:

The operating system's CPU scheduler, sometimes referred to as the short-term


scheduler, controls the order and scheduling of programs that are prepared to
run on the CPU. It's an essential part that makes sure the CPU is used fairly and
efficiently by choosing from a pool of available processes according to a
specific scheduling method.

Process Selection: It chooses which process will run next from the ready queue.

Time Allocation: It determines how much CPU time each process will receive.

Order of Execution: The order in which processes are selected and executed
can significantly affect the performance and responsiveness of the system.
Scheduling Algorithms: The CPU scheduler employs various algorithms like
First-Come, First-Served (FCFS), Shortest Job First (SJF), Priority Scheduling,
Round Robin, and others to decide the order of process execution.

3.3. Dispatcher in CPU Scheduling:

Another component related to CPU scheduling is the dispatcher. The dispatcher


is a module responsible for granting control of the CPU to the process chosen by the short-term scheduler. Its functions include the following:

Context Switching:

- The dispatcher performs context switching, which involves saving the state of
the currently running process and loading the state of the next process to be
executed.
- Context switching allows the operating system to switch between processes
efficiently.

User Mode Transition:

- The dispatcher switches the CPU to user mode when transferring control to a
user process.
- In user mode, the process can execute its user-level instructions.

Jump to Appropriate Location:

- After context switching, the dispatcher ensures that the CPU jumps to the
appropriate location within the user program to resume execution.
This step is crucial for restarting the user program from where it left off.

- The dispatcher must be as fast as possible because it is invoked every time a


process switch occurs. The time required for the dispatcher to stop one process
and start another is known as the dispatch latency.

3.4. Scheduling criteria:

Some of the scheduling criteria for processes that need attention are:
 CPU Utilization: The system must maximize CPU usage time. The
busier the CPU, the better (max) (ranging from 40 – 90%).

 Throughput: The number of processes completed within a unit of time.


The faster the process execution, the better. (The larger the throughput,
the better). (10 processes/second – 1 process/hour).

 Turnaround Time: The period from when a process is submitted until it is completed, which needs to be minimized. The shorter the turnaround time, the greater the throughput. Turnaround Time = memory access waiting time + ready queue waiting time + CPU execution time + I/O execution time.

 Waiting Time: The total time spent waiting in the ready queue, which
needs to be as small as possible. The longer the waiting time, the longer
the turnaround time, leading to reduced throughput. (Min)

 Response Time: The time from when a user submits a request until the
first response is received, which should be minimized. (Min).

3.5. What is the significance of scheduling and how is it achieved?

 Ensures fairness among processes, avoiding situations where a process is


indefinitely waiting.

 Maximizes the number of processes served within a unit of time.

 Minimizes costs and system resources.

3.6. Types of CPU Scheduling

There are two kinds of scheduling methods:

Pre-emptive Scheduling
Pre-emptive scheduling involves changing the state of a process: it may go from the running state to the ready state or from the waiting state to the ready state. The CPU executes a process for a set amount of time, after which the process must wait for its next turn. The resources are allotted to the process for a brief period; if any CPU burst time remains, the process returns to the ready queue, otherwise the resources are released. Round Robin and pre-emptive SJF are examples of pre-emptive scheduling algorithms.

Non pre-emptive Scheduling


When using non-preemptive scheduling, once a resource is assigned to a
process, it cannot be removed until the process is finished. Other processes in
the ready queue are not allowed to take over the CPU; instead, they must wait
for their turn. Once the CPU has been assigned to a process, it remains in that
process's custody until the process has finished executing or enters the waiting
state for an I/O activity.

Comparison of preemptive and non-preemptive scheduling:

Description:
- Preemptive scheduling: the processes with higher priorities are executed first; a process can be interrupted by another process in the middle of its execution.
- Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

Flexibility:
- Preemptive scheduling can be described as flexible, because it allows critical processes to access the CPU as soon as they arrive in the ready queue, no matter what process is currently being executed.
- Non-preemptive scheduling can be described as rigid, because even if a critical process enters the ready queue, the process running on the CPU is not interrupted.

Process execution:
- In preemptive scheduling, if a process with higher priority enters the ready queue, the process with lower priority is removed from the CPU.
- In non-preemptive scheduling, once a process has been allocated the CPU, it will complete its execution.

Process scheduling:
- In preemptive scheduling, a running process can be scheduled out (preempted).
- In non-preemptive scheduling, a running process cannot be scheduled out.

Data sharing:
- Preemptive scheduling can cause a problem when two processes share data, because one may be interrupted in the middle of updating a shared data structure.
- This is usually not the case with non-preemptive scheduling.

Overhead of switching the process:
- Preemptive scheduling has the overhead of switching processes from the ready state to the running state and from the running state to the ready state.
- Non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.

CPU utilization:
- In preemptive scheduling, CPU utilization is higher than in non-preemptive scheduling.
- In non-preemptive scheduling, CPU utilization is lower than in preemptive scheduling.

Starvation:
- In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, then the processes with low priority may have to wait a long time and may starve.
- In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, then the processes with small burst times may starve.

Cost:
- Preemptive scheduling is costly compared to non-preemptive scheduling, because it has to maintain the integrity of shared data.
- Non-preemptive scheduling is less costly, because it does not have to maintain the integrity of shared data.

Important terms used in process scheduling

1) Arrival Time (AT)

The time at which the process enters the ready queue (i.e., arrives in the system) is called the arrival time of the process.

2) Completion Time (CT)

The time when the process has finished all of its execution and enters the terminated state is called the completion time of the process. It can also be defined as the time at which a process ends.

3) Burst Time (BT)

The time for which the process needs to be in the running state is known as the burst time of the process; in other words, it is the CPU time a process requires for its execution.

4) Turn Around Time (TAT)

Turnaround time can be defined as the total time the process remains in the main memory of the system. A process in the ready, waiting, or running state resides in main memory, so the time for which the process remains in these states is the turnaround time of the process. In simple words, it is the time that a process spends between entering the ready state and entering the terminated state.
It can be calculated as follows:
Turn Around Time = Completion Time – Arrival Time
TAT = CT – AT

5) Waiting Time (WT)

The time for which a process waits to get into the running state. It is the sum of the time spent by the process in the ready state and in the waiting state.
Another way of calculating it is as follows:
Waiting Time = Turn Around Time – Burst Time
WT = TAT – BT

6) Response Time

The time difference between the first time a process goes into the running
state and the arrival time of the process is called the response time of the
process.

7) Gantt Chart
The Gantt chart is used to represent the currently executing process at every single unit of time; this time unit is the smallest unit of time handled by the processor.
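As a small worked example of these formulas, the C fragment below computes the turnaround, waiting, and response times of one process from its arrival, first-run, and completion times; the sample values are invented purely for illustration.

#include <stdio.h>

int main(void)
{
    /* invented sample values for a single process */
    int arrival    = 2;    /* AT: enters the ready queue           */
    int first_run  = 5;    /* first time the process gets the CPU  */
    int burst      = 6;    /* BT: total CPU time required          */
    int completion = 14;   /* CT: time at which it finishes        */

    int turnaround = completion - arrival;  /* TAT = CT - AT        */
    int waiting    = turnaround - burst;    /* WT  = TAT - BT       */
    int response   = first_run - arrival;   /* RT  = first run - AT */

    printf("TAT=%d WT=%d RT=%d\n", turnaround, waiting, response);
    return 0;
}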

CHAPTER 4. CPU SCHEDULING ALGORITHM

 What are Scheduling algorithms?
CPU scheduling algorithms are a set of protocols in an operating system that
determine the order in which processes access the central processing unit (CPU)
for execution.

 What is their function?


Their main function is to control the way several processes are executed, by selecting which process uses the CPU, when, and for how long. This is essential in a multitasking setting, where several tasks must be completed fairly and efficiently.

There are mainly six types of process scheduling algorithms:

- First Come First Serve (FCFS)


- Shortest-Job-First (SJF) Scheduling
- Shortest Remaining Time
- Priority Scheduling
- Round Robin Scheduling
- Multilevel Queue Scheduling

We start with the first algorithm:

4.1. First Come First Serve (FCFS)

FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm. In this type of algorithm, the process that requests the CPU first gets the CPU first. This scheduling method can be managed with a FIFO queue.

Algorithm description

Advantages of the FCFS method:

 The simplest of the CPU scheduling algorithms
 The CPU is not interrupted
 Lowest implementation cost (the queue order never needs to be rearranged)

Disadvantages of the FCFS method:

 Long average waiting time
 Too rigid; it has no notion of priorities
 When a long process is running, the other processes behind it in the queue must wait until it is done

Example of First Come First Serve Scheduling

Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5 processes with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2, P3 at time 3 and P4 at time 4 in the ready queue. The processes and their respective arrival and burst times are given in the following table.
The turnaround time and the waiting time are calculated using the following formulas.

Turn Around Time = Completion Time – Arrival Time
Waiting Time = Turn Around Time – Burst Time
The average waiting time is determined by summing the respective waiting times of all the processes and dividing the sum by the total number of processes.
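Because the burst times of this example appear only in the table, the C sketch below uses hypothetical burst values purely to show how the FCFS waiting and turnaround times are computed; only the arrival times 0-4 are taken from the text.

#include <stdio.h>

#define N 5

int main(void)
{
    /* arrival times from the example; burst times are hypothetical */
    int arrival[N] = {0, 1, 2, 3, 4};
    int burst[N]   = {4, 3, 1, 2, 5};

    int time = 0;
    double total_wt = 0.0, total_tat = 0.0;

    /* FCFS: the processes are already listed in order of arrival */
    for (int i = 0; i < N; i++) {
        if (time < arrival[i])
            time = arrival[i];                    /* CPU idles until the process arrives */
        int completion = time + burst[i];
        int tat        = completion - arrival[i]; /* TAT = CT - AT  */
        int wt         = tat - burst[i];          /* WT  = TAT - BT */
        printf("P%d: WT=%d TAT=%d\n", i, wt, tat);
        total_wt  += wt;
        total_tat += tat;
        time = completion;
    }
    printf("Average WT=%.2f  Average TAT=%.2f\n", total_wt / N, total_tat / N);
    return 0;
}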

4.2. Shortest Job First (SJF)

Shortest Job First can be either a preemptive or a non-preemptive algorithm. In the shortest job first algorithm, the job having the shortest burst time gets the CPU first. It is a very good approach to minimizing the waiting time. It is simple to implement in batch operating systems, because there the required CPU time is known in advance, but it is not used in interactive systems, where the CPU time is not known.

For example, given the following processes with their arrival and execution times, use the SJF algorithm to schedule the processes.

At time t=0, process P1 arrives first and is given priority for execution (with an
execution time of 11 seconds).

During the time P1 is executing (from 0 to 11 seconds), processes P2 and P3


arrive (at t=3s and t=8s, respectively) and are placed in the queue.

Considering the processes in the queue while P1 is executing, the process with
the shortest execution time will be prioritized for execution immediately after
P1 completes. Since the processing time of P2 (7 seconds) is shorter than that of
P3 (19 seconds), P2 will be processed next.

At time t=11s, P1 finishes execution, and P2 is processed (with an execution time of 7 seconds). During the execution of P2 (from 11 to 18 seconds), processes P4 and P5 arrive (at t=13s and t=17s, respectively) and are added to the queue.

At this point, the queue contains processes P3, P4, and P5 with execution times of 19s, 4s, and 9s, respectively. Therefore, P4, with the shortest execution time, will be prioritized for execution next. At time t=18s, P2 completes execution, and P4 is processed (with an execution time of 4s).

With only two processes (P3 and P5) remaining in the queue, P5, with the shorter execution time, will be processed next. At time t=22s, P4 completes execution, and P5 begins processing. Finally, at time t=31s, P5 completes execution and P3, the only remaining process, is processed.

From the table above, we can calculate the average waiting time of the processes as 8.2 s.
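A minimal non-preemptive SJF simulation in C is sketched below, using the burst times from this example and taking P3's arrival at t = 8 s (as in the SRTF walkthrough of Section 4.3); under these assumptions it reproduces the 8.2 s average waiting time.

#include <stdio.h>

#define N 5

int main(void)
{
    /* P1..P5: arrival and burst times from the example */
    int arrival[N] = {0, 3, 8, 13, 17};
    int burst[N]   = {11, 7, 19, 4, 9};
    int done[N]    = {0};
    int time = 0, finished = 0;
    double total_wt = 0.0;

    while (finished < N) {
        /* pick the arrived, unfinished process with the shortest burst time */
        int pick = -1;
        for (int i = 0; i < N; i++) {
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        }
        if (pick == -1) { time++; continue; }   /* no process ready: CPU idles */

        int wt = time - arrival[pick];          /* time spent waiting in the ready queue */
        printf("P%d runs %d-%d, WT=%d\n", pick + 1, time, time + burst[pick], wt);
        total_wt += wt;
        time += burst[pick];                    /* non-preemptive: run to completion */
        done[pick] = 1;
        finished++;
    }
    printf("Average waiting time = %.1f s\n", total_wt / N);
    return 0;
}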

Advantages of Shortest Job First (SJF) Scheduling

 SJF is basically used for long-term scheduling
 The average waiting time of Shortest Job First (SJF) is less than that of the FCFS (First Come, First Serve) algorithm
 In terms of average turnaround time, it is optimal

Disadvantages of Shortest Job First (SJF) Scheduling

 It is complex to implement
 It can cause very long turnaround times
 It can hardly be deployed for short-term CPU scheduling

4.3. Shortest Remaining Time First (SRTF) algorithm

Similar to SJF (Shortest Job First), this algorithm prioritizes processes based on the remaining time needed to complete them (the total execution time minus the time already executed).
Therefore, this algorithm must accurately and continuously track how much execution time each process has already received. At the same time, CPU preemption must be applied; otherwise, the priority nature of the algorithm is lost. (This is also why this algorithm is used instead of the previous one.)
For example, given the following processes with their arrival and execution times, use the SRTF algorithm to schedule the processes.

At time t = 0, process P1 starts executing. When P1 has been executed for 3 seconds, its remaining execution time is (11 - 3 = 8), and at the same time process P2 appears. Since the remaining execution time of P2 is less than that of P1 (7 < 8), P2 is given priority to execute, and P1 must wait.

P2 executes until the 8th second, at which point its remaining execution time is (7 - 5 = 2), and at the same time process P3 appears. The remaining time of P2 is still the smallest (P1 has 8 left, P3 has 19 left, P2 has 2 left), so P2 continues to execute, and P1 and P3 must wait.

At time t = 10, P2 finishes executing, and no new processes appear. The remaining execution time of P1 is less than that of P3 (8 < 19), so P1 executes next.
P1 executes until the 13th second, at which point its remaining execution time is (8 – 3 = 5), and at the same time process P4 appears. At this point, P4 has the least remaining execution time (P1 has 5 left, P3 has 19 left, P4 has 4 left), so P4 executes, and P1 and P3 must wait.

P4 executes until the 17th second and finishes, and at the same time process P5 appears. At this point, P1 has the least remaining execution time (P1 has 5 left, P3 has 19 left, P5 has 9 left), so P1 executes, and P3 and P5 must wait.
P1 executes until the 22nd second and finishes. P3 and P5 remain, and the remaining execution time of P5 is less than that of P3 (P3 has 19 left, P5 has 9 left), so P5 is given priority to execute first, and P3 must wait.

By the 31st second, P5 finishes executing. Only P3 remains, and P3 then executes. At the 50th second, P3 finishes executing, and all processes have been completed.
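The schedule above can be checked with a short C simulation that advances one second at a time and always runs the arrived process with the least remaining time; the arrival and burst times are those of the walkthrough.

#include <stdio.h>

#define N 5

int main(void)
{
    int arrival[N]   = {0, 3, 8, 13, 17};
    int remaining[N] = {11, 7, 19, 4, 9};
    int finished = 0, time = 0;

    while (finished < N) {
        /* choose the arrived process with the shortest remaining time */
        int pick = -1;
        for (int i = 0; i < N; i++) {
            if (remaining[i] > 0 && arrival[i] <= time &&
                (pick == -1 || remaining[i] < remaining[pick]))
                pick = i;
        }
        if (pick == -1) { time++; continue; }   /* no process ready: CPU idles */

        remaining[pick]--;                      /* run the chosen process for one second */
        time++;
        if (remaining[pick] == 0) {             /* the process has just completed */
            printf("P%d completes at t=%d\n", pick + 1, time);
            finished++;
        }
    }
    return 0;
}

Running this sketch prints the same completion times as the walkthrough: P2 at t=10, P4 at t=17, P1 at t=22, P5 at t=31, and P3 at t=50.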

Advantages:

 Fast Processing: SRTF processes jobs faster than non-preemptive SJF.


 Minimized Completion Time: Shortest remaining time ensures quicker
job completion.

Disadvantages:

 Overhead: Frequent context switching consumes CPU time.


 Diminished Advantage: Overhead reduces the advantage of fast
processing.

4.4 Priority Scheduling Algorithm

• In priority scheduling algorithms, a defined priority is given to every process. The CPU starts working on higher-priority processes first. The FCFS algorithm is used if processes have equal priority.

• When the priority is the inverse of the predicted next CPU burst time, the SJF (Shortest Job First) algorithm is simply a special case of the generic priority scheduling algorithm: the priority decreases as the process execution duration increases, and vice versa.

• Algorithms can be preemptive or non-preemptive:

- The preemptive algorithm gives the CPU to the new process if the priority of the newly arrived process is higher than that of the currently executing process.

- The non-preemptive algorithm simply places new processes in the ready queue; even if the newly arrived process has a higher priority than the currently running process, it must wait in the queue until the running process finishes.

• Priority is represented by numbers. Here we specify that smaller numbers


indicate higher priority.

The priority scheduling algorithm has an issue with starvation and indefinite blocking. A blocked process is one that is ready to execute but lacks the CPU. Priority scheduling methods may force certain low-priority processes to wait indefinitely; these processes may never receive the CPU, or may receive it only very rarely.
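As an illustrative sketch, non-preemptive priority selection can be written as a single selection step in C, with smaller numbers meaning higher priority and ties falling back to FCFS order; the process data below are invented.

#include <stdio.h>

#define N 4

int main(void)
{
    /* invented processes: arrival time, burst time, priority (smaller = higher) */
    int arrival[N]  = {0, 1, 2, 3};
    int burst[N]    = {5, 3, 8, 2};
    int priority[N] = {2, 1, 3, 1};
    int done[N] = {0};
    int time = 0, finished = 0;

    while (finished < N) {
        int pick = -1;
        for (int i = 0; i < N; i++) {
            if (done[i] || arrival[i] > time) continue;
            /* strict '<' keeps FCFS order when priorities are equal,
               because the processes are stored in arrival order */
            if (pick == -1 || priority[i] < priority[pick])
                pick = i;
        }
        if (pick == -1) { time++; continue; }   /* nothing has arrived yet: idle */

        printf("P%d runs %d-%d (priority %d)\n",
               pick + 1, time, time + burst[pick], priority[pick]);
        time += burst[pick];                    /* non-preemptive: run to completion */
        done[pick] = 1;
        finished++;
    }
    return 0;
}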

4.5. Round Robin algorithm (RR)

The Round Robin scheduling algorithm was created specifically for multitasking systems. It is essentially comparable to FCFS, but it adds a time-slicing mechanism that enables the system to move between tasks.

• Ideas:

- A fixed time slice (the time quantum, Tq) is defined, typically between 10 and 100 milliseconds.

- The ready queue is treated as a circular queue. The scheduler goes around the queue, giving the CPU to each process for at most one time quantum.

- The queue is handled in FIFO form: new arrivals are added to the tail of the queue. The scheduler executes the process at the head of the queue, interrupts it when one Tq has expired, and then moves on to another process.

• Possible circumstances:

- If the time a process needs (Tprocess) is less than Tq, the process releases the CPU on its own; it is removed from the queue, and the CPU moves on to another process.

- If Tprocess > Tq, a timer interrupt occurs, the process is moved to the tail of the queue, and the CPU switches to the process now at the head of the queue.
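A compact C sketch of the Round Robin idea described above (a circular FIFO queue with a fixed quantum Tq) follows; the burst times are invented and, for simplicity, all processes are assumed to arrive at t = 0.

#include <stdio.h>

#define N  3
#define TQ 4                             /* time quantum, e.g. 4 time units */

int main(void)
{
    int remaining[N] = {10, 5, 8};       /* invented burst times, all arriving at t = 0 */
    int queue[64], head = 0, tail = 0;   /* simple FIFO queue, large enough for this example */
    int time = 0, in_queue = N;

    for (int i = 0; i < N; i++)          /* initial FIFO order: P1, P2, P3 */
        queue[tail++] = i;

    while (in_queue > 0) {
        int p = queue[head++];           /* take the process at the head of the queue */
        int slice = remaining[p] < TQ ? remaining[p] : TQ;
        printf("P%d runs %d-%d\n", p + 1, time, time + slice);
        time += slice;
        remaining[p] -= slice;
        if (remaining[p] > 0)
            queue[tail++] = p;           /* quantum expired: back to the tail of the queue */
        else
            in_queue--;                  /* finished within its quantum: leaves the queue */
    }
    return 0;
}

The choice of Tq is a trade-off: a very small quantum increases the context-switching overhead, while a very large quantum makes Round Robin behave almost like FCFS.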

CHAPTER 5. CONCLUSION

After studying and completing the report, our team achieved the following key results:

5.1 Purpose of CPU Scheduling:

- Efficiency: CPU scheduling allows one process to use the CPU while
another process is waiting (e.g., for I/O).
- Speed: It aims to make the system faster by minimizing idle time.
- Fairness: Ensures that all processes get a fair share of CPU time.

5.2 Scheduler Types:

 Long-Term Scheduler:
Selects processes from the pool of submitted jobs for admission into the system.
 Medium-Term Scheduler:
Manages the movement of processes between main memory and secondary storage (swapping).
 Short-Term Scheduler:
Selects the next process to run from the ready queue. Executes frequently (e.g., after clock interrupts or I/O interrupts).

5.3 Scheduling Goals:

Maximizing Throughput: Total work completed per time unit.

Minimizing Wait Time: Reducing the time a process waits from being ready
until it starts execution.
Minimizing Latency/Response Time: Time from submission to completion
(batch) or until the system responds (interactive).
Maximizing Fairness: Equal CPU time for each process or appropriate times
based on priority and workload.

5.4 Scheduling Algorithms:


First Come First Serve (FCFS): Simple but may lead to poor average
turnaround time.
Shortest Job First (SJF): Minimizes average turnaround time but requires
knowledge of burst times.
Priority Scheduling: Assigns priorities to processes.
Round Robin: Time-sliced execution.

Shortest Remaining Time First (SRTF): minimizes average turnaround time by choosing the process that has the shortest burst time left to execute.

Reference
[1] Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. Operating System Concepts, 10th Edition.
[2] www.wikipedia.org
[3] Mohan, Sumit and Singh, Rajnesh. Optimized Time Quantum for Dynamic Round Robin Algorithm. International Journal of Advance Research in Computer Science and Management, vol. 04, no. 03, 06/2018, pp. 318-321. DOI: 10.18231/2454-9150.2018.0343.
[4] https://www.studytonight.com/operating-system/cpu-scheduling
