Final Report Operating System - Group 9
INTERNATIONAL SCHOOL
GROUP 9
Ha Noi, 5/2024
Table of Contents
INTRODUCTION
LIST OF SYMBOLS AND ABBREVIATIONS
Division of work
- Process management
- Diagrams
- Documents and slides
Introduction.
The OS selects processes from the queue using appropriate scheduling algorithms to
guarantee efficient processing and short wait times. These algorithms minimize
waiting times and maximize CPU utilization, enabling processes to complete their
duties quickly. Our team will investigate the subject of "Study on CPU scheduling
algorithms" in light of the special qualities that process scheduling algorithms provide.
LIST OF SYMBOLS AND ABBREVIATIONS
I/O Input/Output
RR Round Robin
CHAPTER 1: DEFINITION OF OPERATING SYSTEM.
1.1. Introduction
Computers and mobile devices employ operating systems, which are software
programs designed to control and manage hardware components and software data.
1.2. Functions of the Operating System:
- Process management
- Memory management
- User interaction
- Carrying out fundamental tasks such as reading, writing, organizing files, and storing data.
- Providing the machine with a simple command system to operate; these commands are referred to as system commands.
Operating systems may be categorized based on how many applications run at
once and how users interact with them. A single-user, single-tasking operating
system permits only one program to execute at a time; multiple programs must be
run one after another, and only one user may log into the system per work session.
CHAPTER 2: PROCESS MANAGEMENT
A process may also ask for system resources like memory, devices, and CPU
time in order to do its work. The operating system employs a scheduler to
choose which process to run next and when to suspend the execution of a given
process.
2.2 Process States:
Waiting: When the process is waiting for I/O (e.g., when calling print (),
scanf() functions in C).
Ready: When the process is interrupted by the Short-term Scheduler.
Reasons for interruption may include Clock Interrupt, I/O Interrupt,
Operating System Call.
Terminated: When the application has finished execution: When it
encounters the exit command.
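As a rough illustration only, the states above can be modelled in C as an enumeration kept in each process's control block (the identifiers below are invented for this example, not taken from any particular kernel):

/* Illustrative sketch: process states as a PCB might record them.     */
/* Identifiers are invented for this example, not from a real kernel.  */
#include <stdio.h>

typedef enum {
    STATE_NEW,        /* the process is being created                   */
    STATE_READY,      /* waiting in the ready queue for the CPU         */
    STATE_RUNNING,    /* instructions are currently being executed      */
    STATE_WAITING,    /* blocked on I/O, e.g. printf()/scanf()          */
    STATE_TERMINATED  /* finished execution (reached exit)              */
} proc_state;

int main(void) {
    proc_state s = STATE_RUNNING;
    s = STATE_READY;     /* e.g. a clock interrupt moves running -> ready */
    printf("state = %d\n", s);
    return 0;
}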
2.3. Characteristics of a Process:
b/ CPU-boundedness: A process's primary tasks while it is operating are
processing and computing; few input/output operations take place during this
time.
c/ Interactive vs. Batch Processing: When both interactive and batch processes
are involved, batch operations could be suspended in favor of interactive
processes, which must be finished as soon as possible.
d/ Process CPU Time Utilization: The processes that have received the least
CPU time are the ones that are waiting the longest.
CHAPTER 3: PROCESS SCHEDULING
Process execution consists of a cycle of CPU execution and I/O waiting;
processes alternate between these two states. Execution begins with a CPU
burst, which is followed by an I/O burst, then another CPU burst, another I/O
burst, and so on. The final CPU burst ends with a system request to terminate
execution.
The duration of CPU bursts has been measured extensively. Although they vary
greatly from one process to another and from one computer to another, they
tend to have a frequency curve like the one shown below. The curve is typically
characterized as exponential or hyper-exponential, with many short CPU bursts
and a small number of long CPU bursts.
Figure 6. Histogram of CPU burst durations
Process Selection: It chooses which process will run next from the ready queue.
Time Allocation: It determines how much CPU time each process will receive.
Order of Execution: The order in which processes are selected and executed
can significantly affect the performance and responsiveness of the system.
Scheduling Algorithms: The CPU scheduler employs various algorithms like
First-Come, First-Served (FCFS), Shortest Job First (SJF), Priority Scheduling,
Round Robin, and others to decide the order of process execution
Context Switching:
- The dispatcher performs context switching, which involves saving the state of
the currently running process and loading the state of the next process to be
executed.
- Context switching allows the operating system to switch between processes
efficiently.
- The dispatcher switches the CPU to user mode when transferring control to a
user process.
- In user mode, the process can execute its user-level instructions.
- After context switching, the dispatcher ensures that the CPU jumps to the
appropriate location within the user program to resume execution.
This step is crucial for restarting the user program from where it left off.
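For intuition only, the sketch below uses the POSIX <ucontext.h> routines to save one execution context and load another at user level, which loosely mirrors what the dispatcher does during a context switch; a real dispatcher additionally switches address spaces, kernel stacks, and privilege modes.

/* User-level context-switch sketch using POSIX ucontext (Linux).        */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];            /* stack for the "user process" */

static void user_task(void) {
    printf("user task: running after dispatch\n");
    /* Returning resumes main_ctx because uc_link points to it. */
}

int main(void) {
    getcontext(&task_ctx);                /* initialise the new context   */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;
    makecontext(&task_ctx, user_task, 0);

    printf("dispatcher: saving current state, loading the task\n");
    swapcontext(&main_ctx, &task_ctx);    /* the actual "context switch"  */
    printf("dispatcher: control returned, resuming where we left off\n");
    return 0;
}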
Some of the scheduling criteria for processes that need attention are:
CPU Utilization: The system should keep the CPU as busy as possible; in
practice, CPU utilization ranges from about 40% to 90%. (Max)
Turnaround Time: The total time from when a process enters the system until
it completes; the shorter the turnaround time, the greater the throughput.
Turnaround Time = memory access waiting time + ready queue waiting time +
CPU execution time + I/O execution time. (Min)
Waiting Time: The total time spent waiting in the ready queue, which
needs to be as small as possible. The longer the waiting time, the longer
the turnaround time, leading to reduced throughput. (Min)
Response Time: The time from when a user submits a request until the
first response is received, which should be minimized. (Min).
3.6. Types of CPU Scheduling
Pre-emptive Scheduling
Pre-emptive scheduling involves changing the state of a process: it may go
from the running state to the ready state or from the waiting state to the
ready state. The CPU executes a process for a set amount of time, after which
the process must wait for its next turn. The resources are allotted to the
process for a brief period; if any CPU burst time remains, the process returns
to the ready queue, otherwise the resources are released. Round Robin and
pre-emptive SJF are examples of pre-emptive scheduling algorithms.
Comparison of preemptive and non-preemptive scheduling:

DESCRIPTION
- Preemptive scheduling: processes with higher priorities are executed first; a process can be interrupted by another process in the middle of its execution.
- Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

OVERHEAD OF SWITCHING THE PROCESS
- Preemptive scheduling has the overhead of switching the process from the ready state to the running state and from the running state to the ready state.
- Non-preemptive scheduling has no overhead of switching the process from the running state to the ready state.

CPU UTILIZATION
- In preemptive scheduling, CPU utilization is higher than in non-preemptive scheduling.
- In non-preemptive scheduling, CPU utilization is lower than in preemptive scheduling.

PROCESSING
- In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process has to wait for a long time and may starve.
- In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, processes with small burst times may starve.

COST
- Preemptive scheduling is more costly than non-preemptive scheduling because it has to maintain the integrity of shared data.
- Non-preemptive scheduling is less costly than preemptive scheduling because it does not have to maintain the integrity of shared data.
1) Arrival Time
The time when the process arrives in the system and enters the ready queue is
called the arrival time of the process. In simple words, the time at which a
process becomes available for CPU scheduling is known as the arrival time.
2) Completion Time
The time when the process is done with all its execution and enters the
terminated state is called the completion time of the process. It can also be
defined as the time when a process ends.
3) Burst Time
The time for which the process needs to be in the running state is known as
the burst time of the process. In other words, the burst time is the amount of
CPU time a process requires for its execution.
4) Turn Around Time
Turn around time can be defined as the total time the process remains in the
main memory of the system. A process in the ready, waiting, or running state
resides in main memory, so the time for which the process remains in these
states is known as the turn around time of the process. In simple words, it is
the time that a process spends between entering the ready state and entering
the terminated state.
It can be calculated as follows:
Turn Around Time = Completion Time − Arrival Time
TAT = CT − AT
5) Waiting Time
The waiting time is the time for which a process waits to go into the running
state. It is the sum of the time spent by the process in the ready state and
the waiting state. Another way of calculating it is as follows:
Waiting Time = Turn Around Time − Burst Time
WT = TAT − BT
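For instance (the numbers are illustrative only): a process that arrives at t = 2, needs a burst of 5 time units, and completes at t = 10 has TAT = 10 − 2 = 8 and WT = 8 − 5 = 3.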
6) Response Time
The time difference between the first time a process goes into the running
state and the arrival time of the process is called the response time of the
process.
7) Gantt Chart
The Gantt chart is used to represent the currently executing process at every
single unit of time. This time unit is the smallest unit of time in the processor.
What are Scheduling algorithms?
CPU scheduling algorithms are a set of protocols in an operating system that
determine the order in which processes access the central processing unit (CPU)
for execution.
First Come First Serve is the full form of FCFS. It is the simplest CPU
scheduling algorithm: the process that requests the CPU first is allocated the
CPU first. This scheduling method can be managed with a FIFO queue.
Algorithms description
Turn Around Time = Completion Time − Arrival Time
Waiting Time = Turnaround Time − Burst Time
The average waiting time is determined by summing the respective waiting
times of all the processes and dividing the sum by the total number of
processes.
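A minimal FCFS sketch in C is given below; the three sample processes (arrival/burst pairs 0/5, 2/3, 4/8) are illustrative and not taken from the report's tables. Processes are served strictly in arrival order, and the formulas above yield the turnaround, waiting, and average waiting times.

/* Minimal FCFS scheduling sketch; the sample data are illustrative.     */
#include <stdio.h>

typedef struct { int at, bt, ct, tat, wt; } proc;  /* arrival, burst, completion, turnaround, waiting */

int main(void) {
    proc p[] = { {0, 5, 0, 0, 0}, {2, 3, 0, 0, 0}, {4, 8, 0, 0, 0} };
    int n = (int)(sizeof p / sizeof p[0]);
    int t = 0;                            /* current time on the Gantt chart      */
    double total_wt = 0.0;

    for (int i = 0; i < n; i++) {         /* already sorted by arrival time       */
        if (t < p[i].at) t = p[i].at;     /* CPU idles until the process arrives  */
        t += p[i].bt;                     /* run the whole burst (non-preemptive) */
        p[i].ct  = t;
        p[i].tat = p[i].ct - p[i].at;     /* TAT = CT - AT */
        p[i].wt  = p[i].tat - p[i].bt;    /* WT  = TAT - BT */
        total_wt += p[i].wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, p[i].ct, p[i].tat, p[i].wt);
    }
    printf("Average waiting time = %.2f\n", total_wt / n);
    return 0;
}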
For example, given processes with arrival and execution times, use the SJF
algorithm to schedule the processes.
At time t=0, process P1 arrives first and is given priority for execution (with an
execution time of 11 seconds).
Considering the processes in the queue while P1 is executing, the process with
the shortest execution time will be prioritized for execution immediately after
P1 completes. Since the processing time of P2 (7 seconds) is shorter than that of
P3 (19 seconds), P2 will be processed next.
At this point, the queue contains processes P3, P4, and P5 with execution times
of 19 s, 4 s, and 9 s, respectively. Therefore, P4, with the shortest execution
time, will be prioritized for execution next. At time t=18 s, P2 completes
execution, and P4 begins processing (with an execution time of 4 s).
With only two processes (P3 and P5) remaining in the queue, P5, with the
shorter execution time, will be processed next. At time t=22 s, P4 completes
execution, and P5 begins processing. Finally, at time t=31 s, only P3 is left
in the queue, so P3 is processed last.
From the table above, we can calculate the average waiting time of the
processes as 8.2 s.
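To make the walkthrough concrete, here is a compact non-preemptive SJF sketch in C. The burst times (11, 7, 19, 4, 9 s) are those of P1–P5 above; the arrival times 0, 3, 8, 13, 17 are our reading of the SRTF walkthrough in the next section (P4's arrival at t = 13 in particular is inferred), so treat them as assumptions. With these inputs the program reproduces the 8.2 s average waiting time.

/* Non-preemptive SJF sketch for the example processes P1..P5.          */
/* Burst times are from the report; arrival times are our inference.    */
#include <stdio.h>

#define N 5

int main(void) {
    int at[N] = {0, 3, 8, 13, 17};        /* assumed arrival times        */
    int bt[N] = {11, 7, 19, 4, 9};        /* execution (burst) times      */
    int done[N] = {0}, ct[N];
    int t = 0, finished = 0;
    double total_wt = 0.0;

    while (finished < N) {
        int best = -1;
        for (int i = 0; i < N; i++)       /* shortest arrived, unfinished job */
            if (!done[i] && at[i] <= t && (best < 0 || bt[i] < bt[best]))
                best = i;
        if (best < 0) { t++; continue; }  /* CPU idle until the next arrival  */
        t += bt[best];                    /* run the job to completion        */
        ct[best] = t;
        done[best] = 1;
        finished++;
        int wt = ct[best] - at[best] - bt[best];   /* WT = TAT - BT */
        total_wt += wt;
        printf("P%d: completion=%d waiting=%d\n", best + 1, ct[best], wt);
    }
    printf("Average waiting time = %.1f\n", total_wt / N);  /* 8.2 with these inputs */
    return 0;
}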
Otherwise, if a shorter job arrives while a longer one is already running, the
"shortest job first" nature will be lost; this is why the preemptive SRTF
(Shortest Remaining Time First) variant is used instead of the non-preemptive
SJF algorithm.
For example, given processes with arrival and execution times, use the SRTF
algorithm to schedule the processes.
At time t = 0, the process P1 starts executing. When P1 has been executed for 3
seconds, the remaining execution time for process P1 is (11-3 = 8), and at the
same time, process P2 appears. Since the remaining execution time of P2 is less
than P1 (7 < 8), P2 is given priority to execute, and P1 must wait.
P2 executes until the 8th second; the remaining execution time of P2 is
(7 − 5 = 2), and at the same time process P3 appears. However, the remaining
time of P2 is still less than that of P1 and P3 (P1 has 8 left, P3 has 19 left,
P2 has 2 left). Therefore, P2 continues to execute, and P1 and P3 must wait.
P2 finishes at the 10th second. P1, whose remaining time (8) is less than that
of P3 (19), resumes execution. At the 13th second, process P4 appears with an
execution time of 4 seconds, which is less than P1's remaining 5 seconds, so
P4 preempts P1.
P4 executes until the 17th second and finishes, and at the same time process P5
appears. At this point, the remaining execution time of P1 is the least compared
to P5 and P3 (P1 has 5 left, P3 has 19 left, P5 has 9 left). Therefore, P1 will
execute, and P3 and P5 must wait.
P1 executes until the 22nd second and finishes. Remaining are P3 and P5, and
the remaining execution time of P5 is less than that of P3 (P3 has 19 left, P5
has 9 left), so P5 is given priority to execute first, and P3 must wait.
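A tick-by-tick SRTF sketch in C follows; at every time unit it selects the arrived process with the least remaining time. The burst times and the arrivals of P1, P2, P3, and P5 (0, 3, 8, 17) come from the walkthrough, while P4 arriving at t = 13 is our assumption; with these inputs the completion times match the narrative (P2 at 10, P4 at 17, P1 at 22, P5 at 31, P3 at 50).

/* Preemptive SRTF sketch for the example processes P1..P5.             */
/* Burst times and most arrival times are from the walkthrough;         */
/* P4's arrival at t = 13 is our inference.                             */
#include <stdio.h>

#define N 5

int main(void) {
    int at[N]  = {0, 3, 8, 13, 17};
    int bt[N]  = {11, 7, 19, 4, 9};
    int rem[N], ct[N];
    int finished = 0, t = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (finished < N) {
        int best = -1;
        for (int i = 0; i < N; i++)      /* arrived process with least remaining time */
            if (rem[i] > 0 && at[i] <= t && (best < 0 || rem[i] < rem[best]))
                best = i;
        if (best < 0) { t++; continue; } /* CPU idle                                   */
        rem[best]--;                     /* run the chosen process for one time unit   */
        t++;
        if (rem[best] == 0) {            /* process finished                           */
            ct[best] = t;
            finished++;
            printf("P%d finishes at t=%d\n", best + 1, ct[best]);
        }
    }
    return 0;
}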
Advantages:
Disadvantages:
• When the priority is the inverse of the predicted next execution time, the
SJF (Shortest Job First) algorithm is simply a special case of the general
priority scheduling algorithm: the priority decreases as the process execution
duration increases, and vice versa.
- The preemptive algorithm will prioritize the CPU for the new process if the
priority of the newly arrived process is higher than that of the currently
executed process.
- The non-preemptive algorithm simply places the newly arrived process in the
ready queue. If the newly arrived process has a higher priority than the
pending processes, it is placed at the head of the ready queue, but it still
waits until the CPU is released.
Priority scheduling suffers from starvation (indefinite blocking). A blocked
process is one that is ready to execute but cannot obtain the CPU. Priority
scheduling can leave certain low-priority processes waiting indefinitely; they
may never receive the CPU, or receive it only after a very long delay.
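A minimal non-preemptive priority sketch in C is shown below; the process data and the "smaller number = higher priority" convention are our assumptions. Whenever the CPU becomes free, the arrived process with the best priority runs to completion; a low-priority process such as P3 keeps being pushed back while higher-priority work is pending, which is exactly the starvation risk described above.

/* Non-preemptive priority scheduling sketch; data and the              */
/* "smaller value = higher priority" convention are illustrative.       */
#include <stdio.h>

#define N 4

int main(void) {
    int at[N]  = {0, 1, 2, 3};       /* arrival times                   */
    int bt[N]  = {5, 3, 8, 2};       /* burst times                     */
    int pr[N]  = {2, 1, 4, 3};       /* priorities (1 = highest)        */
    int done[N] = {0};
    int t = 0;

    for (int finished = 0; finished < N; ) {
        int best = -1;
        for (int i = 0; i < N; i++)   /* highest-priority arrived process */
            if (!done[i] && at[i] <= t && (best < 0 || pr[i] < pr[best]))
                best = i;
        if (best < 0) { t++; continue; }
        printf("t=%2d: run P%d (priority %d) for %d units\n", t, best + 1, pr[best], bt[best]);
        t += bt[best];                /* runs to completion: non-preemptive */
        done[best] = 1;
        finished++;
    }
    return 0;
}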
• Ideas:
- The ready queue is treated as a circular queue. The scheduler goes around the
queue, allocating the CPU to each process for at most one time quantum (Tq).
- The queue is serviced in FIFO order: new arrivals are added to the tail of
the queue. The scheduler executes the process at the head of the queue,
interrupts it when its time quantum Tq expires, and then moves on to the next
process.
• Possible circumstances (see the sketch after this list):
- If Tprocess < Tq (the process finishes within its quantum), the process
releases the CPU voluntarily, it is removed from the queue, and the CPU is
switched to another process.
- If Tprocess > Tq, a timer interrupt occurs, the process is moved to the tail
of the queue, and the CPU is handed to the process now at the head of the queue.
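The two circumstances above can be seen in a small round-robin sketch in C (the quantum Tq and the process data are illustrative): the ready queue is serviced in FIFO order, a process that finishes within its quantum leaves the queue, and one whose quantum expires is moved to the tail.

/* Round Robin sketch with an illustrative time quantum and data.       */
#include <stdio.h>

#define N  3
#define TQ 4                              /* time quantum Tq                   */

int main(void) {
    int bt[N]  = {10, 5, 8};              /* burst times (all arrive at t = 0) */
    int rem[N];
    int queue[64], head = 0, tail = 0;    /* simple FIFO ready queue           */
    int t = 0;

    for (int i = 0; i < N; i++) { rem[i] = bt[i]; queue[tail++] = i; }

    while (head < tail) {
        int i = queue[head++];            /* take the process at the head      */
        int run = rem[i] < TQ ? rem[i] : TQ;
        t += run;
        rem[i] -= run;
        if (rem[i] > 0)
            queue[tail++] = i;            /* quantum expired: back to the tail */
        else
            printf("P%d finishes at t=%d\n", i + 1, t);
    }
    return 0;
}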
CHAPTER 5. CONCLUSION
After studying and completing the report, our team achieved the following key
results:
- Efficiency: CPU scheduling allows one process to use the CPU while
another process is waiting (e.g., for I/O).
- Speed: It aims to make the system faster by minimizing idle time.
- Fairness: Ensures that all processes get a fair share of CPU time.
Long-Term Scheduler:
Selects the processes from the pool of jobs that have been submitted for
inclusion into the system.
Medium-Term Scheduler:
Manages the movement of processes between main memory and secondary
storage (swapping).
Short-Term Scheduler:
Selects the next process to run from the ready queue. Executes frequently (e.g.,
after clock interrupts or I/O interrupts).
Minimizing Wait Time: Reducing the time a process waits from being ready
until it starts execution.
Minimizing Latency/Response Time: Time from submission to completion
(batch) or until the system responds (interactive).
Maximizing Fairness: Equal CPU time for each process or appropriate times
based on priority and workload.