OS CPU Scheduling
OS Report
Submitted by
Kedharnadh K [RA2211003011694]
Sanjana Reddy Sangam [RA2211003011710]
Dr. D. VIJI
Assistant Professor, Department of Computing Technologies
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE ENGINEERING
SCHOOL OF COMPUTING
NOVEMBER 2023
BONAFIDE CERTIFICATE
Certified that this B. Tech project report titled “CPU Scheduling Simulation” is
the bonafide work of Mr. Kedharnadh K [Reg. No. RA2211003011694] and Ms.
Sanjana Reddy Sangam [Reg. No. RA2211003011710] who carried out the project
work under my supervision. Certified further that, to the best of my knowledge, the work
reported herein does not form part of any other thesis or dissertation on the basis of which
a degree or award was conferred on an earlier occasion for this or any other candidate.
ABSTRACT
ACKNOWLEDGEMENT
TABLE OF CONTENTS
CHAPTER NO.    CONTENTS
1              INTRODUCTION
               1.1 Motivation
               1.2 Objective
               1.3 Problem Statement
               1.4 Challenges
2              LITERATURE SURVEY
3              REQUIREMENT ANALYSIS
4              ARCHITECTURE & DESIGN
5              CODE SNIPPETS
6              OUTPUT
7              CONCLUSION
8              REFERENCES
1. INTRODUCTION
1.1 Motivation:
The real-world impact of CPU scheduling inefficiencies can be felt in various domains, from
server farms striving to deliver swift responses to millions of user requests to embedded
systems managing critical tasks in healthcare and automotive applications. This project aims
to address these challenges by delving into the heart of CPU scheduling, seeking to unveil the
dynamics of scheduling algorithms and their far-reaching effects on system performance.
1.2 Objective:
The primary objective of this CPU Scheduling Simulation project is to gain a profound
understanding of CPU scheduling algorithms and their practical implications in the world of
operating systems.
1.3 Problem Statement:
CPU scheduling is a quintessential problem in the field of operating systems. The problem
statement can be encapsulated in the question of how to effectively allocate CPU time to
processes to ensure optimal system performance and resource utilization. The challenge lies
in selecting the most suitable scheduling algorithm for a given scenario, taking into account
factors such as turnaround time, response time, fairness, and system throughput. This project
seeks to address the problem statement by simulating and evaluating various scheduling
algorithms, ultimately offering insights into their strengths, weaknesses, and real-world
applicability.
1.4 Challenges:
The challenges encountered in this project are multifaceted. Developing a CPU scheduling
simulator that accurately emulates real-world scenarios and processes is a complex task. It
involves handling issues related to process generation, time-sharing, and data collection, all
while ensuring that the simulation reflects the dynamic nature of modern operating systems.
Additionally, the selection and implementation of scheduling algorithms pose challenges as
we aim to represent their characteristics faithfully. These challenges, along with others
encountered during the project, will be discussed in greater detail in subsequent sections,
offering a glimpse into the problem-solving and optimization processes undertaken.
2. LITERATURE SURVEY
The field of CPU scheduling has a rich history dating back to the inception of operating
systems. In the early days of computing, CPU scheduling was a relatively straightforward
task. Early operating systems, such as the Batch Processing System, primarily employed a
First-Come, First-Served (FCFS) scheduling policy, where processes were executed in the
order they arrived. However, as computing environments grew in complexity and diversity, it
became evident that more sophisticated scheduling mechanisms were required to optimize
system performance and resource utilization.
Over the years, a multitude of scheduling algorithms have been proposed and researched,
each with its unique set of principles and trade-offs. The historical perspective of CPU
scheduling provides valuable insights into the evolution of scheduling policies, underscoring
the need for flexibility and adaptability in managing CPU resources.
Classic scheduling algorithms have made a profound impact on the field of operating
systems. One of the earliest and most celebrated algorithms is Shortest Job First (SJF),
which aims to minimize the average waiting time by always selecting the process with the
shortest execution time to run next. Its strength lies in
its optimality for minimizing waiting times, but it is challenged by the difficulty of accurately
predicting execution times.
Round Robin (RR), a staple of early time-sharing systems, grants each process a fixed time
quantum in turn. RR ensures fairness and
responsiveness in system performance, but it may introduce overhead and potentially reduce
efficiency due to frequent context switches.
Recent developments in CPU scheduling have adapted these classic algorithms and
introduced
new policies to meet the demands of modern computing environments. Multilevel Feedback
Queue (MLFQ) scheduling, introduced by F. J. Corbato in the 1960s, has been refined over
time and continues to be relevant for managing dynamic workloads in time-sharing systems.
A wealth of research papers and comparative studies have been published to assess the
performance of scheduling algorithms under various scenarios. These studies have delved
into factors such as system throughput, turnaround time, response time, fairness, and
adaptability to different workloads. These comparative studies provide valuable insights into
the relative strengths and weaknesses of various scheduling policies, aiding in the selection of
the most suitable algorithm for specific system requirements.
As we look to the future, the literature survey reveals a continued interest in improving CPU
scheduling algorithms. Researchers are exploring innovative approaches, including the
integration of machine learning and artificial intelligence techniques, to predict process
behavior and make dynamic scheduling decisions. The emergence of heterogeneous and
multi-core hardware is likely to keep driving this evolution.
3. REQUIREMENTS
3.1 Analysis
In a CPU Scheduling Simulation project, the analysis phase is a critical step where you
examine the results of your simulation, draw conclusions, and gain insights into the
performance of different scheduling algorithms. This phase involves interpreting the data
generated during the simulation and evaluating the impact of various scheduling policies.
Here's a breakdown of the analysis process:
1. Data Collection:
Start by collecting the data generated by your simulation. This data typically includes
information on process execution times, waiting times, turnaround times, and other relevant
metrics. Ensure that the data is well-organized and accessible for analysis.
2. Performance Metrics:
Define the performance metrics you intend to use for the evaluation of scheduling algorithms.
Common metrics include:
Turnaround Time: The total time a process takes to complete, from arrival to termination.
Response Time: The time from a process's arrival until it is first given the CPU.
Waiting Time: The cumulative time a process spends waiting in the ready queue.
CPU Utilization: The percentage of time the CPU is actively executing processes.
(A short sketch showing how the average values of these metrics are computed appears after this list.)
3. Data Visualization:
Utilize data visualization tools and techniques to present the simulation results in a clear and
understandable manner. Create plots, charts, graphs, and tables to visualize the performance
of different scheduling algorithms. Visual representations can make it easier to identify trends
and patterns.
4. Comparative Analysis:
Compare the performance of various scheduling algorithms based on the collected data.
Analyze how different algorithms perform in terms of the defined metrics. Pay attention to
variations in turnaround times, response times, and other critical parameters.
5. Identifying Trade-Offs:
Identify the trade-offs associated with each scheduling algorithm. For instance, some
algorithms may excel in minimizing response times but at the cost of potentially higher
waiting times. Evaluate which trade-offs are acceptable for different system requirements and
workloads.
6. Real-World Relevance:
Discuss the practical implications of your findings. Consider how the performance of
scheduling algorithms aligns with real-world scenarios and system types. Explore the impact
on user experience and system efficiency.
8. Recommendations:
Offer recommendations for the selection of scheduling algorithms based on your findings.
Consider the trade-offs and priorities of different systems when suggesting the most suitable
algorithm for specific applications.
9. Limitations:
Acknowledge the limitations of your simulation and analysis. Discuss factors that may have
affected the results, such as simplifications in the simulation model, assumptions made, or
constraints of the software and hardware used.
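To make these definitions concrete, the following minimal Java sketch computes the average
turnaround time and average waiting time for one small, hypothetical FCFS run. The class name
and the sample arrival, burst, and completion values are invented for illustration and are not
taken from the simulation itself.

public class MetricsExample {
    public static void main(String[] args) {
        // Per-process times for one hypothetical FCFS run (illustrative values only).
        int[] arrival    = {0, 1, 2};
        int[] burst      = {5, 3, 8};
        int[] completion = {5, 8, 16};

        double totalTurnaround = 0, totalWaiting = 0;
        for (int i = 0; i < arrival.length; i++) {
            int turnaround = completion[i] - arrival[i]; // arrival-to-termination time
            int waiting    = turnaround - burst[i];      // time spent waiting in the ready queue
            totalTurnaround += turnaround;
            totalWaiting    += waiting;
        }
        System.out.printf("Average turnaround time: %.2f%n", totalTurnaround / arrival.length);
        System.out.printf("Average waiting time:    %.2f%n", totalWaiting / arrival.length);
    }
}

For these three sample processes the program prints an average turnaround time of 8.67 and an
average waiting time of 3.33 time units.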
Hardware and Software Requirements:
Computer System: You will need a computer system with sufficient processing power and
memory to run the CPU scheduling simulation software and associated tools efficiently. The
specific hardware requirements may vary based on the complexity of your simulation.
Storage Space: Adequate storage space is required to store the simulation program, data, and
any generated logs or results.
Input Devices: A standard keyboard and mouse or other input devices for interacting with the
simulation software.
Operating System: You need an operating system compatible with the simulation software
and programming tools you plan to use. Popular choices include Windows, Linux, or macOS.
Data Visualization Tools: To present your simulation results effectively, consider using data
visualization tools and libraries such as Matplotlib for Python or ggplot2 for R.
Documentation Tools: Utilize documentation tools or formats to create and maintain project
documentation, such as Markdown, LaTeX, or Microsoft Word.
Project-Specific Requirements:
Clear Objectives: Define clear objectives for your CPU scheduling simulation project.
Determine what specific aspects of CPU scheduling you intend to explore and the goals you
aim to achieve.
Scheduling Algorithms: Decide which scheduling algorithms you want to simulate, as well as
their specific parameters and characteristics.
Data Generation: Specify how you will generate or acquire the data needed for the
simulation. This includes defining the characteristics of processes, their arrival times, burst
times, and any other relevant attributes (a minimal random workload generator is sketched
after this list).
Simulation Logic: Define the logic and rules for how the simulation will progress, including
how scheduling decisions will be made, time quantum management (if applicable), and how
processes will transition between states.
Performance Metrics: Determine the performance metrics you will use to evaluate the
effectiveness of different scheduling algorithms. Common metrics include turnaround time,
response time, waiting time, and CPU utilization.
Simulation Output: Decide on the format and presentation of the simulation output. This may
include generating logs, reports, graphs, or other visual representations of the simulation
results.
User Interface (Optional): If your project includes a user interface for interaction or
visualization, define the requirements for this component.
Testing and Validation: Establish a testing plan to validate the accuracy and reliability of the
simulation. This may involve comparing the simulation results with analytical or theoretical
expectations.
Project Timeline: Develop a project timeline that outlines the key milestones, tasks, and
deadlines for the project, including the development, testing, and documentation phases.
Resources: Identify any external resources, research papers, textbooks, or reference materials
that will be crucial for understanding scheduling algorithms and implementing the
simulation.
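As noted under Data Generation above, a simple random generator is usually enough to produce a
test workload. The sketch below is only an illustration of that idea; the class name, the fixed
seed, and the attribute ranges are assumptions rather than the project's actual generator.

import java.util.Random;

public class WorkloadGenerator {
    public static void main(String[] args) {
        Random random = new Random(42);          // fixed seed so runs are repeatable
        int processCount = 5;

        for (int i = 1; i <= processCount; i++) {
            int arrivalTime   = random.nextInt(10);      // arrives within the first 10 time units
            int burstTime     = 1 + random.nextInt(10);  // needs 1 to 10 units of CPU time
            int priorityLevel = 1 + random.nextInt(5);   // 1 (highest) to 5 (lowest)
            System.out.printf("P%d  arrival=%d  burst=%d  priority=%d%n",
                    i, arrivalTime, burstTime, priorityLevel);
        }
    }
}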
4. ARCHITECTURE & DESIGN
The architecture and design phase of our CPU Scheduling Simulation project is a pivotal step in
shaping the structure and components of the simulation. It serves as the blueprint for the
implementation, guiding the development process and ensuring that the simulation accurately
models the behavior of scheduling algorithms. In this section, we provide an in-depth overview of
the architectural design and the key components that constitute our simulation.
At the highest level, our CPU Scheduling Simulation can be visualized as a multi-component
system where processes traverse through various states and interact with the central scheduling
component. The following components constitute the system architecture:
Scheduler Component: The scheduler is the core component responsible for making scheduling
decisions. It manages the allocation of CPU time to processes and selects the next process to run
based on the chosen scheduling algorithm. The scheduler component plays a pivotal role in
determining system performance.
Process Component: Processes are central to our simulation. Each process is represented as an
entity with attributes such as arrival time, burst time, priority (if applicable), and state transitions.
Processes move between states, including "ready," "running," and "terminated," in response to
scheduling decisions and time progression.
Queue Management: In cases where scheduling algorithms involve ready queues (e.g., Round
Robin or Priority Scheduling), a component is responsible for managing these queues. This
includes handling the insertion of processes, removal of processes upon execution, and
prioritization.
Clock or Timer Component: Time management is an integral part of our simulation. We employ a
simulation clock to track the progression of time, and a timer component enforces the scheduling
of processes based on defined time quanta or priorities.
Processes enter the system and are managed by the Process Component. These processes
transition between states, with the scheduler making scheduling decisions based on defined
policies.
The Scheduler Component communicates with the Process Component to determine the next
process to run. This communication involves the selection of a process from the ready queue
based on the scheduling algorithm's criteria.
Data, including process attributes and scheduling decisions, flows between components to update
the state of the system. As processes execute, the simulation clock advances, and the system
continues to progress.
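The sketch below shows one minimal way these components might be represented in code, using the
same names that appear in the code snippets of Chapter 5 (Row, Event, and a shared scheduler base
exposing getTimeline()). The exact fields, constructors, and method signatures are assumptions for
illustration rather than the project's definitive design, and each public class would normally
live in its own source file.

import java.util.ArrayList;
import java.util.List;

// One simulated process: identity, workload attributes, and the metrics filled in by the
// scheduler after the run.
public class Row {
    private final String processName;
    private final int arrivalTime;
    private final int burstTime;
    private final int priorityLevel;     // used only by Priority scheduling
    private int waitingTime;
    private int turnaroundTime;

    public Row(String processName, int arrivalTime, int burstTime, int priorityLevel) {
        this.processName = processName;
        this.arrivalTime = arrivalTime;
        this.burstTime = burstTime;
        this.priorityLevel = priorityLevel;
    }

    public String getProcessName() { return processName; }
    public int getArrivalTime()    { return arrivalTime; }
    public int getBurstTime()      { return burstTime; }
    public int getPriorityLevel()  { return priorityLevel; }
    public int getWaitingTime()    { return waitingTime; }
    public int getTurnaroundTime() { return turnaroundTime; }
    public void setWaitingTime(int waitingTime)       { this.waitingTime = waitingTime; }
    public void setTurnaroundTime(int turnaroundTime) { this.turnaroundTime = turnaroundTime; }
}

// One slice on the Gantt-chart timeline: which process ran, and over which interval.
public class Event {
    private final String processName;
    private final int startTime;
    private int finishTime;

    public Event(String processName, int startTime, int finishTime) {
        this.processName = processName;
        this.startTime = startTime;
        this.finishTime = finishTime;
    }

    public String getProcessName() { return processName; }
    public int getStartTime()      { return startTime; }
    public int getFinishTime()     { return finishTime; }
    public void setFinishTime(int finishTime) { this.finishTime = finishTime; }
}

// Shared scheduler base: every algorithm appends Events to the timeline and fills in each
// Row's waiting and turnaround times according to its own policy.
public abstract class Scheduler {
    private final List<Event> timeline = new ArrayList<>();

    public List<Event> getTimeline() { return timeline; }

    public abstract void schedule(List<Row> rows);
}

Keeping the timeline in the shared base class is what allows the different algorithms to reuse
the same waiting-time and turnaround-time calculations.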
5. CODE SNIPPETS
The following excerpts show the core scheduling loop of each simulated algorithm. Row and Event
are the simulator's process and timeline records, getTimeline() is provided by the shared
scheduler base class sketched at the end of Chapter 4, and rows is the list of processes handed
to the scheduler.

// First-Come, First-Served (FCFS): processes run to completion in order of arrival.
import java.util.Collections;
import java.util.List;

Collections.sort(rows, (o1, o2) -> Integer.compare(o1.getArrivalTime(), o2.getArrivalTime()));
int time = 0;
for (Row row : rows) {
    if (time <= row.getArrivalTime()) {
        // CPU is idle: the process starts the moment it arrives.
        this.getTimeline().add(new Event(row.getProcessName(), row.getArrivalTime(),
                row.getArrivalTime() + row.getBurstTime()));
        time = row.getArrivalTime() + row.getBurstTime();
    } else {
        // CPU is busy: the process waits in the ready queue, then runs back-to-back.
        row.setWaitingTime(time - row.getArrivalTime());
        this.getTimeline().add(new Event(row.getProcessName(), time, time + row.getBurstTime()));
        time += row.getBurstTime();
    }
    row.setTurnaroundTime(row.getWaitingTime() + row.getBurstTime());
}

// Shortest Job First (SJF, non-preemptive): among the processes that have already arrived,
// always run the one with the smallest burst time next.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

Collections.sort(rows, (o1, o2) -> Integer.compare(o1.getArrivalTime(), o2.getArrivalTime()));
int time = 0;
while (!rows.isEmpty()) {
    // Gather the processes whose arrival time has already passed.
    List<Row> availableRows = new ArrayList<>();
    for (Row row : rows) {
        if (row.getArrivalTime() <= time) {
            availableRows.add(row);
        }
    }
    if (availableRows.isEmpty()) {
        time = rows.get(0).getArrivalTime();   // CPU idles until the next arrival
        continue;
    }
    // Pick the shortest available job and run it to completion.
    Collections.sort(availableRows, (o1, o2) -> Integer.compare(o1.getBurstTime(), o2.getBurstTime()));
    Row row = availableRows.get(0);
    row.setWaitingTime(time - row.getArrivalTime());
    this.getTimeline().add(new Event(row.getProcessName(), time, time + row.getBurstTime()));
    time += row.getBurstTime();
    row.setTurnaroundTime(row.getWaitingTime() + row.getBurstTime());
    rows.remove(row);
}
// Round Robin (RR): each process runs for at most one time quantum, then rejoins the back of
// the ready queue; waiting times are derived afterwards from the timeline. For simplicity this
// excerpt assumes every process has arrived before scheduling starts, and timeQuantum is the
// quantum configured for the simulation run.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

List<Row> queue = new ArrayList<>(rows);            // working ready queue
Map<String, Integer> remaining = new HashMap<>();   // burst time still left per process
for (Row row : rows) {
    remaining.put(row.getProcessName(), row.getBurstTime());
}

int time = 0;
while (!queue.isEmpty()) {
    Row row = queue.remove(0);
    int left = remaining.get(row.getProcessName());
    // Run for one full quantum, or less if the process needs less than that.
    int bt = (left < timeQuantum ? left : timeQuantum);
    this.getTimeline().add(new Event(row.getProcessName(), time, time + bt));
    time += bt;
    if (left > bt) {
        remaining.put(row.getProcessName(), left - bt);
        queue.add(row);                              // not finished: back of the queue
    }
}

// Waiting time = sum of the gaps between a process's consecutive timeline slices.
Map<String, Integer> map = new HashMap<>();
for (Row row : rows) {
    for (Event event : this.getTimeline()) {
        if (event.getProcessName().equals(row.getProcessName())) {
            if (map.containsKey(event.getProcessName())) {
                int w = event.getStartTime() - map.get(event.getProcessName());
                row.setWaitingTime(row.getWaitingTime() + w);
            } else {
                row.setWaitingTime(event.getStartTime() - row.getArrivalTime());
            }
            map.put(event.getProcessName(), event.getFinishTime());
        }
    }
    row.setTurnaroundTime(row.getWaitingTime() + row.getBurstTime());
}
// Priority Scheduling (preemptive): at every time unit, run the arrived process with the best
// (lowest) priority level; consecutive slices of the same process are merged afterwards.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

Collections.sort(rows, (o1, o2) -> Integer.compare(o1.getArrivalTime(), o2.getArrivalTime()));

Map<String, Integer> remaining = new HashMap<>();    // burst time still left per process
for (Row row : rows) {
    remaining.put(row.getProcessName(), row.getBurstTime());
}

int time = 0;
while (!rows.isEmpty()) {
    // Gather the processes whose arrival time has already passed.
    List<Row> availableRows = new ArrayList<>();
    for (Row r : rows) {
        if (r.getArrivalTime() <= time) {
            availableRows.add(r);
        }
    }
    if (availableRows.isEmpty()) {
        time++;                                      // CPU idles until the next arrival
        continue;
    }
    // Lowest priority level value = highest priority.
    Collections.sort(availableRows, (o1, o2) -> Integer.compare(o1.getPriorityLevel(), o2.getPriorityLevel()));
    Row row = availableRows.get(0);
    this.getTimeline().add(new Event(row.getProcessName(), time, time + 1));
    time++;
    int left = remaining.get(row.getProcessName()) - 1;   // the process ran for one time unit
    remaining.put(row.getProcessName(), left);
    if (left == 0) {
        // Finished: remove the process from the pending list by name.
        for (int i = 0; i < rows.size(); i++) {
            if (rows.get(i).getProcessName().equals(row.getProcessName())) {
                rows.remove(i);
                break;
            }
        }
    }
}

// Merge consecutive one-unit events that belong to the same process into a single slice.
List<Event> timeline = this.getTimeline();
for (int i = timeline.size() - 1; i > 0; i--) {
    if (timeline.get(i - 1).getProcessName().equals(timeline.get(i).getProcessName())) {
        timeline.get(i - 1).setFinishTime(timeline.get(i).getFinishTime());
        timeline.remove(i);
    }
}

// Waiting and turnaround times are then derived from the merged timeline, using the same map
// of last finish times as in the Round Robin excerpt above.
// Helper step shared by the SJF and Priority excerpts above: on every iteration, collect the
// processes that have already arrived by the current simulation time (time is the simulation
// clock value).
import java.util.ArrayList;
import java.util.List;

while (!rows.isEmpty()) {
    List<Row> availableRows = new ArrayList<>();
    for (Row row : rows) {
        if (row.getArrivalTime() <= time) {
            availableRows.add(row);
        }
    }
    // ... select the next process from availableRows and advance the simulation clock ...
}
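Putting the pieces together, a driver along the following lines could run one algorithm and
report the averages shown in Chapter 6. It assumes the excerpts above are wrapped in concrete
subclasses of the Scheduler base sketched in Chapter 4; the FirstComeFirstServed class name and
the sample processes are hypothetical.

import java.util.ArrayList;
import java.util.List;

public class SimulationDriver {
    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row("P1", 0, 5, 2));
        rows.add(new Row("P2", 1, 3, 1));
        rows.add(new Row("P3", 2, 8, 3));

        Scheduler scheduler = new FirstComeFirstServed();   // assumed concrete subclass

        scheduler.schedule(rows);

        // Print the Gantt-chart timeline and the averaged per-process metrics.
        for (Event event : scheduler.getTimeline()) {
            System.out.printf("%s: %d -> %d%n",
                    event.getProcessName(), event.getStartTime(), event.getFinishTime());
        }
        double totalWaiting = 0, totalTurnaround = 0;
        for (Row row : rows) {
            totalWaiting    += row.getWaitingTime();
            totalTurnaround += row.getTurnaroundTime();
        }
        System.out.printf("Average waiting time: %.2f%n", totalWaiting / rows.size());
        System.out.printf("Average turnaround time: %.2f%n", totalTurnaround / rows.size());
    }
}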
6. OUTPUT
Fig. 6.1
Fig. 6.2
Fig. 6.3
Fig. 6.4
Screenshot Representations:
Figure 6.4 displays the calculated average waiting time and average turnaround time after the
scheduling process. These figures serve as pivotal metrics, reflecting the efficiency and
responsiveness of the system under different scheduling algorithms. By incorporating these
visualizations, our project report provides a comprehensive insight into the simulation results,
enhancing the reader's understanding of the dynamics of CPU scheduling and its impact on system
performance.
7. CONCLUSION
In this section, we provide a concise summary of the CPU Scheduling Simulation project,
highlighting the major achievements and insights gained throughout the project's lifecycle.
Our project began with clear objectives in mind, aiming to explore the dynamic world of CPU
scheduling and the performance of various scheduling algorithms. We are pleased to report that
we have achieved these objectives effectively. The project allowed us to gain a comprehensive
understanding of CPU scheduling and its real-world implications.
Through the course of this project, we implemented a CPU scheduling simulation that faithfully
emulates the behavior of multiple scheduling algorithms. The simulator successfully
demonstrated the operation of algorithms such as Round Robin, First-Come, First-Served (FCFS),
and Priority Scheduling, among others. The results obtained from the simulation provided
valuable insights into the strengths and weaknesses of these algorithms under various scenarios.
Our analysis and evaluation of the simulation results have yielded several key findings and
insights. These findings include:
Trade-Offs: It became apparent that there are trade-offs associated with different scheduling
algorithms. For example, algorithms optimized for minimizing response times might sacrifice
overall system throughput. Understanding these trade-offs is crucial when selecting the most
suitable algorithm for a particular application.
Real-World Relevance: The findings from our simulation align closely with real-world scenarios.
The insights gained through this project have direct applications in diverse computing
environments, from server farms handling multiple user requests to embedded systems managing
time-sensitive tasks.
Our project has illuminated the intricate world of CPU scheduling, but it also points to several
areas for future research and enhancement. Some potential avenues for further exploration
include:
Optimization and Real-Time Systems: Delving into the optimization of existing scheduling
algorithms and their adaptability to real-time systems with stringent timing constraints.
In conclusion, our CPU Scheduling Simulation project has provided valuable insights into the
world of CPU scheduling and its impact on system performance. We have successfully achieved
our project objectives, offering a comprehensive understanding of scheduling algorithms and their
practical implications.
The knowledge gained from this project will serve as a valuable resource for system
administrators, software developers, and researchers in the field of operating systems. By
shedding light on the nuances of CPU scheduling, we aim to contribute to the ongoing
advancement of efficient resource allocation and system performance.
We express our gratitude to all those who supported us during the project's development and
analysis phases. Their contributions have been invaluable in making this project a success.
8. REFERENCES
IEEE Computer Society Digital Library: https://www.computer.org/csdl/proceedingsarticle/fie/2007/04417885/12OmNyFCvXC
ResearchGate: https://www.researchgate.net/publication/4305451_A_CPU_scheduling_algorithm_simulator