
CPU Scheduling Algorithms in Operating Systems:

In this unit we will learn about CPU Scheduling Algorithms in Operating
Systems. These algorithms are a very important topic because they form the
base and foundation of the Operating Systems subject.

Many processes run inside an Operating System. A task is a group of
processes: the Operating System divides each task into many processes and
executes them. The final goal of the Operating System is the completion of the
task.

However, there are certain conditions a task must satisfy: it must be finished
in the quickest possible time with the limited resources the Operating System
has. This is the main motive of CPU Scheduling Algorithms.

CPU Scheduling:
CPU Scheduling is the process by which the Operating System decides which
process gets to execute using the resources of the CPU. A process may also
have to wait because of the absence or unavailability of resources. The aim is
to keep the Central Processing Unit as fully utilised as possible.

Whenever the CPU becomes idle, the operating system must choose one of the
processes in the ready queue for execution. The selection is carried out by
the short-term (CPU) scheduler, which picks one of the ready-to-run processes
in memory and allocates the CPU to it.

Before going to the types of CPU Scheduling Algorithms, let us learn the basic
terminology used in CPU Scheduling Algorithms.

1. Process ID:

The Process ID is the first thing to be written while solving a scheduling
problem. It acts like the name of the process and is usually represented with
numbers, or with the letter P followed by a number.

Example:

0, 1, 2, 3, . . .

P0, P1, P2, P3, . . .

Usually we start numbering the processes from zero, but we may also start from
one; it is a matter of convention.

2. Arrival Time:

The time at which a process enters the ready queue, i.e. when the process
becomes ready to be executed by the CPU. Arrival Time is written as AT in
short form. The Arrival Time is always zero or positive.

3. Burst Time:

The amount of CPU time the process requires to complete its execution is known
as the Burst Time. It is written as BT in short form. The Burst Time of a
process is always greater than zero.

4. Completion Time:

The point in time at which the CPU finishes executing the process is known as
the Completion Time. It is written as CT in short form. The Completion Time is
always greater than zero.

5. Turn Around Time:

The total time a process spends in the system, from the moment it enters the
ready queue until it completes, is known as the Turn Around Time. It can be
calculated from the Completion Time and the Arrival Time and is written as TAT
in short form.

The Turn Around Time is the difference between the Completion Time and the
Arrival Time.

Formula:

TAT = CT - AT

Here, CT is Completion Time and AT is Arrival Time.


6. Waiting Time:

The time a process spends waiting in the ready queue (i.e. the time it is in
the system but not executing) is known as the Waiting Time. It is written as
WT in short form and can be calculated from the Turn Around Time and the Burst
Time.

The Waiting Time is the difference between the Turn Around Time and the Burst
Time.

Formula:

WT = TAT - BT
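For example, suppose a process arrives at time 2 (AT = 2), needs 4 units of
CPU time (BT = 4) and finishes at time 9 (CT = 9). Then TAT = CT - AT =
9 - 2 = 7, and WT = TAT - BT = 7 - 4 = 3.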

7. Ready Queue:

The queue where all the processes that are ready to run are kept while the CPU
is busy executing another process. The ready queue is very important because
it avoids confusion in the CPU when several processes are waiting to be
executed at the same time: in such situations the ready queue decides the
order and thus fulfils its duty.

8. Gantt Chart:

A chart that shows, in order, which process occupied the CPU during each
interval of time. It is very useful for calculating the Waiting Time,
Completion Time and Turn Around Time.

Types of CPU Scheduling:


CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for
example, an I/O request, or an invocation of wait for the termination of one
of the child processes).

2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs).

3. When a process switches from the waiting state to the ready state (for
example, on completion of I/O).

4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling: a new
process (if one exists in the ready queue) must be selected for execution.
There is a choice, however, in circumstances 2 and 3.

When Scheduling takes place only under circumstances 1 and 4, we say the
scheduling scheme is non-preemptive; otherwise, the scheduling scheme is
preemptive.

Modes in CPU Scheduling Algorithms:

There are two modes in CPU Scheduling Algorithms. They are:

1. Pre-emptive Approach

2. Non Pre-emptive Approach

In the Pre-emptive Approach, the CPU is not necessarily kept by the same
process until it completes. The Central Processing Unit can switch between
processes: the CPU is allocated to a process only while certain required
conditions hold, and the allocation changes whenever those conditions are
broken (for example, when a higher-priority process arrives or a time slice
expires).

In the Non Pre-emptive Approach, once a process starts its execution the CPU
is allotted to that same process until the process completes. There is no
switching of processes by the Central Processing Unit, and the CPU allocation
does not change until the running process finishes.

Types of CPU Scheduling Algorithms:

I. First Come First Serve

II. Shortest Job First

III. Priority Scheduling

IV. Round Robin Scheduling

First Come First Serve Scheduling Algorithm:


This is the first type of CPU Scheduling Algorithm. Here we are going to learn
how the CPU allots its resources to a particular process.

In the First Come First Serve CPU Scheduling Algorithm, the CPU allots its
resources to the processes in the order in which they arrive: the process that
arrives first is executed first.

We can also say that the First Come First Serve CPU Scheduling Algorithm
follows a First In First Out discipline in the Ready Queue.

First Come First Serve is called FCFS in short form.

Characteristics of FCFS (First Come First Serve):

• First Come First Serve is a non pre-emptive algorithm: once a process gets
the CPU, it runs until it finishes.

• The process which enters the Ready Queue first is executed first. So we say
that FCFS follows a First In First Out approach.

• A process can be dispatched under FCFS only when the current time is greater
than or equal to its Arrival Time (AT).

Advantages:

• Very simple to implement.

• Follows the FIFO queue approach.

Disadvantages

• First Come First Serve is not very efficient.

• First Come First Serve suffers from the Convoy Effect.
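Below is a minimal sketch of non-preemptive FCFS in Python; the process names,
arrival times and burst times are illustrative values, not taken from the text
above.

    # FCFS sketch: processes are served strictly in order of arrival,
    # and each process runs to completion before the next one starts.
    processes = [            # (name, arrival time, burst time) - illustrative
        ("P0", 0, 5),
        ("P1", 1, 3),
        ("P2", 2, 8),
    ]

    time = 0
    print("PID  AT  BT  CT  TAT  WT")
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)   # CPU may sit idle until the process arrives
        ct = time + bt         # completion time
        tat = ct - at          # turnaround time = CT - AT
        wt = tat - bt          # waiting time   = TAT - BT
        print(f"{name}   {at}   {bt}   {ct}   {tat}    {wt}")
        time = ct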

Shortest Job First CPU Scheduling Algorithm


This is another type of CPU Scheduling Algorithm. Here too we look at how the
CPU allots its resources to a particular process.

Shortest Job First depends heavily on the Burst Times (whereas every CPU
Scheduling Algorithm also takes the Arrival Times into account). In the
Shortest Job First CPU Scheduling Algorithm, the CPU allots its resources to
the process in the ready queue that has the least Burst Time.

If two processes in the Ready Queue have the same Burst Time, we can choose
either of them for execution. In actual Operating Systems, such ties are
usually resolved by sequential (arrival-order) allocation of resources.

Shortest Job First is called SJF in short form.

Characteristics:

• SJF (Shortest Job First) gives the least average waiting time, because the
long processes are executed last and all the very small processes are executed
first.

• The Burst Time is used as the measure of how long each job will take.

• If shorter processes keep arriving, long processes may starve. The idea of
aging can be used to overcome this issue.

• Shortest Job First can be executed in a pre-emptive way (known as Shortest
Remaining Time First) as well as a non pre-emptive way.

Advantages

• SJF is used because it has a lower average waiting time than the other CPU
Scheduling Algorithms.

• SJF is often used for long-term (batch) CPU scheduling.

Disadvantages

• Starvation of long processes is one of the negative traits the Shortest Job
First CPU Scheduling Algorithm exhibits.

• It is often difficult to forecast how long the next CPU burst will take.
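Below is a minimal sketch of non-preemptive SJF in Python; the process values
are illustrative.

    # Non-preemptive SJF sketch: whenever the CPU is free, pick the
    # ready process with the smallest burst time and run it to completion.
    processes = [("P0", 0, 7), ("P1", 2, 4), ("P2", 4, 1)]  # (name, AT, BT)

    time, remaining = 0, list(processes)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        name, at, bt = job
        time += bt
        print(f"{name}: CT={time}  TAT={time - at}  WT={time - at - bt}")
        remaining.remove(job)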

Priority CPU Scheduling


This is another type of CPU Scheduling Algorithm. Here again we look at how
the CPU allots its resources to a particular process.

Priority CPU Scheduling is different from the remaining CPU Scheduling
Algorithms: here, each and every process has a certain priority number.

There are two conventions for priority values:

i. The highest number is treated as the highest priority, or

ii. The lowest number is treated as the highest priority.

In pre-emptive priority scheduling, the priority of a process determines when
it gets the CPU: the currently running process is pre-empted if a process with
a higher priority arrives. When there is a conflict, that is, when several
processes have equal priority, the tie is resolved using the FCFS (First Come
First Serve) approach.

Characteristics

• Priority CPU scheduling organises tasks according to their importance.

• If a lower-priority task is running when a higher-priority task arrives, the
lower-priority task is pre-empted and paused, and it resumes only after the
higher-priority task finishes execution.

• In the common convention, a process's priority rises as the assigned number
decreases.

Advantages

• The typical or average waiting time for Priority CPU Scheduling is shorter
than for First Come First Serve (FCFS).

• It is easier to handle.

• It is less complex.

Disadvantages

The most prevalent flaw of the pre-emptive Priority CPU Scheduling Algorithm
is the starvation problem: because higher-priority processes keep arriving, a
low-priority process may have to wait a very long time before it is scheduled
onto the CPU.
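Below is a minimal sketch of non-preemptive priority scheduling in Python. It
assumes the "lower number = higher priority" convention and breaks ties by
arrival order (FCFS); all process values are illustrative.

    # Non-preemptive priority sketch: whenever the CPU is free, pick the
    # ready process with the best (lowest) priority number.
    processes = [("P0", 0, 4, 2), ("P1", 1, 3, 1), ("P2", 2, 1, 3)]  # (name, AT, BT, priority)

    time, remaining = 0, list(processes)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                 # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))  # priority first, FCFS tie-break
        name, at, bt, pr = job
        time += bt
        print(f"{name} (priority {pr}): CT={time}  TAT={time - at}  WT={time - at - bt}")
        remaining.remove(job)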
Round Robin CPU Scheduling:

Round Robin is a CPU scheduling mechanism that cycles through the ready
processes, assigning each one a fixed time slot (time quantum). It is
essentially the First Come First Serve technique run in pre-emptive mode, and
it strongly emphasises the time-sharing method.

Characteristics of the Round Robin CPU Scheduling Algorithm:

• Because all processes receive a balanced share of the CPU, it is
straightforward, simple to use, and starvation-free.

• It is one of the most widely used techniques for CPU core scheduling.
Because each process is allowed access to the CPU only for a brief period of
time (the time quantum), it is considered pre-emptive.

The benefits of Round Robin CPU Scheduling:

• Every process receives an equal amount of CPU time, therefore Round Robin
appears to be equitable.

• A newly created process is added to the end of the ready queue.
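Below is a minimal sketch of Round Robin in Python with a fixed time quantum;
the quantum and process values are illustrative.

    from collections import deque

    quantum = 2
    processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 4)]   # (name, AT, BT)

    arrival = {name: at for name, at, bt in processes}
    remaining = {name: bt for name, at, bt in processes}
    order = sorted(processes, key=lambda p: p[1])            # by arrival time

    time, admitted, queue = 0, 0, deque()
    while queue or admitted < len(order):
        # admit every process that has arrived by the current time
        while admitted < len(order) and order[admitted][1] <= time:
            queue.append(order[admitted][0])
            admitted += 1
        if not queue:                        # CPU idle until the next arrival
            time = order[admitted][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run for one quantum at most
        time += run
        remaining[name] -= run
        # new arrivals join the queue before the pre-empted process rejoins
        while admitted < len(order) and order[admitted][1] <= time:
            queue.append(order[admitted][0])
            admitted += 1
        if remaining[name] > 0:
            queue.append(name)               # not finished: back of the queue
        else:
            print(f"{name} finished at {time}, TAT = {time - arrival[name]}")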

*****************************

DISTRIBUTED SYSTEM

➢ A distributed operating system is a type of operating system that manages a
network of independent computers and makes it appear as if they are a single
computer.

➢ It allows for the sharing of resources, such as storage, processing power,
and memory, across multiple machines.

➢ It also enables concurrent processing of tasks across different machines and
provides fault tolerance, making it possible for the system to continue
functioning even in the event of node failures.

➢ Distributed operating systems are widely used in applications such as cloud
computing, big data processing, and high-performance computing.

Types of Distributed Operating System

Let’s check out each type of distributed operating system in detail:

1. Peer-to-Peer Systems: In a peer-to-peer (P2P) distributed operating system,
each computer or node is equal in terms of functionality and can act as a
client or a server. Nodes can share resources such as processing power,
storage, and bandwidth with each other. P2P systems are often used in file
sharing, instant messaging, and gaming applications. They are also known as
"Loosely Coupled Systems".

2. Client-Server Systems: In a client-server distributed operating system, the
server provides a specific set of services or resources to the client. The
client makes requests to the server, and the server responds by providing the
requested service or resource (a minimal socket sketch of this
request/response pattern is shown after this list). Client-server distributed
operating systems are commonly used in enterprise applications.

3. Middleware: In contrast to the other distributed operating systems,
middleware is a software layer that sits between the operating system and the
application software. It provides a set of services that enable communication
between different applications running on different machines. Middleware is
used to create distributed systems that can run across multiple platforms.

4. N-tier Systems: N-tier distributed operating systems are based on the
concept of dividing an application into different tiers, where each tier has a
specific responsibility. For example, a three-tier system might have a
presentation tier, a business logic tier, and a data storage tier. The
different tiers can run on different machines, providing scalability, fault
tolerance, and performance.

5. Three-tier Systems: A three-tier distributed operating system is a specific
type of N-tier system that consists of a presentation tier, an application
tier, and a data storage tier. The presentation tier provides the user
interface, the application tier handles the business logic, and the data
storage tier handles data storage and retrieval.
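The sketch below illustrates the client-server request/response pattern using
Python's standard socket module; the host, port, and message are illustrative,
and the two parts are meant to run as separate processes.

    # --- server side (run first) ---
    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5000))
    srv.listen(1)
    conn, addr = srv.accept()          # wait for one client to connect
    request = conn.recv(1024)          # read the client's request
    conn.sendall(b"ACK: " + request)   # respond with the requested "service"
    conn.close()
    srv.close()

    # --- client side (run in a second process) ---
    import socket

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 5000))
    cli.sendall(b"hello server")       # the client makes a request
    print(cli.recv(1024).decode())     # and receives the server's response
    cli.close()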

Disadvantages of Distributed Operating System

While distributed operating systems offer many advantages, they also have
some disadvantages, including:

1. Complexity: Distributed operating systems are complex and require
specialized skills to design, implement, and manage, which can lead to higher
costs and longer development times.

2. Communication overhead: Communication between nodes in a distributed
operating system can introduce overhead and latency, which can negatively
impact performance.

3. Synchronization: Maintaining consistency and synchronization between nodes
can be challenging, especially when dealing with distributed data.

4. Security: Distributed operating systems can be more vulnerable to security
threats due to the increased number of nodes and the need to manage access and
permissions across multiple machines.

5. Dependence on network infrastructure: Distributed operating systems are
highly dependent on the underlying network infrastructure, which can impact
system availability and performance.

********************
SOFTWARE PROCESS MODELS

A software process model is a specified definition of a software process,
presented from a particular perspective. Models, by their nature, are a
simplification, so a software process model is an abstraction of the actual
process being described. Process models may contain the activities that are
part of the software process, the software products, and the roles of the
people involved in software engineering.

TYPES OF SOFTWARE PROCESS MODELS

• Waterfall model
• V-Model
• RAD (Rapid Application Development) Model
• Spiral Model
• Incremental Model
Waterfall model
Winston Royce introduced the Waterfall Model in 1970. This model is named the
"Waterfall Model" because its diagrammatic representation resembles a cascade
of waterfalls.

This model has five phases:


1. Requirements analysis and specification phase
2. Design Phase
3. Implementation and unit testing
4. Integration and System Testing
5. Operation and maintenance phase
The steps always follow in this order and do not overlap. The developer must
complete every phase before the next phase begins.

Advantages of Waterfall model


 This model is simple to implement, and the number of resources required for
it is minimal.
 The requirements are simple and explicitly declared, and they remain
unchanged during the entire project development.
 The start and end points of each phase are fixed, which makes it easy to
track progress.

Disadvantages of Waterfall model


 In this model, the risk factor is higher, so it is not suitable for large
and complex projects.
 This model cannot accommodate changes in requirements during development.
V-Model
The V-Model is also referred to as the Verification and Validation Model. In
it, each phase of the SDLC must be completed before the next phase starts, and
it follows a sequential design process like the waterfall model.
The phases on the verification side of the V-Model are:
• Business requirement analysis
• System Design
• Architecture Design
• Module Design
• Coding Phase

Advantage of V-Model:
 Easy to understand.
 Testing activities like planning and test designing happen well before
coding. This saves a lot of time, and hence gives a higher chance of success
than the waterfall model.

Disadvantage of V-Model:
 Very rigid and least flexible.
 Not good for complex projects.
 Software is developed during the implementation stage, so no early
prototypes of the software are produced.

RAD (Rapid Application Development) Model

RAD is a linear sequential software development process model that emphasizes
a concise development cycle using a component-based construction approach. If
the requirements are well understood and described, and the project scope is
constrained, the RAD process enables a development team to create a fully
functional system within a very short time period. The various phases of RAD
are as follows:
• Business Modelling
• Data Modelling
• Process Modelling
• Application Generation
• Testing & Turnover
Advantage of RAD Model
 This model is flexible to change.
 In this model, changes are easily adopted.
 Each phase in RAD delivers the highest-priority functionality to the
customer.
 It reduces development time.

Spiral Model

The spiral model, initially proposed by Boehm, is an evolutionary software
process model that couples the iterative feature of prototyping with the
controlled and systematic aspects of the linear sequential model. It provides
the potential for rapid development of incremental versions of the software.
Using the spiral model, the software is developed in a series of incremental
releases.

Each cycle in the spiral is divided into four parts:

• Objective setting
• Risk assessment and reduction
• Development and validation
• Planning
Advantages
 High amount of risk analysis
 Useful for large and mission-critical projects.

Disadvantages
 Can be a costly model to use.
 Risk analysis requires highly specific expertise.

Incremental Model

The Incremental Model is a process of software development in which the
requirements are divided into multiple standalone modules of the software
development cycle. In this model, each module goes through the requirements,
design, implementation and testing phases. Every subsequent release of a
module adds functionality to the previous release. The process continues until
the complete system is achieved.

Advantage of Incremental Model

 Errors are easy to recognize.
 Easier to test and debug.
 More flexible.

Disadvantage of Incremental Model

 Needs good planning.
 Total cost is high.
Software Requirements
The software requirements are descriptions of the features and functionalities
of the target system. Requirements may be obvious or hidden, known or unknown,
expected or unexpected from the client's point of view.

Requirement Engineering

The process of gathering the software requirements from the client, analysing
them, and documenting them is known as requirement engineering.

The goal of requirement engineering is to develop and maintain a sophisticated
and descriptive 'System Requirements Specification' document.

Requirement Engineering Process

It is a four step process, which includes –

• Feasibility Study
• Requirement Gathering
• Software Requirement Specification
• Software Requirement Validation

Let us see the process briefly -

Feasibility study

This feasibility study is focused towards the goals of the organization. It
analyses whether the software product can be practically materialized in terms
of implementation, the project's contribution to the organization, cost
constraints, and the values and objectives of the organization.

Requirement Gathering

If the feasibility report is positive towards undertaking the project, the
next phase starts with gathering requirements from the users. Analysts and
engineers communicate with the client and end-users to know their ideas on
what the software should provide and which features they want the software to
include.

Software Requirement Specification

SRS is a document created by the system analyst after the requirements are
collected from the various stakeholders.

SRS should come up with the following features:

• User requirements are expressed in natural language.
• Technical requirements are expressed in structured language, which is used
inside the organization.
• Design descriptions should be written in pseudo code.
• Format of forms and GUI screen prints.
• Conditional and mathematical notations for DFDs, etc.

Software Requirement Validation

After the requirement specifications are developed, the requirements mentioned
in the document are validated.

Requirements can be checked against the following conditions:

• whether they can be practically implemented
• whether they are valid and as per the functionality and domain of the
software
• whether there are any ambiguities
• whether they are complete
• whether they can be demonstrated
Requirement Elicitation Process

The requirement elicitation process includes the following activities:

• Negotiation & discussion - The requirements come from various stakeholders.
If requirements are ambiguous, or there are conflicts among the requirements
of the various stakeholders, they are negotiated and discussed with the
stakeholders for clarity and correctness. Requirements may then be
prioritized, and unrealistic requirements are reasonably compromised.

• Documentation - All formal and informal, functional and non-functional
requirements are documented and made available for next-phase processing.

*****************

ESTIMATION AND SCHEDULING OF SOFTWARE PROJECTS

Estimation and scheduling of software projects are crucial steps in project
management to ensure timely delivery, efficient resource allocation, and
proper budgeting. Below are key concepts and methods used in estimating and
scheduling software projects:

1. Estimation Techniques

a. Top-Down Estimation

• Definition: This approach looks at the project as a whole and divides it
into manageable components.
• Use: Useful in early project phases when only high-level
information is available.
• Pros: Quick and suitable for projects with well-understood tasks.
• Cons: Can miss details and lead to inaccurate estimates.

b. Bottom-Up Estimation

• Definition: Breaks the project into smaller tasks and estimates each one
individually, then aggregates the estimates.
• Use: Provides more accurate estimates when detailed project
information is available.
• Pros: More detailed and precise.
• Cons: Can be time-consuming and complex.

c. Analogous Estimation

• Definition: Estimates are based on previous similar projects.
• Use: Useful when past project data is available and applicable.
• Pros: Quick and leverages historical data.
• Cons: Accuracy depends on how similar the past project is to the
current one.

d. Parametric Estimation

• Definition: Uses statistical data and mathematical models to predict project
duration and costs.
• Use: Good for projects with repeatable tasks.
• Pros: Scalable and works well for large projects.
• Cons: Requires reliable data to be effective.
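As a small illustration of a parametric model, the sketch below uses the Basic
COCOMO organic-mode formulas (Effort = 2.4 * KLOC^1.05 person-months and
Duration = 2.5 * Effort^0.38 months); the 32-KLOC project size is an assumed,
illustrative value.

    kloc = 32.0
    effort = 2.4 * kloc ** 1.05        # estimated effort in person-months
    duration = 2.5 * effort ** 0.38    # estimated development time in months
    print(f"Effort  : {effort:.1f} person-months")
    print(f"Duration: {duration:.1f} months")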

e. Expert Judgment

• Definition: Relies on experts' experience and intuition to estimate tasks.
• Use: When experienced individuals are available to assess the
project.
• Pros: Quick and can be accurate when experts are highly
knowledgeable.
• Cons: Subjective and varies by the individual.
f. Three-Point Estimation (PERT)

• Definition: Uses three estimates for each task (Optimistic, Pessimistic,
Most Likely) to calculate a weighted average.
• Use: When tasks have uncertainty.
• Pros: Accounts for uncertainty and variability.
• Cons: Requires more effort to generate multiple estimates.
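A minimal sketch of a three-point estimate for a single task is shown below;
the optimistic, most-likely, and pessimistic values (in days) are illustrative.

    O, M, P = 4, 6, 12                 # optimistic, most likely, pessimistic
    expected = (O + 4 * M + P) / 6     # PERT weighted average
    spread = (P - O) / 6               # commonly used standard-deviation estimate
    print(f"Expected duration: {expected:.1f} days (std dev {spread:.1f})")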

2. Scheduling Techniques

a. Gantt Charts

• Definition: A visual timeline that illustrates the start and end dates
of tasks in a project.
• Use: Provides a clear view of project progress, task dependencies,
and deadlines.
• Pros: Simple and easy to understand.
• Cons: May become complex for large projects.

b. Critical Path Method (CPM)

• Definition: Identifies the longest sequence of tasks that determines the
project's completion time (the critical path).
• Use: Focuses on tasks that directly impact project duration.
• Pros: Helps in identifying essential tasks and resource allocation.
• Cons: Ignores resource constraints and may not account for task
variability.
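A minimal critical-path sketch is shown below; the tasks, durations, and
dependencies are illustrative, and tasks are assumed to be listed after their
predecessors.

    tasks = {                     # name: (duration, [predecessors])
        "A": (3, []),
        "B": (2, ["A"]),
        "C": (4, ["A"]),
        "D": (1, ["B", "C"]),
    }

    finish = {}                   # earliest finish time of each task
    for name, (duration, preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0)
        finish[name] = start + duration

    print("Project duration:", max(finish.values()))   # A -> C -> D gives 3 + 4 + 1 = 8

    # walk backwards from the last-finishing task to recover the critical path
    path, current = [], max(finish, key=finish.get)
    while current:
        path.append(current)
        preds = tasks[current][1]
        current = max(preds, key=lambda p: finish[p]) if preds else None
    print("Critical path:", " -> ".join(reversed(path)))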

c. Program Evaluation and Review Technique (PERT)

• Definition: Similar to CPM but adds a probabilistic approach, using
three-point estimates (Optimistic, Pessimistic, and Most Likely) for each task.
• Use: Used when task durations are uncertain.
• Pros: Accounts for risk and uncertainty in the schedule.
• Cons: Requires more complex analysis.
d. Agile Scheduling

• Definition: Uses iterative development cycles (sprints) and adaptive
planning.
• Use: Common in projects with evolving requirements, such as
software development.
• Pros: Flexible and adaptive.
• Cons: Less predictable for long-term projects.

e. Resource Leveling

• Definition: Adjusts the schedule based on resource availability, ensuring
resources are not over-allocated.
• Use: To balance resource allocation over the duration of the project.
• Pros: Reduces resource overallocation.
• Cons: Can delay project completion.

3. Steps in Estimation and Scheduling:

Step 1: Define the Scope

• Break down the project into well-defined tasks using a Work Breakdown
Structure (WBS).

Step 2: Estimate Effort

• Use one or more estimation techniques to calculate the effort required for
each task.

Step 3: Assign Resources

• Allocate team members and other resources to tasks based on availability and
skill set.

Step 4: Create the Schedule

• Develop a timeline for task completion using scheduling techniques like
Gantt charts or CPM.

Step 5: Monitor and Adjust

• Continuously track project progress, making adjustments to the schedule as
necessary to deal with unforeseen delays or changes.

4. Key Considerations

• Risk Management: Incorporate contingency buffers for uncertain tasks.
• Dependencies: Identify task dependencies to understand the impact
of delays.
• Milestones: Set important milestones to track project progress.
• Team Communication: Ensure all team members understand their
roles and deadlines.

By combining accurate estimation techniques with a well-structured scheduling
approach, projects can be managed effectively to meet their objectives.
