Module 1 - 3 Opersyst

Software Basics

Software is a program or set of instructions that tells the computer what to do.

Two Classes of Software

1. Application Software – are programs that are used to accomplish specific or specialized tasks for computer users,
such as creating and editing documents and making graphic presentations.

2. System Software – are programs that control the basic operations of a computer system, such as saving files to a
storage device, handling input from hardware, and printing files.

Examples: operating systems, communications control programs, interpreters, compilers, debuggers, text
editors, linkers, loaders

Categories of Application Software

1. Document Production Software – these are the programs that help computer users in composing, editing,
printing, and electronically publishing documents.

2. Spreadsheet software – these programs allow users to enter and manipulate data in an electronic worksheet
of rows and columns.

3. Presentation software – through this kind of software, users are able to create slide shows for visual
presentation purposes.

4. Database management software – these enable the users to store, manipulate, and retrieve vast amounts of
data in an organized and structured manner.

5. Business Software – are programs that can provide users with tools for business management.

6. Multimedia Software – multimedia programs that can process pictures, sound, and video

7. Entertainment Software – these include music players, video players, and games.

Some key benefits of having system software are:

1. They hide the ugly details of computer operations from the user.

2. Computer programmers can write programs without knowing the details of how the computer hardware works.

3. System software such as operating system controls the execution of programs.

Software Programming Concepts

➢ System Programming - the act of developing system software


➢ Machine Language - natural or primitive language that the computer actually understands
➢ Assembly Language - uses abbreviations or mnemonics in place of binary patterns in order to make the task
of programming easier
➢ Assembler - a special translator program that converts assembly language mnemonics into machine language instructions
➢ High-level programming languages - use English-like commands or instructions, easiest to use and contain many
complicated or advanced instructions
➢ Compiler - special translator program that converts high-level language instructions into machine language

Types of System Programs


Language Translator – are system programs that convert a program written in a high-level programming language or
assembly language into machine language, thereby allowing the computer to execute it.

Types of Language Translators

➢ Assemblers – language translators that take a source program written in assembly language and convert it to
an executable file that contains machine language instructions.

➢ Compilers – language translators that take a source program written in a high-level programming language
and convert it to an executable file that contains machine language instructions.

➢ Interpreters – translate and execute a program written in a high-level programming language one statement at
a time, without producing a separate executable file.

➢ Linkers - system program that links or combines object modules together with the libraries needed by these
programs to form a single executable program

➢ Loaders - system program that takes the load module from secondary storage and brings it into main memory
for execution; performs address binding (see the walk-through below)
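
To make the translator/linker/loader chain concrete, here is a sketch that traces one C file through the common gcc toolchain. The file name hello.c is an illustrative choice, not part of the notes; the gcc flags shown (-S, -c, -o) are the standard ones.

/* hello.c - a one-line program for tracing the translation chain.
 *
 * Typical steps (file names illustrative):
 *   gcc -S hello.c        -> hello.s  (compiler: C source to assembly)
 *   gcc -c hello.s        -> hello.o  (assembler: assembly to object module)
 *   gcc hello.o -o hello             (linker: object module plus the C
 *                                     library combined into one executable)
 *   ./hello                          (loader: the OS brings the executable
 *                                     into main memory and runs it)
 */
#include <stdio.h>

int main(void)
{
    printf("hello\n");
    return 0;
}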

Address Binding
The process of assigning or mapping symbolic references to actual main memory addresses. These actual memory
addresses are often called physical addresses or absolute addresses.

Three techniques the loader may use in placing the load module in main memory:

1. Absolute Loading - the load module already contains actual memory addresses instead of symbolic variables.

2. Relocatable Loading - the load module contains relative addresses that are converted to absolute addresses at
load time; the load module can therefore be loaded anywhere in main memory, based on the available main memory
locations.

3. Dynamic Run-Time Loading - absolute addresses are not generated when the module is loaded, but only when they
are needed by the CPU; the loader places the load module into main memory without converting the relative
addresses to absolute addresses.
Operating System
It is a system software that allows users and the application programs they are using to interact with the computer
hardware in an easy and convenient manner.

Major functions of an operating system:

• It creates a virtual machine interface between the user, application program and hardware.

• It acts as the computer’s resource manager or resource allocator.

• It functions as the program launcher.

Parts of the Operating System

➢ Kernel – is the heart and soul of the operating system. It is responsible for controlling the computer hardware and
for performing many of the services offered by the operating system.

➢ Shell – is the part of the operating system that serves as the interface between the users and the kernel. It is often
called the command interpreter.

Two basic types of shell:

➢ Command Line Interface (CLI) – requires the user to type commands at a prompt. This type of shell is seen in the
old DOS and many Linux systems.

➢ Graphical User Interface (GUI) – the user enters commands by either using drop-down menus or by clicking on icons
using a mouse pointer.

Services Provided by the OS

• Program Execution

• Access to I/O devices

• File System Management

• System Access

• Error Handling

• Communication
Core Components of OS
➢ Process Manager – this component is also called the Process Scheduler or CPU Scheduler. It is responsible for
determining which among the programs will execute first and for how long each will run.
➢ Memory Manager – responsible for making sure that programs are given sufficient memory space to execute
effectively.
➢ File Manager – organizes the files stored in secondary storage and presents an interface to the users.
➢ I/O Manager – also called the Device Manager, it manages the different I/O devices of the entire computer system.

History of Operating System


Serial Processing (First Generation)

During the late 1940’s and early 1950’s, computers were massive, expensive, slow, and very primitive. These mainframes
had the following characteristics:

1. There were no operating systems, so these computers were “bare” machines.

2. These computers could only be used by one person at a time.

3. There were no keyboards during this time; user commands were entered using toggle switches.

4. Later, high-level programming languages were developed and programs were stored on punched cards.

5. There were no fixed or hard disks.

Batch Systems (Second Generation)

• It started in the mid 1950’s.

• One solution to the inefficient use of the expensive computer during the serial processing period was to group
similar jobs and process them together as a batch.

Multiprogrammed Systems (Third Generation)

• Multiprogramming, which means the concurrent execution of two or more programs by a single CPU, was
implemented.
Time Sharing Systems (Fourth Generation)

• Time sharing is simply an extension of multiprogramming. The operating system assigns the CPU to a user or to a
process for a certain period of time, usually in the range of a few milliseconds. This time period is often called
the time slice or time quantum.

Other Types of Computer Systems and their Operating Systems


➢ Personal Computer Systems - These computers were designed as single user systems
➢ Multiprocessor Systems - These computers are computer systems with more than one CPU.

There are generally two kinds of multiprocessor systems:

1. Symmetric Multiprocessors (SMP)

• This is a multiprocessor system that has several, usually identical, processors that share a common main memory.
Because of this, SMPs are also called Shared Memory Multiprocessors.

• In SMP, there is only one operating system and each processor runs a copy of that operating system. The main
characteristic of an SMP is that each processor may be assigned to perform any available task.
2. Asymmetric Multiprocessor (AMP)

• Also called Distributed Memory Multiprocessors, these systems also have multiple processors but each processor
has its own local memory.

• The main difference between an AMP and an SMP is that in the former, each processor is assigned certain tasks
only.

➢ Network and Distributed Systems

• Network operating systems allow the sharing of resources among computers connected in a network. This
type of operating system is responsible for managing the shared resources and the actual communication
among the computers.

• Distributed Systems are very similar to network systems in the sense that they are also the interconnection
of independent computers that can share resources.

• In addition to managing resource sharing and communication among the computers, the goals of a distributed
operating system are:

o Transparency
o Parallelism
o Reliability

➢ Real Time Systems – are computers that operate on a very strict time constraint. They are literally required to
produce results immediately upon receipt of input data.
➢ Handheld Systems – are computers that are characterized by being battery powered, having slower processors
compared to PCs, and having smaller memory.
Process Management

A process is a program in execution. For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and the binary code are both programs. When we actually run the
binary code, it becomes a process.

A process is an ‘active’ entity, as opposed to a program, which is considered to be a ‘passive’ entity. A single
program can create many processes when run multiple times; for example, when we open a .exe or binary file
multiple times, multiple instances begin (multiple processes are created).

What does a process look like in memory?

Text Section: contains the program code. The process also includes the current activity, represented by the value
of the Program Counter.
Stack: contains temporary data, such as function parameters, return addresses, and local variables.
Data Section: contains the global variables.
Heap Section: memory dynamically allocated to the process during its run time.

Characteristics of Process

1. Process ID: A unique identifier assigned by the operating system
2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and restored when a process is swapped
in and out of the CPU)
4. Accounting information
5. I/O status information: For example, devices allocated to the process, open files, etc.
6. CPU scheduling information: For example, priority (different processes may have different priorities; for
example, a short process may be assigned a low priority in shortest job first scheduling)

All the above attributes of a process are also known as the context of the process. Every process has its own
process control block (PCB), i.e. each process will have a unique PCB. All of the above attributes are part of the
PCB. A small C sketch of such a structure appears after the list of states below.

States of Process

1. New: Newly created process (or) being-created process.

2. Ready: After creation, the process moves to the Ready state, i.e. the process is ready for execution.

3. Run: Currently running process in the CPU (only one process at a time can be under execution in a single
processor).

4. Wait (or Block): When a process requests I/O access.

5. Complete (or Terminated): The process has completed its execution.

6. Suspended Ready: When the ready queue becomes full, some processes are moved to the suspended ready state.

7. Suspended Block: When the waiting queue becomes full.
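
As an illustration of the six attributes listed above, here is a minimal sketch of a PCB declared as a C struct. Real kernels keep far more state than this (for example, Linux’s task_struct), and every field name here is a hypothetical choice for illustration.

/* A toy PCB mirroring the attributes above; all names are illustrative. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED,
                  SUSPENDED_READY, SUSPENDED_BLOCKED };

struct pcb {
    int             pid;             /* 1. Process ID                        */
    enum proc_state state;           /* 2. Process state                     */
    uint64_t        pc;              /* 3. CPU registers: program counter    */
    uint64_t        regs[16];        /*    general-purpose registers         */
    long            cpu_time_used;   /* 4. Accounting information            */
    int             open_fds[16];    /* 5. I/O status: open files, devices   */
    int             priority;        /* 6. CPU scheduling information        */
    struct pcb     *next;            /* link for the process table / queues  */
};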
Thread

What is a thread?

A thread is a path of execution within a process. A process can contain multiple threads.

Why Multithreading?

A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into
multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple
threads: one thread to format the text, another thread to process inputs, etc.

Process vs. Thread

The primary difference is that threads within the same process run in a shared memory space, while processes run
in separate memory spaces.

Threads are not independent of one another like processes are, and as a result threads share with other threads
their code section, data section, and OS resources (like open files and signals). But, like a process, a thread
has its own program counter (PC), register set, and stack space.

Advantages of Thread over Process

1. Responsiveness: If the process is divided into multiple threads and one thread completes its execution, then
its output can be immediately returned.

2. Faster context switch: Context switch time between threads is lower compared to process context switch.
Process context switching requires more overhead from the CPU.

3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, then we can
schedule multiple threads on multiple processors. This will make process execution faster.

4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process.
Note: stack and registers can’t be shared among the threads; each thread has its own stack and registers
(see the sketch after this list).

5. Communication: Communication between multiple threads is easier, as the threads share a common address space,
while between processes we have to follow some specific communication technique.

6. Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
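
The sharing rules above (code, data, and files shared; stack and registers private) can be seen in a minimal POSIX threads sketch. The counter name and loop counts are illustrative choices; compile with -pthread.

/* Two threads in one process sharing the global `data`. */
#include <pthread.h>
#include <stdio.h>

static int data = 0;                       /* shared: data section          */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {       /* i lives on this thread's own stack */
        pthread_mutex_lock(&m);
        data++;                            /* shared among all threads      */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* two threads, one process  */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("data = %d\n", data);               /* 2000: both saw the same data */
    return 0;
}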
User Level Thread & Kernel Level Thread

User-level threads are managed by a thread library in user space, while kernel-level threads are managed directly
by the operating system.

Multi-Threading Models in Process Management

Many operating systems support kernel threads and user threads in a combined way. An example of such a system is
Solaris. Multi-threading models are of three types:

• Many to many model.

• Many to one model.

• One to one model.

Many to Many Model

In this model, we have multiple user threads multiplexed to the same or a lesser number of kernel-level threads.
The number of kernel-level threads is specific to the machine. The advantage of this model is that if a user
thread is blocked, we can schedule other user threads onto other kernel threads. Thus, the system doesn’t block if
a particular thread is blocked.

Many to One Model

In this model, we have multiple user threads mapped to one kernel thread. In this model, when a user thread makes
a blocking system call, the entire process blocks. As we have only one kernel thread and only one user thread can
access the kernel at a time, multiple threads are not able to access a multiprocessor at the same time.

One to One Model

In this model, there is a one to one relationship between kernel and user threads, so multiple threads can run on
multiple processors. The problem with this model is that creating a user thread requires creating the
corresponding kernel thread.

Process Synchronization

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: Execution of one process does not affect the execution of other processes.

Cooperative Process: Execution of one process affects the execution of other processes.

The process synchronization problem arises in the case of Cooperative processes because resources are shared
among them.

Semaphores in Process Synchronization

The semaphore was proposed by Dijkstra in 1965 and is a very significant technique for managing concurrent
processes by using a simple integer value, known as a semaphore. A semaphore is simply a non-negative variable
that is shared between threads. This variable is used to solve the critical section problem and to achieve process
synchronization in a multiprocessing environment.
Semaphores are of two types:

Binary Semaphore – This is also known as a mutex lock. It can have only two values – 0 and 1. Its value is
initialized to 1. It is used to implement the solution of the critical section problem with multiple processes.

Counting Semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource
that has multiple instances.

Now let us see how it does so. First, look at the two operations which can be used to access and change the value
of the semaphore variable: the P and V operations, described below.

Some points regarding the P and V operations:

1. The P operation is also called the wait, sleep or down operation, and the V operation is also called the
signal, wake-up or up operation.

2. Both operations are atomic, and a semaphore(s) is always initialized to one. Here atomic means that the read,
modify and update of the variable happen at the same time/moment with no preemption, i.e. in between the read,
modify and update no other operation is performed that may change the variable.

3. A critical section is surrounded by both operations to implement process synchronization: the critical section
of a process P lies between its P and V operations.

Now suppose there is a resource whose number of instances is 4. We initialize S = 4 and the rest is the same as
for the binary semaphore. Whenever a process wants that resource, it calls the P or wait function, and when it is
done, it calls the V or signal function. If the value of S becomes zero, then a process has to wait until S
becomes positive. For example, suppose there are 4 processes P1, P2, P3, P4, and they all call the wait operation
on S (initialized with 4). If another process P5 wants the resource, then it should wait until one of the four
processes calls the signal function and the value of the semaphore becomes positive. (A runnable version of this
scenario appears after the limitations below.)

Limitations

1. One of the biggest limitations of semaphores is priority inversion.

2. Deadlock: suppose a process is trying to wake up another process which is not in a sleep state. Therefore a
deadlock may block indefinitely.

3. The operating system has to keep track of all calls to wait and signal on the semaphore.
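
The P1..P5 scenario above can be run directly with POSIX semaphores, where sem_wait plays the role of P (wait/down) and sem_post plays V (signal/up). A minimal sketch; the thread count and the sleep() are illustrative choices. Compile with -pthread.

/* Counting semaphore guarding a resource with 4 instances. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static sem_t S;                        /* counting semaphore, initialized to 4 */

static void *process(void *arg)
{
    long id = (long)arg;
    sem_wait(&S);                      /* P / wait / down: acquire one instance */
    printf("P%ld acquired an instance\n", id);
    sleep(1);                          /* use the resource                      */
    printf("P%ld releasing\n", id);
    sem_post(&S);                      /* V / signal / up: release the instance */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&S, 0, 4);                /* 4 instances available                 */
    /* P1..P4 acquire immediately; P5 blocks until someone calls sem_post. */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, process, (void *)(i + 1));
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&S);
    return 0;
}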
Monitors in Process Synchronization

The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages
to achieve mutual exclusion between processes – for example, Java synchronized methods. Java provides the wait()
and notify() constructs.

1. It is a collection of condition variables and procedures combined in a special kind of module or package.

2. Processes running outside the monitor can’t access the internal variables of the monitor, but they can call the
monitor’s procedures.

3. Only one process at a time can execute code inside the monitor.

Condition Variables

Two different operations are performed on the condition variables of the monitor:

Wait.
Signal.

Let us say we have 2 condition variables:
condition x, y; // Declaring variables

Wait operation
x.wait(): A process performing the wait operation on any condition variable is suspended. The suspended processes
are placed in the block queue of that condition variable.

Note: Each condition variable has its own unique block queue.

Signal operation
x.signal(): When a process performs the signal operation on a condition variable, one of the blocked processes is
given a chance to run.

If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.

Advantages of Monitor: Monitors have the advantage of making parallel programming easier and less error prone than
using techniques such as semaphores.

Disadvantages of Monitor: Monitors have to be implemented as part of the programming language. The compiler must
generate code for them. This gives the compiler the additional burden of having to know what operating system
facilities are available to control access to critical sections in concurrent processes. Some languages that do
support monitors are Java, C#, Visual Basic, Ada and Concurrent Euclid.
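
C has no built-in monitors, but the same wait/signal pattern can be assembled from a pthread mutex (playing the monitor lock) and a condition variable. A minimal sketch, assuming a one-slot buffer monitor; the names deposit and fetch are illustrative, not from the notes. Compile with -pthread.

/* Hand-rolled "monitor": one lock, one condition variable, one slot. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER; /* monitor lock      */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;  /* condition variable */
static int full = 0, slot = 0;

static void deposit(int v)
{
    pthread_mutex_lock(&m);            /* only one process inside at a time */
    while (full)
        pthread_cond_wait(&x, &m);     /* x.wait(): suspend, release lock   */
    slot = v;
    full = 1;
    pthread_cond_signal(&x);           /* x.signal(): wake one blocked proc */
    pthread_mutex_unlock(&m);
}

static int fetch(void)
{
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&x, &m);
    int v = slot;
    full = 0;
    pthread_cond_signal(&x);
    pthread_mutex_unlock(&m);
    return v;
}

static void *producer(void *arg) { (void)arg; for (int i = 1; i <= 3; i++) deposit(i); return NULL; }
static void *consumer(void *arg) { (void)arg; for (int i = 0; i < 3; i++) printf("got %d\n", fetch()); return NULL; }

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}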

Peterson’s Algorithm in Process Synchronization

Problem: The producer-consumer problem (or bounded-buffer problem) describes two processes, the producer and the
consumer, which share a common, fixed-size buffer used as a queue. The producer produces an item and puts it into
the buffer. If the buffer is already full, then the producer will have to wait for an empty block in the buffer.
The consumer consumes an item from the buffer. If the buffer is already empty, then the consumer will have to wait
for an item in the buffer. Implement Peterson’s Algorithm for the two processes using shared memory such that
there is mutual exclusion between them. The solution should be free from synchronization problems.

Peterson’s Algorithm

// code for producer (j)
// producer j is ready to produce an item
flag[j] = true;

// but consumer (i) can consume an item
turn = i;

// if consumer is ready to consume an item and
// if it is consumer's turn
while (flag[i] == true && turn == i)
{ // then producer will wait }

// otherwise producer will produce
// an item and put it into buffer (critical section)

// Now, producer is out of critical section
flag[j] = false;
// end of code for producer

//--------------------------------------------------------

// code for consumer i
// consumer i is ready to consume an item
flag[i] = true;

// but producer (j) can produce an item
turn = j;

// if producer is ready to produce an item
// and if it is producer's turn
while (flag[j] == true && turn == j)
{ // then consumer will wait }

// otherwise consumer will consume
// an item from buffer (critical section)

// Now, consumer is out of critical section
flag[i] = false;
// end of code for consumer

Explanation of Peterson’s algorithm

Peterson’s Algorithm is used to synchronize two processes. It uses two variables, a bool array flag of size 2 and
an int variable turn, to accomplish this.

In the solution, i represents the Consumer and j represents the Producer. Initially the flags are false. When a
process wants to execute its critical section, it sets its flag to true and sets turn to the index of the other
process. This means that the process wants to execute, but it will allow the other process to run first. The
process performs busy waiting until the other process has finished its own critical section.

After this, the current process enters its critical section and adds or removes a random number from the shared
buffer. After completing the critical section, it sets its own flag to false, indicating that it does not wish to
execute anymore.

The program runs for a fixed amount of time before exiting. This time can be changed by changing the value of the
macro RT.
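
For a runnable version, the sketch below implements Peterson’s busy-wait lock itself, independent of the producer-consumer framing above, with two POSIX threads incrementing a shared counter. C11 atomics are used because plain loads and stores are not guaranteed to be ordered on modern hardware; the iteration count is an arbitrary choice. Compile with -pthread.

/* Peterson's algorithm for two threads, using C11 atomics. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000

static atomic_int flag[2];             /* flag[i]: thread i wants to enter */
static atomic_int turn;                /* whose turn it is to yield        */
static int shared_counter = 0;         /* the data being protected         */

static void lock(int i)
{
    int j = 1 - i;
    atomic_store(&flag[i], 1);         /* announce intent                  */
    atomic_store(&turn, j);            /* politely let the other go first  */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy-wait                        */
}

static void unlock(int i)
{
    atomic_store(&flag[i], 0);         /* leave the critical section       */
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < ITERS; k++) {
        lock(i);
        shared_counter++;              /* critical section                 */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* With mutual exclusion the count is exactly 2 * ITERS. */
    printf("counter = %d (expected %d)\n", shared_counter, 2 * ITERS);
    return 0;
}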
Bakery Algorithm in Process Synchronization

The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case
of N processes. The Bakery Algorithm is a critical section solution for N processes. The algorithm preserves the
first come first serve property.

• Before entering its critical section, the process receives a number. The holder of the smallest number enters
the critical section.

• If processes Pi and Pj receive the same number,
  if i < j
  Pi is served first;
  else Pj is served first.

• The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1, 2, 3, 3, 3, 3, 4, 5, …

Notation – lexicographical order on (ticket #, process id #): the ticket numbers are compared first; if they are
the same, the process IDs are compared next, i.e.:

– (a, b) < (c, d) if a < c, or if a = c and b < d
– max(a[0], . . ., a[n-1]) is a number, k, such that k >= a[i] for i = 0, . . ., n - 1

Shared data – choosing is an array [0..n – 1] of boolean values, and number is an array [0..n – 1] of integer
values. They are initialized to False and Zero respectively.

Algorithm Pseudocode –

repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], ..., number[n - 1]) + 1;
    choosing[i] := false;

    for j := 0 to n - 1
    do begin
        while choosing[j] do no-op;
        while number[j] != 0
          and (number[j], j) < (number[i], i) do no-op;
    end;

    critical section

    number[i] := 0;

    remainder section
until false;

Explanation

Firstly, the process sets its “choosing” variable to TRUE, indicating its intent to enter the critical section.
Then it gets assigned a ticket number higher than those of the other processes. Then the “choosing” variable is
set to FALSE, indicating that it now has a new ticket number. This is in fact the most important and most
confusing part of the algorithm.

It is actually a small critical section in itself! The very purpose of the first three lines is that if a process
is modifying its TICKET value, then at that time no other process should be allowed to check its old ticket value,
which is now obsolete. This is why, inside the for loop, before checking the ticket value we first make sure that
all other processes have their “choosing” variable set to FALSE.

After that we proceed to check the ticket values of the processes, where the process with the least (ticket
number, process id) pair gets inside the critical section. The exit section just resets the ticket value to zero.
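
The pseudocode translates fairly directly to C. The sketch below makes the usual assumption that the choosing and number arrays are accessed through sequentially consistent atomics; N and the iteration count are arbitrary choices. Compile with -pthread.

/* Bakery lock for N threads, C11 atomics. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define N 4
#define ITERS 10000

static atomic_int choosing[N];
static atomic_int number[N];
static int counter = 0;

static int max_number(void)
{
    int m = 0;
    for (int j = 0; j < N; j++) {
        int n = atomic_load(&number[j]);
        if (n > m) m = n;
    }
    return m;
}

static void lock(int i)
{
    atomic_store(&choosing[i], 1);
    atomic_store(&number[i], max_number() + 1);   /* take a ticket          */
    atomic_store(&choosing[i], 0);
    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;                                     /* wait while j is choosing */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;                                     /* (number[j], j) < (number[i], i) */
    }
}

static void unlock(int i) { atomic_store(&number[i], 0); }

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < ITERS; k++) {
        lock(i);
        counter++;                                /* critical section       */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, worker, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expected %d)\n", counter, N * ITERS);
    return 0;
}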
Sleeping Barber Problem in Process Synchronization

The sleeping barber problem is a known synchronization problem.

Problem: The analogy is based upon a hypothetical barber shop with one barber. The barber shop has one barber, one
barber chair, and n chairs in the waiting room for customers, if there are any, to sit on.

If there is no customer, then the barber sleeps in his own chair.

When a customer arrives, he has to wake up the barber.

If there are many customers and the barber is cutting a customer’s hair, then the remaining customers either wait
if there are empty chairs in the waiting room, or they leave if no chairs are empty.

Solution: The solution to this problem includes three semaphores. The first is for the customers; it counts the
number of customers present in the waiting room (the customer in the barber chair is not included because he is
not waiting). Second, the barber semaphore (0 or 1) is used to tell whether the barber is idle or working. And the
third, a mutex, is used to provide the mutual exclusion required for the processes to execute. The customers keep
a record of the number of customers waiting in the waiting room; if the number of waiting customers is equal to
the number of chairs, then the next arriving customer leaves the barbershop.

When the barber shows up in the morning, he executes the barber procedure, causing him to block on the semaphore
customers because it is initially 0. The barber then goes to sleep until the first customer shows up.

When a customer arrives, he executes the customer procedure. The customer acquires the mutex before entering the
critical region; if another customer enters thereafter, the second one will not be able to do anything until the
first one has released the mutex. The customer then checks the chairs in the waiting room: if the number of
waiting customers is less than the number of chairs, he sits down; otherwise he leaves and releases the mutex.

If a chair is available, the customer sits in the waiting room, increments the waiting-count variable, and raises
the customers semaphore; this wakes up the barber if he is sleeping.

At this point, customer and barber are both awake and the barber is ready to give that person a haircut. When the
haircut is over, the customer exits the procedure, and if there are no customers in the waiting room, the barber
sleeps.

Algorithm for the Sleeping Barber problem:

Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;

Barber {
    while(true) {
        /* waits for a customer (sleeps). */
        down(Customers);
        /* mutex to protect the number of available seats. */
        down(Seats);
        /* a chair gets free. */
        FreeSeats++;
        /* bring customer for haircut. */
        up(Barber);
        /* release the mutex on the chair. */
        up(Seats);
        /* barber is cutting hair. */
    }
}

Customer {
    while(true) {
        /* protects seats so only 1 customer tries to sit
           in a chair if that's the case. */
        down(Seats);
        if(FreeSeats > 0) {
            /* sitting down. */
            FreeSeats--;
            /* notify the barber. */
            up(Customers);
            /* release the lock */
            up(Seats);
            /* wait in the waiting room if barber is busy. */
            down(Barber);
            /* customer is having hair cut */
        } else {
            /* release the lock */
            up(Seats);
            /* customer leaves */
        }
    }
}
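
The pseudocode above maps almost line for line onto POSIX semaphores. A runnable sketch follows; the chair count, the number of customer threads, and the sleep() calls are illustrative choices. Compile with -pthread.

/* Sleeping barber with POSIX semaphores. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 3                              /* waiting-room chairs            */

static sem_t customers;                  /* # waiting customers (init 0)   */
static sem_t barber;                     /* barber ready? (init 0)         */
static sem_t seats;                      /* mutex on free_seats (init 1)   */
static int free_seats = N;

static void *barber_thread(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&customers);            /* sleep until a customer arrives */
        sem_wait(&seats);
        free_seats++;                    /* customer moves to barber chair */
        sem_post(&barber);               /* barber is ready to cut hair    */
        sem_post(&seats);
        printf("barber: cutting hair\n");
        sleep(1);
    }
    return NULL;
}

static void *customer_thread(void *arg)
{
    long id = (long)arg;
    sem_wait(&seats);
    if (free_seats > 0) {
        free_seats--;                    /* sit in the waiting room        */
        sem_post(&customers);            /* wake the barber if asleep      */
        sem_post(&seats);
        sem_wait(&barber);               /* wait until the barber is ready */
        printf("customer %ld: getting haircut\n", id);
    } else {
        sem_post(&seats);
        printf("customer %ld: no chair, leaving\n", id);
    }
    return NULL;
}

int main(void)
{
    pthread_t b, c[5];
    sem_init(&customers, 0, 0);
    sem_init(&barber, 0, 0);
    sem_init(&seats, 0, 1);
    pthread_create(&b, NULL, barber_thread, NULL);
    for (long i = 0; i < 5; i++)
        pthread_create(&c[i], NULL, customer_thread, (void *)(i + 1));
    for (int i = 0; i < 5; i++)
        pthread_join(c[i], NULL);
    sleep(3);                            /* let the barber finish up       */
    return 0;
}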
Processes
A process is a running instance of a program.

Three components of process:

• The executable program itself

• The address space of the process

• The execution context of the process

A thread is a sequence of instructions within a process.

Process Control
An important function of the operating system is to manage all the processes that exist within a computer system.

Process descriptors – data structures wherein information about a process is stored. Typical information about a
process is stored in its PCB (process control block).

The process table is used to keep track of where each PCB is located. It contains a collection of pointers to the
different PCBs.

Process States

New state – a process is being created.

Ready state – the process is ready to be executed by the CPU.

Running state – the process is being executed by the CPU.

Blocked state – the process stops executing because it is waiting for some event to happen.

Terminated state – the process has finished executing or has been aborted.
Concurrency

• Execution of two or more independent processes at the same time.

Cooperating Processes

These are processes that share data with one another.
Process as Operations

Major operations an operating system can perform on processes:

• Process creation

• Process termination

• Suspending a process

• Resuming a process

Context Switching

• The operating system switches the CPU from one process to another.

Techniques that can be used by the operating system to facilitate IPC (inter-process communication):

• Signals – these are software-generated interrupts that are used by a process to inform another process that a
certain event has occurred

• Pipes – these allow a process to pass information to another process (a short example follows this list)

• Message queuing – this is an IPC facility that allows processes to exchange messages with other processes
using a message queue

• Shared memory – allows processes to exchange data through a shared region of memory which the other process
can access
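
As a concrete example of one of these facilities, the sketch below uses fork() to create a second process and pipe() to pass a message from child to parent; the message text is an illustrative choice.

/* Parent and child communicating through a pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                            /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                   /* child process                       */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                        /* parent process                      */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}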
Principles of Scheduling

CPU Scheduling is deciding the order of process or thread execution. This is done by the process manager or the
CPU scheduler.

• Job scheduling – the act of selecting which processes in the job queue will be loaded into main memory

• Job scheduler – the operating system module that is responsible for job scheduling. It is the one that
determines and controls the degree of multiprogramming

• CPU scheduler – the component of the operating system that is responsible for CPU scheduling

Non-preemptive CPU Scheduling

In non-preemptive scheduling, the CPU cannot be taken away from its executing process. The only time the CPU
scheduler can assign the CPU to another process is when the currently executing process terminates or enters a
blocked state.

Pre-emptive Scheduling

In preemptive scheduling, the CPU can be taken away from its executing process. The currently executing process is
sent back to the queue and the CPU scheduler assigns the CPU to another process.

Pre-emptive scheduling will happen if:

1. An interrupt occurred, so the current process has to stop executing. The CPU is then assigned to execute the
ISR of the requesting device or process.

2. The priority of the process that enters the ready queue is higher than that of the currently executing process.
The CPU is then assigned to execute the higher priority process.

3. The time limit of the process for using the CPU has been exceeded. The CPU is then assigned to execute another
process even though the running process has not yet completed its current CPU burst.

Preemptive scheduling is ideally used for interactive or real-time processing. Non-preemptive scheduling is good
for batch processing only.

CPU Scheduling Algorithms

Two basic performance measures (a worked example applying them appears after the list of algorithms below):

Waiting Time = Time the process Left the ready Queue – Arrival Time

Turnaround Time = Completion Time – Arrival Time

Fairness – meaning all processes will be given equal opportunity to use the CPU – is another performance
criterion. It is hard to measure, so it is not included in the above list.

Different CPU scheduling algorithms:

• First come first served algorithm

• Shortest process first algorithm

• Shortest remaining time first algorithm

• Round robin algorithm

• Priority scheduling
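
The two formulas can be exercised with a small first-come-first-served simulation. The three sample processes below are invented for illustration; under FCFS, "time left queue" is simply the moment a process starts running.

/* Waiting and turnaround times under FCFS for a sample workload. */
#include <stdio.h>

struct proc { int arrival, burst; };

int main(void)
{
    struct proc p[] = { {0, 5}, {1, 3}, {2, 8} };   /* sample processes      */
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {                   /* FCFS: run in arrival order */
        if (clock < p[i].arrival)
            clock = p[i].arrival;                   /* CPU idles until arrival */
        int start      = clock;                     /* time process leaves the queue */
        int completion = clock + p[i].burst;
        int waiting    = start - p[i].arrival;      /* Time Left Queue - Arrival */
        int turnaround = completion - p[i].arrival; /* Completion - Arrival      */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        clock = completion;
    }
    return 0;
}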


Worked examples (Gantt charts not reproduced):

First-Come First-Serve Algorithm

Shortest Process First Algorithm

Shortest Remaining Time First Algorithm

Round Robin Algorithm

Priority Scheduling

You might also like