Virtual Memory

Demand paging is a virtual memory technique where pages are loaded into memory only when needed, leading to page faults when a requested page is not in memory. The document also discusses various scheduling algorithms, including preemptive and non-preemptive scheduling, and outlines the roles of long-term, short-term, and medium-term schedulers. Additionally, it covers disk scheduling algorithms aimed at optimizing disk access time and performance.


DEMAND PAGING

Demand paging is a technique used in virtual memory systems where
pages enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages
of a program into memory at runtime, instead of loading the entire
program into memory at the start.
A page fault occurs when the program needs to access a page that is
not currently in memory.

What is a Page Fault?


The term “page miss” or “page fault” refers to a situation where a
referenced page is not found in the main memory.
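
To make this concrete, here is a minimal sketch (not part of the original notes) that simulates demand paging in Python: pages are loaded into a fixed number of frames only when first referenced, and each reference to a non-resident page counts as a page fault. The reference string, frame count, and simple FIFO eviction policy are illustrative assumptions.

    def simulate_demand_paging(reference_string, num_frames):
        frames = []                      # pages currently resident in memory
        page_faults = 0
        for page in reference_string:
            if page not in frames:
                page_faults += 1         # page fault: requested page absent
                if len(frames) == num_frames:
                    frames.pop(0)        # evict the oldest page (FIFO policy)
                frames.append(page)      # load the page on demand
        return page_faults

    # Hypothetical reference string and frame count, for illustration only.
    print(simulate_demand_paging([1, 2, 3, 1, 4, 2], num_frames=3))  # prints 4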

PAGE SIZE
In operating systems, "page size" refers to the size of the fixed-size
blocks (pages) into which virtual memory is divided for memory
management; physical memory is divided into frames of the same size.
Page sizes are typically powers of 2, such as 4 KB or 16 KB, and are
used to map virtual addresses to physical memory addresses.
 Common Page Sizes:
 4 KB: A very common page size, supported by many architectures, including x86 and ARM.
 16 KB: Less common, but some architectures, such as ARM, also support larger page sizes; Android 15 and higher support building Android with 16 KB ELF alignment, which works with both 4 KB and 16 KB kernels starting with android14-6.1.
 Other sizes: Some architectures support other page sizes, such as 2 MB or 4 MB (i386) or 1 GB (newer CPUs).
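
On a POSIX system, the page size in use can be checked from the standard library; the short Python snippet below is one simple way to do it.

    import mmap
    import resource

    # Both values report the system's memory page size in bytes
    # (commonly 4096 on x86 and many ARM configurations).
    print(resource.getpagesize())
    print(mmap.PAGESIZE)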

Priority Scheduling:

 Preemptive Scheduling:
 Allows a higher-priority process to interrupt a lower-priority process that is
currently running.
 This ensures that high-priority tasks are addressed promptly, but can lead to
frequent context switching.
 Non-Preemptive Scheduling:
 Once a process starts running, it continues to run until it completes or
voluntarily yields the CPU, even if a higher-priority process arrives.
 This minimizes context switching, but can lead to longer wait times for high-
priority tasks.
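
To illustrate the difference, the following Python sketch (with made-up processes) models preemptive priority scheduling: the ready queue is a heap keyed on priority, the CPU runs in one-tick quanta, and a newly arrived higher-priority process preempts the running one at the next tick. Smaller numbers mean higher priority.

    import heapq

    # (arrival_time, priority, name, burst); smaller priority = higher priority.
    processes = [(0, 3, "A", 4), (1, 1, "B", 2), (2, 2, "C", 2)]

    def preemptive_priority(procs):
        procs = sorted(procs)                  # order by arrival time
        ready, time, i, timeline = [], 0, 0, []
        while i < len(procs) or ready:
            if not ready:                      # CPU idle: jump to next arrival
                time = max(time, procs[i][0])
            while i < len(procs) and procs[i][0] <= time:
                _, prio, name, burst = procs[i]
                heapq.heappush(ready, (prio, name, burst))
                i += 1
            prio, name, burst = heapq.heappop(ready)
            timeline.append((time, name))      # highest-priority job runs one tick
            time += 1
            if burst > 1:                      # unfinished: back to the ready heap
                heapq.heappush(ready, (prio, name, burst - 1))
        return timeline

    print(preemptive_priority(processes))
    # B preempts A at t=1; A resumes only after B and C finish.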

In operating systems, deadline scheduling prioritizes tasks based on their
deadlines, ensuring the task with the earliest deadline is executed first,
which is crucial for real-time applications.
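
A minimal sketch of the earliest-deadline-first rule, using hypothetical tasks:

    # Earliest Deadline First: among ready tasks, run the one whose
    # deadline is soonest. Tasks are (name, deadline); values are made up.
    tasks = [("t1", 50), ("t2", 30), ("t3", 40)]
    execution_order = sorted(tasks, key=lambda t: t[1])
    print([name for name, _ in execution_order])   # ['t2', 't3', 't1']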

Job or Processor Scheduling?

Categories of Scheduling
Scheduling falls into one of two categories:
 Non-Preemptive: In this case, a process's resources cannot be taken
away before the process has finished running; resources are reassigned
only when the running process finishes or transitions to a waiting state.
 Preemptive: In this case, the OS can switch a process from the running
state to the ready state. This happens when the CPU is given to a
higher-priority process, which replaces the currently active process.

Types of Process Schedulers


There are three types of process schedulers:
1. Long Term or Job Scheduler
Long Term Scheduler loads a process from disk to main memory for
execution. It brings the new process to the 'Ready State'.
 It mainly moves processes from Job Queue to Ready Queue.
 It controls the Degree of Multi-programming, i.e., the number of
processes present in a ready state or in main memory at any point in
time.
 It is important that the long-term scheduler make a careful selection of
both I/O-bound and CPU-bound processes. I/O-bound tasks are those that
spend much of their time on input and output operations, while CPU-bound
processes are those that spend most of their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two.
 In some systems, the long-term scheduler might not even exist. For
example, in time-sharing systems like Microsoft Windows, there is
usually no long-term scheduler. Instead, every new process is directly
added to memory for the short-term scheduler to handle.
 It is the slowest among the three (which is why it is called long-term).

2. Short-Term or CPU Scheduler


CPU Scheduler is responsible for selecting one process from the ready
state for running (or assigning CPU to it).
 STS (Short Term Scheduler) must select a new process for the CPU
frequently to avoid starvation.
 The CPU scheduler uses different scheduling algorithms to balance the
allocation of CPU time.
 It picks a process from the ready queue.
 Its main objective is to make the best use of the CPU.
 It mainly invokes the dispatcher.
 It is the fastest among the three (which is why it is called short-term).
The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (Ready to Running state). Context
switching is done by the dispatcher only. A dispatcher does the following
work:
 Saving the context (process control block) of the previously running
process if it has not finished.
 Switching the system to user mode.
 Jumping to the proper location in the newly loaded program.
The time taken by the dispatcher is called dispatch latency or process
context-switch time.


3. Medium-Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a process from
memory to disk (or swapping).
 It reduces the degree of multiprogramming (Number of processes
present in main memory).
 A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to improve the
process mix (of CPU-bound and I/O-bound processes).
 When needed, it brings the process back into memory so it can pick up
right where it left off.
 It is faster than the long-term scheduler and slower than the short-term
scheduler.


Context Switching

Context switching is the mechanism of storing and restoring the state or
context of a CPU in the process control block, so that a process's
execution can be resumed from the same point at a later time. A context
switcher makes it possible for multiple processes to share a single CPU
using this method. A multitasking operating system must include context
switching among its features.
When the scheduler switches the CPU from executing one process to
another, the state of the currently running process is saved into its
process control block. The state of the process that will run next
(program counter, registers, etc.) is then loaded from its own PCB, after
which the second process can start executing.

During a context switch, the following information is stored in the
process control block:

 Program Counter
 Scheduling information
 The base and limit register values
 Currently used registers
 Process state
 I/O State information
 Accounting information
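
As a rough illustration (not a real kernel structure), this Python sketch models a PCB holding the fields above and the save/restore step of a context switch; the field names and the cpu_state dictionary are assumptions made for the example.

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        program_counter: int
        registers: dict
        base_register: int
        limit_register: int
        state: str                                   # "ready", "running", ...
        io_state: dict = field(default_factory=dict)
        accounting: dict = field(default_factory=dict)

    def context_switch(current: PCB, cpu_state: dict, nxt: PCB) -> dict:
        # Save the outgoing process's CPU state into its PCB...
        current.program_counter = cpu_state["pc"]
        current.registers = dict(cpu_state["regs"])
        current.state = "ready"
        # ...then load the incoming process's saved state onto the CPU.
        nxt.state = "running"
        return {"pc": nxt.program_counter, "regs": dict(nxt.registers)}

    a = PCB(100, {"r0": 1}, 0, 4096, "running")
    b = PCB(200, {"r0": 7}, 0, 4096, "ready")
    print(context_switch(a, {"pc": 104, "regs": {"r0": 2}}, b))  # b's saved state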

Unit 5: Device and Information Management


Disk scheduling in an operating system is crucial for optimizing disk access
time and improving overall system performance by efficiently managing
multiple I/O requests and minimizing disk head movement.
DISK PERFORMANCE OPTIMIZATION
1. Disk Scheduling Algorithms:
 Purpose:
These algorithms determine the order in which the OS processes disk I/O
requests, aiming to minimize seek time and rotational latency.
 Examples:
 First-Come, First-Served (FCFS): Processes requests in the order they
arrive, simple but can lead to inefficient disk usage.
 Shortest Seek Time First (SSTF): Selects the request closest to the current
head position, minimizing seek time but can lead to starvation.
 SCAN: The disk arm moves in one direction, servicing requests along the
way, and then reverses direction.
 C-SCAN: Similar to SCAN, but after reaching the end of the disk, the arm
moves back to the beginning, providing more uniform waiting time.
 Deadline Scheduling: Assigns deadlines to I/O requests and prioritizes
those nearing their deadlines.

Disk Scheduling Algorithms


There are several disk scheduling algorithms. We will discuss each
of them in detail.
 FCFS (First Come First Serve)
 SSTF (Shortest Seek Time First)
 SCAN
 C-SCAN
 LOOK
 C-LOOK
 RSS (Random Scheduling)
 LIFO (Last-In First-Out)
 N-STEP SCAN
 F-SCAN

Important Terms related to Disk Scheduling Algorithms

 Seek Time - It is the time taken by the disk arm to locate the
desired track.
 Rotational Latency - It is the time taken by the desired sector of
the disk to rotate to the position where the read/write head can
access it.
 Transfer Time - It is the time taken to transfer the data
requested by the processes.
 Disk Access Time - Disk Access time is the sum of the Seek
Time, Rotational Latency, and Transfer Time.
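
A quick worked computation (with made-up but typical figures for a 7200 RPM hard disk) shows how the three components add up:

    # Disk access time = seek time + rotational latency + transfer time.
    seek_time_ms = 9.0
    full_rotation_ms = 60_000 / 7200                  # about 8.33 ms per revolution
    avg_rotational_latency_ms = full_rotation_ms / 2  # half a rotation on average
    transfer_time_ms = 0.5

    access_time_ms = seek_time_ms + avg_rotational_latency_ms + transfer_time_ms
    print(round(access_time_ms, 2))                   # about 13.67 ms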

1. FCFS (First Come First Serve)


FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the
requests are addressed in the order they arrive in the disk queue.
2. SSTF (Shortest Seek Time First)
In SSTF (Shortest Seek Time First), requests having the shortest seek
time are executed first. So, the seek time of every request is calculated in
advance in the queue and then they are scheduled according to their
calculated seek time. As a result, the request near the disk arm will get
executed first. SSTF is certainly an improvement over FCFS as it
decreases the average response time and increases the throughput of the
system. Let us understand this with the help of an example.
Example:
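Since the original figure is not reproduced here, the following Python sketch compares total head movement under FCFS and SSTF for a hypothetical request queue with the head starting at track 50.

    requests = [82, 170, 43, 140, 24, 16, 190]
    head = 50

    def fcfs(queue, head):
        total = 0
        for track in queue:                 # service strictly in arrival order
            total += abs(track - head)
            head = track
        return total

    def sstf(queue, head):
        pending, total = list(queue), 0
        while pending:                      # always pick the closest pending track
            track = min(pending, key=lambda t: abs(t - head))
            total += abs(track - head)
            head = track
            pending.remove(track)
        return total

    print(fcfs(requests, head))   # 642 tracks of head movement
    print(sstf(requests, head))   # 208 tracks of head movement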

3. SCAN
In the SCAN algorithm, the disk arm moves in a particular direction and
services the requests coming in its path; after reaching the end of the
disk, it reverses its direction and again services the requests arriving
in its path. Because this algorithm works like an elevator, it is also
known as the elevator algorithm. As a result, the requests at the
midrange are serviced more, and those arriving behind the disk arm have
to wait.
4. C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has been
scanned, after reversing its direction. So, it may be possible that too many
requests are waiting at the other end or there may be zero or few requests
pending at the scanned area.
These situations are avoided in the C-SCAN algorithm, in which the disk
arm, instead of reversing its direction, goes to the other end of the
disk and starts servicing the requests from there. The disk arm thus
moves in a circular fashion; since the algorithm is otherwise similar to
SCAN, it is known as C-SCAN (Circular SCAN).
Example:
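In place of the original figure, here is a short Python sketch of both SCAN and C-SCAN, assuming a disk with tracks 0-199, the same hypothetical request queue as above, a head at track 50, and an initial sweep towards larger track numbers.

    def total_movement(path):
        return sum(abs(b - a) for a, b in zip(path, path[1:]))

    requests = [82, 170, 43, 140, 24, 16, 190]
    head, max_track = 50, 199
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)

    # SCAN: sweep up to the edge of the disk, then reverse and sweep down.
    print(total_movement([head] + up + [max_track] + down))             # 332

    # C-SCAN: sweep up to the edge, jump back to track 0, sweep up again.
    print(total_movement([head] + up + [max_track, 0] + sorted(down)))  # 391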

5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm,
except that, instead of going to the end of the disk, the disk arm goes
only as far as the last request to be serviced in front of the head and
then reverses its direction from there. Thus it prevents the extra delay
caused by unnecessary traversal to the end of the disk.
Example:


Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190,
the read/write arm is at 50, and it is also given that the disk arm
should move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm)
is calculated as:
= (190-50) + (190-16) = 314
6. C-LOOK
As LOOK is similar to the SCAN algorithm, C-LOOK is similarly related to
the C-SCAN disk scheduling algorithm. In C-LOOK, instead of going to the
end of the disk, the disk arm goes only as far as the last request to be
serviced in front of the head, and from there it jumps to the other
end's last request. Thus, it also prevents the extra delay caused by
unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190,
the read/write arm is at 50, and it is also given that the disk arm
should move "towards the larger value".

So, the total overhead movement (total distance covered by the disk arm)
is calculated as:
= (190-50) + (190-16) + (43-16) = 341
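
The same totals can be checked with a few lines of Python (a sketch reusing the request list above):

    def total_movement(path):
        return sum(abs(b - a) for a, b in zip(path, path[1:]))

    requests = [82, 170, 43, 140, 24, 16, 190]
    head = 50
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)

    # LOOK: sweep up only as far as the largest request, then reverse.
    print(total_movement([head] + up + down))           # 314

    # C-LOOK: sweep up to the largest request, jump to the smallest,
    # then service the remaining requests moving upward.
    print(total_movement([head] + up + sorted(down)))   # 341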
7. RSS (Random Scheduling)
RSS stands for Random Scheduling, and just as its name suggests, it
services requests in random order. In situations where scheduling
involves random attributes such as random processing time, random due
dates, random weights, and stochastic machine breakdowns, this algorithm
fits well, which is why it is usually used for analysis and simulation.
8. LIFO (Last-In First-Out)
In the LIFO (Last In, First Out) algorithm, the newest jobs are serviced
before the existing ones; that is, the most recently arrived request is
serviced first, and then the rest in the same order.
Advantages and Disadvantages of LIFO (Last-In First-Out)
 It maximizes locality and resource utilization.
 However, it can be unfair to other requests: if new requests keep
coming in, it causes starvation of the old, existing ones.
9. N-STEP SCAN
It is also known as the N-STEP LOOK algorithm. In this, a buffer is
created for N requests. All requests in the buffer are serviced in one
go. Once the buffer is full, new requests are not added to it but are
sent to another buffer. When these N requests have been serviced, the
next batch of N requests is taken up; in this way every request gets
guaranteed service.
Advantages of N-STEP SCAN
Here are some of the advantages of the N-Step Algorithm.
 It eliminates the starvation of requests completely
10. F-SCAN
This algorithm uses two sub-queues. During the scan, all requests in the
first queue are serviced and the new incoming requests are added to the
second queue. All new requests are kept on halt until the existing
requests in the first queue are serviced.
Advantages of F-SCAN
Here are some of the advantages of the F-SCAN Algorithm.
 F-SCAN, along with N-STEP SCAN, prevents "arm stickiness" (a
phenomenon in I/O scheduling where the scheduling algorithm continues
to service requests at or near the current sector and thus prevents
any seeking).
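
A toy Python sketch of the two-queue idea (queue contents are made up): requests that arrive during a sweep are frozen into a second queue and wait for the next sweep.

    from collections import deque

    active, frozen = deque([82, 140, 170]), deque()

    def submit(track):
        frozen.append(track)        # new arrivals never join the current sweep

    def run_sweep():
        global active, frozen
        while active:
            print("servicing track", active.popleft())
        active, frozen = frozen, deque()   # swap queues for the next sweep

    submit(24)       # arrives mid-sweep: deferred until the next sweep
    run_sweep()      # services 82, 140, 170
    run_sweep()      # services 24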
Each algorithm is unique in its own way. Overall performance depends on
the number and type of requests.
Note: The average rotational latency is generally taken as 1/2 × (the
time for one full rotation).
