
1. Explain functions of Operating System [5M]

Functions of an Operating System

 Memory Management

 The OS keeps track of primary memory, i.e., which bytes of memory are used by
which user program, which memory addresses have already been allocated, and
which have not yet been used.
 In multiprogramming, the OS decides the order in which processes are granted
memory access, and for how long.
 It allocates memory to a process when the process requests it and
deallocates the memory when the process has terminated or is performing an I/O
operation.

 Processor Management

In a multiprogramming environment, the OS decides the order in which processes
have access to the processor, and how much processing time each process gets. This
function of the OS is called process scheduling. An operating system performs the
following activities for processor management:
 Keeps track of the status of processes. The program which performs this task is
known as the traffic controller.
 Allocates the CPU (processor) to a process.
 De-allocates the processor when a process no longer requires it.

 Device Management

An OS manages device communication via the respective device drivers. It performs
the following activities for device management:
 Keeps track of all devices connected to the system.
 Designates a program responsible for every device, known as the Input/Output
controller.
 Decides which process gets access to a certain device and for how long.
 Allocates devices effectively and efficiently.
 Deallocates devices when they are no longer required.

 File Management

A file system is organized into directories for efficient and easy navigation and usage.
These directories may contain other directories and files. An operating system
carries out the following file management activities: it keeps track of where information
is stored, user access settings, the status of every file, and more. These facilities are
collectively known as the file system.

 Job Accounting

The operating system keeps track of the time and resources used by various tasks and
users; this information can be used to track resource usage for a particular user or
group of users.

2. Explain Services of Operating System


Services Provided by an Operating System
Program Execution:
The Operating System is responsible for the execution of all types of programs
whether it be user programs or system programs. The Operating System utilizes
various resources available for the efficient running of all types of functionalities.

Handling Input/Output Operations:


The Operating System is responsible for handling all sorts of inputs, i.e., from
the keyboard, mouse, etc. It does all interfacing in the most appropriate
manner for all kinds of inputs and outputs.
For example, peripheral devices such as mice and keyboards differ from one
another; the Operating System is responsible for handling data transfer between them.

Manipulation of File System:


The Operating System is responsible for making decisions regarding the storage
of all types of data or files, i.e., on floppy disk, hard disk, pen drive, etc. The Operating
System decides how the data should be manipulated and stored.

Error Detection and Handling:


The Operating System is responsible for detecting any type of error or
bug that can occur while performing any task. A well-secured OS also acts as a
countermeasure, preventing breaches of the computer system from any
external source and handling them.

Resource Allocation:
The Operating System ensures the proper use of all the resources available by
deciding which resource is used by whom and for how long. All these decisions are
taken by the Operating System.
Accounting:
The Operating System keeps an account of all the functionalities taking place in
the computer system at a time. All details, such as the types of errors that occurred,
are recorded by the Operating System.

Information and Resource Protection:


The Operating System is responsible for using all the information and resources
available on the machine in the most protected way. The Operating System must foil
any attempt by an external source to tamper with any data or information.

System Services:
The operating system provides various system services, such as printing, time
and date management, and event logging.

3. Draw and explain Process state transition diagram [5M]

4. Draw and explain Process Control Block


A process control block (PCB) contains information about the process, e.g., registers,
quantum, priority, etc. It is also known as a Task Control Block (TCB).

 Process state – It stores the current state of the process, i.e. new, ready,
running, waiting or terminated.
 Process number – Every process is assigned a unique ID, known as the
process ID or PID, which identifies that particular process.
 Program counter – It stores the address of the
next instruction to be executed for the process.
 Registers – These are the CPU registers used by the process, which include
the accumulator, base registers and general-purpose registers.
 Memory limits – This field contains information about the memory
management system used by the operating system. This may include the page tables,
segment tables, etc.
 Open files list – This includes the list of files opened by the
process.

A PCB may also store:
1. CPU Scheduling Information
2. Memory Management Information
3. Input/Output Status Information
4. Accounting Information

4. Explain Different categories of system calls.


 Process control:
These system calls deal with processes, e.g. process creation and process
termination: end, abort, create, terminate, allocate and free memory.

 File management:
These system calls are responsible for file manipulation, such as create file, open file,
close file, delete file, read file, etc.

 Device management:
These system calls are responsible for device manipulation, such as writing to
devices, reading from devices, etc.

 Information maintenance:
These system calls handle information and its transfer between the OS and user
programs.

 Communication:
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.

5. Explain layered architecture of Operating System [5M]


Layered Architecture is a type of system structure in which the different services of
the operating system are split into various layers, where each layer has a specific
well-defined task to perform.
The whole operating system is separated into several layers (from 0 to n) as the
diagram shows. Each layer has its own specific function to perform.
There are some rules in the implementation of the layers, as follows:

1. The outermost layer must be the User Interface layer.
2. The innermost layer must be the Hardware layer.
3. A particular layer can access all the layers below it but cannot access
the layers above it. That is, layer n-1 can access all the layers from n-2 down to 0,
but it cannot access layer n.

6. Consider the given table below and find Completion time (CT),
Turn-around time (TAT), Waiting time (WT), Response time (RT),
Average Turn-around time and Average Waiting time by using FCFS
algorithm.
Process ID Arrival Time Burst Time
P1 2 2
P2 5 6
P3 0 4
P4 0 7
P5 7 4
[5M]
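A minimal sketch of the FCFS computation for this table, assuming ties in arrival time are broken by process ID:

```python
# Hedged sketch: FCFS scheduling for the table in Q6.
# Assumes ties in arrival time are broken by process ID.
procs = [("P1", 2, 2), ("P2", 5, 6), ("P3", 0, 4), ("P4", 0, 7), ("P5", 7, 4)]

def fcfs(processes):
    # Sort by arrival time (FCFS order), then by ID for ties.
    order = sorted(processes, key=lambda p: (p[1], p[0]))
    t, rows = 0, {}
    for pid, at, bt in order:
        start = max(t, at)        # CPU may sit idle until the process arrives
        ct = start + bt           # completion time
        tat = ct - at             # turnaround time = CT - arrival
        wt = tat - bt             # waiting time = TAT - burst
        rt = start - at           # response time; equals WT under FCFS
        rows[pid] = (ct, tat, wt, rt)
        t = ct
    return rows

rows = fcfs(procs)
for pid in sorted(rows):
    print(pid, rows[pid])
avg_tat = sum(r[1] for r in rows.values()) / len(rows)
avg_wt = sum(r[2] for r in rows.values()) / len(rows)
print("Average TAT =", avg_tat, "Average WT =", avg_wt)   # 11.2 and 6.6
```

With this tie-break, the run order is P3, P4, P1, P2, P5, giving Average TAT = 11.2 and Average WT = 6.6.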
7. What is Inter-Process Communication (IPC)? Explain any one
classical problem of synchronization. [8M]
A process can be of two types:

 Independent process.

An independent process is not affected by the execution of other processes

 Co-operating process.

A co-operating process can be affected by other executing processes.

Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them.

Processes can communicate with each other through both:

i) Shared Memory Method

Communication between processes using shared memory requires the processes to
share some variable, and it depends entirely on how the programmer
implements it. One way of communicating using shared memory can be imagined like
this: suppose process1 and process2 are executing simultaneously, and they share
some resources or use some information from each other. Process1 generates
information about certain computations or resources being used and keeps it as a
record in shared memory. When process2 needs to use the shared information, it
checks the record stored in shared memory, takes note of the information
generated by process1 and acts accordingly. Processes can thus use shared memory
for extracting information recorded by another process.
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces
some items and the Consumer consumes that item. The two processes share a
common space or memory location known as a buffer where the item produced by the
Producer is stored and from which the Consumer consumes the item if needed.

There are two versions of this problem: the first one is known as the unbounded
buffer problem in which the Producer can keep on producing items and there is no limit
on the size of the buffer, the second one is known as the bounded buffer problem in
which the Producer can produce up to a certain number of items before it starts
waiting for Consumer to consume it. We will discuss the bounded buffer problem. First,
the Producer and the Consumer will share some common memory, then the producer
will start producing items.

If the total produced item is equal to the size of the buffer, the producer will wait
to get it consumed by the Consumer. Similarly, the consumer will first check for the
availability of the item. If no item is available, the Consumer will wait for the Producer to
produce it. If there are items available, Consumer will consume them.
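The bounded-buffer behaviour described above can be sketched with Python's thread-safe queue, which blocks the producer when the buffer is full and the consumer when it is empty; the buffer size and item count here are illustrative choices, not part of the problem statement.

```python
# Hedged sketch of the bounded-buffer (Producer-Consumer) problem using
# Python's threading and a blocking bounded queue as the shared buffer.
import threading, queue

BUF_SIZE = 3
buf = queue.Queue(maxsize=BUF_SIZE)   # bounded buffer shared by both threads
consumed = []

def producer(n):
    for i in range(n):
        buf.put(i)                    # blocks when the buffer is full

def consumer(n):
    for _ in range(n):
        consumed.append(buf.get())    # blocks when the buffer is empty
        buf.task_done()

N = 10
p = threading.Thread(target=producer, args=(N,))
c = threading.Thread(target=consumer, args=(N,))
p.start(); c.start()
p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, ..., 9]
```

The queue plays the role of the shared memory region; its internal lock and condition variables provide the waiting described above.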

ii) Message Passing Method

In this method, processes communicate with each other without using any kind
of shared memory. If two processes p1 and p2 want to communicate with each other,
they proceed as follows:

 Establish a communication link (if a link already exists, there is no need to establish it
again).
 Start exchanging messages using basic primitives. We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
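These two primitives can be illustrated with Python's multiprocessing pipe standing in for the communication link; the message contents are illustrative.

```python
# Hedged sketch: message passing between two processes over a Pipe,
# standing in for the send()/receive() primitives above.
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()              # receive(message) over the link
    conn.send(msg.upper())         # send(message) back to the parent
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe() # establish the communication link
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello")       # send(message, destination)
    print(parent_end.recv())       # prints the child's reply
    p.join()
```

No memory is shared here; all data moves as messages through the link the pipe provides.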
8. Write short note on: a) Critical Section b) Semaphore [7M]
a) Critical Section:-
When more than one process tries to access the same code segment, that
segment is known as the critical section. The critical section contains shared variables
or resources which need to be synchronized to maintain the consistency of data
variables.

1. The critical section must be executed as an atomic operation, which means
that once one thread or process has entered the critical section, all other threads or
processes must wait until the executing thread or process exits the critical section. The
purpose of synchronization mechanisms is to ensure that only one thread or process
can execute the critical section at a time.
2. The concept of a critical section is central to synchronization in computer
systems, as it is necessary to ensure that multiple threads or processes can execute
concurrently without interfering with each other. Various synchronization mechanisms
such as semaphores, mutexes, monitors, and condition variables are used to
implement critical sections and ensure that shared resources are accessed in a
mutually exclusive manner.
The use of critical sections in synchronization can be advantageous in improving
the performance of concurrent systems, as it allows multiple threads or processes to
work together without interfering with each other. However, care must be taken in
designing and implementing critical sections, as incorrect synchronization can lead to
race conditions or deadlocks.

b) Semaphores
A semaphore is a compound data type with two fields: a non-negative
integer S.V and a set of processes waiting in a queue S.L. It is used to
solve critical section problems by means of two atomic operations,
wait and signal, which are used for process synchronization.

States of the process:

Let's go through the states a process passes through in its lifecycle. This will help in
understanding semaphores.

1. Running – The process is in execution.
2. Ready – The process wants to run.
3. Idle – This process runs when no other processes are running.
4. Blocked – The process is not ready and not a candidate for running. It
can be awakened by some external action.
5. Inactive – The initial state of the process. The process is activated at some
point and becomes ready.
6. Complete – The process has executed its final statement.

Semaphores are used to implement critical sections, which are regions of code
that must be executed by only one process at a time. By using semaphores, processes
can coordinate access to shared resources, such as shared memory or I/O devices.
Semaphores are of two types:

1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1.
Its value is initialized to 1. It is used to implement the solution of critical section
problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.
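Both semaphore types can be sketched with Python's threading primitives; the thread count and the two-instance resource pool are illustrative choices.

```python
# Hedged sketch: a binary semaphore guarding a critical section, and a
# counting semaphore limiting concurrent access to a 2-instance resource.
import threading

mutex = threading.Semaphore(1)       # binary semaphore, initialized to 1
pool = threading.Semaphore(2)        # counting semaphore: 2 resource instances
counter = 0

def worker():
    global counter
    mutex.acquire()                  # wait(mutex)
    counter += 1                     # critical section: one thread at a time
    mutex.release()                  # signal(mutex)
    with pool:                       # wait()/signal() on the counting semaphore
        pass                         # at most 2 threads are in here at once

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 50: no increments are lost under mutual exclusion
```

Without the binary semaphore, concurrent `counter += 1` updates could interleave and lose increments; with it, the final count always equals the number of threads.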

9. Consider the following snapshot of a system

4 3 3 R1 R2 R3

7 2 4 2 1 1

4 2 5 7 2 3

5 3 3 3 2 2
1 1 3

Answer the following questions using the banker's algorithm:

What are the contents of the matrix Need?
Is the system in a safe state?
Find the safe sequences. [10 M]
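The safety check of the banker's algorithm can be sketched as follows. The snapshot above is hard to read in this copy, so the matrices below are illustrative values, not the exam's data: Need = Max - Allocation, then the algorithm repeatedly looks for a process whose Need fits within the available Work vector.

```python
# Hedged sketch of the Banker's safety algorithm with illustrative
# (made-up) Allocation, Max and Available values for 4 processes
# and 3 resource types.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1]]
maxm  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2]]
avail = [3, 3, 2]

# Need = Max - Allocation, element-wise.
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maxm, alloc)]

def safe_sequence(alloc, need, avail):
    work, finished, seq = list(avail), [False] * len(alloc), []
    while len(seq) < len(alloc):
        for i in range(len(alloc)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Pi can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                seq.append(i)
                break
        else:
            return None      # no runnable process remains: unsafe state
    return seq

print("Need =", need)
print("Safe sequence:", safe_sequence(alloc, need, avail))
```

For these illustrative matrices the state is safe, since a complete sequence is found; a `None` result would mean the state is unsafe.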
10. Explain 4 conditions to occur deadlock [5M]
 Mutual Exclusion:
When two people meet on a staircase landing, they can't just walk through, because
there is space for only one person. This condition, which allows only one person (or
process) to use the step between them (or the resource) at a time, is the first condition
necessary for the occurrence of deadlock.
 Hold and Wait:
When the two people refuse to retreat and hold their ground, it is called hold and
wait. This is the second necessary condition for deadlock.

 No Preemption:
To resolve the deadlock, one could simply cancel one of the processes so the other
can continue. But the Operating System doesn't do so: it allocates the resources to the
processes for as much time as is needed until the task is completed. Hence, there is
no temporary reallocation of the resources. This is the third condition for deadlock.

 Circular Wait:
When the two people refuse to retreat and wait for each other to retreat so that
they can complete their task, it is called circular wait. It is the last condition for
deadlock to occur.

11. Explain free space Management techniques in detail [8M]


Free space management is a critical aspect of operating systems as it involves
managing the available storage space on the hard disk or other secondary storage
devices. The operating system uses various techniques to manage free space and
optimize the use of storage devices. Here are some of the commonly used free space
management techniques:
1. Linked Allocation:
In this technique, each file is represented by a linked list of disk blocks. When a
file is created, the operating system finds enough free space on the disk and links the
blocks of the file to form a chain. This method is simple to implement but can lead to
fragmentation and wastage of space.
2. Contiguous Allocation:
In this technique, each file is stored as a contiguous block of disk space. When
a file is created, the operating system finds a contiguous block of free space and
assigns it to the file. This method is efficient as it minimizes fragmentation but suffers
from the problem of external fragmentation.
3. Indexed Allocation:
In this technique, a separate index block is used to store the addresses of all the
disk blocks that make up a file. When a file is created, the operating system creates an
index block and stores the addresses of all the blocks in the file. This method is
efficient in terms of storage space and minimizes fragmentation.
4. File Allocation Table (FAT):
In this technique, the operating system uses a file allocation table to keep track
of the location of each file on the disk. When a file is created, the operating system
updates the file allocation table with the address of the disk blocks that make up the
file. This method is widely used in Microsoft Windows operating systems.
5. Volume Shadow Copy:
This is a technology used in Microsoft Windows operating systems to
create backup copies of files or entire volumes. When a file is modified, the
operating system creates a shadow copy of the file and stores it in a separate
location. This method is useful for data recovery and protection against accidental
file deletion.

12. Write short note on a) Paging b) Segmentation [7 M]


a) Paging
Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory. The process of retrieving processes in the
form of pages from secondary storage into main memory is known as paging.
The basic purpose of paging is to divide each process into pages; additionally,
main memory is divided into frames. This scheme permits the physical
address space of a process to be non-contiguous.

Some of the important features of paging in computer memory management:

1. Logical to physical address mapping:

In paging, the logical address space of a process is divided into fixed-
sized pages, and each page is mapped to a corresponding physical frame
in main memory. This allows the operating system to manage the
memory in a more flexible way, as it can allocate and deallocate frames as
needed.

2. Fixed page and frame size:

Paging uses a fixed page size, which is usually equal to the
size of a frame in main memory. This helps to simplify the
memory management process and improves system performance.

3. Page table entries:

Each page in the logical address space of a process is represented by a
page table entry (PTE), which contains information about the corresponding
physical frame in main memory. This includes the frame number, as well
as other control bits which are used by the operating system to
manage the memory.

4. Number of page table entries:

The number of page table entries in a process's page table is equal to the
number of pages in the logical address space of the process.
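The page-to-frame mapping described in points 1 and 3 can be sketched as an address translation; the page size and page-table contents below are made-up illustrative values.

```python
# Hedged sketch: logical-to-physical address translation with a page table.
# Page size and table contents are illustrative, not from the text.
PAGE_SIZE = 1024                 # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (the PTEs)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE     # high bits select the page
    offset = logical_addr % PAGE_SIZE    # low bits are carried over unchanged
    frame = page_table[page]             # page-table lookup
    return frame * PAGE_SIZE + offset

# Logical address 2*1024 + 100 lies in page 2 at offset 100 -> frame 7.
print(translate(2 * PAGE_SIZE + 100))    # 7 * 1024 + 100 = 7268
```

Because only the page number changes during translation, consecutive addresses within one page stay consecutive in the frame, while the pages themselves can sit anywhere in physical memory.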

b) Segmentation
A process is divided into segments. The chunks that a program is divided into, which
are not necessarily all of the same size, are called segments. Segmentation gives the
user's view of the process, which paging does not give. Here the user's view is mapped
to physical memory. There are two types of segmentation:

1. Virtual memory segmentation – Each process is divided into a number of
segments, but the segmentation is not done all at once. This segmentation may or
may not take place at run time of the program.
2. Simple segmentation – Each process is divided into a number of segments, all
of which are loaded into memory at run time, though not necessarily contiguously.

Advantages of Segmentation –

 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU
utilization.
 Segmentation is close to the user's view of physical memory.
Users can divide user programs into modules via segmentation; these modules
are simply the program's separate units of code.
 The user specifies the segment size, whereas in paging, the hardware
determines the page size.
 Segmentation is a method that can be used to segregate data from security
operations.
 Flexibility: Segmentation provides a higher degree of flexibility than paging.
Segments can be of variable size, and processes can be designed to have multiple
segments, allowing for more fine-grained memory allocation.
 Sharing: Segmentation allows for sharing of memory segments between
processes. This can be useful for inter-process communication or for sharing code
libraries.
 Protection: Segmentation provides a level of protection between segments,
preventing one process from accessing or modifying another process’s memory
segment. This can help increase the security and stability of the system.
13. Explain Contiguous and non-contiguous Memory allocation

Contiguous Memory Allocation vs. Non-Contiguous Memory Allocation:

1. Contiguous memory allocation allocates consecutive blocks of memory to a
file/process. Non-contiguous memory allocation allocates separate blocks of
memory to a file/process.

2. Contiguous allocation is faster in execution; non-contiguous allocation is
slower.

3. Contiguous allocation is easier for the OS to control; non-contiguous
allocation is more difficult to control.

4. With contiguous allocation, overhead is minimal, as few address translations
are needed while executing a process. Non-contiguous allocation has more
overhead, as there are more address translations.

5. Both internal and external fragmentation can occur with contiguous memory
allocation. With non-contiguous allocation, paging avoids external fragmentation
(though pages can suffer internal fragmentation), while segmentation can suffer
external fragmentation.

6. Contiguous allocation includes single-partition allocation and multi-partition
allocation. Non-contiguous allocation includes paging and segmentation.

7. Contiguous allocation wastes memory; non-contiguous allocation avoids this
wastage.

8. In contiguous memory allocation, swapped-in processes are arranged in the
originally allocated space. In non-contiguous memory allocation, swapped-in
processes can be placed anywhere in memory.

9. Contiguous allocation is of two types:
1. Fixed (or static) partitioning
2. Dynamic partitioning
Non-contiguous allocation is of five types:
1. Paging
2. Multilevel Paging
3. Inverted Paging
4. Segmentation
5. Segmented Paging

10. Contiguous allocation can be visualized and implemented using arrays;
non-contiguous allocation can be implemented using linked lists.

11. With contiguous allocation, the degree of multiprogramming is fixed by the
fixed partitions; with non-contiguous allocation it is not fixed.

14. Explain Memory allocation algorithms with an example. [5M]

Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient
manner. One of the simplest methods for allocating memory is to divide memory into
several fixed-sized partitions, where each partition contains exactly one process. Thus,
the degree of multiprogramming is bound by the number of partitions.
 Multiple partition allocation (fixed partitioning):
In this method, a process is selected from the input queue and loaded into a
free partition. When the process terminates, the partition becomes available for other
processes.
 Variable partition allocation (dynamic partitioning):
In this method, the operating system maintains a table that indicates which parts
of memory are available and which are occupied by processes. Initially, all memory is
available for user processes and is considered one large block of available memory,
known as a "hole". When a process arrives and needs
memory, we search for a hole that is large enough to store the process. If the
requirement is fulfilled, we allocate memory to the process, keeping the rest
available to satisfy future requests. While allocating memory, dynamic
storage allocation problems can occur, which concern how to satisfy a request of
size n from a list of free holes.

15. Consider Page references 7,0,1,2,0,3,0,4,2,3,0,3,2,3 with 4 page
frames. Find the number of page faults by using
a) FIFO b) LRU c) Optimal page replacement [12M]

1. First In First Out (FIFO): In this algorithm, the page that has been in
memory the longest (the oldest page) is replaced.

Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4
page faults. 0 is already there —> 0 page faults. When 3 comes it takes the place of 7
(the oldest page) —> 1 page fault. 0 is already there —> 0 page faults. 4 takes the
place of 0 —> 1 page fault. 2 and 3 are already there —> 0 page faults. 0 takes the
place of 1 —> 1 page fault. The remaining references 3, 2, 3 are hits —> 0 page
faults. Total = 7 page faults.

2. Least Recently Used (LRU): In this algorithm, the page that was least
recently used is replaced.

Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4
page faults.
0 is already there —> 0 page faults. When 3 comes it takes the place of 7 because 7
is the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already
available in memory. Total = 6 page faults.

3. Optimal Page Replacement:

In this algorithm, the page replaced is the one that will not be used for the longest
duration of time in the future.

Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4
page faults.
0 is already there —> 0 page faults. When 3 comes it takes the place of 7 because 7
is not used again for the longest duration of time in the future —> 1 page fault. 0 is
already there —> 0 page faults. 4 takes the place of 1 —> 1 page fault.

For the rest of the reference string —> 0 page faults, because the pages are already
available in memory. Total = 6 page faults.
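The fault counts for all three policies on the Q15 reference string can be checked with a small sketch:

```python
# Hedged sketch counting page faults for FIFO, LRU and Optimal on the
# reference string from Q15 with 4 page frames.
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
FRAMES = 4

def fifo(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)          # evict the oldest page
            mem.append(p)
    return faults

def lru(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)          # front of the list is least recently used
        mem.append(p)
    return faults

def optimal(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # Evict the page used farthest in the future (or never again).
                future = {q: (refs[i+1:].index(q) if q in refs[i+1:]
                              else len(refs)) for q in mem}
                mem.remove(max(mem, key=future.get))
            mem.append(p)
    return faults

print(fifo(refs, FRAMES), lru(refs, FRAMES), optimal(refs, FRAMES))  # 7 6 6
```

These counts match the worked answers: 7 faults for FIFO and 6 each for LRU and Optimal.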

16. Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in
order), how would each of the First-fit, Best-fit, and Worst-fit
algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
Which algorithm makes the most efficient use of memory?
[8M]
First-Fit:

212K is put in the 500K partition.
417K is put in the 600K partition.
112K is put in the 288K partition (new partition: 288K = 500K - 212K).
426K must wait.

Best-Fit:

212K is put in the 300K partition.
417K is put in the 500K partition.
112K is put in the 200K partition.
426K is put in the 600K partition.

Worst-Fit:

212K is put in the 600K partition.
417K is put in the 500K partition.
112K is put in the 388K partition (new partition: 388K = 600K - 212K).
426K must wait.

In this example, Best-Fit makes the most efficient use of memory, since it is the
only algorithm that places all four processes.
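The three placements above can be reproduced with a short sketch in which each hole shrinks in place after an allocation (no coalescing is modelled):

```python
# Hedged sketch of First-, Best- and Worst-fit placement for Q16.
# Each allocation shrinks the chosen hole; leftover space stays as a hole.
def place(partitions, requests, choose):
    free = list(partitions)          # remaining size of each hole
    placed = []
    for req in requests:
        fits = [i for i, size in enumerate(free) if size >= req]
        if not fits:
            placed.append(None)      # request must wait
            continue
        i = choose(fits, free)
        placed.append(free[i])       # size of the hole at allocation time
        free[i] -= req
    return placed

parts = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]

first = place(parts, reqs, lambda fits, free: fits[0])
best  = place(parts, reqs, lambda fits, free: min(fits, key=lambda i: free[i]))
worst = place(parts, reqs, lambda fits, free: max(fits, key=lambda i: free[i]))
print(first)   # [500, 600, 288, None]
print(best)    # [300, 500, 200, 600]
print(worst)   # [600, 500, 388, None]
```

A `None` entry means the request must wait; only Best-Fit places all four requests here.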

17. Explain I/O Buffering in detail [8M]

A buffer is a memory area that stores data being transferred between two
devices or between a device and an application.

Uses of I/O Buffering:

 Buffering is done to deal effectively with a speed mismatch between the
producer and the consumer of a data stream.
 A buffer is created in main memory to accumulate the bytes received from a
modem.
 After the data is received in the buffer, it is transferred from the buffer to
disk in a single operation.
 This process of data transfer is not instantaneous, therefore the modem needs
another buffer in which to store additional incoming data.
 When the first buffer is filled, a request is made to transfer its data to disk.
 The modem then starts filling the second buffer with the additional incoming
data while the data in the first buffer is being transferred to disk.
 When both buffers have completed their tasks, the modem switches back to
the first buffer while the data from the second buffer is transferred to disk.
 The use of two buffers decouples the producer and the consumer of the data,
thus relaxing the timing requirements between them.
 Buffering also accommodates devices that have different data transfer
sizes.
18. Explain Direct Memory Access (DMA) in detail [7M]

A DMA Controller is a hardware device that allows I/O devices to directly access
memory with minimal participation of the processor. The DMA controller uses the
standard circuits of an interface to communicate with the CPU and Input/Output
devices.

What is a DMA Controller?


Direct Memory Access uses hardware for accessing the memory; that hardware
is called a DMA Controller. Its job is to transfer data between Input/Output
devices and main memory with very little interaction with the processor. The
Direct Memory Access Controller is a control unit whose job is to transfer
data.

DMA Controller Diagram in Computer Architecture


The DMA Controller is a type of control unit that works as an interface between the
data bus and the I/O devices. As mentioned, the DMA Controller transfers
the data without the intervention of the processor, although the processor can
initiate and control the transfer. The DMA Controller also contains an address unit,
which generates the address and selects an I/O device for the transfer of data. Here
we show the block diagram of the DMA Controller.

There are four popular types of DMA:

Single-Ended DMA: Single-ended DMA controllers operate by reading and writing
from a single memory address. They are the simplest DMA controllers.

Dual-Ended DMA: Dual-ended DMA controllers can read and write from two memory
addresses. Dual-ended DMA is more advanced than single-ended DMA.

Arbitrated-Ended DMA: Arbitrated-ended DMA works by reading and writing to
several memory addresses. It is more advanced than dual-ended DMA.

Interleaved DMA: Interleaved DMA reads from one memory
address and writes to another memory address.

19. Write short Note on :


Directory Structure and Access method

Directory Structure:-
A directory is a container that is used to contain folders and files. It organizes
files and folders in a hierarchical manner.

There are several logical structures of a directory, these are given below.
Single-level directory –
The single-level directory is the simplest directory structure. In it, all files are contained
in the same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files
increases or when the system has more than one user. Since all the files are in the
same directory, they must have unique names. If two users name their dataset "test",
the unique-name rule is violated.
Memory access methods:
These are 4 types of memory access methods:

1. Sequential Access:-
In this method, memory is accessed in a specific linear, sequential manner,
like traversing a singly linked list. The access time depends on the location of the
data.

Applications of sequential memory access are magnetic tapes, magnetic disks and
optical memories.

2. Random Access:

In this method, any location of the memory can be accessed randomly like
accessing in Array. Physical locations are independent in this access method.

Applications of this random memory access are RAM and ROM.

3. Direct Access:

In this method, individual blocks or records have a unique address based on
physical location. Access is accomplished by direct access to reach a general
vicinity, plus sequential searching, counting or waiting to reach the final destination.
This method is a combination of the above two access methods. The access time
depends on both the memory organization and the characteristics of the storage
technology. The access is semi-random or direct.
An application of direct memory access is the magnetic hard disk with its read/write
head.

4. Associative Access:

In this method, a word is accessed by its content rather than its address. This access
method is a special type of random access method. An application of associative
memory access is cache memory.

20. Suppose the order of requests is (82,170,43,140,24,16,190)
and the current position of the Read/Write head is 50. Calculate the total
distance covered by the disk arm using disk scheduling policies such
as FCFS, SSTF, SCAN, C-SCAN and LOOK. [15M]

FCFS:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is: 50

So, total overhead movement (total distance covered by the disk arm) : =(82-50)+(170-
82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
SSTF:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is : 50

So,

total overhead movement (total distance covered by the disk arm) =(50-43)+(43-
24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) =208

SCAN:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the
Read/Write arm is at 50, and it is also given that the disk arm should move “towards
the larger value”.

Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as:

=(199-50)+(199-16) =332
CSCAN:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the
Read/Write arm is at 50, and it is also given that the disk arm should move “towards

the larger value”.


so, the total overhead movement (total distance covered by the disk arm) is calculated
as:

=(199-50)+(199-0)+(43-0) =391

LOOK:
Suppose the requests to be addressed are 82,170,43,140,24,16,190, the
Read/Write arm is at 50, and it is given that the disk arm should
move "towards the larger value". The arm services requests up to the largest
request (190) and then reverses to the smallest (16), so the total overhead
movement (total distance covered by the disk arm) is calculated as:

=(190-50)+(190-16) =314
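The totals for FCFS, SSTF and LOOK above can be verified with a short sketch (SCAN and C-SCAN depend on the assumed disk end, 199, so they are left to the worked arithmetic):

```python
# Hedged sketch computing total head movement for FCFS, SSTF and LOOK
# on the Q20 request queue, head at 50, moving toward larger values.
requests = [82, 170, 43, 140, 24, 16, 190]
HEAD = 50

def fcfs(reqs, head):
    total = 0
    for r in reqs:                  # service strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf(reqs, head):
    pending, total = list(reqs), 0
    while pending:
        r = min(pending, key=lambda x: abs(x - head))  # nearest request next
        total += abs(r - head)
        head = r
        pending.remove(r)
    return total

def look(reqs, head):
    up = [r for r in reqs if r >= head]
    down = [r for r in reqs if r < head]
    total = max(up) - head          # sweep up to the largest request
    if down:
        total += max(up) - min(down)  # reverse, sweep down to the smallest
    return total

print(fcfs(requests, HEAD), sstf(requests, HEAD), look(requests, HEAD))
# 642 208 314
```

These reproduce the worked totals: FCFS 642, SSTF 208 and LOOK 314.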

***************** ALL THE BEST *******************
