OS QB Answers
Operating Systems
UNIT-I
Ans: The users of a batch operating system do not interact with the computer directly.
Each user prepares his job on an off-line device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched together
and run as a group. The programmers leave their programs with the operator and the
operator then sorts the programs with similar requirements into batches.
Suppose there is a bank whose headquarters are in New Delhi. That bank has branch
offices in cities like Ludhiana, Noida, Faridabad, and Chandigarh. You can operate
your account from any of these branches. How is this possible? It is because
whatever changes you make at one branch office are reflected at all branches. This is
possible because of a distributed system.
Ans: In real-time systems, each job carries a deadline within which it is supposed to
be completed; otherwise there will be a huge loss, or even if the result is produced, it
will be completely useless.
A real-time operating system is an operating system in which the computer is very
fast in operation and has to perform its tasks within a specified time.
A real-time system is subject to real-time constraints, i.e., the response should be
guaranteed within a specified timing constraint, or the system should meet the
specified deadline. Examples: flight control systems, real-time monitors, etc.
Types of real-time systems based on timing constraints:
Hard real-time system: This type of system can never miss its deadline. Missing the
deadline may have disastrous consequences. Example: flight controller system.
Soft real-time system: This type of system can miss its deadline occasionally with
some acceptably low probability. Missing the deadline has no disastrous
consequences. Example: telephone switches.
The advantage is maximum utilization of devices and systems. The disadvantage is
that such systems are very costly to develop and consume critical CPU cycles.
Process Control
Process control is the category of system calls used to direct processes. Examples
include creating a process, loading, executing, aborting, and terminating a process.
File Management
File management is the category of system calls used to handle files. Examples
include creating a file, deleting a file, and opening, closing, reading, and writing a file.
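For instance, on a Unix-like system these services are requested through system calls such as open(), read(), write(), and close(). A minimal sketch (the file name notes.txt is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    int fd = open("notes.txt", O_RDONLY);    /* ask the OS to open a file */
    if (fd < 0) {
        perror("open failed");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));  /* ask the OS to read from it */
    if (n > 0)
        write(STDOUT_FILENO, buf, n);        /* write the bytes read to standard output */
    close(fd);                               /* release the file descriptor */
    return 0;
}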
Device Management
Device management is the category of system calls used to deal with devices.
Examples include reading from and writing to a device, getting and setting device
attributes, and requesting and releasing a device.
Information Maintenance
Information maintenance is the category of system calls used to transfer information
between the user program and the operating system, such as getting or setting the
time and date or other system data.
Communication
Communication is the category of system calls used for interprocess communication.
Examples include creating and deleting communication connections and sending and
receiving messages.
There are various examples of Windows and Unix system calls. These are as listed
below in the table:
Category | Windows | Unix
Process Control | CreateProcess(), ExitProcess() | fork(), exit(), wait()
File Manipulation | CreateFile(), ReadFile(), WriteFile(), CloseHandle() | open(), read(), write(), close()
10. Job accounting: Keeping track of the time and resources used by various jobs and
users.
(Or)
Security
To safeguard user data, the operating system employs password protection and other
related measures. It also protects programs and user data from illegal access.
The operating system monitors the overall health of the system in order to optimise
performance. It keeps track of the time between system responses and service
requests to get a thorough picture of the system's health. This can aid performance by
providing critical information for troubleshooting issues.
Job Accounting
The operating system maintains track of how much time and resources are consumed
by different tasks and users, and this data can be used to measure resource utilisation
for a specific user or group of users.
Error Detection
The OS constantly monitors the system in order to discover faults and prevent the
computer system from failing.
Memory Management
The operating system is in charge of managing the primary memory, often known as
the main memory. The main memory consists of a vast array of bytes or words, each
of which is allocated an address. Main memory is rapid storage that the CPU can
access directly. A program must first be loaded into the main memory before it can be
executed. For memory management, the OS performs the following tasks:
The OS keeps track of primary memory, meaning which user program can
use which bytes of memory, which memory addresses have already been
allocated, and which are still free.
It allocates memory to a process when the process asks for it and deallocates
memory when the process exits or performs an I/O activity.
Process Management
The operating system determines which processes have access to the processor and
how much processing time every process has in a multiprogramming environment.
Process scheduling is the name for this feature of the operating system. For processor
management, the OS performs the following tasks:
File Management
A file system is divided into directories to make navigation and usage more efficient.
These directories may contain other directories and files. The file management
tasks performed by an operating system are: it keeps track of where data is kept, user
access settings, and the state of each file, among other things. Together, these
facilities are referred to as the file system.
1. I/O operation
2. Program execution
3. File system manipulation
4. Communication
5. Error Handling
6. Resource allocation
7. Accounting
8. Protection
1. I/O Operation: To execute a program, a process needs I/O, which may involve a
file or an I/O device. For protection and efficiency, users cannot manage I/O devices
directly, so the operating system helps the user perform I/O operations such as
reading from and writing to a file. The operating system offers the facility to access
an I/O device when needed.
2. Program execution: The operating system is responsible for loading a program into
memory and then executing it. The operating system helps us manage different tasks,
from user programs to system programs such as the file server, name server, and
printer spooler. Each of these tasks is encapsulated as a process. A process may
consist of the complete execution context: data to manipulate, OS resources in
use, registers, code to execute, etc.
The operating system performs the following tasks for program management:
4. Communication
The operating system offers the facility of communication. A process may require
information exchange with another process, executing on the same computer or on a
different computer system; it communicates with the help of the operating system.
Communication between processes is done with the help of message passing and
shared memory.
5. Error Handling
The operating system provides the service of error handling. An error may arise
anywhere: in I/O devices, memory, the CPU, or the user program. The operating
system takes appropriate action for each error to ensure consistent and correct
computing.
6. Resource allocation
When multiple jobs are executing concurrently in a system, resources must be
allocated to each job. Resources include main memory storage, file storage,
CPU cycles, and I/O devices. The operating system manages every type of resource
using schedulers; with the help of CPU scheduling, the task of resource allocation is
performed.
7. Accounting
The accounting service of the operating system keeps track of system usage: which
users use which resources, for how much time, and what types of resources are used.
8. Protection: If the computer system has multiple users and permits the concurrent
execution of various processes, then it is necessary to protect the processes from one
another's activities.
(Or)
4. What are the different types of operating systems? Explain Simple Batch
operating systems
Ans:
There are various types of operating system:
In a simple batch operating system, there is no direct communication between the
user and the computer. First, the user submits a job to the computer operator,
and the operator creates a batch of jobs on an input device. The batch of jobs is
created on the basis of the type of language and the jobs' needs. After the batch is
created, a special program monitors and manages each program in the batch.
Examples: bank statements, payroll systems, etc.
1. It is hard to debug.
In a multiprogramming batch operating system, the operating system first selects a
job and begins to execute one of the jobs from memory. When this job requires an
I/O operation, the operating system switches to another job (keeping the operating
system and CPU always busy). The jobs present in memory are always fewer than the
jobs present in the job pool.
If several jobs are ready to execute at the same time, one is selected by CPU
scheduling. In a simple batch operating system, the CPU is sometimes idle and
performs no task, but in a multiprogramming batch operating system, the CPU is
busy, never sits idle, and always keeps on processing.
Ans:
Multi tasking:
In early times, you could not run two different applications at the same time.
Now you can work while listening to your favourite music; this is because of the
multitasking capability of the operating system.
The operating system acts as a bridge between your software and the hardware of
your computer. It assigns a small time quantum to each task based on time-sharing
technology.
Time Sharing :
Time-sharing is the extension of Multi-programming and Multi-tasking concepts. The
time-sharing operating system allows multiple users to access the computer resources
for a specified time slice.
It works like multitasking, but the difference is that it allows multiple users to
access the computer resources, whereas multitasking focuses on running different
applications at the same time.
No. | Parallel Systems | Distributed Systems
1. | Parallel systems are systems that can process data simultaneously and increase the computational speed of a computer system. | In these systems, applications run on multiple computers linked by communication lines.
3. | Tasks are performed with a more speedy process. | Tasks are performed with a less speedy process.
1. A distributed system is a collection of independent nodes, each responsible for
performing a part of the task. In a distributed system, the nodes
communicate with each other using a network, and the system is designed to
handle data and tasks that are geographically distributed. Examples of distributed
systems include the internet, cloud computing, and peer-to-peer networks.
2. On the other hand, a parallel system is a computer system that consists of
multiple processors that work together to perform a task. In a parallel system, the
processing is divided into multiple tasks, and each processor performs a separate
task simultaneously. The processors communicate with each other using shared
memory or message passing, and the system is designed to handle data and tasks
that require high computational power. Examples of parallel systems include
supercomputers and clusters.
Note: the main difference between a distributed system and a parallel system is
how they manage the processing and communication of tasks across multiple
processors. In a distributed system, the processing is distributed across multiple
nodes connected by a network, while in a parallel system, the processing is divided
among multiple processors that work together on a single task.
(Or)
Basis | Parallel Computing | Distributed Computing
Number of Computers | It occurs in a single computer system. | It involves various computers.
Note : There are two types of computations: parallel computing and distributed
computing. Parallel computing allows several processors to accomplish their tasks at
the same time. In contrast, distributed computing splits a single task among numerous
systems to achieve a common goal.
A Multiprocessor Operating System means the use of two or more processors within a
single computer system. These multiple processors are in close communication and
share the memory, computer bus, and other peripheral devices. These systems are
known as tightly coupled systems. It offers high speed and computing power. In
Multiprocessor operating system, all the processors work by using a single operating
system.
Advantages of Multiprocessor
Improved performance: by increasing the number of processors, more work is
done in less time. In this way, throughput is increased.
Increased reliability.
Distributed systems are also known as loosely coupled systems. In this type of
operating system, multiple central processors are used to serve multiple real-time
applications and multiple users, and data-processing jobs are distributed among the
processors accordingly.
The processors interact with each other via communication lines such as telephone
lines and high-speed buses. The processors may differ in function and size.
1. Client-server Systems.
2. Peer-to-Peer system.
9. What are the different types of operating systems? Explain Real time
operating systems.
Ans:
There are various types of operating system:
Real-time operating systems are used in real-time applications where data processing
must be done within a fixed interval of time. A real-time operating system responds
very quickly and is used when a large number of events must be processed in a short
interval of time.
1. Hard Real-time
2. Soft Real-time
3. Firm Real-time
Hard real-time: In a hard real-time system, there is a strict deadline for executing
each task; the task must start its execution at the scheduled time and must complete
within the assigned duration. Examples: aircraft systems, medical critical-care
systems, etc.
Soft real-time: In a soft real-time system, we also assign a time to each process,
but some delay is acceptable, so deadlines are handled softly. That is why it is
called soft real-time. Examples: live stock prices and online transaction systems.
Firm real-time: In a firm real-time system, there is also a deadline for every task,
but missing a deadline may have no big impact, although it can cause undesired
effects such as a drop in the quality of a product. Example: multimedia applications.
10. Define operating system and list the basic services provided by operating
system.
Ans: An operating system is system software that performs basic tasks such as
managing files, processes, and memory, handling input and output, and controlling
peripheral devices such as disk drives and printers.
1. I/O operation
2. Program execution
3. File system manipulation
4. Communication
5. Error Handling
6. Resource allocation
7. Accounting
8. Protection
1. I/O Operation: To execute a program, a process needs I/O, which may involve a
file or an I/O device. For protection and efficiency, users cannot manage I/O devices
directly, so the operating system helps the user perform I/O operations such as
reading from and writing to a file. The operating system offers the facility to access
an I/O device when needed.
2. Program execution: The operating system is responsible for loading a program into
memory and then executing it. The operating system helps us manage different tasks,
from user programs to system programs such as the file server, name server, and
printer spooler. Each of these tasks is encapsulated as a process. A process may
consist of the complete execution context: data to manipulate, OS resources in
use, registers, code to execute, etc.
The operating system performs the following tasks for program management:
4. Communication
The operating system offers the facility of communication. A process may require
information exchange with another process, executing on the same computer or on a
different computer system; it communicates with the help of the operating system.
Communication between processes is done with the help of message passing and
shared memory.
5. Error Handling
The operating system provides the service of error handling. An error may arise
anywhere: in I/O devices, memory, the CPU, or the user program. The operating
system takes appropriate action for each error to ensure consistent and correct
computing.
6. Resource allocation
When multiple jobs are executing concurrently in a system, resources must be
allocated to each job. Resources include main memory storage, file storage,
CPU cycles, and I/O devices. The operating system manages every type of resource
using schedulers; with the help of CPU scheduling, the task of resource allocation is
performed.
7. Accounting
The accounting service of the operating system keeps track of system usage: which
users use which resources, for how much time, and what types of resources are used.
8. Protection: If the computer system has multiple users and permits the concurrent
execution of various processes, then it is necessary to protect the processes from one
another's activities.
UNIT-II
Short Question & Answers
1. Define process?
Ans:
In an operating system, a process is something that is currently under execution,
so an active program can be called a process.
For example, when you want to search for something on the web, you start a browser;
that browser instance is a process. Another example is starting your music player to
listen to some cool music of your choice.
A Process has various attributes associated with it. Some of the attributes of a
Process are:
So, each process is given a PCB, which is a kind of identification card for the
process. All the processes present in the system have a PCB associated with them,
and all these PCBs are connected in a linked list.
There are various attributes of a PCB that help the CPU execute a particular
process. These attributes are:
List of opened files: A process can deal with a number of files, so the CPU
should maintain a list of files opened by the process to make sure that no other
process opens the same file at the same time.
List of I/O devices: A process may need a number of I/O devices to perform
various tasks, so a proper list should be maintained showing which I/O
device is being used by which process.
These are the attributes of a Process Control Block; this information is needed to
have detailed knowledge about a process, which in turn results in better execution of
the process.
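As a rough illustration, a PCB can be pictured as a C structure chained into a linked list; the field names below are illustrative, not taken from any particular kernel:

struct pcb {
    int pid;                   /* process ID */
    int state;                 /* new, ready, running, waiting, or terminated */
    unsigned long pc;          /* saved program counter */
    unsigned long regs[16];    /* saved CPU registers */
    int priority;              /* scheduling priority */
    struct file *open_files;   /* list of files opened by the process */
    struct device *io_devices; /* list of I/O devices in use */
    struct pcb *next;          /* PCBs are connected in a linked list */
};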
If the job scheduler selects more I/O-bound processes, all of the jobs may become
stuck, the CPU will be idle for the majority of the time, and multiprogramming will
be reduced as a result. Hence, the long-term scheduler's job is crucial and can
have a long-term impact on the system.
The CPU scheduler is another name for the short-term scheduler. It chooses one job
from the ready queue and then sends it to the CPU for processing.
A scheduling method is used to determine which job will be dispatched for execution.
The short-term scheduler's task can be critical in the sense that if it chooses a job with
a long CPU burst time, all subsequent jobs will have to wait in the ready queue for a
long period. This is known as starvation, and it can occur if the short-term scheduler
makes a mistake when selecting the job.
3. Medium-Term Scheduler
5. What is Thread ?
Ans:
A thread has three components, namely a program counter, a register set, and
stack space.
A thread is also termed a lightweight process, as threads share resources and
are faster compared to processes.
Context switching is faster in threads.
Threads are of two types:
1. User Level Thread: User-level threads are created and managed by the
user.
2. Kernel Level Thread: Kernel-level threads are created and managed
by the OS.
Issues related to threading are fork() and exec() system call, thread
cancellation, signal handling, etc.
Some of the advantages of threading include responsiveness, faster context
switching, faster communication, concurrency, efficient use of the
multiprocessor, etc.
A new process, known as a "child process", is created with the fork system call and
runs concurrently with the process that called it (the parent process).
The fork() system call creates a new process by duplicating the calling process. The
fork() system call is made by the parent process, and if it is successful, a child
process is created.
The fork() system call does not accept any parameters; it returns an integer value.
After the creation of the new child process, both processes execute the next
instruction following the fork system call. Therefore, we must separate the parent
from the child by checking the value returned by fork():
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a new process by duplicating this one */
    if (pid < 0)
        perror("fork failed");   /* fork() returns a negative value on failure */
    else if (pid == 0)
        printf("Hello from the child process\n");   /* fork() returned 0: child */
    else
        printf("Hello from the parent process\n");  /* fork() returned the child's PID: parent */
    return 0;
}

Output (one possible ordering):
Hello from the parent process
Hello from the child process
There are different CPU scheduling algorithms with different properties, and the
choice of algorithm depends on various factors. Many criteria have been
suggested for comparing CPU scheduling algorithms, some of which are:
CPU utilization
Throughput
Turnaround time
Waiting time
Response time
The aim of the scheduling algorithm is to maximize and minimize the following:
Maximize:
CPU utilization - It makes sure that the CPU is operating at its peak and is
busy.
Throughput - It is the number of processes that complete their execution
per unit of time.
Minimize:
Turnaround time, waiting time, and response time.

Throughput: It is a measure of the work done by the CPU, directly proportional to
the number of processes executed and completed per unit of time. It keeps varying
depending on the duration or length of the processes.
Waiting time: Once execution starts, the scheduling process does not hinder the
time required for the completion of the process. The only thing affected
is the waiting time of the process, i.e., the time the process spends waiting in the
ready queue. The formula for calculating it is: Waiting time = Turnaround time − Burst time.
Response time: Turnaround time is not considered the best criterion for comparing
scheduling algorithms in an interactive system. A process may produce some output
early while continuing to compute other results. Another criterion is the
time taken from process submission until the first response is produced. This is
called response time, and the formula for calculating it is:
Response time = Time at which the process first gets the CPU − Arrival time.
Ans:
Process vs Thread
Process simply means any program in execution while the thread is a segment of
a process. The main differences between process and thread are mentioned below:
Process | Thread
Processes use more resources and hence are termed heavyweight processes. | Threads share resources and hence are termed lightweight processes.
Creation and termination times of processes are slower. | Creation and termination times of threads are faster compared to processes.
Processes have their own code and data/files. | Threads share code and data/files within a process.
Communication between processes is slower. | Communication between threads is faster.
Context switching in processes is slower. | Context switching in threads is faster.
Processes are independent of each other. | Threads are interdependent (they can read, write, or change another thread's data).
Eg: Opening two different browsers. | Eg: Opening two tabs in the same browser.
The below diagram shows how the resources are shared in two different
processes vs two threads in a single process.
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources are assigned to
a process for a particular time and then taken away. If the process still has remaining
CPU burst time, it is placed back in the ready queue, where it remains until it gets a
chance to execute again.
When a high-priority process comes in the ready queue, it doesn't have to wait for the
running process to finish its burst time. However, the running process is interrupted in
the middle of its execution and placed in the ready queue until the high-priority process
uses the resources. As a result, each process gets some CPU time in the ready queue.
Advantages
Disadvantages
When a non-preemptive process with a high CPU burst time is running, other
processes have to wait for a long time, which increases the average waiting time of
processes in the ready queue. However, there is no overhead in transferring processes
from the ready queue to the CPU under non-preemptive scheduling. The scheduling is
strict because the executing process is not preempted even for a higher-priority process.
Advantages
Disadvantages
Note:
When a higher-priority process arrives, the running process in preemptive
scheduling is halted in the middle of its execution. In non-preemptive scheduling, by
contrast, the running process is not interrupted in the middle of its execution; the new
process waits until the running one completes.
must wait in the ready queue. The old process's execution resumes at the particular
point at which another process stopped it. Context switching describes a feature of a
multitasking OS in which multiple processes share a single CPU to perform various
tasks without needing additional processors in the system.
Several steps are involved in the context switching of a process. The diagram given
below represents context switching between two processes, P1 and P2, in case of an
interrupt, I/O need, or the occurrence of a priority-based process in the PCB’s ready
queue.
The process P1 is initially running on the CPU for the execution of its task. At the
very same time, P2, another process, is in its ready state. If an interruption or error has
occurred or if the process needs I/O, the P1 process would switch the state from
running to waiting.
Before the change of the state of the P1 process, context switching helps in saving the
context of the P1 process as registers along with the program counter (to PCB1).
Then it loads the P2 process state from its ready state (of PCB2) to its running state.
Here are the steps taken to switch from P1 to P2:
1. Context switching saves the state of P1, such as the program counter and
registers, to its PCB while it is in the running state.
2. PCB1 of process P1 is updated, and the process is moved to the
appropriate queue, such as the ready queue, waiting queue, or I/O queue.
3. Then another process enters the running state. A new process is selected
from the ready queue, for example one that needs to be executed or that has
a higher priority.
4. Now the PCB of the selected process P2 must be updated. This involves
switching the process's state from ready (or any other state, such as exit,
blocked, or suspended) to running.
5. If the CPU has already executed process P2 before, the status of P2 must be
restored so that its execution resumes at exactly the point at which it was
interrupted.
In a similar manner, process P2 is later switched off the CPU so that process P1 can
resume its execution. Process P1 is reloaded from PCB1 into the running state to
resume its task at the very same point. Otherwise, the data would be lost, and when
the process was executed again, it would have to start from the beginning.
UNIT-II
1. Construct the Gantt chart for Shortest remaining time first (SRTF)scheduling
algorithm for the provided data And also find the Average Waiting Time &
Average Turnaround Time.
Process P1 P2 P3 P4 P5
Arrival time 0 0 2 1 3
CPU Burst Time (in ms) 10 6 12 8 5
Process AT BT CT TAT WT
P1 0 10 29 29 19
P2 0 6 6 6 0
P3 2 12 41 39 27
P4 1 8 19 18 10
P5 3 5 11 8 3
Gantt Chart :
| P2 | P5 | P4 | P1 | P3 |
0    6    11   19   29   41
Average Turnaround Time = (29 + 6 + 39 + 18 + 8) / 5 = 100 / 5 = 20 ms
Average Waiting Time = (19 + 0 + 27 + 10 + 3) / 5 = 59 / 5 = 11.8 ms
2. Construct the Gantt chart for the Round Robin scheduling algorithm with Time
Quantum = 10 for the provided data, and also find the Average Waiting Time and
Average Turnaround Time.
Process AT BT CT TAT WT
P1 0 10 10 10 0
P2 1 29 49 48 19
P3 2 3 23 21 18
P4 3 7 30 27 20
TAT = CT − AT; Total TAT = 106 ms; Average TAT = 106 / 4 = 26.5 ms
WT = TAT − BT; Total WT = 57 ms; Average WT = 57 / 4 = 14.25 ms
Ready Queue :
P1 P2 P3 P4 P2 P2
Gantt Chart :
| P1 | P2 | P3 | P4 | P2 | P2 |
0    10   20   23   30   40   49
3. Consider the following set of process, with the length of the CPU burst
given in milliseconds
Process P1 P2 P3 P4 P5
Burst time 10 1 2 1 5
49
Operating Systems –Question Bank with Answers for II-B.Tech –II-Sem AIDS/CSM/CSC/CSE
Priority 3 1 3 4 2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
What is the turnaround time of each process by applying Priority scheduling algorithm?
(Lower the number higher the priority )
Gantt Chart :
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Turnaround times: P1 = 16 ms, P2 = 1 ms, P3 = 18 ms, P4 = 19 ms, P5 = 6 ms
NOTE:
Process AT BT CT TAT WT
P1 3 5 30 27 22
P2 1 15 21 20 5
P3 0 6 6 6 0
P4 2 4 25 23 19
Gantt Chart :
| P3 | P2 | P4 | P1 |
0    6    21   25   30
Average Turnaround Time = (27 + 20 + 6 + 23) / 4 = 76 / 4 = 19 ms
Average Waiting Time = (22 + 5 + 0 + 19) / 4 = 46 / 4 = 11.5 ms
5. Consider the following set of process, with the length of the CPU burst
given in milliseconds
Process P1 P2 P3 P4 P5
Burst Time 6 2 8 3 4
Arrival Time 2 5 1 0 4
Draw the Gantt chart and calculate the average turnaround time and average waiting
time of the jobs for SJF (Non Preemptive) scheduling algorithm.
Process AT BT CT TAT WT
P1 2 6 9 7 1
P2 5 2 11 6 4
P3 1 8 23 22 14
P4 0 3 3 3 0
P5 4 4 15 11 7
Gantt Chart :
| P4 | P1 | P2 | P5 | P3 |
0    3    9    11   15   23
Average Turnaround Time = (7 + 6 + 22 + 3 + 11) / 5 = 49 / 5 = 9.8 ms
Average Waiting Time = (1 + 4 + 14 + 0 + 7) / 5 = 26 / 5 = 5.2 ms
6. What are the process states in operating system ? explain with diagram.
Ans:
When a process runs, it modifies the state of the system. The current activity of a
given process determines the state of the process.
New State
When a program in secondary memory is started for execution, the process is said to
be in a new state.
Ready State
After being loaded into the main memory and ready for execution, a process
transitions from a new to a ready state. The process will now be in the ready state,
waiting for the processor to execute it. Many processes may be in the ready stage in a
multiprogramming environment.
Run State
After being allotted the CPU for execution, a process passes from the ready state to
the run state.
Terminate State
When a process's execution is finished, it goes from the run state to the terminate
state. The operating system deletes the process control block (PCB) after the process
enters the terminate state.
Suspend Ready State
If a process with a higher priority needs to be executed while the main memory is full,
a process goes from the ready state to the suspend ready state. Moving a lower-priority
process from the ready state to the suspend ready state frees up main memory for the
higher-priority process.
The process stays in the suspend ready state until the main memory becomes
available. When the main memory becomes accessible, the process is brought back to
its ready state.
Suspend Wait State
If a process with a higher priority needs to be executed while the main memory is full,
a process goes from the wait state to the suspend wait state. Moving a lower-priority
process from the wait state to the suspend wait state frees up main memory for the
higher-priority process.
The process moves to the suspend ready state once the resource it was waiting for
becomes available, and it is shifted to the ready state once the main memory is
available.
Note points :
At minimum, a process passes through four states: new, ready, run, and
terminate. If a process also requires I/O, the minimum number of states
required is five (a wait state is added).
It is much more preferable to move a given process from its wait state to its suspend
wait state.
Consider the situation where a high-priority process comes, and the main
memory is full.
Then there are two options for making space for it. They are:
1. Moving lower-priority processes from the ready state to the suspend ready state.
2. Moving lower-priority processes from the wait state to the suspend wait state.
Now, out of these:
Moving a process from a wait state to a suspend wait state is the superior
option, because that process is already waiting for a resource that is currently
unavailable.
1. User Level Thread: User-level threads are created and managed by the
user.
2. Kernel Level Thread: Kernel-level threads are created and managed
by the OS.
Issues related to threading are fork() and exec() system call, thread
cancellation, signal handling, etc.
Some of the advantages of threading include responsiveness, faster context
switching, faster communication, concurrency, efficient use of the
multiprocessor, etc.
A thread is a sequential flow of tasks within a process. Each thread has its own set of
registers and stack space. There can be multiple threads in a single process having
the same or different functionality. Threads are also termed lightweight processes.
A human body has different parts with different functionalities that work in
parallel (e.g., eyes, ears, hands). Similarly, in computers, a single process might
have multiple functionalities running in parallel, where each functionality can be
considered a thread.
Threads in OS can be of the same or different types. Threads are used to increase the
performance of the applications.
Each thread has its own program counter, stack, and set of registers. But the threads
of a single process might share the same code and data/file. Threads are also
termed as lightweight processes as they share common resources.
Eg: While playing a movie on a device the audio and video are controlled by
different threads in the background.
The above diagram shows the difference between a single-threaded process and a
multithreaded process and the resources that are shared among threads in a
multithreaded process.
Components of Thread
1. Program Counter
2. Register Set
3. Stack space
Some of the reasons threads are needed in the operating system are:
Since threads use the same data and code, the operational cost between
threads is low.
Why Multithreading?
In multithreading, the idea is to divide a single process into multiple threads instead
of creating a whole new process. Multithreading is done to achieve parallelism.
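A minimal sketch of this idea with POSIX threads (pthreads) follows; the worker function and the shared counter are illustrative. Two threads run inside one process and share its data, so access to the counter is guarded by a mutex (compile with cc -pthread):

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                          /* data shared by all threads of the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);               /* threads share data, so serialize access */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);     /* two threads inside one process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                      /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("Final counter value: %d\n", shared_counter);  /* prints 200000 */
    return 0;
}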
Process vs Thread
Process simply means any program in execution while the thread is a segment of
a process. The main differences between process and thread are mentioned below:
Process | Thread
Eg: Opening two different browsers. | Eg: Opening two tabs in the same browser.
The below diagram shows how the resources are shared in two different
processes vs two threads in a single process.
Types of Thread
User-level threads are implemented and managed by the user and the kernel is not
aware of it.
Kernel level threads are implemented using system calls and Kernel level
threads are recognized by the OS.
Kernel-level threads are slower to create and manage compared to user-
level threads.
Context switching in a kernel-level thread is slower.
Even if one kernel-level thread performs a blocking operation, it does not
affect other threads. Eg: Windows, Solaris.
The above diagram shows the functioning of user-level threads in user space and
kernel-level threads in kernel space.
Advantages of Threading
Pipes
Shared Memory
Message Passing
Message Queues
Indirect Communication
FIFO
Pipes: Pipes are a simple form of IPC used to allow communication between
two processes. A pipe is a unidirectional communication channel that allows
one process to send data to another process. The receiving process can read
the data from the pipe, and the sending process can write data to the pipe.
Pipes are commonly used for shell pipelines, where the output of one
command is piped as input to another command (a short code sketch follows
this list).
Shared Memory: Shared Memory is a type of IPC mechanism that allows two
or more processes to access the same portion of memory. This can be useful
for sharing large amounts of data between processes, such as video or audio
streams. Shared Memory is faster than other IPC mechanisms since data is
directly accessible in memory, but it requires careful management to avoid
synchronization issues.
Message Queues: Message Queues are another type of IPC mechanism used
to allow processes to send and receive messages. A message queue is a buffer
that stores messages until the receiver is ready to receive them. The sender
can place messages in the queue, and the receiver can retrieve messages from
the queue.
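Returning to pipes, here is a minimal sketch of the unidirectional channel described above, between a parent and a child process; the message text is illustrative:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[16];

    pipe(fd);                          /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {                 /* child process: the sender */
        close(fd[0]);                  /* close the unused read end */
        write(fd[1], "hello", 6);      /* send 6 bytes (including the terminator) */
        close(fd[1]);
    } else {                           /* parent process: the receiver */
        close(fd[1]);                  /* close the unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
        close(fd[0]);
    }
    return 0;
}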
Data Sharing: IPC allows processes to share data with each other. This can be
useful in situations where one process needs to access data that is held by
another process.
Overall, IPC is a powerful tool for building complex, distributed systems that require
communication and coordination between different processes.
Overhead: IPC can introduce additional overhead, such as the need to serialize
and deserialize data, and the need to synchronize access to shared resources.
This can impact the performance of the system.
Scalability: IPC can also limit the scalability of a system, as it may be difficult
to manage and coordinate large numbers of processes communicating with
each other.
The multiple CPUs in the system are in close communication and share a common
bus, memory, and other peripheral devices, so we can say the system is tightly
coupled. These systems are used when we want to process a bulk amount of data, and
they are mainly used in satellites, weather forecasting, etc.
There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling. We can use any processor available to
run any process in the queue.
There is no policy or rule which can be declared as the best scheduling solution to a
system with a single processor. Similarly, there is no best scheduling solution for a
system with multiple processors as well.
There are two approaches to multiple processor scheduling in the operating system:
Symmetric Multiprocessing and Asymmetric Multiprocessing.
In asymmetric multiprocessing, all scheduling decisions and I/O processing are
handled by a single processor called the master server. The other processors execute
only user code. This is simple and reduces the need for data sharing, and this entire
scenario is called asymmetric multiprocessing.
Processor Affinity
Processor Affinity means a process has an affinity for the processor on which it is
currently running. When a process runs on a specific processor, there are certain effects
on the cache memory. The data most recently accessed by the process populate the
cache for the processor. As a result, successive memory access by the process is often
satisfied in the cache memory.
Now, suppose the process migrates to another processor. In that case, the contents of
the cache memory must be invalidated for the first processor, and the cache for the
second processor must be repopulated. Because of the high cost of invalidating and
repopulating caches, most SMP(symmetric multiprocessing) systems try to avoid
migrating processes from one processor to another and keep a process running on the
same processor. This is known as processor affinity. There are two types of processor
affinity: soft affinity and hard affinity.
Load Balancing
Load balancing is the phenomenon that keeps the workload evenly distributed across
all processors in an SMP system. Load balancing is necessary only on systems where
each processor has its own private queue of processes eligible to execute. Without
balancing, one or more processors will sit idle while other processors have high
workloads, along with lists of processes awaiting the CPU. There are two general
approaches to load balancing:
1. Push Migration: In push migration, a task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load on each
processor by moving the processes from overloaded to idle or less busy
processors.
2. Pull Migration: Pull migration occurs when an idle processor pulls a waiting
task from a busy processor for execution.
Multi-core Processors
In multi-core processors, multiple processor cores are placed on the same physical chip.
Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor. SMP systems that use multi- core
processors are faster and consume less power than systems in which each processor has
its own physical chip.
However, multi-core processors may complicate the scheduling problems. When the
processor accesses memory, it spends a significant amount of time waiting for the data
to become available. This situation is called a Memory stall. It occurs for various
reasons, such as cache miss, which is accessing the data that is not in the cache memory.
In such cases, the processor can spend up to 50% of its time waiting for data to become
available from memory. To solve this problem, recent hardware designs have
implemented multithreaded processor cores in which two or more hardware threads are
assigned to each core. Therefore, if one thread stalls while waiting for memory, the
core can switch to another thread. There are two ways to multithread a processor:
coarse-grained and fine-grained multithreading.
Symmetric Multiprocessor
Symmetric Multiprocessors (SMP) is the third model. There is one copy of the OS in
memory in this model, but any central processing unit can run it. Now, when a system
call is made, the central processing unit on which the system call was made traps to
the kernel and processes that system call. This model balances processes and memory
dynamically. This approach uses Symmetric Multiprocessing, where each processor is
self-scheduling.
The scheduling proceeds further by having the scheduler for each processor examine
the ready queue and select a process to execute. In this system, it is possible that all
the processes are in a common ready queue, or each processor may have its own
private queue of ready processes. There are mainly three sources of contention that
can be found in a multiprocessor operating system.
Master-Slave Multiprocessor
In this multiprocessor model, there is a single data structure that keeps track of the ready
processes. One central processing unit works as the master and the others work as
slaves. All the processors are handled by a single processor, which is called the master
server.
The master server runs the operating system process, and the slave server runs the user
processes. The memory and input-output devices are shared among all the processors,
and all the processors are connected to a common bus. This system is simple and
reduces data sharing, so this system is called Asymmetric multiprocessing.
In this type of multiple processor scheduling, even a single CPU system acts as a
multiple processor system. In a system with virtualization, the virtualization presents
one or more virtual CPUs to each of the virtual machines running on the system. It then
schedules the use of physical CPUs among the virtual machines.
o Most virtualized environments have one host operating system and many guest
operating systems, and the host operating system creates and manages the
virtual machines.
o Each virtual machine has a guest operating system installed, and applications
run within that guest.
o Each guest operating system may be assigned for specific use cases,
applications, or users, including time-sharing or real-time operation.
o Any guest operating-system scheduling algorithm that assumes a certain
amount of progress in a given amount of time will be negatively impacted by
the virtualization.
o A time-sharing operating system tries to allot 100 milliseconds to each time
slice to give users a reasonable response time. A given 100 millisecond time
slice may take much more than 100 milliseconds of virtual CPU time.
Depending on how busy the system is, the time slice may take a second or more,
which results in a very poor response time for users logged into that virtual
machine.
o The net effect of such scheduling layering is that individual virtualized
operating systems receive only a portion of the available CPU cycles, even
though they believe they are receiving all cycles and scheduling all of those
cycles. The time-of-day clocks in virtual machines are often incorrect because
timers take longer to trigger than they would on dedicated CPUs.
o Virtualizations can thus undo the good scheduling algorithm efforts of the
operating systems within virtual machines.
A system call provides an interface between a user program and the operating system.
(Or)
A system call provides an interface between a user program and the kernel.
The structure of system call is as follows −
When the user wants to give an instruction to the OS, it does so through system calls;
in other words, a user program accesses the kernel, which is a part of the OS, through
system calls.
It is a programmatic way in which a computer program requests a service from the
kernel of the operating system.
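For instance, on a Unix-like system a user program can request the kernel's output service through the write() system call; a minimal sketch:

#include <unistd.h>

int main(void)
{
    /* file descriptor 1 is standard output; the kernel performs the actual I/O */
    write(1, "hello via a system call\n", 24);
    return 0;
}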
Example
fork() − A parent process always uses a fork for creating a new child process. The child
process is generally called a copy of the parent. After execution of fork, both parent and
child execute the same program in separate processes.
exec() − This function is used to replace the program executed by a process. The child
sometimes may use exec after a fork for replacing the process memory space with a new
program executable making the child execute a different program than the parent.
exit() − This function is used to terminate the process.
wait() − The parent uses a wait function to suspend execution till a child terminates.
Using wait the parent can obtain the exit status of a terminated child.
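A minimal sketch combining these four calls (the command run by the child, ls, is only an example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                          /* parent creates a child process */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* child replaces its program image */
        perror("exec failed");                   /* reached only if exec fails */
        exit(1);                                 /* child terminates */
    }
    int status;
    wait(&status);                               /* parent suspends until the child terminates */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}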
OPERATING SYSTEMS
UNIT-III
Short Questions & Answers:
1 What is a critical section? Give an example.
2 What are the necessary and sufficient conditions for deadlock to occur?
3 What is deadlock? What is starvation? How do they differ from each
other?
4 What is Race Condition ?
5 What is the importance of process synchronization?
6 What are the disadvantages of semaphore?
7 Define mutual exclusion?
8 Differentiate Unsafe state and Deadlocked State.
9 What is RAG?
10 Write short note on Monitors.
2. What are the necessary and sufficient conditions for deadlock to occur?
Ans: The four necessary conditions (Coffman conditions) are:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Deadlock will happen only if all four conditions occur simultaneously.
3. What is deadlock? What is starvation? How do they differ from each other?
Ans: A deadlock is a situation where a set of processes is blocked because each process
is holding a resource and waiting for another resource acquired by some other process.
Example: when two trains are coming toward each other on the same track and
there is only one track, neither train can move once they are in front of each other.
Starvation: It is a problem in which a low-priority process gets stuck for a long
duration because high-priority requests keep being executed. Starvation usually
happens when a process is delayed for an indefinite period.
9. What is RAG?
Ans:
A Resource Allocation Graph (RAG) is a popular technique used for deadlock
avoidance. It is a directed graph that represents the processes in the system, the
resources available, and the relationships between them. A RAG has two types of
edges: request edges and assignment edges. A request edge represents a request by a
process for a resource, while an assignment edge represents the assignment of a
resource to a process.
wait(S)
{
    while (S <= 0);   // busy waiting
    S--;
}

signal(S)
{
    S++;
}
Semaphores are of two types:
1. Binary Semaphore – This is similar to mutex lock but not the same thing. It can have only
two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of
critical section problem with multiple processes.
2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
Problem Statement :
We have a buffer of fixed size. A producer can produce an item and can place in the buffer. A
consumer can pick items and can consume them. We need to ensure that when a producer is
placing an item in the buffer, then at the same time consumer should not consume any item. In
this problem, buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty. “Full” keeps track
of number of items in the buffer at any given time and “Empty” keeps track of number of
unoccupied slots.
Initialization of Semaphores:
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
do
{
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while (true);
When the producer produces an item, the value of "empty" is reduced by 1 because one
slot will now be filled. The value of mutex is also reduced to prevent the consumer
from accessing the buffer. Once the producer has placed the item, the value of "full" is
increased by 1, and the value of mutex is increased by 1 because the producer's task is
complete and the consumer may now access the buffer.
do
{
    wait(full);
    wait(mutex);
    // consume an item from the buffer
    signal(mutex);
    signal(empty);
} while (true);
As the consumer removes an item from the buffer, the value of "full" is reduced by 1,
and the value of mutex is also reduced so that the producer cannot access the buffer at
this moment. Once the consumer has consumed the item, the value of "empty" is
increased by 1, and the value of mutex is increased so that the producer can access the
buffer again.
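The pseudocode above can be made runnable with POSIX threads and semaphores; the following is a minimal sketch in which the buffer size and item count are illustrative:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                      /* buffer size */
#define ITEMS 10                 /* items to produce and consume, illustrative */

int buffer[N], in = 0, out = 0;
sem_t empty, full, mutex;

void *producer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);        /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = i;          /* place the item in the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);         /* one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);         /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];  /* take the item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);        /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);      /* all slots empty initially */
    sem_init(&full, 0, 0);       /* no slots filled initially */
    sem_init(&mutex, 0, 1);      /* binary semaphore guarding the buffer */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}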
There exist algorithms that can solve the Dining Philosophers problem but may still
lead to a deadlock situation. Also, a deadlock-free solution is not necessarily
starvation-free. Semaphores can be used to solve the Dining Philosophers problem,
but they can result in a deadlock. Thus, to avoid these circumstances, we use monitors
with condition variables.
Dining Philosophers Solution using Monitors
Monitors are used because they give a deadlock-free solution to the Dining Philosophers
problem. A monitor is used to gain access over all the state variables and condition
variables. With monitors, a restriction is imposed that a philosopher may pick up his
chopsticks only if both of them are available at the same time.
To code the solution, we need to distinguish among three states in which a philosopher
may be found:
THINKING
HUNGRY
EATING
Example
Here is an implementation of the Dining Philosophers problem using monitors:
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup(int i)
{
state[i] = HUNGRY;
test(i);
if (state[i] != EATING)
{
self[i].wait();
}
}
void putdown(int i)
{
state[i] = THINKING;
test((i + 4) % 5);
test((i + 1) % 5);
}
void test(int i)
{
if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY && state[(i + 1) % 5] != EATING)
{
state[i] = EATING;
self[i].signal();
}
}
initialization_code()
{
for(int i=0;i<5;i++)
state[i] = THINKING;
}
}
DiningPhilosophers dp;
Each philosopher i invokes the pickup() and putdown() operations in the following sequence:
dp.pickup(i); … eat … dp.putdown(i);
The general structure of a process with a critical section is:
do
{
entry section
critical section
exit section
remainder section
} while (TRUE);
Any solution to the critical section problem must satisfy three requirements:
1. Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
2. Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their
remainder section can participate in deciding which will enter the critical section next,
and this selection cannot be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Peterson’s Solution:
Peterson’s Solution is a classic software-based solution to the critical section problem for two processes.
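A minimal sketch of Peterson’s solution in C for processes 0 and 1 is shown below. This is the textbook form: on modern hardware it additionally needs memory barriers or atomic types, which are omitted here for clarity.

#include <stdbool.h>

bool flag[2] = { false, false };     /* flag[i]: process i wants to enter  */
int turn = 0;                        /* whose turn it is when both want in */

void enter_critical_section(int i)   /* i is 0 or 1 */
{
    int j = 1 - i;                   /* the other process */
    flag[i] = true;                  /* announce intent to enter */
    turn = j;                        /* politely give the other process priority */
    while (flag[j] && turn == j)
        ;                            /* busy-wait until it is safe to enter */
    /* critical section follows */
}

void exit_critical_section(int i)
{
    flag[i] = false;                 /* no longer interested */
}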
4. Explain Critical Section problem and apply Hardware methods of Solution to the
problem.
Ans:
Synchronization Hardware
Synchronization hardware is a hardware-based solution for the critical section problem: it
introduces hardware instructions that can be used to resolve the critical section problem
effectively. Hardware solutions are often simpler and also improve the efficiency of the system.
The hardware-based solution to the critical section problem is based on a simple tool, i.e., a lock.
The solution implies that before entering its critical section, a process must acquire a lock and
must release the lock when it exits its critical section. Using a lock also prevents race
conditions.
The hardware solution must still satisfy the three requirements:
1. Mutual Exclusion: The hardware instruction must ensure that at any point in time only one process
can be in its critical section.
2. Bounded Waiting: Processes interested in executing their critical section must not wait
too long to enter it.
3. Progress: A process not interested in entering its critical section must not block other
processes from entering their critical sections.
There are three algorithms in the hardware approach of solving Process Synchronization
problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware instructions in many operating systems help in the effective solution of critical section
problems.
1. Test and Set:
Here, the shared variable is lock, which is initialized to false. The TestAndSet(lock) instruction
works this way – it returns the current value of lock and, in the same atomic step, sets lock to
true. The first process will enter the critical section at once, as TestAndSet(lock) returns false
and it breaks out of the while loop. The other processes cannot enter now, because lock is set to
true and the while loop condition remains true. Mutual exclusion is ensured. Once the first
process gets out of the critical section, lock is changed to false, so the other processes can enter
one by one. Progress is also ensured. However, after the first process, any process can go in.
There is no queue maintained, so any new process that finds the lock to be false can enter. So
bounded waiting is not ensured.
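The description above can be sketched as follows. Here testAndSet() merely models the atomic hardware instruction (in real code it would be a single machine instruction or a compiler builtin such as an atomic exchange):

#include <stdbool.h>

bool lock = false;               /* shared; false means free */

bool testAndSet(bool *target)    /* executed atomically by the hardware */
{
    bool old = *target;          /* return the current value of lock ... */
    *target = true;              /* ... and set lock to true in one step */
    return old;
}

void process(void)               /* structure of each process Pi */
{
    while (true) {
        while (testAndSet(&lock))
            ;                    /* spin until lock was found false */
        /* critical section */
        lock = false;            /* release the lock */
        /* remainder section */
    }
}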
2. Swap:
The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to
true in the swap function, key is set to true and then swapped with lock. The first process sets
key=true, and in while(key) the swap takes place, so lock becomes true and key becomes false.
On the next iteration of while(key), key is false, so the loop breaks and the first process enters
its critical section. Now if another process tries to enter the critical section, it again sets
key=true; the while(key) loop runs and the swap makes lock=true and key=true (since lock was
already true from the first process). On every following iteration while(key) remains true, so
the loop keeps executing and the other process cannot enter the critical section. Therefore
mutual exclusion is ensured. When a process leaves the critical section, lock is changed to
false, so any process finding it gets to enter the critical section. Progress is ensured. However,
bounded waiting is again not ensured, for the very same reason.
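A corresponding sketch of the Swap-based lock; swap() again stands in for an atomic hardware instruction:

#include <stdbool.h>

bool lock = false;               /* shared */

void swap(bool *a, bool *b)      /* executed atomically by the hardware */
{
    bool tmp = *a;
    *a = *b;
    *b = tmp;
}

void process(void)               /* structure of each process Pi */
{
    bool key;
    while (true) {
        key = true;
        while (key)
            swap(&lock, &key);   /* key becomes false only when lock was false */
        /* critical section */
        lock = false;            /* release the lock */
        /* remainder section */
    }
}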
3. Unlock and Lock:
The Unlock and Lock algorithm uses TestAndSet to regulate the value of lock, but it adds
another value, waiting[i], for each process, which records whether a process is waiting. A
ready queue is maintained for the processes waiting on the critical section. All the processes
coming in next are added to the queue with respect to their process number, not necessarily
sequentially. Once the ith process gets out of the critical section, it does not simply turn lock
to false so that any process can avail the critical section (which was the problem with the
previous algorithms). Instead, it checks whether any process is waiting in the queue. The
queue is taken to be circular: j is considered the next process in line, and the while loop
checks from the jth process to the last process and again from 0 to the (i-1)th process for any
process waiting to access the critical section. If no process is waiting, the lock value is
changed to false and any process that comes next can enter the critical section. If there is one,
that process's waiting value is turned to false, so that its own while loop becomes false and it
can enter the critical section. This ensures bounded waiting. So the problem of process
synchronization can be solved through this algorithm.
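The algorithm described above can be sketched like this, with N processes and testAndSet() once more modeling the atomic instruction:

#include <stdbool.h>

#define NPROC 5                  /* number of processes (illustrative) */

bool waiting[NPROC];             /* all false initially */
bool lock = false;

bool testAndSet(bool *target)    /* atomic in hardware */
{
    bool old = *target;
    *target = true;
    return old;
}

void process(int i)              /* structure of process Pi */
{
    bool key;
    while (true) {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = testAndSet(&lock);       /* spin */
        waiting[i] = false;
        /* critical section */
        int j = (i + 1) % NPROC;           /* scan circularly for a waiter */
        while (j != i && !waiting[j])
            j = (j + 1) % NPROC;
        if (j == i)
            lock = false;                  /* nobody waiting: free the lock */
        else
            waiting[j] = false;            /* hand the critical section to Pj */
        /* remainder section */
    }
}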
Ans: A deadlock is a situation where each of the processes waits for a resource which is
assigned to another process.
A deadlock happens in an operating system when two or more processes need, in order to
complete their execution, some resource that is held by another process.
Necessary conditions for Deadlocks:
1. Mutual Exclusion: A resource can only be shared in a mutually exclusive manner. It
implies that two processes cannot use the same resource at the same time.
2. Hold and Wait: A process holds a resource while waiting for some other resource at
the same time.
3. No Preemption: A resource cannot be forcibly taken away from a process; it is released
only voluntarily by the process holding it, after that process has finished using it.
4. Circular Wait: All the processes must be waiting for the resources in a cyclic manner,
so that the last process is waiting for the resource which is being held by the first process.
It is very important to prevent a deadlock before it can occur. So, the system checks each
resource request before it is granted, to make sure it cannot lead to deadlock. If there is even
a slight chance that granting a request may lead to a deadlock in the future, the request is not
allowed to proceed.
It can be done using Banker’s Algorithm.
Banker’s Algorithm:
1. It is used to avoid deadlock and allocate resources safely to each process in the computer
system.
2. It tests for a safe state (the 'S-state') by examining all possible allocations before deciding
whether an allocation should be granted to a process. It also helps the operating system to
successfully share the resources between all the processes.
3. The banker's algorithm is so named because it models how a bank decides whether a loan
can be safely sanctioned: the bank never allocates its cash in a way that could leave it
unable to satisfy the needs of all its customers.
4. When working with a banker's algorithm, it requests to know about three things:
How much each process can request for each resource in the system. It is denoted by the
[MAX] request.
How much each process is currently holding each resource in a system. It is denoted by the
[ALLOCATED] resource.
It represents the number of each resource currently available in the system. It is denoted by
the [AVAILABLE] resource.
Following are the important data structures terms applied in the banker's algorithm as
follows: Suppose n is the number of processes, and m is the number of each type of resource
used in a computer system.
1. Available: It is an array of length 'm' that defines each type of resource available in the
system. When Available[j] = K, means that 'K' instances of Resources type R[j] are
available in the system.
2. Max: It is an [n x m] matrix that defines the maximum demand of each process:
Max[i, j] = K means that process P[i] may request at most K instances of resource
type R[j].
3. Allocation: It is an [n x m] matrix that indicates the resources currently allocated to
each process in the system. Allocation[i, j] = K means that process P[i] is currently
allocated K instances of resource type R[j].
4. Need: It is an [n x m] matrix that indicates the remaining resource need of each process:
Need[i, j] = Max[i, j] - Allocation[i, j]. The safety algorithm marks Finish[i] = true when
process P[i] can run to completion; if Finish[i] == true for every i, the system is in a safe
state for all processes.
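A minimal sketch of the safety algorithm built on these data structures is shown below. The Allocation and Max values in main() are reconstructed from the arithmetic of the worked example that follows, so treat them as illustrative.

#include <stdio.h>
#include <stdbool.h>

#define NP 5   /* number of processes */
#define NR 4   /* number of resource types (A, B, C, D) */

bool is_safe(int available[NR], int max[NP][NR], int alloc[NP][NR], int safeSeq[NP])
{
    int need[NP][NR], work[NR];
    bool finish[NP] = { false };

    for (int j = 0; j < NR; j++) work[j] = available[j];
    for (int i = 0; i < NP; i++)
        for (int j = 0; j < NR; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    int count = 0;
    while (count < NP) {
        bool found = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                          /* P[i] can run to completion */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];         /* it then releases its resources */
                finish[i] = true;
                safeSeq[count++] = i;
                found = true;
            }
        }
        if (!found) return false;                   /* no process can proceed: unsafe */
    }
    return true;
}

int main(void)
{
    int available[NR] = {2, 1, 0, 0};
    int alloc[NP][NR] = {{0,0,1,2},{2,0,0,0},{2,0,0,0},{2,3,4,5},{0,3,3,2}};
    int max[NP][NR]   = {{0,0,1,2},{2,7,5,0},{8,6,2,2},{4,3,5,6},{0,6,5,2}};
    int seq[NP];

    if (is_safe(available, max, alloc, seq)) {
        printf("safe sequence:");
        for (int i = 0; i < NP; i++) printf(" P%d", seq[i]);
        printf("\n");                               /* prints P0 P3 P4 P1 P2 */
    } else {
        printf("unsafe state\n");
    }
    return 0;
}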
c).Is the system currently deadlocked? Why or Why not? Which process, if any, or may
become deadlocked if the whole request is granted immediately? [2+3+2+3]
Ans:
The Need matrix (Need = Max - Allocation) is:
      A B C D
P0    0 0 0 0
P1    0 7 5 0
P2    6 6 2 2
P3    2 0 1 1
P4    0 3 2 0
We need to find a safe sequence such that at each step Need(Pi) ≤ Available. Whenever a
process P[i] is selected and finishes, the new Available = Available + Allocation[i].
1. P0: Need (0 0 0 0) ≤ Available (2 1 0 0), so P0 can finish; Available = 2100 + 0012 = 2112.
2. P1: Need (0 7 5 0) is not ≤ (2 1 1 2); the condition fails, so P1 must wait.
3. P2: Need (6 6 2 2) is not ≤ (2 1 1 2); the condition fails, so P2 must wait.
4. P3: Need (2 0 1 1) ≤ (2 1 1 2), so P3 can finish; Available = 2112 + 2345 = 4457.
5. P4: Need (0 3 2 0) ≤ (4 4 5 7), so P4 can finish; Available = 4457 + 0332 = 4789.
6. P1: Need (0 7 5 0) ≤ (4 7 8 9), so P1 can now finish; Available = 4789 + 2000 = 6789.
7. P2: Need (6 6 2 2) ≤ (6 7 8 9), so P2 can now finish; Available = 6789 + 2000 = 8789.
Hence the system is in a safe state, with safe sequence: P0, P3, P4, P1, P2.
c). Is the system currently deadlocked? Why or Why not? Which process, if any, or may
become deadlocked if the whole request is granted immediately?[2+3+2+3]
Ans:
Currently the system is not deadlocked, because the safe sequence obtained above shows that
it is in a safe state.
If the whole request is granted immediately, the process which may become deadlocked is P1.
Ans:
A Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance.
It is a directed graph that represents the processes in the system, the resources available,
and the relationships between them. It also contains information about all the instances of
all the resources, whether they are available or being used by the processes.
In a resource allocation graph, a process is represented by a circle while a resource is
represented by a rectangle. The graph has two types of edges: request edges and assignment
edges. A request edge represents a request by a process for a resource, while an assignment
edge represents the assignment of a resource to a process.
To determine whether the system is in a safe state, the RAG is analyzed to check for cycles.
If there is a cycle in the graph (and each resource has only a single instance), the system is in
an unsafe state, and granting a resource request can lead to a deadlock. In contrast, if there
are no cycles in the graph, the system is in a safe state, and resource allocation can proceed.
The readers-writers problem is used to manage synchronization so that there are no problems with
the shared data when several processes access it concurrently: any number of readers may read
the data at the same time, but a writer must have exclusive access to it.
There are several algorithms for detecting deadlocks in an operating system, including:
6. Deadlock Detection using RAG: If a cycle is formed in a resource allocation graph
where every resource has a single instance, then the system is deadlocked. In the case of a
resource allocation graph with multi-instance resource types, a cycle is a necessary
condition for deadlock but not a sufficient one. The following example contains three
processes P1, P2, P3 and three resources R1, R2, R3, each with a single instance. If we
analyze the graph, we find that a cycle is formed and the system satisfies all four
conditions of deadlock.
7. Deadlock Detection and Recovery: In this approach, the OS doesn't apply any
mechanism to avoid or prevent deadlocks; the system assumes that a deadlock will
eventually occur. In order to get rid of deadlocks, the OS periodically checks the system
for any deadlock. If it finds one, the OS recovers the system using some recovery
techniques. The main task of the OS here is detecting deadlocks, which it can do with the
help of the resource allocation graph.
8. In single instanced resource types, if a cycle is being formed in the system then there will
definitely be a deadlock. On the other hand, in multiple instanced resource type graph,
detecting a cycle is not just enough. We have to apply the safety algorithm on the system
by converting the resource allocation graph into the allocation matrix and request matrix.
In order to recover the system from deadlocks, the OS acts on either resources or processes.
9. For Resource: (a) Preempt the resource – We can snatch one of the resources from its
owner (a process) and give it to another process, expecting that it will complete its
execution and release the resource sooner; choosing which resource to snatch can be
difficult. (b) Rollback to a safe state – The system passes through various states before
getting into the deadlock state. The operating system can roll the system back to the
previous safe state. For this purpose, the OS needs to implement checkpointing at every
state. The moment we get into deadlock, we roll back all the allocations to get into the
previous safe state.
10. For Process: (a) Kill a process – Killing a process can solve the problem, but the bigger
concern is deciding which process to kill. Generally, the operating system kills the process
which has done the least amount of work so far. (b) Kill all processes – This is not an
advisable approach, but it can be used if the problem becomes very serious. Killing all
processes leads to inefficiency in the system because all of them must execute again from
the start.
In the deadlock prevention process, the OS will prevent the deadlock from occurring by
avoiding any one of the four conditions that caused the deadlock. If the OS can avoid any of
the necessary conditions, a deadlock will not occur.
No Mutual Exclusion
It means more than one process can have access to a single resource at the same time. This is
impossible in general, because if multiple processes access the same resource simultaneously,
there will be chaos and no process will complete. So this is not feasible; the OS cannot avoid
mutual exclusion for non-sharable resources.
Removal of Hold and Wait
To avoid hold and wait, one way is to acquire all the required resources before starting
execution. But this is not very efficient either, because a process uses only a single resource
at a time, so resource utilization will be very low.
Before starting its execution, a process does not know how many resources will be required
to complete it. In addition, how long the process will take to complete and free the resources
is also unknown.
Another way is if a process is holding a resource and wants to have additional resources,
then it must free the acquired resources. This way, we can avoid the hold and wait condition,
but it can result in starvation.
Removal of No Preemption
One of the causes of deadlock is no preemption: the CPU can't forcefully take acquired
resources from a process, even if that process is in a waiting state. If we remove no preemption
and forcefully take resources from a waiting process, we can avoid the deadlock. This is an
implementable way to avoid deadlock.
For example, it's like taking the bowl from Jones and giving it to Jack when he comes to have
soup. Let's assume Jones came first, acquired a resource, and went into the waiting state. When
Jack comes, the caterer takes the bowl from Jones forcefully and tells him not to hold the bowl
while he is in a waiting state.
Removal of Circular Wait
In a circular wait, two (or more) processes are stuck waiting for resources held by each other.
To avoid the circular wait, we assign a numerical integer value to every resource, and a process
has to request resources in increasing (or decreasing) order.
If processes acquire resources in increasing order, a process may request an additional resource
only if that resource has a higher integer value than the ones it already holds. If the desired
resource has a lower integer value, the process must first release the acquired resources before
requesting the new one (and vice-versa for decreasing order).
Ans:
The Sleeping Barber problem is a classic problem in process synchronization that is used to
illustrate synchronization issues that can arise in a concurrent system. The problem is as
follows:
There is a barber shop with one barber and a number of chairs for waiting customers.
Customers arrive at random times and if there is an available chair, they take a seat and wait
for the barber to become available. If there are no chairs available, the customer leaves. When
the barber finishes with a customer, he checks if there are any waiting customers. If there are,
he begins cutting the hair of the next customer in the queue. If there are no customers waiting,
he goes to sleep.
The problem is to write a program that coordinates the actions of the customers and the barber
in a way that avoids synchronization problems, such as deadlock or starvation.
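One common solution uses a semaphore for waiting customers, a semaphore for the barber, and a mutex around the chair count. The sketch below assumes POSIX semaphores and pthreads; the chair and customer counts are illustrative.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define CHAIRS 3

sem_t customers;                 /* count of waiting customers; barber sleeps on it */
sem_t barber;                    /* signalled when the barber is ready to cut hair  */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int waiting = 0;                 /* customers currently in waiting chairs */

void *barber_thread(void *arg)
{
    while (1) {
        sem_wait(&customers);            /* sleep until a customer arrives */
        pthread_mutex_lock(&mutex);
        waiting--;                       /* take one customer from a chair */
        pthread_mutex_unlock(&mutex);
        sem_post(&barber);               /* announce the barber is ready   */
        printf("barber: cutting hair\n");
    }
    return NULL;
}

void *customer_thread(void *arg)
{
    pthread_mutex_lock(&mutex);
    if (waiting < CHAIRS) {
        waiting++;                       /* take a seat                  */
        sem_post(&customers);            /* wake the barber if asleep    */
        pthread_mutex_unlock(&mutex);
        sem_wait(&barber);               /* wait until the barber is free */
        printf("customer: getting haircut\n");
    } else {
        pthread_mutex_unlock(&mutex);    /* no free chair: leave */
        printf("customer: shop full, leaving\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t b, c[5];
    sem_init(&customers, 0, 0);
    sem_init(&barber, 0, 0);
    pthread_create(&b, NULL, barber_thread, NULL);
    for (int i = 0; i < 5; i++)
        pthread_create(&c[i], NULL, customer_thread, NULL);
    for (int i = 0; i < 5; i++)
        pthread_join(c[i], NULL);
    return 0;                            /* process exit ends the barber thread */
}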
Ans:
1. Semaphores are complicated, so the wait and signal operations must be implemented in the
correct order to prevent deadlocks.
2. Semaphores are impractical for large-scale use, as their use leads to a loss of modularity.
This happens because the wait and signal operations prevent the creation of a structured
layout for the system.
3. Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.
4. If a process forgets to call signal(S) after its critical section, the program can deadlock
and the cause of the failure will be difficult to isolate.
If we were to build a large system using semaphores alone, the responsibility for the correct use
of the semaphores would be diffused among all the implementers of the system .
OPERATING SYSTEMS
UNIT-4
Short Questions & Answers:
1 Compare internal and external fragmentation.
2 List the first fit & best fit memory allocation techniques.
3 What are the disadvantages of virtual memory?
4 What is Thrashing?
5 What is Compaction?
6 What is Virtual Memory? Why is it required?
7 Differentiate logical and physical address.
8 What is a Page fault?
9 What is Demand paging?
10 What is the difference between Page and Frame?
2. List the first fit & best fit memory allocation techniques.
Ans:
First Fit: “search for the first hole that is big enough”
The first fit approach allocates the first free partition or hole that is large enough to
accommodate the process. The search finishes as soon as the first suitable free partition is found.
Best Fit: “search for the smallest hole that is big enough”
Best fit allocates the smallest free partition that meets the requirement of the requesting
process. This algorithm first searches the entire list of free partitions and considers the
smallest hole that is adequate, i.e., a hole close to the actual process size needed.
4. What is Thrashing?
Ans: Thrashing is a condition or a situation when the system is spending a major portion of its
time servicing the page faults, but the actual processing done is very negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.
5. What is Compaction?
Ans: Compaction is a technique to collect all the free memory present in the form of fragments
into one large chunk of free memory, which can be used to run other processes.
(Or) Compaction refers to combining all the empty spaces together by shifting the occupied
processes toward one end of memory.
Compaction helps to solve the problem of fragmentation, but it requires a lot of CPU time. It
moves all the occupied areas of storage to one end and leaves one large free space for
incoming jobs, instead of numerous small ones.
Ans: Virtual memory is a memory management technique where secondary memory can be used
as if it were a part of the main memory. Virtual memory is a common technique used in a
computer's operating system (OS).
Virtual memory uses both hardware and software to enable a computer to compensate for
physical memory shortages, temporarily transferring data from random access memory (RAM)
to disk storage. Mapping chunks of memory to disk files enables a computer to treat secondary
memory as though it were main memory.
Today, most personal computers (PCs) come with at least 8 GB (gigabytes) of RAM. But,
sometimes, this is not enough to run several programs at one time. This is where virtual memory
comes in. Virtual memory frees up RAM by swapping data that has not been used recently over
to a storage device, such as a hard drive or solid-state drive (SSD).
Virtual memory is important for improving system performance, multitasking and using large
programs. However, users should not overly rely on virtual memory, since it is considerably
slower than RAM. If the OS has to swap data between virtual memory and RAM too often, the
computer will begin to slow down -- this is called thrashing.
Ans: Swapping is a memory management technique and is used to temporarily remove the inactive
programs from the main memory of the computer system. Any process must be in the memory for
its execution, but can be swapped temporarily out of memory to a backing store and then again
brought back into the memory to complete its execution. Swapping is done so that other processes
get memory for their execution. Due to the swapping technique performance usually gets affected,
but it also helps in running multiple and big processes in parallel. The swapping process is also
known as a technique for memory compaction. Basically, low priority processes may be swapped
out so that processes with a higher priority may be loaded and executed.
The above diagram shows swapping of two processes where the disk is used as a Backing store.
The swapping of processes by the memory manager is fast enough that some processes will be in
memory, ready to execute, when the CPU scheduler wants to reschedule the CPU.
A variant of the swapping technique is used with priority-based scheduling. If a higher-priority
process arrives and wants service, the memory manager swaps out lower-priority processes,
then loads and executes the higher-priority process. When the higher-priority process finishes,
the lower-priority process is swapped back in and continues its execution. This variant is
sometimes known as roll out, roll in.
Two more concepts come with the swapping technique: swap in and swap out.
The procedure by which a process is moved from the hard disk to the main memory (RAM)
is commonly known as swap in.
On the other hand, swap out is the method of removing a process from the main memory
(RAM) and adding it to the hard disk.
Advantages of Swapping
1. The swapping technique mainly helps the CPU to manage multiple processes within a
single main memory.
2. This technique helps to create and use virtual memory.
3. With the help of this technique, the CPU can perform several tasks simultaneously. Thus,
processes need not wait too long before their execution.
4. This technique is economical.
5. This technique can be easily applied to priority-based scheduling in order to improve its
performance.
Disadvantages of Swapping
1. Inefficiency may arise if a resource or a variable is shared by the processes participating
in the swapping.
2. If the algorithm used for swapping is not good then the overall method can increase the
number of page faults and thus decline the overall performance of processing.
3. If the computer system loses power at the time of high swapping activity then the user
might lose all the information related to the program.
Virtual memory is a space where large programs can store themselves in the form of pages
during their execution, while only the required pages or portions of processes are loaded into
the main memory. This technique is useful because it provides user programs with a large
virtual memory even when only a very small physical memory is available. Thus virtual memory
is a technique that allows the execution of processes that are not completely in physical memory.
Virtual memory mainly gives the illusion of more physical memory than there really is, with
the help of demand paging.
In real scenarios, most processes never need all their pages at once, for the following reasons :
Error handling code is not needed unless that specific error occurs, some of which are quite
rare.
Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays
are actually used in practice.
Certain features of certain programs are rarely used.
In an operating system, memory is usually managed in units known as pages; basically, these
are atomic units used to store large programs. Virtual memory can be implemented via:
1. Demand Paging
2. Demand Segmentation
If a computer running the Windows operating system needs more memory or RAM than what
is installed in the system, it uses a small portion of the hard drive for this purpose.
Suppose your computer does not have enough space in physical memory; it then writes what
it needs to remember to a swap file on the hard disk, and that space is used as virtual memory.
1. Large programs can be written, as the virtual space available is huge compared to physical
memory.
2. Less I/O required leads to faster and easy swapping of processes.
3. More physical memory available, as programs are stored on virtual memory, so they
occupy very less space on actual physical memory.
4. Therefore, the logical address space can be much larger than the physical address
space.
5. Virtual memory allows address spaces to be shared by several processes.
6. During the process creation, virtual memory allows: copy-on-write and Memory-mapped
files
1. Internal Fragmentation: Suppose the size of the process is smaller than the size of the
partition; in that case some part of the partition is wasted and remains unused. This wastage
inside the memory is generally termed internal fragmentation. As shown in the above diagram,
the 70 KB partition is used to load a process of 50 KB, so the remaining 20 KB is wasted.
2. Limitation on the size of the process: If the size of a process is larger than the maximum-sized
partition, that process cannot be loaded into the memory. Due to this, a condition is imposed on
the size of the process: it cannot be larger than the size of the largest partition.
3. External Fragmentation: The total unused space of the various partitions cannot be used to
load a process, even though the space is available, because spanning is not allowed.
4. Degree of multiprogramming is less: In this partition scheme, the size of a partition cannot
change according to the size of the process, so the degree of multiprogramming is fixed and
very low.
Ans:
Comparison of paging and segmentation:
1. Page size is determined by the hardware, whereas segment size is given by the user.
2. Paging is faster in comparison to segmentation; segmentation is slower.
3. In paging, the logical address is split into a page number and a page offset; in segmentation,
it is split into a segment number and a segment offset.
4. Paging uses a page table that encloses the base address of every page, while segmentation
uses a segment table that encloses the base address and limit of every segment.
5. In paging, the processor uses the page number and offset to calculate the absolute address;
in segmentation, the processor uses the segment number and offset to calculate the full address.
6. The size of a page must always be equal to the size of a frame, while there is no constraint
on the size of segments.
4. Discuss the Least Recently Used page replacement algorithm with example.
Ans:
1. Least Recently Used (LRU) algorithm is a page replacement technique used for memory
management. According to this method, the page which is least recently used is replaced.
Therefore, any page in memory that has been unused for a longer period of time than the
others is replaced. The Least Recently Used (LRU) page replacement policy replaces the
page that has not been used for the longest period of time. It is one of the algorithms
devised to approximate (if not improve upon) the efficiency of the optimal page replacement
algorithm. The optimal algorithm assumes the entire reference string to be known at the
time of allocation and replaces the page that will not be used for the longest period of time.
LRU page replacement policy is based on the observation that pages that have been heavily
used in the last few instructions will probably be heavily used again in the next few.
Conversely, pages that have not been used for ages will probably remain unused for a long
time.
2. It is rather expensive to implement in practice in many cases and hence alternatives to LRU
or even variants to the original LRU are continuously being sought.
3. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory,
with the most recently used page at the front and the least recently used page at the rear.
The difficulty is that the list must be updated on every memory reference. Finding a page
in the list, deleting it, and then moving it to the front is a very time consuming operation,
even in hardware (assuming that such hardware could be built) or special hardware
resources need to be in place for LRU implementation which again is not satisfactory.
4. One important advantage of the LRU algorithm is that it is amenable to full statistical
analysis. It has been proven, for example, that LRU can never result in more than N-times
more page faults than Optimal (OPT) algorithm, where N is proportional to the number of
pages in the managed pool.
5. On the other hand, LRU's weakness is that its performance tends to degenerate under many
quite common reference patterns.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is the
least recently used page —> 1 page fault. 0 is already there, so —> 0 page faults. 4 takes the
place of 1, which is now the least recently used page —> 1 page fault. For the remaining
references in the string —> 0 page faults, because those pages are already available in memory.
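The policy can also be checked with a small simulation. The sketch below uses a reference string and frame count in the spirit of the example; frame[] holds the resident pages and last_used[] their most recent reference times.

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES], last_used[FRAMES];
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* all slots empty */

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == ref[t]) { hit = i; break; }
        if (hit >= 0) {
            last_used[hit] = t;                   /* refresh recency on a hit */
        } else {
            faults++;
            int victim = -1;
            for (int i = 0; i < FRAMES; i++)      /* prefer an empty frame */
                if (frame[i] == -1) { victim = i; break; }
            if (victim == -1) {                   /* otherwise evict the LRU page */
                victim = 0;
                for (int i = 1; i < FRAMES; i++)
                    if (last_used[i] < last_used[victim]) victim = i;
            }
            frame[victim] = ref[t];
            last_used[victim] = t;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}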
Ans:
(The original answer showed a frame-by-frame page-replacement trace for pages A-E across
the reference string; the table did not survive extraction from the source document.)
A page fault behaves much like an error. It mainly occurs when a program tries to access data
or code that is in the address space of the program but is not currently located in the RAM of
the system.
So basically, when the page referenced by the CPU is not found in the main memory, the
situation is termed a page fault.
Whenever any page fault occurs, then the required page has to be fetched from the
secondary memory into the main memory.
In case if the required page is not loaded into the memory, then a page fault trap arises
The page fault mainly generates an exception, which is used to notify the operating system that it
must have to retrieve the "pages" from the virtual memory in order to continue the execution. Once
all the data is moved into the physical memory the program continues its execution normally. The
Page fault process takes place in the background and thus goes unnoticed by the user.
The computer hardware traps to the kernel and the program counter (PC) is generally saved
on the stack. The CPU registers store the information about the current state of the instruction.
An assembly routine is started that usually saves the general registers and other volatile
information, to prevent the OS from destroying it.
Accessing a page that is marked as invalid also causes a page fault: while translating the
address through the page table, the paging hardware notices that the invalid bit is set, which
causes a trap to the operating system.
This trap is the result of the operating system's failure to bring the desired page into memory.
The procedure to handle the page fault as shown with the help of the above diagram:
1. First of all, we check an internal table (usually kept in the process control block) for this
process to determine whether the reference was a valid or an invalid memory access.
2. If the reference is invalid, we terminate the process. If the reference is valid but we have
not yet brought in that page, we now page it in.
3. Then we locate a free frame from the free-frame list.
4. Now a disk operation is scheduled to read the desired page into the newly allocated frame.
5. When the disk read is complete, the internal table kept with the process and the page
table are modified to indicate that the page is now in memory.
6. Now we will restart the instruction that was interrupted due to the trap. Now the process
can access the page as though it had always been in memory.
In operating systems, Memory Management is the function responsible for allocating and
managing a computer’s main memory. Memory Management function keeps track of the status
of each memory location, either allocated or free to ensure effective and efficient use of Primary
Memory.
a) Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the main
memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed,
but the size of each partition may or may not be the same. As it is a contiguous allocation,
no spanning is allowed. Here, partitions are made before execution, at system
configuration time.
As illustrated in above figure, first process is only consuming 1MB out of 4MB in the main
memory.
Hence, Internal Fragmentation in first block is (4-1) = 3MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.
Suppose process P5 of size 7MB comes. But this process cannot be accommodated in spite of
available free space because of contiguous allocation (as spanning is not allowed). Hence, 7MB
becomes part of External Fragmentation.
Advantages of Fixed Partitioning:
1. Easy to implement
2. Little OS overhead
Disadvantages of Fixed Partitioning:
1. Internal Fragmentation
2. External Fragmentation
3. Limit on process size
4. Limitation on Degree of Multiprogramming
b) Variable (Dynamic) Partitioning:
It is a part of the contiguous allocation technique and is used to alleviate the problems faced
by fixed partitioning. In contrast with fixed partitioning, partitions are not made before
execution or at system configuration time. Various features associated with variable partitioning:
1. Initially RAM is empty and partitions are made during run-time according to each
process's need, instead of at system configuration time.
2. The size of the partition will be equal to the size of the incoming process.
3. The partition size varies according to the need of the process so that the internal
fragmentation can be avoided to ensure efficient utilisation of RAM.
4. Number of partitions in RAM is not fixed and depends on the number of incoming
process and Main Memory’s size.
Advantages of Variable Partitioning:
1. No Internal Fragmentation
2. No restriction on Degree of Multiprogramming
3. No limitation on the size of the process
Disadvantages of Variable Partitioning:
1. Difficult implementation
2. External Fragmentation
For example, suppose in above example- process P1(2MB) and process P3(1MB)
completed their execution. Hence two spaces are left i.e. 2MB and 1MB. Let’s suppose
process P5 of size 3MB comes. The empty space in memory cannot be allocated as no
spanning is allowed in contiguous allocation. The rule says that process must be
contiguously present in main memory to get executed. Hence it results in External
Fragmentation.
Ans:
Segmentation:
A process is divided into segments. The chunks that a program is divided into, which are not
necessarily all of the same size, are called segments. Segmentation gives the user's view of the
process, which paging does not give, and this user's view is mapped to physical memory.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at
any one point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into
memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table, called the segment table, stores the information about all such segments.
Segment Table – It maps a two-dimensional logical address into a one-dimensional physical
address. Each table entry has:
Base Address: It contains the starting physical address where the segment resides in
memory.
Limit: It specifies the length of the segment.
Ans: Bélády’s anomaly is the name given to the phenomenon where increasing the number of
page frames results in an increase in the number of page faults for a given memory access
pattern.
This phenomenon is commonly experienced in the following page replacement algorithms:
1. First in first out (FIFO)
2. Second chance algorithm
3. Random page replacement algorithm
Example: Consider the following diagram to understand the behavior of a stack-based page
replacement algorithm.
The diagram illustrates that the set of pages in memory with 3 frames, i.e., {0, 1, 2}, is not a
subset of the set of pages in memory with 4 frames, i.e., {0, 1, 4, 5} – a violation of the
inclusion property of stack-based algorithms. This situation is frequently seen with the FIFO
algorithm.
Belady’s Anomaly in FIFO –
Assuming a system that has no pages loaded in the memory and uses the FIFO Page
replacement algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case-1: If the system has 3 frames, the given reference string using the FIFO page replacement
algorithm yields a total of 9 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.
Case-2: If the system has 4 frames, the given reference string using the FIFO page replacement
algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.
It can be seen from the above example that on increasing the number of frames while using the
FIFO page replacement algorithm, the number of page faults increased from 9 to 10.
Note – Not every reference string causes Belady's anomaly in FIFO, but there are certain kinds
of reference strings that worsen FIFO performance as the number of frames increases.
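Both cases can be reproduced with a short FIFO simulation; for the reference string above it prints 9 faults with 3 frames and 10 with 4, exhibiting the anomaly.

#include <stdio.h>

int fifo_faults(const int *ref, int n, int frames)
{
    int frame[16], next = 0, faults = 0;
    for (int i = 0; i < frames; i++) frame[i] = -1;   /* empty frames */

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int i = 0; i < frames; i++)
            if (frame[i] == ref[t]) { hit = 1; break; }
        if (!hit) {
            frame[next] = ref[t];          /* evict the oldest page (FIFO slot) */
            next = (next + 1) % frames;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(ref) / sizeof(ref[0]);
    printf("3 frames: %d page faults\n", fifo_faults(ref, n, 3));  /* 9  */
    printf("4 frames: %d page faults\n", fifo_faults(ref, n, 4));  /* 10 */
    return 0;
}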
b) hardware support required to implement paging
Each operating system has its own techniques for storing page tables. The majority allocates a
page table for each process and a pointer to the page table is stored with the other register values
in the process control block.
The page table can be implemented in hardware in several ways. In the simplest case, the
page table is implemented as a set of dedicated registers. These registers must be built with
very high-speed logic to make the paging-address translation efficient.
The use of registers for the page table is adequate if the page table is reasonably small. For other
cases the page table is kept in main memory also a page table base register (PTBR) points to the
page table. Changing page tables needs changing only one register substantially reducing
context-switch time.
The standard solution to this problem is to use a fast-lookup, small, special hardware cache
called the translation look-aside buffer (TLB). The TLB is an associative, high-speed memory.
Each entry in the TLB consists of two parts: a key and a value. Some TLBs store address-space
identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and is used
to provide address-space protection for that process. When the TLB attempts to resolve virtual
page numbers, it ensures that the ASID for the currently running process matches the ASID
associated with the virtual page.
The logical address generated by the CPU is divided into two parts:
1. Page Number (p)
2. Page Offset (d)
where,
Page Number is used as an index into the page table, which generally contains the base
address of each page in physical memory.
Page Offset is combined with the base address to define the physical memory address, which
is then sent to the memory unit.
If the size of the logical address space is 2^m and the page size is 2^n addressing units, then
the high-order m-n bits of the logical address designate the page number and the n low-order
bits designate the page offset.
where p indicates the index into the page table, and d indicates the displacement within the page.
The Page size is usually defined by the hardware. The size of the page is typically the power of 2
that varies between 512 bytes and 16 MB per page.
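For illustration, here is a tiny sketch of the split, assuming 4 KB pages (n = 12 offset bits); the sample address is arbitrary.

#include <stdio.h>

#define PAGE_BITS 12                          /* n: number of offset bits */
#define PAGE_SIZE (1u << PAGE_BITS)           /* 4096-byte pages          */

int main(void)
{
    unsigned logical = 0x12345;               /* arbitrary logical address */
    unsigned p = logical >> PAGE_BITS;        /* high-order m-n bits: page number */
    unsigned d = logical & (PAGE_SIZE - 1);   /* low-order n bits: page offset    */
    printf("page number = %u, offset = %u\n", p, d);  /* prints 18, 837 */
    return 0;
}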
Ans:
OPERATING SYSTEMS
UNIT-5
Short Questions & Answers:
1 What are the various file accessing methods?
2 Define the terms Seek Time and Rotational Latency.
3 Define File. List down the operations that may be performed on File.
4 What are the file attributes?
5 Define mounting. What is the need for mounting in a file system?
6 What is directory structure?
7 Define Disk Scheduling.
8 Discuss about Free Space Management
9 List the file Allocation methods.
10 Define Boot Block and Bad blocks.
5. Define File. List down the operations that may be performed on File.
Ans: File : A file can be defined as a collection of data or information.
Files can be broadly classified into two types:
1. Program files
2. Data files
Operations on a file: create, open, write, read, reposition (seek), close, delete, and truncate.
4. What are the file attributes?
Ans: The attributes of a file typically include:
• Name. The symbolic file name is the only information kept in human-readable form.
• Identifier. This unique tag, usually a number, identifies the file within the file system; it is
the non-human-readable name for the file.
• Type. This information is needed for systems that support different types of files.
• Location. This information is a pointer to a device and to the location of the file on that
device.
• Size. The current size of the file (in bytes, words, or blocks) and possibly the maximum
allowed size are included in this attribute.
• Protection. Access-control information determines who can do reading, writing, executing,
and so on.
• Time, date, and user identification. This information may be kept for creation, last
modification, and last use. These data can be useful for protection, security, and usage
monitoring.
Mounting a file system attaches that file system to a directory (mount point) and makes it
available to the system. The root (/) file system is always mounted. Any other file system can be
connected or disconnected from the root (/) file system.
When you mount a file system, any files or directories in the underlying mount point directory
are unavailable as long as the file system is mounted.
Ans: The directory structure is the organization of files into a hierarchy of folders.
On a computer, a directory is used to store, arrange, and segregate files and folders.
There are several logical structures of a directory, these are given below.
Single level directory
Two-level directory
Tree structure or hierarchical directory
Acyclic graph directory
Ans: Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling. Disk scheduling is important because: Hard
drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.
Following are the four methods of doing free space management in operating systems: Bit
Vector, Linked List, Grouping, and Counting (each is discussed in detail in the next answer).
The file allocation methods are: Contiguous Allocation, Linked Allocation, and Indexed
Allocation.
The main idea behind these methods is to provide:
Efficient disk space utilization.
Fast access to the file blocks.
Contiguous Allocation:
In this scheme, each file occupies a contiguous set of blocks on the disk.
For example: if a file requires n blocks and is given a block b as the starting location, then the
blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting
block address and the length of the file (in terms of blocks required), we can determine the
blocks occupied by the file.
The directory entry for a file with contiguous allocation contains the address of the starting
block and the length (in blocks) of the area allocated to the file.
The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks.
Therefore, it occupies 19, 20, 21, 22, 23, 24 blocks.
Advantages:
Both the Sequential and Direct Accesses are supported by this. For direct access, the
address of the kth block of the file which starts at block b can easily be obtained as (b+k).
This is extremely fast since the number of seeks are minimal because of contiguous
allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous memory
at a particular instance.
Linked Allocation:
In this scheme, each file is a linked list of disk blocks which need not be contiguous; the disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last
block (25) contains -1 indicating a null pointer and does not point to any other block.
Advantages:
This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We cannot directly access the blocks of a file.
A block k of a file can be accessed by traversing k blocks sequentially (sequential access)
from the starting block of the file via block pointers.
Pointers required in the linked allocation incur some extra overhead.
2.Compare and Contrast Free space management and Swap space management.
Ans:
Free Space Management:
As we know in our system, the hard disk space is limited. We need to use this space wisely. A file
system is responsible to allocate the free blocks to the file therefore it has to keep track of all the
free blocks present in the disk. So, the operating system manages the free space in the hard disk
created by the leftover spaces in memory or by deleting files using the free space management
techniques.
There are four methods of doing free space management in operating systems. These are as
follows-
Bit Vector
Linked List
Grouping
Counting
Bit Vector:
The first method that we will discuss is the bit vector method. Also known as the bit map, this is
the most frequently used method to implement the free space list. In this method, each block in
the hard disk is represented by a bit (either 0 or 1). If a block has a bit 0 means that the block is
allocated to a file, and if a block has a bit 1 means that the block is not allocated to any file, i.e.,
the block is free.
For example, consider a disk having 16 blocks where block numbers 2, 3, 4, 5, 8, 9, 10, 11, 12,
and 13 are free, and the rest of the blocks, i.e., block numbers 0, 1, 6, 7, 14 and 15 are allocated
to some files. The bit vector for this disk will look like this-
We can find a free block number from the bit vector using the following formula:
Block number = (number of bits per word) * (number of 0-value words) + (offset of first
1 bit)
We will now find the first free block number in the above example.
The first group of 8 bits (00111100) constitutes a non-zero word, since not all of its bits are 0.
After finding the non-zero word, we look for the first 1 bit: it is the third bit of the word, i.e.,
at offset 2 (counting from 0).
Therefore, the first free block number = 8 * 0 + 2 = 2, which matches the fact that block 2 is
the first free block.
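The same calculation can be sketched in C, assuming 8-bit words and the bit map from the example (a 1 bit marks a free block):

#include <stdio.h>
#include <stdint.h>

int first_free_block(const uint8_t *map, int nwords)
{
    for (int w = 0; w < nwords; w++) {
        if (map[w] == 0) continue;            /* word is all 0s: 8 allocated blocks */
        for (int bit = 0; bit < 8; bit++)     /* bit 0 is the leftmost block */
            if (map[w] & (0x80 >> bit))
                return w * 8 + bit;           /* bits-per-word * words-skipped + offset */
    }
    return -1;                                /* no free block found */
}

int main(void)
{
    /* blocks 2-5 and 8-13 free: 00111100 11111100 */
    uint8_t map[2] = { 0x3C, 0xFC };
    printf("first free block = %d\n", first_free_block(map, 2));  /* prints 2 */
    return 0;
}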
Linked List:
Another method of doing free space management in operating systems is a linked list. In this
method, all the free blocks existing in the disk are linked together in a linked list. The address of
the first free block is stored somewhere in the memory. Each free block contains a pointer that
contains the address to the next free block. The last free block points to null, indicating the end
of the linked list.
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files. If we maintain a linked list, then Block 3 will contain a pointer to Block 4, and
Block 4 will contain a pointer to Block 5.
Similarly, Block 5 will point to Block 6, Block 6 will point to Block 9, Block 9 will point to
Block 10, Block 10 will point to Block 11, Block 11 will point to Block 12, Block 12 will point
to Block 13 and Block 13 will point to Block 14. Block 14 will point to null. The address of the
first free block, i.e., Block 3, will be stored somewhere in the memory. This is also represented
in the following figure-
Grouping:
The third method of free space management in operating systems is grouping. This method is the
modification of the linked list method. In this method, the first free block stores the addresses
of n free blocks. The first n-1 of these blocks are actually free. The last block among these n
blocks contains the addresses of the next n free blocks, and so on.
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files.
If we apply the Grouping method considering n to be 3, Block 3 will store the addresses of Block
4, Block 5, and Block 6. Similarly, Block 6 will store the addresses of Block 9, Block 10, and
Block 11. Block 11 will store the addresses of Block 12, Block 13, and Block 14. This is also
represented in the following figure-
This method overcomes the disadvantages of the linked list method. The addresses of a large
number of free blocks can be found quickly, just by going to the first free block or the nth free
block. There is no need to traverse the whole list, which was the situation in the linked list
method.
Counting:
This is the fourth method of free space management in operating systems. This method is also a
modification of the linked list method. This method takes advantage of the fact that several
contiguous blocks may be allocated or freed simultaneously. In this method, a linked list is
maintained, but in addition to the pointer to the next free block, a count of the contiguous
free blocks is also maintained. Thus each free-block entry in the disk will contain two
things-
1. A pointer to the next free block.
2. The number of contiguous free blocks starting at it (including itself).
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files.
If we apply the counting method, Block 3 will point to Block 4 and store the count 4 (since
Block 3, 4, 5, and 6 are contiguous). Similarly, Block 9 will point to Block 10 and keep the count
of 6 (since Block 9, 10, 11, 12, 13, and 14 are contiguous). This is also represented in the
following figure-
This method also overcomes the disadvantages of the linked list method since there is no need to
traverse the whole list.
Note:
In the grouping method, the first free block stores the addresses of the next n free blocks, and in
the counting method, a free block stores the count of the next contiguous free blocks along with a
pointer to the next free block. Both these methods are used to overcome the drawbacks of the
linked list method.
Indexed Allocation: In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in
the index block contains the disk address of the ith file block. The directory entry contains the
address of the index block as shown in the image:
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than that of linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation still dedicates
one entire block (the index block) to pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space of only one pointer per block.
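A minimal Python sketch of the lookup path, under the assumption that an index block is simply an array of disk block addresses; the file name, block numbers, and variable names are made up for illustration.

# Illustrative sketch: a directory entry points to an index block,
# and the index block lists the file's data blocks in order.
directory = {"report.txt": 19}           # file name -> index block number
index_blocks = {19: [9, 16, 1, 10, 25]}  # index block 19 -> data blocks

def block_of(filename, i):
    """Return the disk block holding logical block i of the file."""
    idx = directory[filename]      # follow the directory entry
    return index_blocks[idx][i]    # the i-th entry gives the i-th file block

print(block_of("report.txt", 3))  # 10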
Mounting refers to attaching a file system so that its files become accessible to a user or a
group of users within the directory structure. It can be local or remote: local mounting
connects disk drives within one machine, while remote mounting uses the Network File System
(NFS) to connect to directories on other machines so that they can be used as if they were
part of the local file system.
A file can be accessed in three ways:
1. Sequential Access,
2. Direct Access,
3. Index Sequential Method.
Sequential Access –
It is the simplest access method. Information in the file is processed in order, one record
after the other. This mode of access is by far the most common; for example, editors and
compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation ("read
next") reads the next portion of the file and automatically advances the file pointer, which
keeps track of the I/O location. Similarly, a write operation ("write next") appends to the
end of the file and advances the pointer to the end of the newly written material.
Key points:
Data is accessed one record after another, in order.
A read moves the pointer ahead by one record.
A write allocates space for the new record and moves the pointer past the end of the file.
Such a method is reasonable for tapes.
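As a toy model (assuming a file is simply a list of records with a single read/write pointer; the class and method names are invented), sequential access can be sketched as:

class SequentialFile:
    # Illustrative model only: records in a list, one I/O pointer.
    def __init__(self):
        self.records = []
        self.pointer = 0                     # current I/O location

    def read_next(self):
        record = self.records[self.pointer]  # read at the pointer...
        self.pointer += 1                    # ...then advance by one record
        return record

    def write_next(self, record):
        self.records.append(record)          # append at the end of the file
        self.pointer = len(self.records)     # pointer moves past the new data

f = SequentialFile()
f.write_next("r1"); f.write_next("r2")
f.pointer = 0                                # rewind, then read in order
print(f.read_next(), f.read_next())          # r1 r2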
Direct Access –
Another method is the direct access method, also known as the relative access method. The
file is made up of fixed-length logical records that allow programs to read and write records
rapidly in no particular order. Direct access is based on the disk model of a file, since a
disk allows random access to any file block. For direct access, the file is viewed as a
numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then
write block 17; there is no restriction on the order of reading and writing for a direct
access file.
A block number provided by the user to the operating system is normally a relative block
number: the first relative block of the file is 0, the next is 1, and so on.
Advantages of Direct Access Method:
Files can be accessed immediately, which decreases the average access time.
To access a block, there is no need to traverse all the blocks that come before it.
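A small sketch of relative-block addressing, assuming fixed-length blocks of 4 bytes purely for illustration:

BLOCK_SIZE = 4                            # bytes per logical block (illustrative)
data = bytearray(b"AAAABBBBCCCCDDDD")     # a 4-block "file"

def read_block(n):
    # Relative addressing: the first block of the file is block 0.
    return bytes(data[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

def write_block(n, payload):
    data[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE] = payload

print(read_block(2))      # b'CCCC' - blocks may be read in any order
write_block(1, b"XXXX")
print(read_block(1))      # b'XXXX'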
Index Sequential Method –
This method is built on top of the sequential access method. It constructs an index for the
file. The index, like the index at the back of a book, contains pointers to the various
blocks. To find a record in the file, we first search the index and then use the pointer to
access the file directly.
Key points:
It is built on top of sequential access.
It controls the pointer by using the index, as the sketch below shows.
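A tiny sketch of the two-step lookup (all keys, records, and block numbers below are invented for illustration):

# Illustrative sketch: the index maps a record key to its block number.
index = {"adams": 0, "baker": 0, "curry": 1, "davis": 1}
blocks = [
    {"adams": "record-A", "baker": "record-B"},  # block 0
    {"curry": "record-C", "davis": "record-D"},  # block 1
]

def lookup(key):
    block_no = index[key]          # step 1: search the index
    return blocks[block_no][key]   # step 2: access the block directly

print(lookup("curry"))  # record-C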
Ans:
First Come First Serve (FCFS) Disk Scheduling Algorithm:
FCFS is the simplest disk scheduling algorithm. As the name suggests, it services requests in
the order they arrive in the disk queue. The algorithm is inherently fair and causes no
starvation (every request is eventually serviced), but in general it does not provide the
fastest service.
Example:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
The requested tracks are serviced in arrival order, so the head moves
50 → 176 → 79 → 34 → 60 → 92 → 11 → 41 → 114.
Total head movement
= (176-50)+(176-79)+(79-34)+(60-34)+(92-60)+(92-11)+(41-11)+(114-41)
= 126+97+45+26+32+81+30+73
= 510
Total seek distance = 510 cylinders.
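The calculation above can be reproduced with a few lines of Python (a sketch, not a real OS interface):

def fcfs_seek(requests, head):
    # Illustrative sketch: service requests strictly in arrival order.
    total = 0
    for track in requests:
        total += abs(track - head)  # distance the head moves for this request
        head = track
    return total

print(fcfs_seek([176, 79, 34, 60, 92, 11, 41, 114], 50))  # 510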
Shortest Seek Time First (SSTF) Disk Scheduling Algorithm:
The basic idea is that the tracks closer to the current head position should be serviced
first, in order to minimise seek movement.
Example –
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Servicing the nearest pending request at each step, the head moves
50 → 41 → 34 → 11 → 60 → 79 → 92 → 114 → 176.
Total head movement = 9+7+23+49+19+13+22+62 = 204 cylinders.
Note:
The request nearest to the current head position is always serviced next.
Advantages:
1. The average seek time is lower than with FCFS.
2. Throughput increases.
Disadvantages:
1. Starvation is possible for some requests, as SSTF favours nearby requests and can keep
ignoring those far away.
2. There is a lack of predictability because of the high variance of response times.
3. Frequent switching of direction slows things down.
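A short Python sketch of SSTF that reproduces the 204-cylinder figure (function name illustrative):

def sstf_seek(requests, head):
    # Illustrative sketch: always service the nearest pending request next.
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek([176, 79, 34, 60, 92, 11, 41, 114], 50))  # 204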
SCAN Disk Scheduling Algorithm:
In the SCAN disk scheduling algorithm, the head starts from one end of the disk and moves
towards the other end, servicing the requests in between one by one until it reaches the
other end. The direction of head movement is then reversed and the process continues, with
the head continuously scanning back and forth across the disk. Because this works like an
elevator, it is also known as the elevator algorithm. As a result, requests in the mid-range
of the disk are serviced more often, while requests arriving just behind the disk arm have
to wait longer.
Example:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50 (as in the earlier examples). Assuming a disk with tracks 0-199
and the head moving towards track 0 first, the head services 41, 34 and 11 on the way down,
reaches track 0, reverses, and then services 60, 79, 92, 114 and 176:
50 → 41 → 34 → 11 → 0 → 60 → 79 → 92 → 114 → 176
Total head movement = (50-0) + (176-0) = 226 cylinders.
Note:
The head starts at one end and moves towards the other end, servicing requests on its way.
At the other end the direction of head movement is reversed and servicing continues.
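A Python sketch of SCAN under the same assumptions (tracks 0-199, head sweeping towards track 0 first); the function name and structure are illustrative:

def scan_seek(requests, head):
    # Illustrative sketch: head sweeps down to track 0, then back up.
    down = sorted(t for t in requests if t < head)   # serviced on the way down
    up = sorted(t for t in requests if t >= head)    # serviced on the way up
    order = list(reversed(down)) + ([0] if down else []) + up
    total, pos = 0, head
    for track in order:
        total += abs(track - pos)   # head movement for this step
        pos = track
    return total, order

total, order = scan_seek([176, 79, 34, 60, 92, 11, 41, 114], 50)
print(order)  # [41, 34, 11, 0, 60, 79, 92, 114, 176]
print(total)  # 226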
C-SCAN Disk Scheduling Algorithm:
The C-SCAN algorithm, also known as the Circular Elevator algorithm, is a modified version
of the SCAN algorithm. In this algorithm, the head starts from one end of the disk and moves
towards the other end, servicing all requests in between. After reaching the other end, the
head returns to the starting end without servicing any requests on the way back, and then
services the remaining requests while moving in the same direction as before.
Example:
Consider a disk with 200 tracks (0-199) and a disk queue with I/O requests in the following
order: 98, 183, 40, 122, 10, 124, 65.
The current position of the read/write head is 53, and it will move to the right (towards
higher track numbers).
The head services 65, 98, 122, 124 and 183 on the way up, reaches track 199, jumps back to
track 0, and then services 10 and 40:
53 → 65 → 98 → 122 → 124 → 183 → 199 → 0 → 10 → 40
Total head movement = (199-53) + (199-0) + (40-0) = 385 cylinders.
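A Python sketch reproducing the C-SCAN figure above; the jump from track 199 back to track 0 is counted as head movement, matching the 385 total (function name illustrative):

def cscan_seek(requests, head, last_track=199):
    # Illustrative sketch: sweep up to the last track, jump to 0, sweep up again.
    up = sorted(t for t in requests if t >= head)      # first upward sweep
    wrapped = sorted(t for t in requests if t < head)  # serviced after the jump
    order = up + [last_track, 0] + wrapped
    total, pos = 0, head
    for track in order:
        total += abs(track - pos)
        pos = track
    return total, order

total, order = cscan_seek([98, 183, 40, 122, 10, 124, 65], 53)
print(order)  # [65, 98, 122, 124, 183, 199, 0, 10, 40]
print(total)  # 385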
Directory implementation in an operating system can be done using a singly linked list or a
hash table. The efficiency, reliability, and performance of a file system are greatly affected
by the choice of directory-allocation and directory-management algorithms. There are numerous
ways in which directories can be implemented, but we need to choose an algorithm that enhances
the performance of the system.
The implementation of directories using a singly linked list is easy to program but is time-
consuming to execute. Here we implement a directory by using a linear list of filenames with
pointers to the data blocks.
To create a new file, the entire list has to be searched first to make sure that a file with
the same name does not already exist.
The new entry can then be added at the end of the list or at the beginning of the list.
To delete a file, we first search the directory for the name of the file to be deleted.
After finding it, we delete the file by releasing the space allocated to it.
To reuse the directory entry, we can either mark the entry as unused or append it to a list
of free directory entries.
Deleting an entry from a linked list is itself cheap once the entry has been found.
Disadvantage
The main disadvantage of using a linked list is that finding a file requires a linear search.
Directory information is used quite frequently, and a linked-list implementation therefore
results in slow access to a file. To compensate, the operating system maintains a cache
storing the most recently used directory information.
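A toy Python sketch of the linear-list directory (the entry format and function names are invented for illustration); note the linear scan that every create must perform:

directory = []  # list of (filename, data block pointers)

def create_file(name, blocks):
    for entry_name, _ in directory:   # linear search: the whole list is checked
        if entry_name == name:
            raise FileExistsError(name)
    directory.append((name, blocks))  # add the new entry at the end

def delete_file(name):
    for i, (entry_name, _) in enumerate(directory):
        if entry_name == name:
            del directory[i]          # release the entry
            return
    raise FileNotFoundError(name)

create_file("a.txt", [3, 7])
create_file("b.txt", [9])
delete_file("a.txt")
print(directory)  # [('b.txt', [9])]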
An alternative data structure that can be used for directory implementation is a hash table.
It overcomes the major drawback of the linked-list implementation. In this method, a hash
table is used in combination with the linked list: the linked list still stores the directory
entries, while the hash table provides fast lookup into it.
In the hash table, a key-value pair is generated for each directory entry. A hash function
applied to the file name produces the key, and this key points to the corresponding entry in
the list. This method efficiently decreases the directory search time, as the entire list no
longer has to be searched on every operation: using the key, the hash table entry is checked,
and when the file is found it is fetched.
Disadvantage:
The major drawback of using a hash table is that it generally has a fixed size, and the hash
function depends on that size; if the table fills up, it must be enlarged and all entries
rehashed. Even so, this method is usually much faster than a linear search through the entire
directory with a linked list.
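As a toy model, Python's built-in dict can stand in for the hash table (the real scheme described above chains entries through a linked list; every name below is invented):

directory = {}  # filename -> data block pointers (dict plays the hash table)

def create_file(name, blocks):
    if name in directory:        # expected O(1) check, vs O(n) linear scan
        raise FileExistsError(name)
    directory[name] = blocks

def find_file(name):
    return directory[name]       # hash lookup - no traversal of a list

create_file("a.txt", [3, 7])
print(find_file("a.txt"))  # [3, 7]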
A file is a collection of related information. The file system resides on secondary storage and
provides efficient and convenient access to the disk by allowing data to be stored, located, and
retrieved.
File system implementation in an operating system refers to how the file system manages the
storage and retrieval of data on a physical storage device such as a hard drive, solid-state drive, or
flash drive. The file system implementation includes several components, including:
1. File System Structure: The file system structure refers to how the files and directories are
organized and stored on the physical storage device. This includes the layout of file-system
data structures such as the directory structure, the file allocation table, and inodes.
2. File Allocation: The file allocation mechanism determines how files are allocated on the
storage device. This can include allocation techniques such as contiguous allocation, linked
allocation, indexed allocation, or a combination of these techniques.
3. Data Retrieval: The file system implementation determines how the data is read from and
written to the physical storage device. This includes strategies such as buffering and caching
to optimize file I/O performance.
4. Security and Permissions: The file system implementation includes features for managing
file security and permissions. This includes access control lists (ACLs), file permissions, and
ownership management.
5. Recovery and Fault Tolerance: The file system implementation includes features for
recovering from system failures and maintaining data integrity. This includes techniques such
as journaling and file system snapshots.
File system implementation is a critical aspect of an operating system as it directly impacts the
performance, reliability, and security of the system. Different operating systems use different file
system implementations based on the specific needs of the system and the intended use cases.
Some common file systems used in operating systems include NTFS and FAT in Windows, and
ext4 and XFS in Linux.
The file system is organized into many layers:
1. I/O control level – Device drivers act as an interface between devices and the OS; they
help transfer data between disk and main memory. A driver takes a block number as input and
produces the low-level, hardware-specific instructions as output.
2. Basic file system – It issues generic commands to the appropriate device driver to read
and write physical blocks on the disk. It also manages the memory buffers and caches: a
buffer holds the contents of a disk block in transit, while the cache stores frequently used
file-system metadata.
3. File organization module – It knows about files and their logical and physical blocks.
Since physical block addresses do not necessarily match the logical block numbers (numbered
from 0 to N), it translates between the two. It also includes a free-space manager that
tracks unallocated blocks.
4. Logical file system – It manages metadata information, i.e., all details about a file
except its actual contents. It maintains this information via file control blocks. A File
Control Block (FCB) holds information about a file: owner, size, permissions, and the
location of the file contents.
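As a rough illustration, an FCB can be modelled as a small record type; the field names below are illustrative rather than taken from any particular operating system.

from dataclasses import dataclass, field

@dataclass
class FCB:  # illustrative model of a File Control Block
    owner: str
    size: int                  # file size in bytes
    permissions: str           # e.g. "rw-r--r--"
    data_blocks: list = field(default_factory=list)  # location of file contents

fcb = FCB(owner="alice", size=4096, permissions="rw-r--r--",
          data_blocks=[12, 13, 40])
print(fcb.owner, fcb.data_blocks)  # alice [12, 13, 40]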
Two-Level Directory Structure:
Advantages:
Different users can have files with the same name, which is very helpful when there are
multiple users.
Security is provided, preventing one user from accessing another user's files.
Searching for files becomes very easy in this directory structure.
Disadvantages:
Although security is an advantage, the flip side is that a user cannot share files with
other users.
While users can create their own files, they do not have the ability to create
subdirectories.
Acyclic Graph Directory Structure:
The other directory structures (single-level, two-level, and tree-structured) do not allow a
file to be accessed from multiple directories: a file or subdirectory can be reached only
through the directory it resides in.
This problem is solved in the acyclic graph directory structure, where a file in one
directory can be accessed from multiple directories. In this way files can be shared between
users. It is designed so that multiple directories can point to a particular directory or
file with the help of links.
In the figure below, a file is shared between multiple users. If any user makes a change, it
is reflected for every user sharing the file.
Advantages:
Sharing of files and directories between multiple users is allowed.
Searching becomes easy.
Flexibility is increased, as multiple users have sharing and editing access to the same file.
Disadvantages:
Because of its complex structure, this directory structure is difficult to implement.
Users must be very cautious when editing or deleting a file, since the file may be accessed
by multiple users.
To delete a file permanently, all references (links) to it must be deleted.