
Operating Systems – Question Bank with Answers for II-B.Tech – II-Sem AIDS/CSM/CSC/CSE

Operating Systems

Year/Sem II-II Branch: AIDS /CSM/CSC/CSE

UNIT-I

1. What is an operating system?


Ans: An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is a system software which performs all the basic tasks
like file management, memory management, process management, handling input and
output, and controlling peripheral devices such as disk drives and printers.

2. What are operating system services?


Ans: An Operating System provides services to both the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

3. Why is the Operating System viewed as a resource allocator?


Ans: A computer system has many resources – hardware and software – that may be
required to solve a problem, such as CPU time, memory space, file-storage space, I/O
devices, and so on. The OS acts as a manager for these resources, so it is viewed as a
resource allocator.


4. What are Batch operating systems?

Ans: The users of a batch operating system do not interact with the computer directly.
Each user prepares his job on an off-line device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched together
and run as a group. The programmers leave their programs with the operator and the
operator then sorts the programs with similar requirements into batches.

The problems with Batch Systems are as follows −


 Lack of interaction between the user and the job.
 CPU is often idle, because the speed of the mechanical I/O devices is slower
than the CPU.
 Difficult to provide the desired priority.

5. What are Time sharing operating systems?


Ans: Sharing the processor's time among multiple users simultaneously is termed
time-sharing. Time-sharing, or multitasking, is a logical extension of
multiprogramming.
In time-sharing systems, the objective is to minimize response time. The CPU is shared
among multiple users, each for a small amount of time.

Advantages of Time-sharing operating systems are as follows −


 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.

6. What are the various system components?


Ans: The various system components that perform well-defined tasks are:
• Process management
• Main-memory management
• File management
• I/O-system management
• Secondary-storage management
• Networking
• Protection system
• Command-interpreter system

7. Describe the operating system structure?


Ans:

8. Describe distributed operating system?


Ans:

A distributed operating system is one in which multiple computer systems are
connected through a single communication channel. These systems have separate
processors and memory, and the processors communicate over high-speed buses or
telephone lines. The individual systems connected by a single channel are considered
a single entity; we can also call them loosely coupled systems. Each component or
system of the network is a node. Different computers (at different locations) are
attached to each other through the network and can communicate.


Example : Banking system

Suppose there is a bank whose headquarters is in New Delhi. That bank has branch
offices in cities like Ludhiana, Noida, Faridabad, and Chandigarh. You can operate
your account by going to any of these branches. How is this possible? It's because
whatever changes you make at one branch office are reflected at all branches. This is
because of the distributed system.

9. What are Real time operating Systems (RTOS)?

Ans: In real-time systems, each job carries a deadline within which it is supposed to
be completed; otherwise there is a huge loss, or even if the result is produced, it is
completely useless.

Applications of real-time systems exist in military settings: if a missile is to be
dropped, it must be dropped with a certain precision.

A real-time operating system is an operating system in which the computer is very
fast in operation and has to perform its tasks within a specified time.

A real-time system is subject to real-time constraints, i.e., the response should be
guaranteed within a specified timing constraint, or the system should meet the
specified deadline. Examples: flight control systems, real-time monitors, etc.

Types of real-time systems based on timing constraints:

Hard real-time system – This type of system can never miss its deadline. Missing the
deadline may have disastrous consequences. Example: flight controller systems.

Soft real-time system – This type of system can miss its deadline occasionally with
some acceptably low probability. Missing the deadline has no disastrous
consequences. Example: telephone switches.

The advantage is maximum utilization of devices and systems. The disadvantage is
that they are very costly to develop and consume critical CPU cycles.

10. What is a kernel?

Ans: The kernel is a computer program that is the core, or heart, of an operating
system.

o It acts as a bridge between applications and the data processing done at the
hardware level. It is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables
the communication between software and hardware components.

UNIT-I LONG ANSWERS

1. What is system calls in OS? Explain in detail with its types.


Ans:
A system call is the programmatic method by which a computer program asks the
kernel of the operating system it is running on for a service. Programs interact with
the operating system by making system calls: when a program requests something
from the kernel, it performs a system call.

Process Control

Process control system calls are used to direct processes. Examples include creating,
loading, executing, aborting, and terminating processes.

File Management

File management system calls are used to handle files. Examples include creating,
deleting, opening, closing, reading, and writing files.

Device Management

Device management system calls are used to deal with devices. Examples include
reading from and writing to a device, getting device attributes, releasing a device, etc.


Information Maintenance

Information maintenance system calls are used to maintain information. Examples
include getting or setting the time or date and getting or setting system data.

Communication

Communication system calls are used for communication between processes.
Examples include creating and deleting communication connections and sending and
receiving messages.

Examples of Windows and Unix system calls

There are various examples of Windows and Unix system calls. These are as listed
below in the table:

Process                  Windows                         Unix

Process Control          CreateProcess()                 fork()
                         ExitProcess()                   exit()
                         WaitForSingleObject()           wait()

File Manipulation        CreateFile()                    open()
                         ReadFile()                      read()
                         WriteFile()                     write()
                         CloseHandle()                   close()

Device Management        SetConsoleMode()                ioctl()
                         ReadConsole()                   read()
                         WriteConsole()                  write()

Information Maintenance  GetCurrentProcessID()           getpid()
                         SetTimer()                      alarm()
                         Sleep()                         sleep()

Communication            CreatePipe()                    pipe()
                         CreateFileMapping()             shmget()
                         MapViewOfFile()                 mmap()

Protection               SetFileSecurity()               chmod()
                         InitializeSecurityDescriptor()  umask()
                         SetSecurityDescriptorGroup()    chown()
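As an illustration of how a user program invokes such services on a Unix-like system,
here is a minimal C sketch using the open(), write(), lseek(), read(), and close()
system calls; the file name demo.txt is made up for illustration:

#include <fcntl.h>     /* open() and the O_* flags */
#include <stdio.h>
#include <unistd.h>    /* read(), write(), lseek(), close() */

int main(void)
{
    char buf[64];

    /* File-management system calls: create/open the file, then write. */
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");        /* the kernel reported an error */
        return 1;
    }
    write(fd, "hello via system calls\n", 23);

    /* Reposition to the start of the file and read the data back. */
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read back: %s", buf);
    }

    close(fd);                 /* release the file descriptor */
    return 0;
}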


2. Explain operating system functions in detail.


Ans:

Functions of Operating Systems:

An operating system performs each of the following functions:

1. Process management: Process management helps the OS to create and delete
processes. It also provides mechanisms for synchronization and communication
among processes.

2. Memory management: The memory management module performs the task of
allocation and de-allocation of memory space to programs that need this resource.

3. File management: It manages all file-related activities such as organization,
storage, retrieval, naming, sharing, and protection of files.

4. Device management: Device management keeps track of all devices. The module
responsible for this task is known as the I/O controller. It also performs the task of
allocation and de-allocation of devices.

5. I/O system management: One of the main objectives of any OS is to hide the
peculiarities of the hardware devices from the user.
6. Secondary-storage management: Systems have several levels of storage, which
include primary storage, secondary storage, and cache storage. Instructions and data
must be stored in primary storage or cache so that a running program can reference
them.

7. Security: The security module protects the data and information of a computer
system against malware threats and unauthorized access.

8. Command interpretation: This module interprets commands given by the user and
acts on system resources to process those commands.


9. Networking: A distributed system is a group of processors which do not share
memory, hardware devices, or a clock. The processors communicate with one another
through the network.

10. Job accounting: Keeping track of the time and resources used by various jobs and
users.

11. Communication management: Coordination and assignment of compilers,
interpreters, and other software resources to the various users of the computer
systems.

(Or)

Security

To safeguard user data, the operating system employs password protection and other
related measures. It also protects programs and user data from illegal access.

Control over System Performance

The operating system monitors the overall health of the system in order to optimise
performance. To get a thorough picture of the system's health, it keeps track of the
time between service requests and system responses. This can aid performance by
providing critical information for troubleshooting issues.

Job Accounting

The operating system maintains track of how much time and resources are consumed
by different tasks and users, and this data can be used to measure resource utilisation
for a specific user or group of users.

Error Detecting Aids

The OS constantly monitors the system in order to discover faults and prevent a
computer system from failing.

Coordination between Users and Other Software

Operating systems also organise and assign interpreters, compilers, assemblers, as


well as other software to computer users.

Memory Management

The operating system is in charge of managing the primary memory, often known as
the main memory. The main memory consists of a vast array of bytes or words, each
of which is allocated an address. Main memory is rapid storage that the CPU can
access directly. A program must first be loaded into the main memory before it can be
executed. For memory management, the OS performs the following tasks:
 The OS keeps track of primary memory – meaning, which user program can
use which bytes of memory, memory addresses that have already been

assigned, as well as memory addresses yet to be used.


 The OS determines the order in which processes would be permitted memory
access and for how long in multiprogramming.


 It allocates memory to the process when the process asks for it and deallocates
memory when the process exits or performs an I/O activity.

Process Management

The operating system determines which processes have access to the processor and
how much processing time every process has in a multiprogramming environment.
Process scheduling is the name for this feature of the operating system. For processor
management, the OS performs the following tasks:

 It keeps track of how processes are progressing.


 The program that accomplishes this duty is known as the traffic controller.
 It allocates the CPU to a process and deallocates it when the process no longer
needs the processor.

File Management

A file system is divided into directories to make navigation and usage more efficient.
These directories may contain other directories and files. The file management
tasks performed by an operating system are: keeping track of where data is kept, user
access settings, and the state of each file, among other things. All of these features are
collectively known as the file system.

3. Explain Operating system services in detail.


Ans:
The Operating System provides various types of services:

1. I/O operation
2. Program execution
3. File system manipulation
4. Communication
5. Error Handling
6. Resource allocation
7. Accounting
8. Protection

1. I/O operation: To execute a program, I/O is often needed, involving a file or an I/O
device. For protection and efficiency, users cannot manage I/O devices directly, so
the operating system helps the user to perform I/O operations such as reading from
and writing to a file. The operating system offers the facility to access an I/O device
when needed.
2. Program execution: The operating system is responsible for loading a program into
memory and then executing it. It helps us to manage different tasks, from user
programs to system programs such as file servers, name servers, printer spoolers, etc.
Each of these tasks is encapsulated as a process. A process may consist of a complete
execution context: data to manipulate, OS resources in use, registers, code to execute,
etc.
The operating system performs the following tasks for program management:

 Loads the program into memory.
 Executes the program.
 Offers a procedure for process synchronization.
 Offers a procedure for deadlock handling.
 Offers a method for process communication.
 Manages the program’s execution.

3. File System Manipulation


A file is a collection of information. For long-term storage, the file is placed on a
disk; the disk is secondary storage. Examples: magnetic disk, CD, DVD, magnetic
tape. Each storage medium has different properties and capabilities, such as capacity,
speed, data access, and data transfer method.
For easy and effective usage, the file system is organized in the form of directories,
and the directories contain files and other directories.
The operating system performs the following activities for File System Manipulation.

 It offers an interface so that we can easily create and delete files.


 It offers an interface to create and delete directories.
 It offers an interface so that we can create a backup of the file.
 With the help of the operating system, we can access programs for performing
operations on a file.

4. Communication
The operating system offers the facility of communication. A process may require
information exchange with another process, whether the processes execute on the
same computer or on different computer systems; they communicate with the help of
the operating system. Communication between processes is done with the help of
message passing and shared memory.
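As a small illustration of message passing between related processes on a Unix-like
system, here is a minimal C sketch using a pipe (the message text is made up):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                  /* fd[0] is the read end, fd[1] the write end */

    if (fork() == 0) {         /* child process: receives the message */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);              /* parent process: sends the message */
    write(fd[1], "hello", 5);
    close(fd[1]);
    return 0;
}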
5. Error Handling
The Operating system provides the service of error handling. An error may arise
anywhere, like in I/O devices, Memory, CPU, and in the user program. The Operating
system takes appropriate action for each error to ensure consistency and correct
computing.
6. Resource allocation
In a system where multiple jobs execute concurrently, resources must be allocated to
each job. Resources include main memory, file storage, CPU cycles, and I/O devices.
The operating system handles every type of resource by using schedulers, and with
the help of CPU scheduling, the task of resource allocation is performed.
7. Accounting


The accounting service of the operating system helps to keep track of system usage:
which users use which resources, for how long, and what types of resources are used.
8. Protection: If the computer system has multiple users and permits the concurrent
execution of various processes, then it is necessary to protect the processes from one
another’s activities.

(Or)

OS provides users with a number of services, which can be summarised as follows:


1. Program Execution: The OS is in charge of running all types of programs,
whether they are user or system programs. The operating system makes use of a
variety of resources to ensure that all types of functions perform smoothly.
2. Handling Input/Output Operations: The operating system is in charge of
handling various types of inputs, such as those from the keyboard, mouse, and
desktop. Regarding all types of inputs and outputs, the operating system handles all
interfaces in the most appropriate manner.
For instance, the nature of all types of peripheral devices, such as mice or keyboards,
differs, and the operating system is responsible for transferring data between them.
3. Manipulation of File System: The OS is in charge of deciding where data or files
should be stored, such as on a floppy disk, hard disk, or pen drive. The operating
system determines how data should be stored and handled.
4. Error Detection and Handling: The OS is in charge of detecting any errors or
flaws that may occur during any task. The well-secured OS can also operate as a
countermeasure, preventing and possibly handling any type of intrusion into the
computer system from an external source.
5. Resource Allocation: The operating system guarantees that all available resources
are properly utilised by determining which resource should be used by whom and for
how long. The operating system makes all of the choices.
6. Accounting: The operating system keeps track of all the functions that are active in
the computer system at any one time. The operating system keeps track of all the facts,
including the types of mistakes that happened.
7. Information and Resource Protection: The operating system is in charge of
making the most secure use of all the data and resources available on the machine.
Any attempt by an external resource to obstruct data or information must be foiled by
the operating system.

4. What are the different types of operating systems? Explain Simple Batch
operating systems

Ans:
There are various types of operating system:

1. Simple Batch Operating System



2. Multiprogramming Batch Operating System


3. Time-sharing Operating System


4. Multiprocessor Operating System
5. Distributed Operating System
6. Network Operating System
7. Real-time Operating System
8. Mobile Operating System

Simple Batch operating system

In a simple batch operating system, there is no direct communication between the
user and the computer. First, the user submits a job to the computer operator, and the
operator creates a batch of jobs on an input device. Batches of jobs are created on the
basis of the type of language and requirements. Once a batch of jobs is created, a
special program monitors and manages each program in the batch. Examples: bank
statements, payroll systems, etc.

Advantages of Simple Batch Operating System

1. There is no mechanism to prioritize the processes.


2. There is no communication between the user and the computer.
3. The idle time is very small for a batch operating system.

Disadvantages of a Simple Batch Operating System

1. It is hard to debug.
2. Batch operating systems are costly.


Multiprogramming Batch Operating System

In a multiprogramming batch operating system, the operating system first selects a
job and begins to execute it from memory. When this job requires an I/O operation,
the operating system switches to another job (keeping the operating system and CPU
always busy). The jobs present in memory are always fewer than the jobs present in
the job pool.
If several jobs are ready to execute at the same time, CPU scheduling decides which
one runs. In a simple batch operating system, the CPU is sometimes idle and performs
no task, but in a multiprogramming batch operating system, the CPU is busy, never
sits idle, and always keeps processing.

5. Explain the difference between multitasking & time-sharing operating systems?

Ans:

Multitasking:
In early times, you could not run two different applications at the same time. Now
you can work while listening to your favourite music; this is because of the
multitasking ideology used in the operating system.
The operating system acts as a bridge between the software and the hardware of your
computer. It assigns a small time quantum to each task based on time-sharing
technology.
Time Sharing:
Time-sharing is an extension of the multiprogramming and multitasking concepts. A
time-sharing operating system allows multiple users to access the computer's
resources, each for a specified time slice.
It works like multitasking, but the difference is that it allows multiple users to access
the computer's resources, whereas multitasking focuses on running different
applications at the same time.


6. Distinguish Parallel vs Distributed operating systems.


Ans:

Difference Between Parallel System and Distributed System:

1. Parallel systems can process data simultaneously, increasing the computational
   speed of a computer system.
   In distributed systems, applications run on multiple computers linked by
   communication lines.

2. Parallel systems work with the simultaneous use of multiple computer resources,
   which can include a single computer with multiple processors.
   A distributed system consists of a number of computers that are connected and
   managed so that they share the job processing load among various computers
   distributed over the network.

3. In parallel systems, tasks are performed with a speedier process.
   In distributed systems, tasks are performed with a less speedy process.

4. Parallel systems are multiprocessor systems.
   In distributed systems, each processor has its own memory.

5. A parallel system is also known as a tightly coupled system.
   Distributed systems are also known as loosely coupled systems.

6. Parallel systems have close communication among more than one processor.
   Distributed systems communicate with one another through various communication
   lines, such as high-speed buses or telephone lines.

7. Parallel systems share a memory, a clock, and peripheral devices.
   Distributed systems do not share memory or a clock, in contrast to parallel systems.

8. In parallel systems, all processors share a single master clock for synchronization.
   In distributed systems there is no global clock; various synchronization algorithms
   are used.

9. Examples of parallel systems: High-Performance Computing clusters.
   Examples of distributed systems: Hadoop, MapReduce, Apache Cassandra.
A distributed system and a parallel system are two different types of computer
systems, and the main difference between them is how they manage the processing
and communication of tasks across multiple processors.
1. A distributed system is a computer system that consists of multiple interconnected
computers, or nodes, that work together to perform a task or a set of tasks. The
processing is distributed across multiple nodes, and each node is responsible for
performing a part of the task. In a distributed system, the nodes communicate with
each other using a network, and the system is designed to handle data and tasks that
are geographically distributed. Examples of distributed systems include the internet,
cloud computing, and peer-to-peer networks.
2. On the other hand, a parallel system is a computer system that consists of
multiple processors that work together to perform a task. In a parallel system, the
processing is divided into multiple tasks, and each processor performs a separate
task simultaneously. The processors communicate with each other using shared
memory or message passing, and the system is designed to handle data and tasks
that require high computational power. Examples of parallel systems include
supercomputers and clusters.

Note: the main difference between a distributed system and a parallel system is
how they manage the processing and communication of tasks across multiple
processors. In a distributed system, the processing is distributed across multiple
nodes connected by a network, while in a parallel system, the processing is divided
among multiple processors that work together on a single task.

(Or)

Difference Between Parallel Computing and Distributed Computing:

Definition
  Parallel computing: a type of computation in which various processes run
  simultaneously.
  Distributed computing: a type of computing in which the components are located
  on various networked systems that interact and coordinate their actions by passing
  messages to one another.

Communication
  Parallel computing: the processors communicate with one another via a bus.
  Distributed computing: the computer systems connect with one another via a
  network.

Functionality
  Parallel computing: several processors execute various tasks simultaneously.
  Distributed computing: several computers execute tasks simultaneously.

Number of Computers
  Parallel computing: it occurs in a single computer system.
  Distributed computing: it involves various computers.

Memory
  Parallel computing: the system may have distributed or shared memory.
  Distributed computing: each computer system has its own memory.

Usage
  Parallel computing: it helps to improve system performance.
  Distributed computing: it allows for scalability, resource sharing, and the efficient
  completion of computation tasks.

Note : There are two types of computations: parallel computing and distributed
computing. Parallel computing allows several processors to accomplish their tasks at
the same time. In contrast, distributed computing splits a single task among numerous
systems to achieve a common goal.

7. What are the different types of operating systems? Explain multi-processor


operating systems.
Ans:
There are various types of operating system:

1. Simple Batch Operating System
2. Multiprogramming Batch Operating System
3. Time-sharing Operating System
4. Multiprocessor Operating System
5. Distributed Operating System
6. Network Operating System
7. Real-time Operating System
8. Mobile Operating System

Multiprocessor Operating System

A multiprocessor operating system means the use of two or more processors within a
single computer system. These multiple processors are in close communication and
share the memory, computer bus, and other peripheral devices. Such systems are
known as tightly coupled systems and offer high speed and computing power. In a
multiprocessor operating system, all the processors work under a single operating
system.

Advantages of Multiprocessor

 Improved performance.
 By maximizing the number of processors, more work is done in less time. In
this way, throughput is increased.


 Increased reliability.

8. What are the different types of operating systems? Explain distributed


operating systems.
Ans:
There are various types of operating system:

1. Simple Batch Operating System
2. Multiprogramming Batch Operating System
3. Time-sharing Operating System
4. Multiprocessor Operating System
5. Distributed Operating System
6. Network Operating System
7. Real-time Operating System
8. Mobile Operating System

Distributed Operating System

Distributed systems are also known as loosely coupled systems. In this type of
operating system, multiple central processors are used to serve multiple real-time
applications and multiple users, and data processing jobs are distributed among the
processors accordingly.

The processors interact with each other via communication lines such as telephone
lines and high-speed buses. The processors can differ in function and size.

Types of Distributed Operating System

There are two types of distributed operating systems:

1. Client-server systems
2. Peer-to-peer systems

Advantages of Distributed Operating System

The advantages of a distributed system are:


 Speed is increased by the exchange of information with the help of electronic


mail.
 It offers better services to customers.
 Reduce delays in the processing of data.
 By resource sharing ability, a user at one site can access the resources that are
available at another site.
 It offers reliability. If one site fails, the rest of the sites continue to work
properly.
 Reduces load on the host computer.

Disadvantages of Distributed Operating System

 Distributed systems are more expensive.


 Failure of the central network stops the whole communication.

9. What are the different types of operating systems? Explain Real time
operating systems.
Ans:
There are various types of operating system:

1. Simple Batch Operating System
2. Multiprogramming Batch Operating System
3. Time-sharing Operating System
4. Multiprocessor Operating System
5. Distributed Operating System
6. Network Operating System
7. Real-time Operating System
8. Mobile Operating System

Real-Time Operating System

Real-time operating systems are used in real-time applications where data processing
must be done within a fixed interval of time. A real-time operating system responds
very quickly. It is used when a large number of events must be processed in a short
interval of time.


A real-time operating system is different from other operating systems because in it
the concept of time is the most crucial part. It is based on clock interrupts. In a
real-time system, processes are executed on the basis of priority; the high-priority
process always executes first. When a high-priority process enters the system, the
low-priority process is preempted to serve the high-priority one. The real-time
operating system synchronizes processes so that they can interact with each other
efficiently; in this way, resources are used effectively without wasting time.
Examples of real-time operating systems: medical imaging systems, industrial
systems, nuclear reactor control, scientific experiments, traffic signal control,
military software systems, airline reservation systems, networked multimedia
systems, Internet telephony, etc.

Types of Real-Time Operating System

There are three types of Real-time operating system:

1. Hard Real-time
2. Soft Real-time
3. Firm Real-time

Hard real-time: In a hard real-time system, there is a deadline for executing the task:
the task must start its execution at the scheduled time and must complete within the
assigned duration. Examples: aircraft systems, medical critical-care systems, etc.
Soft real-time: In a soft real-time system we also assign a time to each process, but
some delay is acceptable, so deadlines are handled softly. That is why it is called soft
real-time. Examples: live stock prices and online transaction systems.
Firm real-time: In a firm real-time system there is also a deadline for every task, but
missing a deadline may have no big impact, though there can be undesired effects
such as problems in the quality of a product. Example: multimedia applications.

Advantages of Real-Time Operating System

 Real-time operating systems are error-free.
 They offer memory allocation management.
 They offer better utilization of devices and systems and produce more output
from all the resources.
 A real-time operating system focuses more on running applications and gives
less importance to those waiting in the queue.

Disadvantages of a Real-Time Operating System

 In a real-time operating system, the task of writing the algorithms is very
challenging and complex.
 Real-time operating systems are expensive because they use heavy system
resources.

10. Define operating system and list the basic services provided by operating
system.

Ans: Operating system:


An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is system software which performs all the basic tasks
like file management, memory management, process management, handling input and
output, and controlling peripheral devices such as disk drives and printers.


The Operating System provides various types of services:

1. I/O operation
2. Program execution
3. File system manipulation
4. Communication
5. Error handling
6. Resource allocation
7. Accounting
8. Protection

1. I/O operation: To execute a program, I/O is often needed, involving a file or an I/O
device. For protection and efficiency, users cannot manage I/O devices directly, so
the operating system helps the user to perform I/O operations such as reading from
and writing to a file. The operating system offers the facility to access an I/O device
when needed.
2. Program execution: The operating system is responsible for loading a program into
memory and then executing it. It helps us to manage different tasks, from user
programs to system programs such as file servers, name servers, printer spoolers, etc.
Each of these tasks is encapsulated as a process. A process may consist of a complete
execution context: data to manipulate, OS resources in use, registers, code to execute,
etc.
The operating system performs the following tasks for program management:

 Loads the program into memory.
 Executes the program.
 Offers a procedure for process synchronization.
 Offers a procedure for deadlock handling.
 Offers a method for process communication.
 Manages the program’s execution.

3. File System Manipulation


A file is a collection of information. For long-term storage, the file is placed on a
disk; the disk is secondary storage. Examples: magnetic disk, CD, DVD, magnetic
tape. Each storage medium has different properties and capabilities, such as capacity,
speed, data access, and data transfer method.
For easy and effective usage, the file system is organized in the form of directories,
and the directories contain files and other directories.
The operating system performs the following activities for File System Manipulation.

 It offers an interface so that we can easily create and delete files.


 It offers an interface to create and delete directories.
 It offers an interface so that we can create a backup of the file.
 With the help of the operating system, we can access programs for performing
operations on a file.

4. Communication
The operating system offers the facility of communication. A process may require
information exchange with another process, whether the processes execute on the
same computer or on different computer systems; they communicate with the help of
the operating system. Communication between processes is done with the help of
message passing and shared memory.
5. Error Handling
The Operating system provides the service of error handling. An error may arise
anywhere, like in I/O devices, Memory, CPU, and in the user program. The Operating
system takes appropriate action for each error to ensure consistency and correct
computing.
6. Resource allocation
In a system where multiple jobs execute concurrently, resources must be allocated to
each job. Resources include main memory, file storage, CPU cycles, and I/O devices.
The operating system handles every type of resource by using schedulers, and with
the help of CPU scheduling, the task of resource allocation is performed.
7. Accounting
The accounting service of the operating system helps to keep track of system usage:
which users use which resources, for how long, and what types of resources are used.
8. Protection: If the computer system has multiple users and permits the concurrent
execution of various processes, then it is necessary to protect the processes from one
another’s activities.

….………………… * Unit-I End *……………………..


UNIT-II
Short Question & Answers

1. Define process?
Ans:
In an operating system, a process is something that is currently under execution, so
an active program can be called a process.

For example, when you want to search for something on the web, you start a browser;
the browser is then a process. Another example of a process is starting your music
player to listen to some music of your choice.

A Process has various attributes associated with it. Some of the attributes of a
Process are:

 Process Id: Every process will be given an id called Process Id to uniquely


identify that process from the other processes.
 Process state: Each and every process has some states associated with it at a
particular instant of time. This is denoted by process state. It can be ready,
waiting, running, etc.
 CPU scheduling information: Each process is executed by using some
process scheduling algorithm like FCFS, Round-Robin, SJF, etc.
 I/O information: Each process needs some I/O devices for their execution.
So, the information about device allocated and device need is crucial.
2. Draw a process state diagram.
Ans:
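A typical process moves through five states — new, ready, running, waiting, and
terminated. A minimal text sketch of the standard diagram:

New --(admitted)--> Ready
Ready --(scheduler dispatch)--> Running
Running --(interrupt / time slice expired)--> Ready
Running --(I/O or event wait)--> Waiting
Waiting --(I/O or event completion)--> Ready
Running --(exit)--> Terminated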

3. What is process control block (PCB) ?


Ans:
A Process Control Block or simple PCB is a data structure that is used to store the
information of a process that might be needed to manage the scheduling of a particular
process.


So, each process is given a PCB, which is a kind of identification card for the
process. All the processes present in the system have a PCB associated with them,
and all these PCBs are connected in a linked list.

Attributes of a Process Control Block

There are various attributes of a PCB that help the CPU to execute a particular
process. These attributes are:

 Process Id: A process id is a unique identity of a process. Each process is


identified with the help of the process id.
 Program counter: The program counter points to the next instruction to be
executed by the CPU; it is used to find the next instruction to execute.
 Process State: A process can be in any state out of the possible states of a
process. So, the CPU needs to know about the current state of a process, so
that its execution can be done easily.
 Priority: There is a priority associated with each process. Based on that
priority the CPU finds which process is to be executed first. Higher priority
process will be executed first.
 General-purpose registers: During its execution, a process deals with a
number of data items that it uses and changes. In most cases we have to stop
the execution of a process to start another, and after some time the previous
process must be resumed; since it was dealing with some data and had
changed it, it should resume with that same data. These data are stored in
storage units called registers.


 CPU Scheduling Information: It indicates the information about the


process scheduling algorithms that are being used by the CPU for the process.


 List of opened files: A process can deal with a number of files, so the CPU
should maintain a list of files that are being opened by a process to make
sure that no other process can open the file at the same time.
 List of I/O devices: A process may need a number of I/O devices to perform
various tasks. So, a proper list should be maintained that shows which I/O
device is being used by which process.
These are the attributes of a Process Control Block. These pieces of information
provide detailed information about the process and, in turn, result in better
execution of the process.
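To picture a PCB as a record, the following C struct is a minimal illustrative sketch;
the field names and types are assumptions for illustration, not the layout of any
particular kernel:

#include <stdint.h>

#define MAX_OPEN_FILES 16

/* Possible states a process can be in. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A simplified Process Control Block. */
struct pcb {
    int             pid;                        /* unique process id           */
    enum proc_state state;                      /* current process state       */
    uint64_t        program_counter;            /* next instruction to execute */
    int             priority;                   /* scheduling priority         */
    uint64_t        registers[16];              /* saved general-purpose regs  */
    int             open_files[MAX_OPEN_FILES]; /* list of opened files        */
    struct pcb     *next;                       /* link in the list of PCBs    */
};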

4. What are the 3 different types of schedulers?


Ans: Process scheduling handles the selection of a process for the processor on the
basis of a scheduling algorithm, and also the removal of a process from the processor.
It is an important part of a multiprogramming operating system.
There are many scheduling queues that are used in process scheduling. When the
processes enter the system, they are put into the job queue. The processes that are ready
to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the I/O device queue.
The schedulers used for process scheduling are divided into three categories:

1. Long-Term Scheduler or Job Scheduler


2. Short-Term Scheduler or CPU Scheduler
3. Medium-Term Scheduler

1. Long-Term Scheduler or Job Scheduler


The job scheduler is another name for Long-Term scheduler. It selects processes from
the pool (or the secondary memory) and then maintains them in the primary
memory’s ready queue.
The degree of multiprogramming is mostly controlled by the Long-Term Scheduler.
The goal of the Long-Term Scheduler is to select the best mix of I/O-bound and
CPU-bound processes from the pool of jobs.


If the job scheduler selects mostly I/O-bound processes, all of the jobs may become
stuck, the CPU will be idle for the majority of the time, and multiprogramming will
be reduced as a result. Hence, the Long-Term Scheduler's job is crucial and can
have a long-term impact on the system.

2. Short-Term Scheduler or CPU Scheduler

CPU scheduler is another name for Short-Term scheduler. It chooses one job from the
ready queue and then sends it to the CPU for processing.
To determine which job will be dispatched for execution, a scheduling algorithm is
used. The Short-Term Scheduler's task can be critical in the sense that if it chooses a
job with a long CPU burst time, all subsequent jobs will have to wait in the ready
queue for a long period. This is known as starvation, and it can occur if the
Short-Term Scheduler makes a mistake when selecting the job.

3. Medium-Term Scheduler

The swapped-out processes are handled by the Medium-Term Scheduler. If a running
process requires some I/O time to complete, its state must be changed from running
to waiting.
This is accomplished by the Medium-Term Scheduler: it suspends the process in
order to make space for other processes. Such suspended processes are swapped out,
and the operation is known as swapping. The Medium-Term Scheduler is in charge of
suspending and resuming processes.
It reduces the degree of multiprogramming. Swapping is required to maintain a good
mix of processes in the ready queue.

5. What is a thread?
Ans:

 Thread is a sequential flow of tasks within a process.


 There can be multiple threads in a single process.

 A thread has three components namely Program counter, register set, and
stack space.
 Thread is also termed as the lightweight process as they share resources and
are faster compared to processes.
 Context switching is faster in threads.
 Threads are of two types:
1. User Level Thread: User-level threads are created and managed by the
user.
2. Kernel Level Thread: Kernel-level threads are created and managed
by the OS.
 Issues related to threading are fork() and exec() system call, thread
cancellation, signal handling, etc.
 Some of the advantages of threading include responsiveness, faster context
switching, faster communication, concurrency, efficient use of the
multiprocessor, etc.

6. What is the fork() system call?


Ans:
fork()
The fork() call creates a new process (a child process) that is an identical copy of the
original process. Both the parent and the child processes run at the same time, and
they are in different address spaces.

A new process known as a "child process" is created with the fork system call which
runs concurrently with the process called the parent process.

The use of the fork() system call is to create a new process by duplicating the calling
process. The fork() system call is made by the parent process, and if it is successful,
a child process is created.

The fork() system call does not accept any parameters. It simply creates a child
process and returns a process ID. fork() returns an integer value, and after the
creation of the new child process, both processes execute the next instruction
following the fork() call. Therefore, we must separate the parent from the child by
checking the value returned by fork():

 Negative: A child process could not be successfully created if


the fork() returns a negative value.
 Zero: A new child process is successfully created if the fork() returns a zero.
 Positive: The positive value is the process ID of a child's process to the
parent.
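A minimal C sketch of this parent/child separation (the printed messages are
illustrative):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a child process */

    if (pid < 0) {
        perror("fork");          /* negative: creation failed */
        return 1;
    } else if (pid == 0) {
        /* zero: this branch runs in the child process */
        printf("child: my pid is %d\n", (int)getpid());
    } else {
        /* positive: the parent receives the child's pid */
        printf("parent: created child %d\n", (int)pid);
    }
    return 0;
}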


C-program for fork() system call

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

// main function begins
int main()
{
    fork();
    fork();
    fork();
    printf("this process is created by fork() system call\n");
    return 0;
}
// fork() is used 3 times

Output:

// Since the statements after n fork() calls are executed 2^n times:
// 2^3 = 8
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call
this process is created by fork() system call

7. What are the scheduling criteria?


Ans:

Types of Scheduling Criteria in an Operating System :


There are different CPU scheduling algorithms with different properties. The choice
of algorithm is dependent on various different factors. There are many criteria
suggested for comparing CPU schedule algorithms, some of which are:

 CPU utilization
 Throughput
 Turnaround time
 Waiting time
 Response time

The aim of the scheduling algorithm is to maximize and minimize the following:

Maximize:

 CPU utilization - It makes sure that the CPU is operating at its peak and is
busy.
 Throughput - It is the number of processes that complete their execution
per unit of time.

Minimize:

 Waiting time - The amount of time a process spends waiting in the ready queue.
 Response time - The time required to produce the first response after submission.
 Turnaround time - The amount of time required to execute a specific process.

CPU utilization - The object of any CPU scheduling algorithm is to keep the CPU as
busy as possible and to maximize its usage. In theory, CPU utilization ranges from 0
to 100%, but in real systems it is typically 50 to 90%, depending on the system's
load.

Throughput - It is a measure of the work done by the CPU, directly proportional to
the number of processes being executed and completed per unit of time. It varies
depending on the duration or length of processes.

Turnaround time - An important scheduling criterion for any process is how long it
takes to execute. Turnaround time is the time elapsed from submission to completion.
It is the sum of the time spent waiting to get into memory, waiting in the ready queue,
performing I/O, and executing on the CPU. The formula is:
Turnaround Time = Completion Time − Arrival Time.

Waiting time - Once execution starts, scheduling does not change the time required
for the completion of the process; it only affects the waiting time, i.e., the time the
process spends waiting in the queue. The formula is:
Waiting Time = Turnaround Time − Burst Time.


Response time - Turnaround time is not considered the best criterion for comparing
scheduling algorithms in an interactive system, since a process may produce some
output early while computing other results. Another criterion is the time taken from
process submission until the first response is produced. This is called response time,
and the formula is:
Response Time = Time of first response − Arrival Time.
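A small worked example with assumed numbers: suppose a process arrives at time 0,
first gets the CPU at time 2, needs a total CPU burst of 4, and completes at time 9.
Then:

Turnaround Time = 9 − 0 = 9
Waiting Time = 9 − 4 = 5
Response Time = 2 − 0 = 2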

In conclusion, CPU scheduling criteria play a crucial role in optimizing system


performance and user satisfaction. By evaluating and prioritizing factors such as CPU
utilization, throughput, turnaround time, waiting time, and response time, CPU
scheduling algorithms can ensure efficient use of system resources and effective task
processing. Choosing the right algorithm for a particular situation is critical for
maximizing system efficiency and productivity.

8. Define process vs. thread.

Ans:

Process vs Thread

Process simply means any program in execution while the thread is a segment of
a process. The main differences between process and thread are mentioned below:

Process vs. Thread:

1. Processes use more resources and hence are termed heavyweight processes.
   Threads share resources and hence are termed lightweight processes.

2. Creation and termination times of processes are slower.
   Creation and termination times of threads are faster compared to processes.

3. Processes have their own code and data/files.
   Threads share code and data/files within a process.

4. Communication between processes is slower.
   Communication between threads is faster.

5. Context switching in processes is slower.
   Context switching in threads is faster.

6. Processes are independent of each other.
   Threads are interdependent (they can read, write, or change another thread's data).

7. E.g., opening two different browsers.
   E.g., opening two tabs in the same browser.

[Figure: how resources are shared between two different processes vs. two threads in
a single process.]


9. What is preemptive & non-preemptive scheduling?


Ans:

What is Preemptive Scheduling?

Preemptive scheduling is a method that may be used when a process switches from a
running state to a ready state or from a waiting state to a ready state. The CPU is
assigned to the process for a particular time and then taken away. If the process still
has remaining CPU burst time, it is placed back in the ready queue, where it remains
until it gets a chance to execute again.

When a high-priority process comes in the ready queue, it doesn't have to wait for the
running process to finish its burst time. However, the running process is interrupted in
the middle of its execution and placed in the ready queue until the high-priority process
uses the resources. As a result, each process gets some CPU time in the ready queue.

Examples: Round Robin (RR) and Shortest Remaining Time First (SRTF) scheduling
algorithms.
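To make the preemption concrete, here is a small self-contained C sketch that
simulates round-robin time slicing; the burst times and quantum are made-up
numbers:

#include <stdio.h>

int main(void)
{
    int burst[3] = {5, 3, 8};   /* assumed remaining CPU burst times */
    int quantum = 2, done = 0, time = 0;

    /* Cycle through the processes, giving each at most one quantum per turn. */
    while (done < 3) {
        for (int i = 0; i < 3; i++) {
            if (burst[i] == 0)
                continue;
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;      /* the process is preempted after its slice */
            burst[i] -= slice;
            if (burst[i] == 0) {
                done++;
                printf("process %d finishes at time %d\n", i, time);
            }
        }
    }
    return 0;
}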

Advantages

1. It is a more robust method because a process may not monopolize the


processor.
2. Each event causes an interruption in the execution of ongoing tasks.
3. It improves the average response time.
4. It is more beneficial when you use this method in a multi-programming
environment.
5. The operating system ensures that all running processes use the same amount
of CPU.

Disadvantages

1. It requires the use of limited computational resources.


2. It takes more time suspending the executing process, switching the
context, and dispatching the new incoming process.


What is Non-Preemptive Scheduling?

Non-preemptive scheduling is a method that may be used when a process terminates or
switches from a running to a waiting state. When processors are assigned to a process,
they keep the process until it is eliminated or reaches a waiting state. When the
processor starts the process execution, it must complete it before executing the other
process, and it may not be interrupted in the middle.

When a non-preemptive process with a high CPU burst time is running, the other
process would have to wait for a long time, and that increases the process average
waiting time in the ready queue. However, there is no overhead in transferring processes
from the ready queue to the CPU under non-preemptive scheduling. The scheduling is
strict because the execution process is not even preempted for a higher priority process.

Example : FCFS Scheduling Algorithm

Advantages

1. It provides a low scheduling overhead.
2. It is a very simple method.
3. It uses less computational resources.
4. It offers high throughput.

Disadvantages

1. It has a poor response time for the process.
2. A machine can freeze up due to bugs.

Note:

When a higher priority process comes in the CPU, the running process in preemptive
scheduling is halted in the middle of its execution. On the other hand, the running
process in non-preemptive scheduling doesn't interrupt in the middle of its execution
and waits until it is completed.

Preemptive scheduling is flexible in processing. On the other side, non-preemptive
scheduling is strict.

10. Define context switching ?


Ans:

What is Context Switching in OS?


Context switching refers to a technique/method used by the OS to switch processes
from one state to another so that they can execute their functions using the CPUs
present in the system. When a switch is performed, the status of the old running
process (its registers and program counter) is saved, and the CPU is assigned to a new
process for the execution of its tasks. While the new process is running, the previous
one must wait in the ready queue; when the old process resumes, its execution begins
at the exact point at which it was stopped. Context switching is what enables a
multitasking OS, where multiple processes share the same CPU to perform various
tasks without requiring additional processors in the system.

Steps of Context Switching

Several steps are involved in the context switching of a process. The diagram given
below represents context switching between two processes, P1 and P2, in case of an
interrupt, I/O need, or the occurrence of a priority-based process in the PCB’s ready
queue.

The process P1 is initially running on the CPU for the execution of its task. At the
very same time, P2, another process, is in its ready state. If an interruption or error has
occurred or if the process needs I/O, the P1 process would switch the state from
running to waiting.
Before the change of the state of the P1 process, context switching helps in saving the
context of the P1 process as registers along with the program counter (to PCB1).
Then it loads the P2 process state from its ready state (of PCB2) to its running state.
Here are the steps taken to switch from P1 to P2:

1. The context switching must save the P1’s state as the program counter and
register to PCB that is in its running state.
2. Now it updates the PCB1 to the process P1 and then moves the process to its
appropriate queue, like the ready queue, waiting queue and I/O queue.
3. Then, another process enters the running state: a new process is selected
from the ready queue, either the next one that needs to be executed or one
that has a higher priority for executing its task.
4. Thus, now we need to update the PCB for the P2 selected process. It involves
switching a given process state from its running state or from any other state,
such as exit, blocked, or suspended.
5. In case the CPU already performs the execution of the P2 process, then we
must get the P2 process’s status so as to resume the execution of it at the very
same time at the same point at which there’s a system interrupt.
In a similar manner, the P2 process is later switched off from the system's CPU to let
process P1 resume its execution. The process P1 is reloaded to the running state from
PCB1 to resume its assigned task at the very same point. Otherwise the saved data is
lost, and when the process executes again, it has to start from the beginning.
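
The save-and-resume idea can be illustrated in user space. The following is a minimal,
hypothetical C sketch using the POSIX <ucontext.h> API; it is only an analogy for what
the kernel does with PCBs. swapcontext() saves the current registers and program counter
into one context object and resumes another context at the exact point where it stopped:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;    /* play the role of two "PCBs"   */

static void task(void)
{
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task state, resume main  */
    printf("task: resumed at the exact point it was stopped\n");
}

int main(void)
{
    static char stack[64 * 1024];        /* private stack for the task    */

    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link          = &main_ctx;  /* resume main when task ends */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   /* "context switch" to the task  */
    printf("main: back, switching to the task again\n");
    swapcontext(&main_ctx, &task_ctx);   /* task resumes after its swap   */
    printf("main: done\n");
    return 0;
}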


UNIT-II

LONG QUESTION & ANSWERS

1. Construct the Gantt chart for Shortest remaining time first (SRTF)scheduling
algorithm for the provided data And also find the Average Waiting Time &
Average Turnaround Time.
Process P1 P2 P3 P4 P5
Arrival time 0 0 2 1 3
CPU Burst Time (in ms) 10 6 12 8 5

Ans: Shortest Remaining Time First (SRTF) Scheduling Algorithm:

Process AT BT CT TAT WT
P1 0 10 29 29 19
P2 0 6 6 6 0
P3 2 12 41 39 27
P4 1 8 19 18 10
P5 3 5 11 8 3

TAT = CT- AT Total TAT = 100 m.s

WT = TAT- BT Total WT = 59 m.s

Gantt Chart :
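
(Reconstructed from the completion times in the table above:)

| P2 | P5 | P4 | P1 | P3 |
0    6    11   19   29   41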

Average Turnaround Time AVG(TAT) = 20.00 m.s

Average Waiting Time AVG (WT) = 11.8 m.s


2.Following is the snapshot of a CPU


Process CPU Burst Arrival Time
P1 10 0
P2 29 1
P3 3 2
P4 7 3
Draw the Gantt chart and calculate the average turnaround time and average waiting
time of the jobs for RR (Round Robin with time quantum=10) scheduling algorithm.

Ans: Round Robin (RR)- Scheduling Algorithm: Time

Quantum = 10

Process AT BT CT TAT WT
P1 0 10 10 10 0
P2 1 29 49 48 19
P3 2 3 23 21 18
P4 3 7 30 27 20
TAT = CT- AT Total TAT =106 m.s

WT = TAT- BT Total WT = 57 m.s

Ready Queue :

P1 P2 P3 P4 P2

Gantt Chart :
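
(Reconstructed from the ready queue and completion times above:)

| P1 | P2 | P3 | P4 | P2 |
0    10   20   23   30   49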

Average Turnaround Time AVG(TAT) = 26.5 m.s

Average Waiting Time AVG (WT) = 14.25 m.s

3. Consider the following set of process, with the length of the CPU burst
given in milliseconds
Process P1 P2 P3 P4 P5
Burst time 10 1 2 1 5
Priority 3 1 3 4 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
What is the turnaround time of each process by applying Priority scheduling algorithm?
(Lower the number higher the priority )

Ans: Priority - Scheduling Algorithm:

Process Priority AT BT CT TAT WT


P1 3 0 10 16 16 6
P2 1 0 1 1 1 0
P3 3 0 2 18 18 16
P4 4 0 1 19 19 18
P5 2 0 5 6 6 1

TAT = CT- AT Total TAT = 60 m.s

WT = TAT- BT Total WT = 41 m.s

Gantt Chart :
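
(Reconstructed from the completion times in the table above:)

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19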

Average Turnaround Time AVG(TAT) = 12.0 m.s

Average Waiting Time AVG (WT) = 8.2 m.s

NOTE:

1. P2 is having Highest Priority so, P2 will be sent first.


2. Next P5 will be sent
3. Next, P1 & P3 are tied (both with priority 3), and both arrived at 0 m.s. The
tie is broken by the order in which they appear, so P1 is scheduled before P3.

4. Following is the snapshot of a CPU


Process CPU Burst Arrival Time
P1 5 3
P2 15 1
P3 6 0
P4 4 2
Draw the Gantt chart and calculate the average turnaround time and average waiting
time of the jobs for FCFS scheduling algorithm.


Ans: First Come First Serve (FCFS) Scheduling Algorithm:

Process AT BT CT TAT WT
P1 3 5 30 27 22
P2 1 15 21 20 5
P3 0 6 6 6 0
P4 2 4 25 23 19

TAT = CT- AT Total TAT = 76 m.s

WT = TAT- BT Total WT =46 m.s

Gantt Chart :
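
(Reconstructed from the arrival order and completion times above:)

| P3 | P2 | P4 | P1 |
0    6    21   25   30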

Average Turnaround Time AVG(TAT) = 19.0 m.s

Average Waiting Time AVG (WT) = 11.5 m.s

5. Consider the following set of process, with the length of the CPU burst
given in milliseconds
Process P1 P2 P3 P4 P5
Burst Time 6 2 8 3 4
Arrival Time 2 5 1 0 4
Draw the Gantt chart and calculate the average turnaround time and average waiting
time of the jobs for SJF (Non Preemptive) scheduling algorithm.

Ans: Shortest Job First (SJF- Non Preemptive) Scheduling Algorithm:

Process AT BT CT TAT WT
P1 2 6 9 7 1

P2 5 2 11 6 4
P3 1 8 23 22 14
P4 0 3 3 3 0
P5 4 4 15 11 7

TAT = CT- AT Total TAT = 49 m.s


WT = TAT- BT Total WT = 26 m.s

Gantt Chart :
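
(Reconstructed from the completion times in the table above:)

| P4 | P1 | P2 | P5 | P3 |
0    3    9    11   15   23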

Average Turnaround Time AVG(TAT) = 9.8 m.s

Average Waiting Time AVG (WT) = 5.2 m.s

6. What are the process states in operating system ? explain with diagram.
Ans:
When a process runs, it modifies the state of the system. The current activity of a
given process determines the state of the process

 The following are the process states in an operating system:


 New State
 Ready State
 Run State
 Terminate State
 Block or Wait State
 Suspend Ready State
 Suspend Wait State


New State

When a program in secondary memory is started for execution, the process is said to
be in a new state.

Ready State

After being loaded into the main memory and ready for execution, a process
transitions from a new to a ready state. The process will now be in the ready state,
waiting for the processor to execute it. Many processes may be in the ready stage in a
multiprogramming environment.

Run State

After being allotted the CPU for execution, a process passes from the ready state to
the run state.

Terminate State

When a process’s execution is finished, it goes from the run state to the terminate
state. The operating system deletes the process control box (or PCB) after it enters the
terminate state.

Block or Wait State

If a process requires an Input/Output operation or a blocked resource during execution,


it changes from run to block or the wait state.
The process advances to the ready state after the I/O operation is completed or the
resource becomes available.

Suspend Ready State

If a process with a higher priority needs to be executed while the main memory is full,
the process goes from ready to suspend ready state. Moving a lower-priority process
from the ready state to the suspend ready state frees up space in the ready state for a
higher-priority process.
Until the main memory becomes available, the process stays in the suspend-ready
state. The process is brought to its ready state when the main memory becomes
accessible.

Suspend Wait State

If a process with a higher priority needs to be executed while the main memory is full,
the process goes from the wait state to the suspend wait state. Moving a lower-priority
process from the wait state to the suspend wait state frees up space in the ready state
for a higher-priority process.
The process gets moved to the suspend-ready state once the resource becomes
accessible. The process is shifted to the ready state once the main memory is available.

Note points :

A process must pass through at least four states.

 A process must go through a minimum of four states to be considered complete.

 The new state, run state, ready state, and terminate state are the four states.
 However, in case a process also requires I/O, the minimum number of states
required is 5.

Only one process can run at a time on a single CPU.

 Any processor can only handle one process at a time.


 When there are n processors in a system, only n processes can run at the same
time.

It is much more preferable to move a given process from its wait state to its suspend
wait state.

 Consider the situation where a high-priority process comes, and the main
memory is full.
 Then there are two options for making space for it. They are:

1. Suspending the processes that have lesser priority than the ready state.
2. Transferring the lower-priority processes from wait to the suspend wait state.
Now, out of these:

 Moving a process from a wait state to a suspend wait state is the superior
option.
 It is because this process is waiting already for a resource that is currently
unavailable.

7. What is thread ? explain their types in detail ?


Ans: Thread :

 Thread is a sequential flow of tasks within a process.


 There can be multiple threads in a single process.
 A thread has three components namely Program counter, register set, and
stack space.
 Thread is also termed as the lightweight process as they share resources and
are faster compared to processes.
 Context switching is faster in threads.
 Threads are of two types:

1. User Level Thread: User-level threads are created and managed by the
user.
2. Kernel Level Thread: Kernel-level threads are created and managed
by the OS.
 Issues related to threading are fork() and exec() system call, thread
cancellation, signal handling, etc.
 Some of the advantages of threading include responsiveness, faster context
switching, faster communication, concurrency, efficient use of the
multiprocessor, etc.


A thread is a sequential flow of tasks within a process. Each thread has its own set of
registers and stack space. There can be multiple threads in a single process having
the same or different functionality. Threads are also termed lightweight processes.

Example: The human body.

A human body has different parts having different functionalities which are working
parallelly ( Eg: Eyes, ears, hands, etc). Similarly in computers, a single process might
have multiple functionalities running parallelly where each functionality can be
considered as a thread.

Threads in OS can be of the same or different types. Threads are used to increase the
performance of the applications.

Each thread has its own program counter, stack, and set of registers. But the threads
of a single process might share the same code and data/file. Threads are also
termed as lightweight processes as they share common resources.

Eg: While playing a movie on a device the audio and video are controlled by
different threads in the background.

The above diagram shows the difference between a single-threaded process and a
multithreaded process and the resources that are shared among threads in a
multithreaded process.

Components of Thread

A thread has the following three components:

1. Program Counter
2. Register Set
3. Stack space

Some of the reasons threads are needed in the operating system are:


 Since threads use the same data and code, the operational cost between
threads is low.


 Creating and terminating a thread is faster compared to creating or
terminating a process.
 Context switching is faster in threads compared to processes.

Why Multithreading?

In multithreading, the idea is to divide a single process into multiple threads instead
of creating a whole new process. Multithreading is done to achieve parallelism.

 Resource Sharing: Threads of a single process share the same resources
such as code and data/files.
 Responsiveness: Program responsiveness enables a program to run even if
part of the program is blocked or executing a lengthy operation. Thus,
increasing the responsiveness to the user.
 Economy: It is more economical to use threads as they share the resources of
a single process. On the other hand, creating processes is expensive.
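
As an illustrative sketch (assuming a POSIX system with pthreads), the following
program creates two threads of one process that update the same global variable,
with a mutex serializing access to the shared data:

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                        /* shared by all threads    */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared data  */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* two threads, one process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                    /* wait for both to finish  */
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);   /* prints 200000     */
    return 0;
}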

Process vs Thread

Process simply means any program in execution while the thread is a segment of
a process. The main differences between process and thread are mentioned below:

1. Processes use more resources and hence are termed heavyweight processes,
whereas threads share resources and hence are termed lightweight processes.
2. Creation and termination times of processes are slower; creation and
termination times of threads are faster.
3. Processes have their own code and data/files, while threads share code and
data/files within a process.
4. Communication between processes is slower; communication between threads
is faster.
5. Context switching in processes is slower; context switching in threads is faster.
6. Processes are independent of each other, whereas threads are interdependent
(i.e., they can read, write, or change another thread's data).
7. Eg: Opening two different browsers creates two processes; opening two tabs in
the same browser creates two threads.

The below diagram shows how the resources are shared in two different
processes vs two threads in a single process.


Types of Thread

1. User Level Thread:

User-level threads are implemented and managed by the user and the kernel is not
aware of it.

 User-level threads are implemented using user-level libraries and the OS
does not recognize these threads.
 User-level thread is faster to create and manage compared to kernel-level
thread.
 Context switching in user-level threads is faster.
 If one user-level thread performs a blocking operation then the entire process
gets blocked. Eg: POSIX threads, Java threads, etc.

2. Kernel level Thread:

Kernel level threads are implemented and managed by the OS.

 Kernel level threads are implemented using system calls and Kernel level
threads are recognized by the OS.
 Kernel-level threads are slower to create and manage compared to user-
level threads.
 Context switching in a kernel-level thread is slower.
 Even if one kernel-level thread performs a blocking operation, it does not
affect other threads. Eg: Windows, Solaris.


The above diagram shows the functioning of user-level threads in user space and
kernel-level threads in kernel space.

Advantages of Threading

 Threads improve the overall performance of a program.
 Threads increase the responsiveness of the program.
 Context switching time in threads is faster.
 Threads share the same memory and resources within a process.
 Communication is faster in threads.
 Threads provide concurrency within a process.
 Enhanced throughput of the system.
 Since different threads can run in parallel, threading enables the utilization of
the multiprocessor architecture to a greater extent and increases efficiency.

8. Explain Inter process communication -IPC .


Ans:
A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-
operating process can be affected by other executing processes. Though one can think
that those processes, which are running independently, will execute very efficiently, in
reality, there are many situations when co-operative nature can be utilized for increasing
computational speed, convenience, and modularity. Inter-process communication
(IPC) is a mechanism that allows processes to communicate with each other and
synchronize their actions. The communication between these processes can be seen as
a method of co-operation between them.

Inter-process communication in OS is the way by which multiple processes can
communicate with each other. Shared memory, message queues, FIFO, etc. are
some of the ways to achieve IPC in an OS.


Interprocess Communication or IPC provides a mechanism to exchange data and
information across multiple processes, which might be on a single computer or
multiple computers connected by a network.


Approaches for Inter Process Communication

Pipes

 It is a half-duplex method (or one-way communication) used for IPC between
two related processes.
 It is like filling a bucket of water from a tap: the writing process fills the
pipe and the reading process retrieves data from it.


Shared Memory

Multiple processes can access a common shared memory. Processes communicate
through this shared memory, where one process makes changes at a time and the
others then view the change. Once the shared region is set up, the data exchange
does not go through the kernel.


Message Passing

 In IPC, this is used by a process for communication and synchronization.
 Processes can communicate without any shared variables; therefore it can be
used in a distributed environment on a network.
 It is slower than the shared memory technique.
 It has two actions: sending (a fixed-size message) and receiving messages.

Message Queues

Messages are stored in a linked list inside the OS kernel, and each message queue is
identified using a "message queue identifier".


Indirect Communication

 Pairs of communicating processes have shared mailboxes.
 A link (uni-directional or bi-directional) is established between pairs of
processes.
 The sender process puts the message in the port or mailbox of a receiver
process, and the receiver process takes out (or deletes) the data from the mailbox.

FIFO

 Used to communicate between two processes that are not related.
 Full-duplex method - Process P1 is able to communicate with Process P2, and
vice versa.

Examples of Inter Process Communication

There are many examples of Inter-Process Communication (IPC) mechanisms used
in modern Operating Systems (OS). Here are some common examples:

Pipes: Pipes are a simple form of IPC used to allow communication between
two processes. A pipe is a unidirectional communication channel that allows
one process to send data to another process. The receiving process can read
the data from the pipe, and the sending process can write data to the pipe.
Pipes are commonly used for shell pipelines, where the output of one
command is piped as input to another command.
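
A minimal sketch of this mechanism in C (assuming a Unix-like system; most
error handling is omitted for brevity):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                /* fd[0] is the read end, fd[1] the write end */

    if (fork() == 0) {       /* child: the reading process */
        close(fd[1]);
        int n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        _exit(0);
    }
    close(fd[0]);            /* parent: the writing process */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}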

Shared Memory: Shared Memory is a type of IPC mechanism that allows two
or more processes to access the same portion of memory. This can be useful
for sharing large amounts of data between processes, such as video or audio
streams. Shared Memory is faster than other IPC mechanisms since data is
directly accessible in memory, but it requires careful management to avoid
synchronization issues.
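
A minimal sketch of shared memory between related processes (assuming a
Unix-like system; here an anonymous mmap() region is visible to both parent
and child):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* one int placed in a region shared by parent and child */
    int *value = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *value = 0;

    if (fork() == 0) {       /* child writes into the shared region */
        *value = 42;
        _exit(0);
    }
    wait(NULL);              /* parent waits, then sees the change  */
    printf("parent read %d from shared memory\n", *value);
    munmap(value, sizeof(int));
    return 0;
}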

Message Queues: Message Queues are another type of IPC mechanism used
to allow processes to send and receive messages. A message queue is a buffer
that stores messages until the receiver is ready to receive them. The sender
can place messages in the queue, and the receiver can retrieve messages from
the queue.

Sockets: Sockets are a type of IPC mechanism used for communication
between processes on different machines over a network. A socket is a
combination of an IP address and a port number, which allows a process to
connect to another process over the network. Sockets are commonly used for
client-server applications, where a server listens on a socket for incoming
connections, and clients connect to the server over the socket.


Advantages of Inter-Process Communication (IPC)


Inter-Process Communication (IPC) allows different processes running on the
same or different systems to communicate with each other. There are several
advantages of using IPC, which are:

Data Sharing: IPC allows processes to share data with each other. This can be
useful in situations where one process needs to access data that is held by
another process.

Resource Sharing: IPC allows processes to share resources such as memory,
files, and devices. This can help reduce the amount of memory or disk space
that is required by a system.

Synchronization: IPC allows processes to synchronize their activities. For
example, one process may need to wait for another process to complete its task
before it can continue.

Modularity: IPC allows processes to be designed in a modular way, with each
process performing a specific task. This can make it easier to develop and
maintain complex systems.

Scalability: IPC allows processes to be distributed across multiple systems,
which can help improve performance and scalability.

Overall, IPC is a powerful tool for building complex, distributed systems that require
communication and coordination between different processes.

Disadvantages of Inter-Process Communication (IPC)

Complexity: IPC can add complexity to the design and implementation of
software systems, as it requires careful coordination and synchronization
between processes. This can lead to increased development time and
maintenance costs.

Overhead: IPC can introduce additional overhead, such as the need to serialize
and deserialize data, and the need to synchronize access to shared resources.
This can impact the performance of the system.

Scalability: IPC can also limit the scalability of a system, as it may be difficult
to manage and coordinate large numbers of processes communicating with
each other.

Security: IPC can introduce security vulnerabilities, as it creates additional
attack surfaces for malicious actors to exploit. For example, a malicious
process could attempt to gain unauthorized access to shared resources or data.

Compatibility: IPC can also create compatibility issues between different
systems, as different operating systems and programming languages may have
different IPC mechanisms and APIs. This can make it difficult to develop
cross-platform applications that work seamlessly across different
environments.


9. What is Multiple -Processor Scheduling ? explain.


Ans:

Multiple Processors Scheduling in Operating System

Multiple processor scheduling or multiprocessor scheduling focuses on designing the
system's scheduling function, which consists of more than one processor. Multiple
CPUs share the load (load sharing) in multiprocessor scheduling so that various
processes run simultaneously. In general, multiprocessor scheduling is complex as
compared to single processor scheduling. In multiprocessor scheduling, there are
many processors, and they are identical, and we can run any process at any time.

The multiple CPUs in the system are in close communication, which shares a common
bus, memory, and other peripheral devices. So we can say that the system is tightly
coupled. These systems are used when we want to process a bulk amount of data, and
these systems are mainly used in satellite, weather forecasting, etc.

There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling. We can use any processor available to
run any process in the queue.

Multiprocessor systems may be heterogeneous (different kinds of CPUs)
or homogeneous (the same CPU). There may be special scheduling constraints, such
as devices connected via a private bus to only one CPU.

There is no policy or rule which can be declared as the best scheduling solution to a
system with a single processor. Similarly, there is no best scheduling solution for a
system with multiple processors as well.

Approaches to Multiple Processor Scheduling

There are two approaches to multiple processor scheduling in the operating system:
Symmetric Multiprocessing and Asymmetric Multiprocessing.

1. Symmetric Multiprocessing: It is used where each processor is self-scheduling.
All processes may be in a common ready queue, or each processor may have its
private queue for ready processes. The scheduling proceeds further by having
the scheduler for each processor examine the ready queue and select a process
to execute.
2. Asymmetric Multiprocessing: It is used when all the scheduling decisions and
I/O processing are handled by a single processor called the Master Server. The
other processors execute only the user code. This is simple and reduces the
need for data sharing, and this entire scenario is called Asymmetric
Multiprocessing.

Processor Affinity

Processor Affinity means a process has an affinity for the processor on which it is
currently running. When a process runs on a specific processor, there are certain effects
on the cache memory. The data most recently accessed by the process populate the
cache for the processor. As a result, successive memory access by the process is often
satisfied in the cache memory.

Now, suppose the process migrates to another processor. In that case, the contents of
the cache memory must be invalidated for the first processor, and the cache for the
second processor must be repopulated. Because of the high cost of invalidating and
repopulating caches, most SMP(symmetric multiprocessing) systems try to avoid
migrating processes from one processor to another and keep a process running on the
same processor. This is known as processor affinity. There are two types of processor
affinity, such as:

1. Soft Affinity: When an operating system has a policy of keeping a process
running on the same processor but not guaranteeing it will do so, this situation
is called soft affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors
on which it may run. Some Linux systems implement soft affinity and provide
system calls like sched_setaffinity() that also support hard affinity.
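
A minimal, Linux-specific sketch of hard affinity using the sched_setaffinity()
call mentioned above (it pins the calling process to CPU 0; _GNU_SOURCE is
needed for the CPU_* macros):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);                 /* start with an empty CPU set */
    CPU_SET(0, &mask);               /* allow only CPU 0            */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("process is now pinned to CPU 0\n");
    return 0;
}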

Load Balancing

Load Balancing is the phenomenon that keeps the workload evenly distributed across
all processors in an SMP system. Load balancing is necessary only on systems where
each processor has its own private queue of a process that is eligible to execute.

Load balancing is unnecessary on systems with a common run queue, because an idle
processor immediately extracts a runnable process from that queue. On SMP (symmetric
multiprocessing) systems with private queues, it is important to keep the workload
balanced among all processors to fully utilize the benefit of having more than one
processor; otherwise, one or more processors will sit idle while other processors have
high workloads, along with lists of processes awaiting the CPU. There are two general
approaches to load balancing:


1. Push Migration: In push migration, a task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load on each
processor by moving the processes from overloaded to idle or less busy
processors.
2. Pull Migration: Pull migration occurs when an idle processor pulls a waiting
task from a busy processor for its execution.

Multi-core Processors

In multi-core processors, multiple processor cores are placed on the same physical chip.
Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor. SMP systems that use multi- core
processors are faster and consume less power than systems in which each processor has
its own physical chip.

However, multi-core processors may complicate the scheduling problems. When the
processor accesses memory, it spends a significant amount of time waiting for the data
to become available. This situation is called a Memory stall. It occurs for various
reasons, such as cache miss, which is accessing the data that is not in the cache memory.

In such cases, the processor can spend up to 50% of its time waiting for data to become
available from memory. To solve this problem, recent hardware designs have
implemented multithreaded processor cores in which two or more hardware threads are
assigned to each core. Therefore if one thread stalls while waiting for the memory, the
core can switch to another thread. There are two ways to multithread a processor:

1. Coarse-Grained Multithreading: In coarse-grained multithreading, a thread
executes on a processor until a long-latency event such as a memory stall
occurs. Because of the delay caused by the long-latency event, the processor
must switch to another thread to begin execution. The cost of switching
between threads is high, as the instruction pipeline must be terminated
before the other thread can begin execution on the processor core. Once this
new thread begins execution, it begins filling the pipeline with its instructions.
2. Fine-Grained Multithreading: This multithreading switches between threads
at a much finer level, mainly at the boundary of an instruction cycle. The
architectural design of fine-grained systems includes logic for thread
switching, and as a result, the cost of switching between threads is small.

Symmetric Multiprocessor

Symmetric Multiprocessors (SMP) is the third model. In this model, there is one copy
of the OS in memory, but any central processing unit can run it. When a system call is
made, the central processing unit on which the system call was made traps to the
kernel and processes that system call. This model balances processes and memory
dynamically. This approach uses Symmetric Multiprocessing, where each processor is
self-scheduling.

The scheduling proceeds further by having the scheduler for each processor examine
the ready queue and select a process to execute. In this system, this is possible that all
the process may be in a common ready queue or each processor may have its private
queue for the ready process. There are mainly three sources of contention that can be
found in a multiprocessor operating system.

o Locking system: As the resources are shared in the multiprocessor system,
there is a need to protect these resources for safe access among the multiple
processors. The main purpose of the locking scheme is to serialize access to
the resources by the multiple processors.
o Shared data: When multiple processors access the same data at the same
time, there may be a chance of inconsistency of data, so to protect this, we
have to use some protocols or locking schemes.
o Cache coherence: It is the shared resource data that is stored in multiple local
caches. Suppose two clients have a cached copy of memory and one client
changes the memory block. The other client could be left with an invalid cache
without notification of the change, so this conflict can be resolved by
maintaining a coherent view of the data.

Master-Slave Multiprocessor

In this multiprocessor model, there is a single data structure that keeps track of the ready
processes. In this model, one central processing unit works as a master and another as
a slave. All the processors are handled by a single processor, which is called the master
server.


The master server runs the operating system process, and the slave server runs the user
processes. The memory and input-output devices are shared among all the processors,
and all the processors are connected to a common bus. This system is simple and
reduces data sharing, so this system is called Asymmetric multiprocessing.

Virtualization and Threading

In this type of multiple processor scheduling, even a single CPU system acts as a
multiple processor system. In a system with virtualization, the virtualization presents
one or more virtual CPUs to each of the virtual machines running on the system. It then
schedules the use of physical CPUs among the virtual machines.

o Most virtualized environments have one host operating system and many guest
operating systems, and the host operating system creates and manages the
virtual machines.
o Each virtual machine has a guest operating system installed, and applications
run within that guest.
o Each guest operating system may be assigned for specific use cases,
applications, or users, including time-sharing or real-time operation.
o Any guest operating-system scheduling algorithm that assumes a certain
amount of progress in a given amount of time will be negatively impacted by
the virtualization.
o A time-sharing operating system tries to allot 100 milliseconds to each time
slice to give users a reasonable response time. A given 100 millisecond time
slice may take much more than 100 milliseconds of virtual CPU time.
Depending on how busy the system is, the time slice may take a second or more,
which results in a very poor response time for users logged into that virtual
machine.
o The net effect of such scheduling layering is that individual virtualized
operating systems receive only a portion of the available CPU cycles, even
though they believe they are receiving all cycles and scheduling all of those
cycles. The time-of-day clocks in virtual machines are often incorrect because
timers take longer to trigger than they would on dedicated CPUs.
o Virtualizations can thus undo the good scheduling algorithm efforts of the
operating systems within virtual machines.

10. Explain System call interface for process management.


Ans:

Process Management (control) System Calls:

System call provides an interface between user program and operating system.
(Or)
System call provides an interface between user program and the kernel .
The structure of system call is as follows −


When the user wants to give an instruction to the OS then it will do it through system
calls. Or a user program can access the kernel which is a part of the OS through system
calls.
It is a programmatic way in which a computer program requests a service from the
kernel of the operating system.

Types of system calls

The different system calls are as follows −


System calls for Process management
System calls for File management
System calls for Directory management

System calls for Process management

The system call used to create a new process (a duplicate of the calling process) is
fork. The duplicate process receives a copy of the parent's data, file descriptors, and
registers. The original process is called the parent process and the duplicate is called
the child process.
The fork call returns a value, which is zero in the child and equal to the child's PID
(Process Identifier) in the parent; this return value is how the child and parent
processes are distinguished. A system call like exit requests the service of
terminating a process, while loading a new program into the process image (replacing
the original image) requires execution of exec.

Example

Process management system calls in Linux.


1. fork() − For creating a duplicate process from the parent process.
2. wait() − Processes are supposed to wait for other processes to complete their
work.
3. exec() − Loads the selected program into the memory.
4. exit() − Terminates the process.

The pictorial representation of process management system calls is as follows −


fork() − A parent process always uses a fork for creating a new child process. The child
process is generally called a copy of the parent. After execution of fork, both parent and
child execute the same program in separate processes.
exec() − This function is used to replace the program executed by a process. The child
sometimes may use exec after a fork for replacing the process memory space with a new
program executable making the child execute a different program than the parent.
exit() − This function is used to terminate the process.
wait() − The parent uses a wait function to suspend execution till a child terminates.
Using wait the parent can obtain the exit status of a terminated child.
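
A minimal sketch tying the four calls together (assuming a Unix-like system with
/bin/ls available): the parent forks a child, the child replaces its image with ls
via exec, and the parent waits for the child's exit status:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a duplicate (child) process */

    if (pid == 0) {                     /* fork() returns 0 in the child      */
        execl("/bin/ls", "ls", "-l", (char *)NULL);  /* replace the program   */
        perror("execl");                /* reached only if exec failed        */
        exit(1);
    }
    int status;
    wait(&status);                      /* parent suspends until child ends   */
    printf("parent: child %d exited with status %d\n",
           (int)pid, WEXITSTATUS(status));
    return 0;
}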

….…………….. ……UNIT-II END ……………………………


OPERATING SYSTEMS
UNIT-3
Short Questions & Answers:
1. What is a critical section? Give an example.
2. What are the necessary and sufficient conditions to occur deadlock?
3. What is deadlock? What is starvation? How do they differ from each other?
4. What is Race Condition?
5. What is the importance of process synchronization?
6. What are the disadvantages of semaphore?
7. Define mutual exclusion?
8. Differentiate Unsafe state and Deadlocked State.
9. What is RAG?
10. Write short note on Monitors.

1. What is a critical section? Give an example.


Ans: In process synchronization, a critical section is a section of code that accesses shared
resources such as variables or data structures, and which must be executed by only one process
at a time to avoid race conditions and other synchronization-related issues.
A critical section can be any section of code where shared resources are accessed, and it
typically consists of two parts: the entry section and the exit section. The entry section is where
a process requests access to the critical section, and the exit section is where it releases the
resources and exits the critical section.

2. What are the necessary and sufficient conditions to occur deadlock?


Ans: Necessary Conditions of Deadlock
There are four different conditions that result in Deadlock. These four conditions are
also known as Coffman conditions.
1. Mutual Exclusion
2. Hold and Wait
3. No preemption


4. Circular Wait
Deadlock will happen if all the above four conditions happen simultaneously.
3. What is deadlock? What is starvation? How do they differ from each other?
Ans: A deadlock is a situation where a set of processes are blocked because each process
is holding a resource and waiting for another resource acquired by some other process.
Example : when two trains are coming toward each other on the same track and
there is only one track, none of the trains can move once they are in front of each other.

Starvation : It is a problem when the low-priority process gets jammed for a long duration of
time because of high-priority requests being executed. Starvation happens usually when the
process is delayed for an infinite period of duration.

Difference between Deadlock & Starvation :


Deadlock happens when every process holds a resource and waits for another process to hold
another resource. Starvation happens when a low priority program requests a system resource but
cannot run because a higher priority program has been employing that resource for a long time.
4. What is Race Condition ?
Ans: When several process access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place is
called Race Condition.
(Or) A race condition is an undesirable situation that occurs when a device or system
attempts to perform two or more operations at the same time, but because of the nature of the
device or system, the operations must be done in the proper sequence to be done correctly.
(Or) A race condition is a situation that may occur inside a critical section. This happens
when the result of multiple thread execution in the critical section differs according to the
order in which the threads execute.


5. What is the importance of Process Synchronization?


Ans: The main purpose of synchronization is the sharing of resources without interference
using mutual exclusion. The other purpose is the coordination of the process interactions in
an operating system. Semaphores and monitors are the most powerful and most commonly
used mechanisms to solve synchronization problems.
Process Synchronization: Coordinating the execution of processes so that no two processes
access the same shared resources and data is known as process synchronization. It is
necessary for a multi-process system where several processes coexist and concurrently
attempt to access the same shared resource or piece of data.

6. What are the disadvantages of semaphore?


Ans: Semaphores are prone to programming errors. Due to the complexity of semaphore
programming, mutual exclusion may not be achieved. One of the most significant limitations
of semaphores is priority inversion. Low priority processes get into a critical section, and
high priority processes keep waiting.
The main disadvantage of the semaphore is that it requires busy waiting. Busy waiting
wastes CPU cycles that some other process might be able to use productively. This type of
semaphore is also called a spinlock because the process spins while waiting for the lock.
7. Define mutual exclusion?
Ans: Mutual exclusion is a property of process synchronization that states that “no two
processes can exist in the critical section at any given point of time”.
The term was first coined by Dijkstra. Any process synchronization technique being used
must satisfy the property of mutual exclusion, without which it would not be possible to get
rid of a race condition.

8. Differentiate unsafe state and Deadlocked State.


Ans: Unsafe State - If Operating System is not able to prevent Processes from requesting
resources which can also lead to Deadlock, then the System is said to be in an Unsafe State.
Unsafe State does not necessarily cause deadlock it may or may not causes deadlock.
Deadlock means something specific: there are two (or more) processes that are currently blocked
waiting for each other.
In an unsafe state you can also be in a situation where there might be a deadlock sometime in the
future, but it hasn't happened yet because one or both of the processes haven't actually started
waiting.
Deadlock can be similar to a race condition that happens intermittently: the unsafe
code only triggers the deadlock when a particular sequence of events lines up. That
sequence could happen at any time; it is an accident waiting to happen.

9. What is RAG?
Ans:
Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance.
It is a directed graph that represents the processes in the system, the resources available,


and the relationships between them. A process node in the RAG has two types of edges,
request edges, and assignment edges. A request edge represents a request by a process
for a resource, while an assignment edge represents the assignment of a resource to a
process.

10.Write short note on Monitors.


Ans: The monitor is one of the ways to achieve Process synchronization. The monitor is
supported by programming languages to achieve mutual exclusion between processes.
1. It is the collection of condition variables and procedures combined together in a special
kind of module or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor
but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.

Long Questions & Answers:

1. What is Semaphore? Give the implementation of Bounded Buffer / Producer
Consumer Problem using Semaphore.


2. Explain Dining philosopher problem using Monitors.
3. Explain Critical Section Problem using Peterson’s Solution.
4. Explain Critical Section problem and apply Hardware methods of Solution
to the problem.
5. Explain the Bankers Algorithm for Deadlock Prevention.
6. Consider the following snapshot of a system:

   Process    Allocation    Max        Available
              A B C D       A B C D    A B C D
   P0         0 0 1 2       0 0 1 2    2 1 0 0
   P1         2 0 0 0       2 7 5 0
   P2         0 0 3 4       6 6 5 6
   P3         2 3 4 5       4 3 5 6
   P4         0 3 3 2       0 6 5 2

   Answer the following questions using the banker’s algorithm:
   a. What is the content of the matrix Need?
   b. Is the system in a safe state? Why?
   c. Is the system currently deadlocked? Why or why not? Which process,
   if any, is or may become deadlocked if the whole request is granted
   immediately? [2+3+2+3]
7. Explain the Resource Allocation Graph with an example.
8. Explain Readers-Writers Problem with an example.
9. a) Explain deadlock detection algorithm with an example.
   b) Explain the technique used to prevent the deadlock.
10. a) Construct solution to Sleeping Barber problem by using semaphores.
    b) Discuss the drawbacks of Semaphores.

1. What is Semaphore? Give the implementation of Bounded Buffer / Producer
Consumer Problem using Semaphore.
Ans:
A semaphore S is an integer variable that can be accessed only through two standard
/atomic operations: wait() and signal().

The wait() operation reduces the value of the semaphore by 1, and the signal()
operation increases its value by 1.

wait(S)
{
while(S<=0); // busy waiting
S--;
}

signal(S)

{
S++;
}
Semaphores are of two types:
1. Binary Semaphore – This is similar to mutex lock but not the same thing. It can have only
two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of
critical section problem with multiple processes.

2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.

Bounded Buffer / Producer Consumer problem:

Problem Statement :

We have a buffer of fixed size. A producer can produce an item and can place in the buffer. A
consumer can pick items and can consume them. We need to ensure that when a producer is
placing an item in the buffer, then at the same time consumer should not consume any item. In
this problem, buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty. “Full” keeps track
of number of items in the buffer at any given time and “Empty” keeps track of number of
unoccupied slots.
Initialization of Semaphores:

mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially

Solution for Producer:

do
{
//produce an item
wait(empty);
wait(mutex);
//place in buffer
signal(mutex);
signal(full);

}while(true);

When the producer produces an item, the value of "empty" is reduced by 1 because one
slot will now be filled. The value of mutex is also reduced, to prevent the consumer
from accessing the buffer. After the producer has placed the item, the value of "full"
is increased by 1, and the value of mutex is increased by 1 because the producer's task
is complete and the consumer can now access the buffer.

Solution for Consumer :

do
{
wait(full);
wait(mutex);
// consume item from buffer
signal(mutex);
signal(empty);
}while(true);

As the consumer removes an item from the buffer, the value of "full" is reduced by 1
and the value of mutex is also reduced so that the producer cannot access the buffer at
this moment. Once the consumer has consumed the item, the value of "empty" is increased
by 1. The value of mutex is also increased so that the producer can access the buffer now.

2. Explain Dining philosopher problem using Monitors.


Ans:
What is Dining Philosophers Problem?
The story behind the Dining Philosophers problem is that it represents a scenario where a
group of philosophers sit around a round table and spend their lives either thinking or
eating spaghetti. For the sake of simplicity, consider that there are five philosophers
sitting at the table. The circular table has five chopsticks. To eat, each philosopher needs
two chopsticks: the one on his left side and the one on his right. A philosopher may pick up
only one chopstick at a time, and he can't pick up a chopstick that is already in the hand of
a philosopher sitting next to him. When a philosopher has both chopsticks at the same time,
he starts eating without releasing them. The problem is to design an algorithm for allocating
these limited resources (chopsticks) among the processes (philosophers) without causing
deadlock or starvation.


There exist some algorithms that can solve the Dining Philosophers problem but may still
lead to deadlock. Also, a deadlock-free solution is not necessarily starvation-free.
Semaphores can be used to solve the Dining Philosophers problem, but they can result in
deadlock. Thus, to avoid these circumstances, we use Monitors with condition variables.
Dining Philosophers Solution using Monitors
Monitors are used because they give a deadlock free solution to the Dining Philosophers problem.
It is used to gain access over all the state variables and condition variables. After implying
monitors, it imposes a restriction that a philosopher may pickup his chopsticks only if both of
them are available at the same time.
To code the solution, we need to distinguish among three states in which a philosopher may be found.
 THINKING
 HUNGRY
 EATING
Example
Here is implementation of the Dining Philosophers problem using Monitors –

monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup(int i)
{
state[i] = HUNGRY;
test(i);
if (state[i] != EATING)
{
self[i].wait();
}
}
void putdown(int i)
{
state[i] = THINKING;
test((i + 4) % 5);
test((i + 1) % 5);
}

void test(int i)
{
if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY && state[(i + 1) % 5] != EATING)
{
state[i] = EATING;
self[i].signal();
}
}
initialization code()
{
for(int i=0;i<5;i++)
state[i] = THINKING;
}
}
DiningPhilosophers dp;

Explanation of the above code :

In this implementation, the distribution of chopsticks is controlled by the monitor
DiningPhilosophers. Before starting to eat, each philosopher must invoke the pickup()
operation. It indicates that the philosopher is hungry, meaning that the process wants to
use the resource. The state is set to EATING in test() only if the philosopher's left and
right neighbors are not eating. If the philosopher is unable to eat, the wait() operation
is invoked. After the successful completion of the operation, the philosopher may now eat.
When finished, the philosopher invokes the putdown() operation. After putting down the
chopsticks, he checks on his neighbors: if a neighbor is HUNGRY and that neighbor's own
neighbors are not EATING, signal() is invoked to let him eat.
Thus a philosopher must invoke the pickup() and putdown() operations in this order, which
ensures that no two neighbors are eating at the same time, thereby achieving mutual
exclusion and preventing deadlock. However, there is still a possibility that one of the
philosophers may starve to death.

3. Explain Critical Section Problem using Peterson’s Solution.


Ans:
Critical Section is a code segment that can be accessed by only one process at a time.

Syntax for Critical Section:

do
{
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

Any solution to the critical section problem must satisfy three requirements:
1. Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
2. Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their
remainder section can participate in deciding which will enter in the critical section next,
and the selection can not be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem, restricted to two processes. The two processes share two variables: an integer turn, indicating whose turn it is to enter the critical section, and a boolean array flag[2], where flag[i] = true indicates that process i is ready to enter its critical section.
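A minimal C sketch of the algorithm is given below. The function names enter_region()/leave_region() are illustrative, and on modern hardware the plain loads and stores would additionally need atomic operations or memory barriers to prevent reordering:

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int turn = 0;                   /* whose turn it is to yield */

void enter_region(int i)        /* i is 0 or 1 */
{
    int other = 1 - i;
    flag[i] = true;             /* announce interest */
    turn = other;               /* give priority to the other process */
    while (flag[other] && turn == other)
        ;                       /* busy-wait until it is safe to enter */
}

void leave_region(int i)
{
    flag[i] = false;            /* no longer interested */
}

This sketch satisfies mutual exclusion, progress, and bounded waiting for the two-process case.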


4. Explain Critical Section problem and apply Hardware methods of Solution to the
problem.
Ans:
Synchronization Hardware

Synchronization hardware refers to hardware-based solutions for the critical section problem: special hardware instructions that can be used to resolve the critical section problem effectively. Hardware solutions are often simpler and also improve the efficiency of the system.

The hardware-based solution to the critical section problem is based on a simple tool, i.e. a lock. The solution implies that before entering its critical section a process must acquire a lock, and it must release the lock when it exits its critical section. The use of a lock also prevents race conditions.

Conditions to resolve the critical section problem.

1. Mutual Exclusion: The hardware instruction must ensure that at any point in time only one process can be in its critical section.
2. Bounded Waiting: A process waiting to enter its critical section must not wait indefinitely long to enter it.
3. Progress: A process not interested in entering its critical section must not block other processes from entering their critical sections.

There are three algorithms in the hardware approach to solving the process synchronization problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware instructions in many operating systems help in the effective solution of critical section problems.
1. Test and Set:
Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) is atomic: it returns the old value of lock and sets lock to true. The first process enters the critical section at once, as TestAndSet(lock) returns false and the process breaks out of the while loop. The other processes cannot enter now, as lock is set to true and so their while condition continues to be true; mutual exclusion is ensured. Once the first process gets out of the critical section, lock is changed back to false, so the other processes can enter one by one; progress is also ensured. However, after the first process, any process can go in. There is no queue maintained, so any new process that finds the lock to be false can enter. So bounded waiting is not ensured. (A sketch follows.)
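A minimal C sketch of the idea, assuming a test_and_set() that the hardware executes atomically (it is written out in C here only to show its effect):

#include <stdbool.h>

volatile bool lock = false;                /* shared; false means free */

bool test_and_set(volatile bool *target)   /* atomic in hardware */
{
    bool old = *target;
    *target = true;
    return old;                            /* returns the previous value */
}

void acquire(void)
{
    while (test_and_set(&lock))
        ;                                  /* spin while lock was already true */
}

void release(void)
{
    lock = false;                          /* next process to test the lock enters */
}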
2. Swap:
The Swap algorithm is much like the TestAndSet algorithm. Instead of directly setting lock to true inside the atomic instruction, a per-process variable key is set to true and then atomically swapped with lock. The first process sets key = true; in while(key), the swap takes place, giving lock = true and key = false, so the while loop breaks and the first process enters the critical section. When another process tries to enter, its key = true, and since lock is already true the swap leaves both lock and key true, so while(key) keeps executing and that process cannot enter the critical section. Therefore mutual exclusion is ensured. On leaving the critical section, lock is changed back to false, so any process finding it gets to enter the critical section; progress is ensured. However, bounded waiting is again not ensured, for the very same reason. (A sketch follows.)
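A sketch under the same assumption that swap() executes atomically in hardware; key is a per-process variable:

#include <stdbool.h>

volatile bool lock = false;              /* shared */

void swap(volatile bool *a, bool *b)     /* atomic in hardware */
{
    bool tmp = *a;
    *a = *b;
    *b = tmp;
}

void acquire(void)
{
    bool key = true;                     /* local to each process */
    while (key)
        swap(&lock, &key);               /* key gets old lock; lock becomes true */
}

void release(void)
{
    lock = false;
}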
3. Unlock and Lock :
Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it adds another
value, waiting[i], for each process which checks whether or not a process has been waiting. A
ready queue is maintained with respect to the process in the critical section. All the processes
coming in next are added to the ready queue with respect to their process number, not
necessarily sequentially. Once the ith process gets out of the critical section, it does not turn
lock to false so that any process can avail the critical section now, which was the problem with
the previous algorithms. Instead, it checks if there is any process waiting in the queue. The
queue is taken to be a circular queue. j is considered to be the next process in line and the
while loop checks from jth process to the last process and again from 0 to (i-1)th process if
there is any process waiting to access the critical section. If there is no process waiting then the
lock value is changed to false and any process which comes next can enter the critical section.
If there is, then that process’ waiting value is turned to false, so that the first while loop becomes false and it can enter the critical section. This ensures bounded waiting, and so the problem of process synchronization can be solved through this algorithm. (A sketch follows.)
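A sketch of this bounded-waiting version, again assuming an atomic test_and_set(); N is an illustrative process count:

#include <stdbool.h>
#define N 5                              /* number of processes (assumed) */

volatile bool lock = false;
volatile bool waiting[N];                /* waiting[i]: process i wants the CS */

bool test_and_set(volatile bool *target) /* atomic in hardware */
{
    bool old = *target;
    *target = true;
    return old;
}

void acquire(int i)
{
    waiting[i] = true;
    bool key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);       /* spin until acquired or handed over */
    waiting[i] = false;
}

void release(int i)
{
    int j = (i + 1) % N;
    while (j != i && !waiting[j])        /* scan circularly for a waiting process */
        j = (j + 1) % N;
    if (j == i)
        lock = false;                    /* nobody waiting: free the lock */
    else
        waiting[j] = false;              /* hand the CS directly to process j */
}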


5. Explain the Banker's Algorithm for Deadlock Prevention

 Ans: A Deadlock is a situation where each of the computer process waits for a resource
which is being assigned to some another process.
 A deadlock happens in operating system when two or more processes need some resource
to complete their execution that is held by the other process.
 Necessary conditions for Deadlocks:
1. Mutual Exclusion: A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same resource at the same time.
2. Hold and Wait: A process waits for some resources while holding another resource at
the same time.
3. No preemption: A resource, once allocated to a process, cannot be taken away from it forcibly; it is released only voluntarily by the process holding it.
4. Circular Wait: All the processes must be waiting for the resources in a cyclic manner
so that the last process is waiting for the resource which is being held by the first process.
 It is very important to prevent a deadlock before it can occur. So, the system checks each
transaction before it is executed to make sure it does not lead to deadlock. If there is even
a slight chance that a transaction may lead to deadlock in the future, it is never allowed to
execute.
 It can be done using Banker’s Algorithm.
 Banker’s Algorithm:
1. It is used to avoid deadlock and allocate resources safely to each process in the computer
system.
2. The 'S-State' examines all possible tests or activities before deciding whether the allocation
should be allowed to each process. It also helps the operating system to successfully share
the resources between all the processes.
3. The banker's algorithm is so named because it checks, much as a banker checks whether a loan amount can be safely sanctioned, whether resources can be allocated without risk: the system simulates the allocation before actually granting it.
4. When working with the banker's algorithm, the system needs to know three things:
 How much each process can request for each resource in the system. It is denoted by the
[MAX] request.
 How much each process is currently holding each resource in a system. It is denoted by the
[ALLOCATED] resource.
 It represents the number of each resource currently available in the system. It is denoted by
the [AVAILABLE] resource.
 Following are the important data structures terms applied in the banker's algorithm as
follows: Suppose n is the number of processes, and m is the number of each type of resource
used in a computer system.
1. Available: It is an array of length 'm' that defines the number of available resources of each type in the system. When Available[j] = K, 'K' instances of resource type R[j] are available in the system.
2. Max: It is an [n x m] matrix that defines the maximum demand of each process; Max[i][j] = K means process P[i] may request at most K instances of resource type R[j].
3. Allocation: It is an [n x m] matrix that indicates the resources currently allocated to each process in the system. When Allocation[i][j] = K, process P[i] is currently allocated K instances of resource type R[j].
4. Need: It is an [n x m] matrix representing the number of remaining resources for each process. When Need[i][j] = K, process P[i] may require K more instances of resource type R[j] to complete its assigned work. Need[i][j] = Max[i][j] - Allocation[i][j].
5. Finish: It is a vector of length n. It holds a Boolean value (true/false) indicating whether each process has been allocated its requested resources and has released all of them after finishing its task.
 The Banker's Algorithm is the combination of the safety algorithm and the resource request
algorithm to control the processes and avoid deadlock in a system:
 Safety Algorithm: It is the first part, used to check whether or not a system is in a safe state, i.e. whether a safe sequence exists:
1. Let Work and Finish be vectors of length m and n respectively. Initialize: Work = Available; Finish[i] = false for i = 0, 1, 2, ..., n - 1.
2. Find an index i such that Finish[i] == false and Need[i] <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation[i] (the resources released when P[i] finishes); Finish[i] = true. Go to step 2.
4. If Finish[i] == true for all i, the system is in a safe state.
A sketch of this algorithm appears below.
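A minimal C sketch of the safety algorithm, with illustrative sizes n = 5 processes and m = 4 resource types (matching question 6 below):

#include <stdbool.h>
#include <stdio.h>

#define N 5     /* processes */
#define M 4     /* resource types */

bool is_safe(int available[M], int need[N][M], int allocation[N][M])
{
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)              /* step 1: Work = Available */
        work[j] = available[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {        /* step 2: find a runnable Pi */
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (!ok) continue;
            for (int j = 0; j < M; j++)      /* step 3: Work += Allocation[i] */
                work[j] += allocation[i][j];
            finish[i] = true;
            printf("P%d ", i);               /* prints one safe sequence */
            progressed = true;
            done++;
        }
        if (!progressed)                     /* step 4: no candidate left */
            return false;                    /* some Finish[i] false: unsafe */
    }
    return true;                             /* all Finish[i] true: safe */
}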

6. Consider the following snapshot of a system:


Process    Allocation    Max        Available
           A B C D       A B C D    A B C D
P0         0 0 1 2       0 0 1 2    2 1 0 0
P1         2 0 0 0       2 7 5 0
P2         0 0 3 4       6 6 5 6
P3         2 3 4 5       4 3 5 6
P4         0 3 3 2       0 6 5 2
Answer the following questions using the banker’s algorithm:
a).What is the content of the matrix Need?
b).Is the system in a safe state? Why?

c). Is the system currently deadlocked? Why or why not? Which process, if any, may become deadlocked if the whole request is granted immediately? [2+3+2+3]

Ans:

a).What is the content of the matrix Need?


Ans: The Need matrix is calculated by subtracting the Allocation matrix from the Max matrix, i.e. Need = Max - Allocation:
P0 = 0012 - 0012 = 0000
P1 = 2750 - 2000 = 0750
P2 = 6656 - 0034 = 6622
P3 = 4356 - 2345 = 2011
P4 = 0652 - 0332 = 0320
 Now, the NEED matrix is:

Process    Need (A B C D)
P0         0 0 0 0
P1         0 7 5 0
P2         6 6 2 2
P3         2 0 1 1
P4         0 3 2 0

b). Is the system in a safe state? Why?

Ans: To check whether the system is in a safe state:
The Available vector is [2 1 0 0].
A process, after it has finished execution, releases all the resources it holds, so after each completed process Work = Work + Allocation.
We need to find a safe sequence such that each selected process satisfies Need <= Work (component-wise).
1. P0: (0 0 0 0) <= (2 1 0 0), so Work = (2 1 0 0) + (0 0 1 2) = (2 1 1 2)
2. P1: (0 7 5 0) is not <= (2 1 1 2), so P1 must wait.
3. P2: (6 6 2 2) is not <= (2 1 1 2), so P2 must wait.
4. P3: (2 0 1 1) <= (2 1 1 2), so Work = (2 1 1 2) + (2 3 4 5) = (4 4 5 7)
5. P4: (0 3 2 0) <= (4 4 5 7), so Work = (4 4 5 7) + (0 3 3 2) = (4 7 8 9)
6. P1: (0 7 5 0) <= (4 7 8 9), so Work = (4 7 8 9) + (2 0 0 0) = (6 7 8 9)
7. P2: (6 6 2 2) <= (6 7 8 9), so Work = (6 7 8 9) + (0 0 3 4) = (6 7 11 13)
 Hence the system is in a safe state, with safe sequence P0, P3, P4, P1, P2.

c). Is the system currently deadlocked? Why or why not? Which process, if any, may become deadlocked if the whole request is granted immediately? [2+3+2+3]
Ans:
 The system is not currently deadlocked, because a safe sequence exists, which guarantees that every process can run to completion.

 If the whole request is granted immediately, the process that may become deadlocked is P1, since its need (0 7 5 0) cannot be satisfied from the currently available resources.


7. Explain the Resource Allocation Graph with an example

Ans:

The Resource Allocation Graph, also known as RAG is a graphical representation or a


pictorial representation of the state of a system. As its name suggests, the resource
allocation graph is the complete information about all the processes which are holding
some resources or waiting for some resources.

It also contains the information about all the instances of all the resources whether they are
available or being used by the processes.

In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle.

Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance.
It is a directed graph that represents the processes in the system, the resources available,
and the relationships between them. A process node in the RAG has two types of edges,
request edges, and assignment edges. A request edge represents a request by a process
for a resource, while an assignment edge represents the assignment of a resource to a
process.
To determine whether the system is in a safe state or not, the RAG is analyzed to check for cycles. If there is a cycle in the graph, the system may be in an unsafe state, and granting a resource request can lead to a deadlock; when every resource involved in the cycle has only a single instance, the cycle guarantees deadlock. In contrast, if there are no cycles in the graph, the system is in a safe state, and resource allocation can proceed.
For example, with single-instance resources R1 and R2: if P1 holds R1 and requests R2 while P2 holds R2 and requests R1, the edges form the cycle P1 -> R2 -> P2 -> R1 -> P1, so P1 and P2 are deadlocked.


8. Explain Readers-Writers Problem with an example.

Ans: The reader-writer problem is a classic synchronization problem in operating systems


where multiple processes require access to a shared resource. In this problem, some
processes may only read the resource while others may write to it. The goal is to ensure that
multiple reader processes can access the resource simultaneously, but only one writer process
can access the resource at a time to avoid data inconsistency.

The readers-writers problem is used to manage synchronization so that there are no problems with the object data.


For example - If two readers access the object at the same time there is no problem. However if
two writers or a reader and writer access the object at the same time, there may be problems.
To solve this situation, a writer should get exclusive access to an object i.e. when a writer is
accessing the object, no reader or writer may access it. However, multiple readers can access the
object at the same time.
This can be implemented using semaphores, as sketched below.
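A minimal sketch of the classic reader-preference solution using POSIX semaphores (sem_wait/sem_post); the thread bodies and the initialization shown in the comment are assumptions for illustration:

#include <semaphore.h>

sem_t mutex;          /* protects read_count; initialize to 1 */
sem_t wrt;            /* exclusive access to the object; initialize to 1 */
int read_count = 0;   /* number of readers currently reading */

void writer(void)
{
    sem_wait(&wrt);              /* writers need exclusive access */
    /* ... write the shared object ... */
    sem_post(&wrt);
}

void reader(void)
{
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)         /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared object ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)         /* last reader lets writers in */
        sem_post(&wrt);
    sem_post(&mutex);
}

/* initialization, e.g. in main(): sem_init(&mutex, 0, 1); sem_init(&wrt, 0, 1); */

Note that this variant can starve writers if readers keep arriving, which is why writer-preference variants of the solution also exist.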


9. a).Explain deadlock detection algorithm with an example.


b) Explain the technique used to prevent the deadlock.
Ans: a).Explain deadlock detection algorithm with an example
What is a deadlock detection algorithm in operating systems?
A deadlock detection algorithm is a technique used by an operating system to identify deadlocks
in the system. This algorithm checks the status of processes and resources to determine whether
any deadlock has occurred and takes appropriate actions to recover from the deadlock.

There are several algorithms for detecting deadlocks in an operating system, including:

1. Wait-For Graph: A graphical representation of the waiting relationships among the system’s processes. A directed edge is created from process Pi to process Pj if Pi is waiting for a resource held by Pj. A cycle in the graph indicates a deadlock.
2. Banker’s Algorithm: A resource allocation algorithm that ensures that the system is
always in a safe state, where deadlocks cannot occur.
3. Resource Allocation Graph: A graphical representation of processes and resources, where
a directed edge from a process to a resource means that the process is currently holding that
resource. Deadlocks can be detected by looking for cycles in the graph.
4. Detection by System Modeling: A mathematical model of the system is created, and
deadlocks can be detected by finding a state in the model where no process can continue to
make progress.
5. Timestamping: Each process is assigned a timestamp, and the system checks to see if any
process is waiting for a resource that is held by a process with a lower timestamp.


6. Deadlock Detection using RAG: If a cycle is being formed in a Resource allocation graph
where all the resources have the single instance then the system is deadlocked. In Case of
Resource allocation graph with multi-instanced resource types, Cycle is a necessary
condition of deadlock but not the sufficient condition. The following example contains three processes P1, P2, P3 and three resources R1, R2, R3. All the resources have a single instance each. If we analyze the graph we can find that a cycle is formed in it, and since the system satisfies all four conditions of deadlock, the system is deadlocked.
7. Deadlock Detection and Recovery: In this approach, The OS doesn't apply any
mechanism to avoid or prevent the deadlocks. Therefore the system considers that the
deadlock will definitely occur. In order to get rid of deadlocks, The OS periodically checks
the system for any deadlock. In case, it finds any of the deadlock then the OS will recover
the system using some recovery techniques. The main task of the OS is detecting the
deadlocks. The OS can detect the deadlocks with the help of Resource allocation graph.
8. In single instanced resource types, if a cycle is being formed in the system then there will
definitely be a deadlock. On the other hand, in multiple instanced resource type graph,
detecting a cycle is not just enough. We have to apply the safety algorithm on the system
by converting the resource allocation graph into the allocation matrix and request matrix.
In order to recover the system from deadlocks, either OS considers resources or processes.
9. For resources: Preempt the resource. We can take one of the resources from its owner (a process) and give it to another process, with the expectation that it will complete its execution and release the resource sooner. Choosing which resource to preempt, however, is going to be a bit difficult. Roll back to a safe state: the system passes through various states before getting into the deadlock state, and the operating system can roll the system back to a previous safe state. For this purpose, the OS needs to implement checkpointing at every state; the moment we get into deadlock, we roll back all the allocations to reach the previous safe state.
10. For processes: Killing a process can solve the problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process that has done the least amount of work so far. This is not a recommended approach but can be used if the problem becomes very serious. Killing all processes leads to inefficiency in the system, because all the processes must then execute again from the start.

b) Explain the technique used to prevent the deadlock.

Ans: Deadlock Prevention

In the deadlock prevention process, the OS will prevent the deadlock from occurring by
avoiding any one of the four conditions that caused the deadlock. If the OS can avoid any of
the necessary conditions, a deadlock will not occur.

No Mutual Exclusion

It means more than one process can have access to a single resource at the same time. It’s
impossible because if multiple processes access the same resource simultaneously, there will be
chaos. Additionally, no process will be completed. So this is not feasible. Hence, the OS cannot avoid mutual exclusion.


Let’s take a practical example to understand this issue. Jack and Jones share a bowl of soup.
Both of them want to drink the soup from the same bowl and use a single spoon simultaneously,
which is not feasible.

No Hold and Wait

To avoid the hold and wait, there are many ways to acquire all the required resources before
starting the execution. But this is also not feasible because a process will use a single resource
at a time. Here, the resource utilization will be very less.
Before starting the execution, the process does not know how many resources would be required to complete it. In addition, the burst time, i.e. when a process will complete and free its resources, is also unknown.
Another way is if a process is holding a resource and wants to have additional resources,
then it must free the acquired resources. This way, we can avoid the hold and wait condition,
but it can result in starvation.

Removal of No Preemption

One of the reasons that cause the deadlock is the no preemption. It means the CPU can’t take
acquired resources from any process forcefully even though that process is in a waiting
state. If we can remove the no preemption and forcefully take resources from a waiting process,
we can avoid the deadlock. This is an implementable logic to avoid deadlock.
For example, it’s like taking the bowl from Jones and give it to Jack when he comes to have
soup. Let’s assume Jones came first and acquired a resource and went into the waiting state. Now
when Jack came, the caterer took the bowl from Jones forcefully and told him not to hold the
bowl if you are in a waiting state.

Removal of Circular Wait

In the circular wait, two processes are stuck in the waiting state for the resources which
have been held by each other. To avoid the circular wait, we assign a numerical integer value
to all resources, and a process has to access the resource in increasing or decreasing order.
If the process acquires resources in increasing order, it’ll only have access to the new additional
resource if that resource has a higher integer value. And if that resource has a lesser integer
value, it must free the acquired resource before taking the new resource and vice-versa for
decreasing order.

10. a). Construct solution to Sleeping Barber problem by using semaphores.


b). Discuss the drawbacks of Semaphores.

Ans:

a) Sleeping Barber problem :

The Sleeping Barber problem is a classic problem in process synchronization that is used to
illustrate synchronization issues that can arise in a concurrent system. The problem is as
follows:
There is a barber shop with one barber and a number of chairs for waiting customers.
Customers arrive at random times and if there is an available chair, they take a seat and wait
for the barber to become available. If there are no chairs available, the customer leaves. When
the barber finishes with a customer, he checks if there are any waiting customers. If there are,
he begins cutting the hair of the next customer in the queue. If there are no customers waiting,
he goes to sleep.
The problem is to write a program that coordinates the actions of the customers and the barber
in a way that avoids synchronization problems, such as deadlock or starvation.

Solution using semaphores (a sketch follows):

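Since the figure with the solution is not reproduced here, the following is a minimal sketch using POSIX semaphores; CHAIRS and the haircut placeholders are illustrative:

#include <semaphore.h>

#define CHAIRS 5

sem_t customers;   /* counts waiting customers; initialize to 0 */
sem_t barbers;     /* counts barbers ready to cut; initialize to 0 */
sem_t mutex;       /* protects `waiting`; initialize to 1 */
int waiting = 0;   /* customers in chairs, not yet being served */

void barber(void)
{
    for (;;) {
        sem_wait(&customers);    /* sleep until a customer arrives */
        sem_wait(&mutex);
        waiting--;               /* take one customer from the waiting room */
        sem_post(&barbers);      /* the barber is now ready for this customer */
        sem_post(&mutex);
        /* ... cut hair ... */
    }
}

void customer(void)
{
    sem_wait(&mutex);
    if (waiting < CHAIRS) {
        waiting++;
        sem_post(&customers);    /* wake the barber if he is asleep */
        sem_post(&mutex);
        sem_wait(&barbers);      /* wait until the barber is ready */
        /* ... get haircut ... */
    } else {
        sem_post(&mutex);        /* shop full: the customer leaves */
    }
}

This avoids deadlock because the waiting-room counter is always updated under mutex, and avoids lost wake-ups because the barber sleeps on the customers semaphore rather than on a flag.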

b). Discuss the drawbacks of Semaphores.

Ans:

 A semaphore is a signaling mechanism and a thread that is waiting on a semaphore can be


signaled by another thread. This is different than a mutex as the mutex can be signaled only
by the thread that called the wait function.
 A semaphore uses two atomic operations, wait and signal for process synchronization. A
Semaphore is an integer variable, which can be accessed only through two operations
wait() and signal().
 Semaphores are very useful in process synchronization and multi-threading.
Yet it has many drawbacks. They are:


 Semaphores are prone to programming errors.


 Due to the complexity of semaphore programming, mutual exclusion may not be
achieved.
 One of the most significant limitations of semaphores is priority inversion. Low priority
processes get into a critical section, and high priority processes keep waiting.
 Semaphores can be expensive to implement in terms of memory and CPU usage.

1. Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
2. Semaphores are impractical for large-scale use, as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
3. Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes later.
4. If a programmer forgets to call signal(S) after a critical section, the program can deadlock, and the cause of the failure will be difficult to isolate.

If we were to build a large system using semaphores alone, the responsibility for the correct use
of the semaphores would be diffused among all the implementers of the system .

…………………………………...UNIT-3 END ………………………………………


OPERATING SYSTEMS
UNIT-4
Short Questions & Answers:
1 Compare internal and external fragmentation.
2 List the first fit & best fit memory allocation techniques.
3 What are the disadvantages of virtual memory?
4 What is Thrashing?
5 What is Compaction?
6 What is Virtual Memory? Why is it required?
7 Differentiate logical and physical address.
8 What is a Page fault?
9 What is Demand paging?
10 What is the difference between Page and Frame?

1. Compare internal and external fragmentation.


Ans: Internal fragmentation occurs when memory is divided into fixed-size partitions; the difference between the memory allocated and the space actually required is wasted inside the partition, and this wasted space is called internal fragmentation.
External fragmentation occurs when memory is divided into variable-size partitions based on the sizes of processes; the unused holes formed between non-contiguous memory fragments are individually too small to serve a new process, and this wasted space is called external fragmentation.

2. List the first fit & best fit memory allocation techniques.
Ans:
First Fit: “search for the first hole that is big enough”
The first-fit approach allocates the first free partition (hole) large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
Best Fit: “search for the smallest hole that is big enough”
Best fit allocates the smallest free partition which meets the requirement of the requesting process. This algorithm searches the entire list of free partitions and selects the smallest hole that is adequate, i.e. the hole closest to the actual process size needed.

3. What are the disadvantages of virtual memory?


Ans: Virtual memory is a method of using secondary memory, via both hardware and software, as if it were part of the primary memory.
Applications may run slower when the system relies on virtual memory, and switching between applications is likely to take more time. It consumes hard drive space that would otherwise be available for your use, and it can reduce system stability.

4. What is Thrashing?


Ans: Thrashing is a condition or a situation when the system is spending a major portion of its
time servicing the page faults, but the actual processing done is very negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.

5. What is Compaction ?
Ans: Compaction is a technique to collect all the free memory present in the form of fragments into one large chunk of free memory, which can be used to run other processes.
(Or) Compaction refers to combining all the empty spaces together so that processes see one large free space.
Compaction helps to solve the problem of fragmentation, but it requires a lot of CPU time. It moves all the occupied areas of storage to one end, leaving one large free space for incoming jobs instead of numerous small ones.

6. What is Virtual Memory? Why is it required?

Ans: Virtual memory is a memory management technique where secondary memory can be used
as if it were a part of the main memory. Virtual memory is a common technique used in a
computer's operating system (OS).

Virtual memory uses both hardware and software to enable a computer to compensate for
physical memory shortages, temporarily transferring data from random access memory (RAM)
to disk storage. Mapping chunks of memory to disk files enables a computer to treat secondary
memory as though it were main memory.


Today, most personal computers (PCs) come with at least 8 GB (gigabytes) of RAM. But,
sometimes, this is not enough to run several programs at one time. This is where virtual memory
comes in. Virtual memory frees up RAM by swapping data that has not been used recently over
to a storage device, such as a hard drive or solid-state drive (SSD).

Virtual memory is important for improving system performance, multitasking and using large
programs. However, users should not overly rely on virtual memory, since it is considerably
slower than RAM. If the OS has to swap data between virtual memory and RAM too often, the
computer will begin to slow down -- this is called thrashing.

7. Differentiate logical and physical address.


Ans:
A logical address is the virtual address that is generated by the CPU. Physical address is one that
represents a location in the computer memory. The logical address is used like a reference, to
access the physical address.
Page address is called logical address and represented by page number and the offset. Frame
address is called physical address and represented by a frame number and the offset. A data
structure called page map table is used to keep track of the relation between a page of a
process to a frame in physical memory.
8. What is a Page fault?
Ans: “Page faults occur when a requested page is not loaded into the memory”.
A page fault occurs when a program attempts to access data or code that is in its address
space, but is not currently located in the system RAM. Page faults are detected by the
Memory Management Unit (MMU). Page replacement algorithms are used to minimize the
number of page faults.
9. What is Demand paging?
Ans: “Demand paging in an OS is a memory management scheme where pages of data are loaded into memory only when they are needed, rather than loading the entire program into memory at once.”
Demand paging is a technique used in virtual memory management that enables a computer to use its physical memory more efficiently.
With demand paging, a process is not loaded into memory completely at the start of execution. Instead, the operating system loads only the necessary parts of a program into memory as they are required, on demand.
10. What is the difference between Page and Frame?
Ans:
In paging, processes are divided into equal parts called pages, and main memory is also divided
into equal parts and each part is called a frame. Each page gets stored in one of the frames of the
main memory whenever required. So, the size of a frame is equal to the size of a page.


Long Questions & Answers:

1. Explain Swapping in memory management.
2. a) What is virtual memory? Discuss the benefits of virtual memory techniques.
   b) What are the disadvantages of single contiguous memory allocation?
3. Differentiate between paging and segmentation.
4. Discuss the Least Recently Used page replacement algorithm with example.
5. A process refers to 5 pages in the order A,B,C,D,A,B,E,A,B,C,D,E. If the page replacement algorithm is LRU, calculate the number of page faults with empty frames of size 4.
6. How to handle a page fault? Explain with a neat diagram.
7. Explain the terms in Memory Partitioning with an example: a. Fixed partitioning b. Dynamic Partitioning.
8. Explain segmentation with a neat diagram.
9. a) What is Belady’s anomaly? Explain with an example.
   b) Discuss the hardware support for paging.
10. Consider the following page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,2,1,2,3,6. How many page faults would occur for the Optimal Page Replacement algorithm, assuming three and four frames?

1. Explain Swapping in memory management.


Ans: Swapping is a memory management technique used to temporarily remove inactive programs from the main memory of the computer system. Any process must be in the memory for


its execution, but can be swapped temporarily out of memory to a backing store and then again
brought back into the memory to complete its execution. Swapping is done so that other processes
get memory for their execution. Due to the swapping technique performance usually gets affected,
but it also helps in running multiple and big processes in parallel. The swapping process is also
known as a technique for memory compaction. Basically, low priority processes may be swapped
out so that processes with a higher priority may be loaded and executed.

The above diagram shows swapping of two processes where the disk is used as a Backing store.

In the above diagram, suppose there is a multiprogramming environment with a round-robin


scheduling algorithm; whenever the time quantum expires then the memory manager starts to swap
out those processes that are just finished and swap another process into the memory that has been
freed. And in the meantime, the CPU scheduler allocates the time slice to some other processes in
the memory.

The swapping of processes by the memory manager is fast enough that some processes will be in
memory, ready to execute, when the CPU scheduler wants to reschedule the CPU.

A variant of the swapping technique is used with priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager swaps out a lower-priority process, then loads and executes the higher-priority process. When the higher-priority process finishes, the lower-priority process is swapped back in and continues its execution. This variant is sometimes known as roll out, roll in.

There are two more concepts that come in the swapping technique and these are: swap in and swap
out.

Swap In and Swap Out in OS

The procedure by which any process gets removed from the hard disk and placed in the main memory (RAM) is commonly known as Swap In.

On the other hand, Swap Out is the method of removing a process from the main memory or
RAM and then adding it to the Hard Disk.


Advantages of Swapping

1. The swapping technique mainly helps the CPU to manage multiple processes within a
single main memory.
2. This technique helps to create and use virtual memory.
3. With the help of this technique, the CPU can perform several tasks simultaneously. Thus,
processes need not wait too long before their execution.
4. This technique is economical.
5. This technique can be easily applied to priority-based scheduling in order to improve its
performance.

Disadvantages of Swapping

The drawbacks of the swapping technique are as follows:

1. Inefficiency may arise if a resource or a variable is commonly used by the processes that are participating in the swapping process.
2. If the algorithm used for swapping is not good then the overall method can increase the
number of page faults and thus decline the overall performance of processing.
3. If the computer system loses power at the time of high swapping activity then the user
might lose all the information related to the program.

2. a) What is virtual memory? Discuss the benefits of virtual memory techniques.


b)What are the disadvantages of single contiguous memory allocation?

Ans: a) Virtual Memory:

Virtual Memory is a space where large programs can store themselves in form of pages while
their execution and only the required pages or portions of processes are loaded into the main
memory. This technique is useful as a large virtual memory is provided for user programs when a
very small physical memory is there. Thus Virtual memory is a technique that allows the execution
of processes that are not in the physical memory completely.

Virtual Memory mainly gives the illusion of more physical memory than there really is with the
help of Demand Paging.

In real scenarios, most processes never need all their pages at once, for the following reasons :

 Error handling code is not needed unless that specific error occurs, some of which are quite
rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays
are actually used in practice.
 Certain features of certain programs are rarely used.


In an Operating system, the memory is usually stored in the form of units that are known as pages.
Basically, these are atomic units used to store large programs.

Virtual memory can be implemented with the help of:-

1. Demand Paging
2. Demand Segmentation

Need of Virtual Memory

 In case, if a computer running the Windows operating system needs more memory or RAM
than the memory installed in the system then it uses a small portion of the hard drive for this
purpose.
 Suppose there is a situation when your computer does not have space in the physical memory,
then it writes things that it needs to remember into the hard disk in a swap file and that as
virtual memory.

Benefits of having Virtual Memory

1. Large programs can be written, as the virtual space available is huge compared to physical
memory.
2. Less I/O required leads to faster and easy swapping of processes.
3. More physical memory available, as programs are stored on virtual memory, so they
occupy very less space on actual physical memory.
4. Therefore, the logical address space can be much larger than the physical address space.
5. Virtual memory allows address spaces to be shared by several processes.
6. During the process creation, virtual memory allows: copy-on-write and Memory-mapped
files

Advantages of Virtual Memory

 Virtual Memory allows you to run more applications at a time.


 With the help of virtual memory, you can easily fit many large programs into smaller
programs.
 With the help of Virtual memory, a multiprogramming environment can be easily
implemented.
 As more processes should be maintained in the main memory which leads to the effective
utilization of the CPU.
 Data should be read from disk at the time when required.
 Common data can be shared easily between memory.
 With the help of virtual memory, speed is gained when only a particular segment of the
program is required for the execution of the program.
 The process may even become larger than all of the physical memory.

Disadvantages of Virtual Memory


 Virtual memory reduces the stability of the system.


 The performance of Virtual memory is not as good as that of RAM.
 If a system is using virtual memory then applications may run slower.
 Virtual memory negatively affects the overall performance of a system.
 Virtual memory occupies the storage space, which might be otherwise used for long term
data storage.
 This memory takes more time in order to switch between applications.

b) Disadvantages of single contiguous memory allocation:

1.Internal Fragmentation: Suppose the size of the process is lesser than the size of the partition
in that case some size of the partition gets wasted and remains unused. This wastage inside the
memory is generally termed as Internal fragmentation. As we have shown in the above diagram
the 70 KB partition is used to load a process of 50 KB so the remaining 20 KB got wasted.

2.Limitation on the size of the process: If in a case size of a process is more than that of a
maximum-sized partition then that process cannot be loaded into the memory. Due to this, a
condition is imposed on the size of the process and it is: the size of the process cannot be larger
than the size of the largest partition.

3.External Fragmentation: It is another drawback of the fixed-size partition scheme as total


unused space by various partitions cannot be used in order to load the processes even though there
is the availability of space but it is not in the contiguous fashion.

4.Degree of multiprogramming is less: In this partition scheme, as the size of the partition cannot
change according to the size of the process. Thus the degree of multiprogramming is very less and
is fixed.

5.Difficult Implementation: The implementation of this partition scheme is difficult as compared


to the Fixed Partitioning scheme as it involves the allocation of memory at run-time rather than
during the system configuration. As we know that OS keeps the track of all the partitions but here
allocation and deallocation are done very frequently and partition size will be changed at each time
so it will be difficult for the operating system to manage everything.

3. Differentiate between paging and segmentation.

Ans:

1. Paging: the program is divided into fixed-size pages.
   Segmentation: the program is divided into variable-size sections.
2. Paging: the operating system is accountable.
   Segmentation: the compiler is accountable.
3. Paging: page size is determined by the hardware.
   Segmentation: the section size is given by the user.
4. Paging: faster in comparison to segmentation.
   Segmentation: slower.
5. Paging: could result in internal fragmentation.
   Segmentation: could result in external fragmentation.
6. Paging: the logical address is split into a page number and a page offset.
   Segmentation: the logical address is split into a section number and a section offset.
7. Paging: comprises a page table that encloses the base address of every page.
   Segmentation: comprises a segment table that encloses the base address and limit of every segment.
8. Paging: the page table is employed to keep up the page data.
   Segmentation: the segment table maintains the section data.
9. Paging: the operating system must maintain a free-frame list.
   Segmentation: the operating system maintains a list of holes in the main memory.
10. Paging: invisible to the user.
    Segmentation: visible to the user.
11. Paging: the processor needs the page number and offset to calculate the absolute address.
    Segmentation: the processor uses the segment number and offset to calculate the full address.
12. Paging: it is hard to allow sharing of procedures between processes.
    Segmentation: facilitates sharing of procedures between processes.
13. Paging: a programmer cannot efficiently handle data structures.
    Segmentation: can efficiently handle data structures.
14. Paging: protection is hard to apply.
    Segmentation: protection is easy to apply.
15. Paging: the size of a page must always equal the size of a frame.
    Segmentation: there is no constraint on the size of segments.
16. Paging: a page is referred to as a physical unit of information.
    Segmentation: a segment is referred to as a logical unit of information.
17. Paging: results in a less efficient system.
    Segmentation: results in a more efficient system.

4. Discuss the Least Recently Used page replacement algorithm with example.

Ans:

1. The Least Recently Used (LRU) algorithm is a page replacement technique used for memory management. Under this policy, the page which was least recently used is replaced: any page in memory that has been unused for a longer period of time than the others is the one evicted. LRU is one of the algorithms devised to approximate the efficiency of the optimal page replacement algorithm, which assumes the entire reference string is known in advance and replaces the page that will not be used for the longest period of time. The LRU policy is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few; conversely, pages that have not been used for ages will probably remain unused for a long time.
2. It is rather expensive to implement in practice in many cases and hence alternatives to LRU
or even variants to the original LRU are continuously being sought.
3. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory,
with the most recently used page at the front and the least recently used page at the rear.
The difficulty is that the list must be updated on every memory reference. Finding a page
in the list, deleting it, and then moving it to the front is a very time consuming operation,
even in hardware (assuming that such hardware could be built) or special hardware
resources need to be in place for LRU implementation which again is not satisfactory.


4. One important advantage of the LRU algorithm is that it is amenable to full statistical
analysis. It has been proven, for example, that LRU can never result in more than N-times
more page faults than Optimal (OPT) algorithm, where N is proportional to the number of
pages in the managed pool.
5. On the other hand, LRU's weakness is that its performance tends to degenerate under many
quite common reference patterns.

Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults. 0 is already there -> 0 page faults. When 3 comes, it takes the place of 7 because 7 is the least recently used page -> 1 page fault. 0 is already there -> 0 page faults. 4 takes the place of 1, which is now the least recently used -> 1 page fault. The remaining references (2, 3, 0, 3, 2) cause 0 page faults because they are already available in the memory. Total = 6 page faults. A counting sketch follows.
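A small C sketch that counts LRU faults for a reference string by tracking last-use timestamps; the sizes and names are illustrative:

#include <stdio.h>

int lru_faults(const int *refs, int n, int frames)
{
    int page[16], last_used[16];         /* assumes frames <= 16 */
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int k = 0; k < used; k++)
            if (page[k] == refs[t]) { hit = k; break; }
        if (hit >= 0) {
            last_used[hit] = t;          /* refresh recency on a hit */
            continue;
        }
        faults++;
        if (used < frames) {             /* a free frame is available */
            page[used] = refs[t];
            last_used[used++] = t;
        } else {
            int victim = 0;              /* evict the least recently used */
            for (int k = 1; k < frames; k++)
                if (last_used[k] < last_used[victim]) victim = k;
            page[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    printf("%d\n", lru_faults(refs, 13, 4));   /* prints 6 */
    return 0;
}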

5. A process refers to 5 pages in the order- A,B,C,D,A,B,E,A,B,C,D,E. If the page


replacement algorithm is LRU calculate the number of page faults with empty frames of size
4?

Ans:

Given reference string: A, B, C, D, A, B, E, A, B, C, D, E
Least Recently Used algorithm; number of page frames = 4

Ref:  A    B    C    D    A    B    E    A    B    C    D    E
F1:   A    A    A    A    A    A    A    A    A    A    A    E
F2:        B    B    B    B    B    B    B    B    B    B    B
F3:             C    C    C    C    E    E    E    E    D    D
F4:                  D    D    D    D    D    D    C    C    C
      *    *    *    *    hit  hit  *    hit  hit  *    *    *

Total page Faults = 8


Total page Hits = 4

6. How to handle page fault? Explain with neat diagram.


Ans: Page Fault:

A page fault is essentially an error condition. It occurs when a program tries to access data or code that is in its address space but is not currently located in the RAM of the system.

 So basically when the page referenced by the CPU is not found in the main memory then
the situation is termed as Page Fault.
 Whenever any page fault occurs, then the required page has to be fetched from the
secondary memory into the main memory.

In case if the required page is not loaded into the memory, then a page fault trap arises

The page fault mainly generates an exception, which is used to notify the operating system that it
must have to retrieve the "pages" from the virtual memory in order to continue the execution. Once
all the data is moved into the physical memory the program continues its execution normally. The
Page fault process takes place in the background and thus goes unnoticed by the user.


 The computer hardware traps to the kernel and the program counter (PC) is generally saved on the stack. The CPU registers store the information of the current state of the instruction.
 An assembly routine is started that usually saves the general registers and also saves the other volatile information to prevent the OS from destroying it.

Fig : Handling the Page Fault

If you access a page that is marked invalid, this also causes a page fault. The paging hardware, while translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system.

This trap is mainly the result of the failure of the Operating system in order to bring the desired
page into memory.

The procedure to handle the page fault, as shown in the above diagram:

1. First of all, the internal table (usually kept in the process control block) for this process is checked to determine whether the reference was a valid or an invalid memory access.
2. If the reference is invalid, we terminate the process. If the reference is valid but that page has not yet been brought in, we page it in.
3. Then we consult the free-frame list in order to find a free frame.
4. Now a disk operation is scheduled in order to read the desired page into the newly allocated frame.
5. When the disk read is complete, the internal table kept with the process and the page table are modified to indicate that the page is now in memory.


6. Now we will restart the instruction that was interrupted due to the trap. Now the process
can access the page as though it had always been in memory.

7. Explain the terms in Memory Partitioning with an example.


a). Fixed partitioning b). Dynamic Partitioning.
Ans:
a). Fixed partitioning:

In operating systems, Memory Management is the function responsible for allocating and
managing a computer’s main memory. Memory Management function keeps track of the status
of each memory location, either allocated or free to ensure effective and efficient use of Primary
Memory.

There are two Memory Management Techniques: Contiguous, and Non-Contiguous. In


Contiguous Technique, executing process must be loaded entirely in the main memory.
Contiguous Technique can be divided into:

1. Fixed (or static) partitioning

2. Variable (or dynamic) partitioning

a) Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the main memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed, but the size of each partition may or may not be the same. As it is contiguous allocation, no spanning is allowed. Here partitions are made before execution or during system configuration.


As illustrated in above figure, first process is only consuming 1MB out of 4MB in the main
memory.
Hence, Internal Fragmentation in first block is (4-1) = 3MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.

Suppose process P5 of size 7MB comes. But this process cannot be accommodated in spite of
available free space because of contiguous allocation (as spanning is not allowed). Hence, 7MB
becomes part of External Fragmentation.

Advantages of Fixed Partitioning

1. Easy to implement
2. Little OS overhead

Disadvantages of Fixed Partitioning

1. Internal Fragmentation
2. External Fragmentation
3. Limit process size
4. Limitation on Degree of Multiprogramming

b). Dynamic Partitioning / Variable Partitioning:

It is a part of the contiguous allocation technique. It is used to alleviate the problem faced by fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or during system configuration. Various features associated with variable partitioning:

1. Initially RAM is empty and partitions are made during run-time according to the processes' needs instead of during system configuration.
2. The size of partition will be equal to incoming process.


3. The partition size varies according to the need of the process so that the internal
fragmentation can be avoided to ensure efficient utilisation of RAM.
4. Number of partitions in RAM is not fixed and depends on the number of incoming
process and Main Memory’s size.

Advantages of Variable Partitioning –

1. No Internal Fragmentation
2. No restriction on Degree of Multiprogramming
3. No Limitation on the size of the process

Disadvantages of Variable Partitioning –

1. Difficult Implementation
2. External Fragmentation
For example, suppose in above example- process P1(2MB) and process P3(1MB)
completed their execution. Hence two spaces are left i.e. 2MB and 1MB. Let’s suppose
process P5 of size 3MB comes. The empty space in memory cannot be allocated as no
spanning is allowed in contiguous allocation. The rule says that process must be
contiguously present in main memory to get executed. Hence it results in External
Fragmentation.


Now P5 of size 3 MB cannot be accommodated in spite of the required space being available, because in contiguous allocation no spanning is allowed.

8. Explain the segmentation with neat diagram.

Ans:
Segmentation :
A process is divided into Segments. The chunks that a program is divided into which are not
necessarily all of the same sizes are called segments. Segmentation gives user’s view of the
process which paging does not give. Here the user’s view is mapped to physical memory.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at
any one point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into
memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table stores the information about all such segments and is called Segment
Table.
Segment Table – It maps two-dimensional Logical address into one-dimensional Physical
address. It’s each table entry has:
 Base Address: It contains the starting physical address where the segments reside in
memory.
 Limit: It specifies the length of the segment.


Translation of Two dimensional Logical Address to one dimensional Physical Address.

Address generated by the CPU is divided into:

 Segment number (s): the number of bits required to represent the segment.
 Segment offset (d): the number of bits required to represent the size of the segment.
A small sketch of this translation follows.
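A minimal C sketch of the lookup described above; the base/limit values used here are illustrative, not taken from the figure:

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base; unsigned limit; };

unsigned translate(const struct segment *table, unsigned s, unsigned d)
{
    if (d >= table[s].limit) {             /* offset beyond segment length */
        fprintf(stderr, "trap: addressing error (segmentation fault)\n");
        exit(1);
    }
    return table[s].base + d;              /* physical address = base + offset */
}

int main(void)
{
    struct segment table[] = { {1400, 1000}, {6300, 400} };  /* assumed values */
    printf("%u\n", translate(table, 1, 53));  /* 6300 + 53 = 6353 */
    return 0;
}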
Advantages of Segmentation –
 No Internal fragmentation.


 Segment Table consumes less space in comparison to Page table in paging.


Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing External fragmentation.

9. a). What is Belady’s anomaly? Explain with an example

b). Discuss the Hardware support for paging

Ans: Bélády’s anomaly is the name given to the phenomenon where increasing the number of
page frames results in an increase in the number of page faults for a given memory access
pattern.
This phenomenon is commonly experienced in the following page replacement algorithms:
1. First in first out (FIFO)
2. Second chance algorithm
3. Random page replacement algorithm
Example: Consider the following diagram to understand the behavior of a stack-based page
replacement algorithm

The diagram illustrates that given the set of pages i.e. {0, 1, 2} in 3 frames of memory is not a
subset of the pages in memory – {0, 1, 4, 5} with 4 frames and it is a violation in the property of
stack based algorithms. This situation can be frequently seen in FIFO algorithm.
Belady’s Anomaly in FIFO –
Assuming a system that has no pages loaded in the memory and uses the FIFO Page
replacement algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case-1: If the system has 3 frames, the given reference string the using FIFO page replacement
algorithm yields a total of 9 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.


Case-2: If the system has 4 frames, the given reference string using the FIFO page replacement
algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.

It can be seen from the above example that on increasing the number of frames while using the
FIFO page replacement algorithm, the number of page faults increased from 9 to 10.
Note – Not every reference string causes Belady's anomaly in FIFO, but certain kinds of reference strings do worsen FIFO performance as the number of frames increases.
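
The anomaly can be reproduced with a short simulation. The following Python sketch (not part of the original answer) counts FIFO page faults for the reference string above with 3 and 4 frames:

from collections import deque

def fifo_faults(refs, n_frames):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # memory full: evict the oldest page
                frames.remove(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))    # 9 page faults
print(fifo_faults(refs, 4))    # 10 page faults (Belady's anomaly)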
b) Hardware support required to implement paging:

Each operating system has its own techniques for storing page tables. The majority allocates a
page table for each process and a pointer to the page table is stored with the other register values
in the process control block.

The page table can be implemented in hardware in several ways. In the simplest case, the page table is implemented as a set of dedicated registers. These registers must be built with very high-speed logic to make the paging-address translation efficient.

The use of registers for the page table is adequate if the page table is reasonably small. For larger tables, the page table is kept in main memory, and a page-table base register (PTBR) points to the page table. Changing page tables then requires changing only this one register, substantially reducing context-switch time.

The standard solution to the slowdown caused by keeping the page table in memory (two memory accesses per reference) is to use a special, small, fast-lookup hardware cache called the translation look-aside buffer (TLB). The TLB is an associative, high-speed memory. Each entry in the TLB consists of two parts: a key (tag) and a value. Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and is used to provide address-space protection for that process. When the TLB attempts to resolve a virtual page number, it ensures that the ASID of the currently running process matches the ASID associated with the virtual page.
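
Conceptually, the TLB behaves like a small associative map keyed on (ASID, page number). The sketch below is only a software model of the hit/miss logic (a real TLB is hardware, with a fixed number of entries and a replacement policy; the ASID and table values are made up):

tlb = {}           # (asid, page_number) -> frame_number
page_tables = {}   # asid -> that process's page table {page: frame}

def lookup(asid, page):
    key = (asid, page)
    if key in tlb:                        # TLB hit: no extra memory access
        return tlb[key]
    frame = page_tables[asid][page]       # TLB miss: walk the page table
    tlb[key] = frame                      # cache the translation for next time
    return frame

page_tables[7] = {0: 5, 1: 9}             # hypothetical process with ASID 7
print(lookup(7, 1))                        # miss, then cached -> 9
print(lookup(7, 1))                        # hit -> 9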

Paging Hardware (using the TLB, the Translation Look-aside Buffer):

Every address generated by CPU mainly consists of two parts:

1. Page Number(p)
2. Page Offset (d)

where,

Page Number is used as an index into the page table that generally contains the base address of
each page in the physical memory.

Page offset is combined with base address in order to define the physical memory address which
is then sent to the memory unit.

If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of the logical address designate the page number and the n low-order bits designate the page offset.

The logical address is as follows:


where p indicates the index into the page table, and d indicates the displacement within the page.
The page size is usually defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page.
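
In code, extracting p and d is just a shift and a mask. A minimal sketch, assuming a 1 KB page size (2^10 bytes; the numbers are illustrative only):

PAGE_BITS = 10
PAGE_SIZE = 1 << PAGE_BITS        # 1024 bytes per page (assumed)

def split(logical_address):
    p = logical_address >> PAGE_BITS          # high-order bits: page number
    d = logical_address & (PAGE_SIZE - 1)     # low-order bits: page offset
    return p, d

def physical_address(frame_number, d):
    return frame_number * PAGE_SIZE + d       # frame base + displacement

p, d = split(3000)                # 3000 = 2 * 1024 + 952
print(p, d)                       # page 2, offset 952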

10) Consider the following page reference strings:


1,2,3,4,2,1,5,6,2,1,2,3,2,1,2,3,6 How many page faults would occur for Optimal Page
Replacement algorithm, assuming three, four frames.

Ans:

Given reference string is 1,2,3,4,2,1,5,6,2,1,2,3,2,1,2,3,6


Optimal page replacement algorithm (on a fault, replace the page whose next use is farthest in the future):

Case 1: Number of page frames = 3

Reference: 1  2  3  4  2  1  5  6  2  1  2  3  2  1  2  3  6
Fault?     F  F  F  F  H  H  F  F  H  H  H  F  H  H  H  H  F

1, 2 and 3 are compulsory faults. 4 is a fault and replaces 3 (of the resident pages, 3 is used farthest in the future). 2 and 1 are hits. 5 is a fault and replaces 4, and 6 is a fault and replaces 5 (neither 4 nor 5 is used again). 2, 1 and 2 are hits. 3 is a fault and replaces 6. The following references are hits until the final 6, which is a fault.

Number of page faults with 3 frames = 8

Case 2: Number of page frames = 4

Reference: 1  2  3  4  2  1  5  6  2  1  2  3  2  1  2  3  6
Fault?     F  F  F  F  H  H  F  F  H  H  H  H  H  H  H  H  H

1, 2, 3 and 4 are compulsory faults. 2 and 1 are hits. 5 is a fault and replaces 4, and 6 is a fault and replaces 5 (neither 4 nor 5 is used again). The frames then hold {1, 2, 3, 6}, so every remaining reference, including the final 6, is a hit.

Number of page faults with 4 frames = 6
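
The counts above can be cross-checked with a short simulation. The Python sketch below (not part of the original answer) implements the optimal policy directly: on a fault it evicts the resident page whose next use lies farthest in the future, or that is never used again.

def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)               # free frame available
            continue
        def next_use(q):                      # distance to the next reference of q
            return refs.index(q, i + 1) if q in refs[i + 1:] else float("inf")
        frames.remove(max(frames, key=next_use))   # evict the farthest-used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 2, 1, 2, 3, 6]
print(optimal_faults(refs, 3))    # 8
print(optimal_faults(refs, 4))    # 6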


……………………………….. *UNIT-4 * END …………………………………………


OPERATING SYSTEMS
UNIT-5
Short Questions & Answers:
1 What are the various file accessing methods?
2 Define the terms Seek Time and Rotational Latency.
3 Define File. List down the operations that may be performed on File.
4 What are the file attributes?
5 Define mounting. What is the need for mounting in a file system?
6 What is directory structure?
7 Define Disk Scheduling.
8 Discuss about Free Space Management
9 List the file Allocation methods.
10 Define Boot Block and Bad blocks.

1. What are the various file accessing methods?


Ans: File access methods define how data is accessed and modified within a file. There are
different file access methods.
The three primary file access methods are:
1. Sequential access,
2. Direct (relative) access,
3. Indexed sequential access.
Sequential access reads and writes data in linear order; direct access allows any record to be read or written directly by its relative block number; and indexed sequential access uses an index to locate a record and then accesses it directly through the index's pointer.

2. Define the terms Seek Time and Rotational Latency.


Ans: Seek Time: Time taken by the R/W-Head to reach the desired track (location) from the
current position. (Or) Seek Time is defined as the time required by the read/write head to
move from one track to another.
Rotational Latency : Time taken to reach the desired sector. (or) Rotational latency
(sometimes called rotational delay or just latency) is the delay waiting for the rotation of the
disk to bring the required disk sector under the read-write head.

3. Define File. List down the operations that may be performed on File.
Ans: File : A file can be defined as a collection of data or information.

There are two types of files.

1. Program files
2. Data Files.

Operations on a file :


 Create a new file.


 Change the access permissions and attributes of a file.
 Open a file, which makes the file contents available to the program.
 Read data from a file.
 Write data to a file.
 Repositioning within a file
 Delete a file.
 Truncating a file

4. What are the file attributes?


Ans: A file’s attributes vary from one operating system to another but typically consist of
these:

• Name. The symbolic file name is the only information kept in human readable form.
• Identifier. This unique tag, usually a number, identifies the file within the file system; it is
the non-human-readable name for the file.
• Type. This information is needed for systems that support different types of files.
• Location. This information is a pointer to a device and to the location of the file on that device.
• Size. The current size of the file (in bytes, words, or blocks) and possibly the maximum allowed size are included in this attribute.
• Protection. Access-control information determines who can do reading, writing, executing,
and so on.
• Time, date, and user identification. This information may be kept for creation, last
modification, and last use. These data can be useful for protection, security, and usage
monitoring.

5. Define mounting. What is the need for mounting in a file system?


Ans:
Before your computer can use any kind of storage device (such as a hard drive, CD-
ROM, or network share), you or your operating system must make it accessible through the
computer's file system. This process is called mounting. You can only
access files on mounted media.

Mounting a file system attaches that file system to a directory (mount point) and makes it
available to the system. The root (/) file system is always mounted. Any other file system can be
connected or disconnected from the root (/) file system.

When you mount a file system, any files or directories in the underlying mount point directory
are unavailable as long as the file system is mounted.

6. What is directory structure?

Ans: The directory structure is the organization of files into a hierarchy of folders.

On a computer, a directory is used to store, arrange, and segregate files and folders.
There are several logical structures of a directory; these are given below.
 Single level directory


 Two-level directory
 Tree structure or hierarchical directory
 Acyclic graph directory

7. Define Disk Scheduling.

Ans: Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling. Disk scheduling is important because hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.

Disk Scheduling Algorithms


 First Come First Serve (FCFS)
 Shortest Seek Time First (SSTF)
 SCAN.
 LOOK.
 C-SCAN.
 C-LOOK.

8. Discuss about Free Space Management


Ans: The operating system manages the free space in the hard disk. This is known as free
space management in operating systems. The OS maintains a free space list to keep track of
the free disk space. The free space list consists of all free disk blocks that are not allocated to
any file or directory. For saving a file in the disk, the operating system searches the free
space list for the required disk space and then allocates that space to the file. When a file is
deleted, the space allocated to it is added to the free space list.

Following are the four methods of doing free space management in operating systems: bit vector (bit map), linked list, grouping, and counting.

9. List the file Allocation methods.


Ans:
The allocation methods define how the files are stored in the disk blocks / memory blocks.
There are three main disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation


 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.

10. Define Boot Block and Bad blocks.


Ans: Both bad block and boot block are two important features of disk management in an
operating system.

 Boot block: is an important component of an operating system which resides in a region of


a hard disk or any other storage device and contains all crucial data and instructions required
for initiating the booting process.

 Bad block: is a region or sector of a data storage device which is damaged or malfunctioning and is not reliable for storing data. There are two types of bad blocks: a physical (hard) bad block comes from damage to the storage medium itself, while a logical (soft) bad block is one the operating system can no longer read or write reliably, for example because of corruption. In either case, the block is an area of storage media that is no longer reliable for storing and retrieving data. Bad blocks are also referred to as bad sectors.


Long Questions & Answers:

1 Explain contiguous and linked file allocation methods.


2 Compare and Contrast Free space management and Swap space management.
3 Explain the Indexed file allocation method with an example.
4 Illustrate the concept of File Mounting with a neat diagram.
5 Describe the access methods of a file.
6 Explain FCFS, SSTF Disk scheduling Algorithms with an example.
7 Illustrate SCAN and C-SCAN Disk scheduling with example


8 Write a short note on Directory implementation


9 Explain the file system structure and implementation.
10 Explain the following with relevant diagrams: a) Two-Level directory structure b) Acyclic-Graph directory structure.

1. Explain contiguous and linked file allocation methods.


Ans: The allocation methods define how the files are stored in the disk blocks. There are three
main disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.

Contiguous Allocation:

In this scheme, each file occupies a contiguous set of blocks on the disk.

For example: if a file requires n blocks and is given a block b as the starting location, then the
blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting
block address and the length of the file (in terms of blocks required), we can determine the
blocks occupied by the file.
The directory entry for a file with contiguous allocation contains

 Address of starting block


 Length of the allocated portion.

The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks.
Therefore, it occupies 19, 20, 21, 22, 23, 24 blocks.


Advantages:
 Both the Sequential and Direct Accesses are supported by this. For direct access, the
address of the kth block of the file which starts at block b can easily be obtained as (b+k).
 This is extremely fast since the number of seeks is minimal because of the contiguous allocation of file blocks.

Disadvantages:
 This method suffers from both internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous memory
at a particular instance.

Linked List Allocation :

In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.

The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last
block (25) contains -1 indicating a null pointer and does not point to any other block.


Advantages:
 This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.

Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
 It does not support random or direct access. We cannot directly access the blocks of a file.
A block k of a file can be accessed by traversing k blocks sequentially (sequential access)
from the starting block of the file via block pointers.
 Pointers required in the linked allocation incur some extra overhead.
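
A few lines of Python make the access-cost difference between the two schemes concrete. This is an illustrative sketch only; the contiguous numbers mirror the 'mail' example above, and the linked chain for 'jeep' is an assumed layout:

# Contiguous: the directory stores (start, length); block k is start + k.
def contiguous_block(start, length, k):
    assert k < length, "block index beyond end of file"
    return start + k

print(contiguous_block(19, 6, 3))      # file 'mail': 19 + 3 = 22, pure arithmetic

# Linked: each block stores the number of the next block (-1 = end of file).
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}   # assumed chain for 'jeep'

def linked_block(start, k):
    b = start
    for _ in range(k):                 # k pointer hops = k extra disk reads
        b = next_block[b]
    return b

print(linked_block(9, 4))              # 25, reached only after 4 hops

The contrast is exactly the one described above: contiguous allocation finds block k with one addition, while linked allocation must read every intervening block to follow the pointers.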

2. Compare and Contrast Free space management and Swap space management.
Ans:
Free Space Management:

As we know, the hard disk space in our system is limited. We need to use this space wisely. A file system is responsible for allocating free blocks to files, so it has to keep track of all the free blocks present on the disk. The operating system therefore manages the free space on the hard disk (the space left over between allocations or released by deleting files) using free space management techniques.

There are four methods of doing free space management in operating systems. These are as
follows-
 Bit Vector


 Linked List
 Grouping
 Counting

Bit Vector:

The first method that we will discuss is the bit vector method. Also known as the bit map, this is
the most frequently used method to implement the free space list. In this method, each block in
the hard disk is represented by a bit (either 0 or 1). If a block has a bit 0 means that the block is
allocated to a file, and if a block has a bit 1 means that the block is not allocated to any file, i.e.,
the block is free.
For example, consider a disk having 16 blocks where block numbers 2, 3, 4, 5, 8, 9, 10, 11, 12,
and 13 are free, and the rest of the blocks, i.e., block numbers 0, 1, 6, 7, 14 and 15 are allocated
to some files. The bit vector for this disk will look like this-

We can find the first free block number from the bit vector using the following method-
Block number = (Number of bits per word) * (number of 0-value words) + (offset of first 1 bit)
We will now find the first free block number in the above example, whose bit vector is 00111100 11111100.
The first group of 8 bits (00111100) constitutes a non-zero word since not all of its bits are 0. After finding the non-zero word, we look for the first 1 bit. It is the third bit of the word, so its zero-based offset is 2.
Therefore, the first free block number = 8 * 0 + 2 = 2, which is indeed the first free block (blocks 0 and 1 are allocated).
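
The same calculation can be sketched in a few lines of Python, for the 16-block disk above with bit 1 meaning free:

# Bit map stored as two 8-bit words: blocks 0,1,6,7,14,15 allocated (0),
# the remaining blocks free (1).
words = [0b00111100, 0b11111100]
BITS_PER_WORD = 8

def first_free_block(words):
    for w, word in enumerate(words):
        if word == 0:
            continue                           # every block in this word is allocated
        for offset in range(BITS_PER_WORD):    # scan bits from most significant
            if word & (1 << (BITS_PER_WORD - 1 - offset)):
                return BITS_PER_WORD * w + offset
    return -1                                  # no free block on the disk

print(first_free_block(words))                 # 2  (= 8 * 0 + offset 2)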

Linked List:

Another method of doing free space management in operating systems is a linked list. In this
method, all the free blocks existing in the disk are linked together in a linked list. The address of
the first free block is stored somewhere in the memory. Each free block contains a pointer that
contains the address to the next free block. The last free block points to null, indicating the end
of the linked list.
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files. If we maintain a linked list, then Block 3 will contain a pointer to Block 4, and
Block 4 will contain a pointer to Block 5.
Similarly, Block 5 will point to Block 6, Block 6 will point to Block 9, Block 9 will point to
Block 10, Block 10 will point to Block 11, Block 11 will point to Block 12, Block 12 will point
to Block 13 and Block 13 will point to Block 14. Block 14 will point to null. The address of the
first free block, i.e., Block 3, will be stored somewhere in the memory. This is also represented in the following figure-

Grouping:

The third method of free space management in operating systems is grouping. This method is the
modification of the linked list method. In this method, the first free block stores the addresses of
n free blocks. The first n-1 of these blocks are actually free. The last block among these n free blocks contains the addresses of the next n free blocks, and so on.
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files.
If we apply the Grouping method considering n to be 3, Block 3 will store the addresses of Block
4, Block 5, and Block 6. Similarly, Block 6 will store the addresses of Block 9, Block 10, and
Block 11. Block 11 will store the addresses of Block 12, Block 13, and Block 14. This is also
represented in the following figure-

This method overcomes the disadvantages of the linked list method. The addresses of a large
number of free blocks can be found quickly, just by going to the first free block or the nth free
block. There is no need to traverse the whole list, which was the situation in the linked list
method.

Counting:

This is the fourth method of free space management in operating systems. This method is also a
modification of the linked list method. This method takes advantage of the fact that several
contiguous blocks may be allocated or freed simultaneously. In this method, a linked list is
maintained, but in addition to the pointer to the next free block, a count of the free contiguous blocks that follow the first block is also maintained. Thus each free block in the disk will contain two
things-
1. A pointer to the next free block.
2. The number of free contiguous blocks following it.

For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11, 12, 13,
and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15 and 16 are allocated
to some files.
If we apply the counting method, Block 3 will point to Block 4 and store the count 4 (since
Block 3, 4, 5, and 6 are contiguous). Similarly, Block 9 will point to Block 10 and keep the count
of 6 (since Block 9, 10, 11, 12, 13, and 14 are contiguous). This is also represented in the
following figure-

This method also overcomes the disadvantages of the linked list method since there is no need to
traverse the whole list.

Note:

In the grouping method, the first free block stores the addresses of the next n free blocks, and in the counting method, a free block stores the count of the following contiguous free blocks along with a pointer to the next free block. Both these methods are used to overcome the drawbacks of the linked list method.

Swap Space Management (for contrast):

Swap space management, by contrast, deals with the disk space that the operating system uses as an extension of main memory: pages (or entire processes) swapped out of RAM are written to swap space. Swap space can be carved out of the normal file system or, more commonly, placed in a separate raw disk partition with its own allocator. The key difference is the goal: free space management tracks free disk blocks for files and directories and aims at good storage utilization, whereas swap space management allocates the disk space that backs virtual memory, where speed of allocation and access matters more than space efficiency, because swap space is used so heavily.

3. Explain the Indexed file allocation method with an example.


Ans:

Indexed Allocation: In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in
the index block contains the disk address of the ith file block. The directory entry contains the
address of the index block as shown in the image:


Advantages:
 This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
 It overcomes the problem of external fragmentation.

Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation keeps one entire block (the index block) for the pointers, which is inefficient in terms of memory utilization. In linked allocation, by contrast, we lose the space of only 1 pointer per block.
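
The lookup itself is a single table access, which is what gives indexed allocation its direct-access property. A minimal sketch (the index-block contents are hypothetical):

# The directory entry stores only the index block; the index block lists
# the disk address of every block of the file.
index_block = [9, 16, 1, 10, 25]      # assumed disk blocks of one file

def file_block(i):
    return index_block[i]             # i-th entry -> address of the i-th file block

print(file_block(3))                  # 10, found without traversing any blocks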

4. Illustrate the concept of File Mounting with a neat diagram.


Ans:
File Mounting:
Mounting is a process by which the operating system makes files and directories on a storage device (such as a hard drive, CD-ROM, or network share) available for users to access via the computer's file system.
The basic idea is just like a file must be opened before it is used, a file system must be mounted
before it can be available to processes on the system.
More specifically, the directory structure can be built out of multiple volumes, which must be
mounted to make them available within the file-system name space. The mount procedure is
straightforward. The operating system is given the name of the device and the mount point—the
location within the file structure where the file system is to be attached. Typically, a mount point
is an empty directory.

Mounting refers to the grouping of files into a file system structure accessible to a user or a group of users. It can be local or remote: local mounting connects disk drives within one machine, while remote mounting uses the Network File System (NFS) to connect to directories on other machines so that they can be used as if they were part of the user's file system.

Fig 1: A File system before mounting

Fig 2: A File system after mounting


Note: a system may allow the same file system to be mounted repeatedly, at different mount
points; or it may only allow one mount per file system.
Unmounting: is the process opposite to mounting, in which the operating system cuts off all user access to files and directories on the mount point.

5. Describe the access methods of a file.


Ans: When a file is used, information is read and accessed into computer memory and there
are several ways to access this information of the file. Some systems provide only one access
method for files. Other systems, such as those of IBM, support many access methods, and
choosing the right one for a particular application is a major design problem.
There are three ways to access a file into a computer system:


1. Sequential-Access,
2. Direct Access,
3. Index sequential Method.
Sequential Access –

It is the simplest access method. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Read and write operations make up the bulk of the operations on a file. A read operation (read next) reads the next portion of the file and automatically advances a file pointer, which keeps track of the I/O location. Similarly, a write operation (write next) appends to the end of the file and advances the pointer to the end of the newly written material.

Key points:
 Data is accessed one record right after another, in order.
 When we use the read command, the pointer moves ahead by one record.
 When we use the write command, space is allocated and the pointer moves to the end of the file.
 Such a method is reasonable for tapes.

Direct Access –
Another method is the direct access method, also known as the relative access method. The file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. Direct access is based on the disk model of a file, since a disk allows random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17. There is no restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally a relative block number: the first relative block of the file is 0, then 1, and so on.
Advantages of Direct Access Method:
 The files can be immediately accessed decreasing the average access time.
 In the direct access method, in order to access a block, there is no need of traversing all the
blocks present before it.

Index sequential method –

It is another method of accessing a file, built on top of the sequential access method. These methods construct an index for the file. The index, like an index in the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then use the pointer to access the file directly.
Key points:
 It is built on top of sequential access.
 It controls the pointer by using the index.

6. Explain FCFS, SSTF Disk scheduling Algorithms with an example.


Ans:
First Come First Serve (FCFS) Disk Scheduling Algorithm:
FCFS is the simplest disk scheduling algorithm. As the name suggests, this algorithm
entertains requests in the order they arrive in the disk queue. The algorithm looks very fair and
there is no starvation (all requests are serviced sequentially) but generally, it does not provide
the fastest service.
Example:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50

The following chart shows the sequence in which requested tracks are serviced using FCFS.
Note: FCFS services the first request, then the next, and so on…

Therefore, the total seek count is calculated as:

= (176-50)+(176-79)+(79-34)+(60-34)+(92-60)+(92-11)+(41-11)+(114-41)
= 510
Total number of seek operations = 510

Shortest Seek Time First (SSTF) –

The basic idea is that the tracks which are closer to the current disk head position should be serviced first, in order to minimise the seek operations.

Example –
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50

The following chart shows the sequence in which requested tracks are serviced using SSTF.


Note:
Service all requests close (nearer) to the current head position.

Therefore, total seek count is calculated as:


= (50-41)+(41-34)+(34-11)+(60-11)+(79-60)+(92-79)+(114-92)+(176-114)
= 204
Which can also be directly calculated as: (50-11)+(176-11) = 204 tracks covered.

Advantages:

1. Better performance than FCFS scheduling algorithm.


2. It provides better throughput.
3. This algorithm is used in batch processing systems where throughput is more important.
4. It has less average response and waiting time.

Disadvantages:
1. Starvation is possible for some requests, as SSTF favours easy-to-reach requests and ignores far-away ones.
2. There is a lack of predictability because of the high variance of response time.
3. Switching direction slows things down.
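
Both totals can be verified with a short simulation. The Python sketch below (not part of the original answer) computes the total seek count for FCFS and SSTF on the same request queue:

def fcfs_seek(requests, head):
    total = 0
    for r in requests:                    # service strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_seek(requests, head):
    pending, total = list(requests), 0
    while pending:
        r = min(pending, key=lambda t: abs(t - head))   # nearest track first
        total += abs(r - head)
        head = r
        pending.remove(r)
    return total

queue = [176, 79, 34, 60, 92, 11, 41, 114]
print(fcfs_seek(queue, 50))    # 510
print(sstf_seek(queue, 50))    # 204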

7.Illustrate SCAN and C-SCAN Disk scheduling with example


Ans:
SCAN (Elevator) Disk Scheduling Algorithm:

In the SCAN disk scheduling algorithm, the head starts from one end of the disk and moves towards the other end, servicing the requests in between one by one until it reaches the other end. The direction of the head is then reversed and the process continues, with the head continuously scanning back and forth to access the disk. This algorithm works like an elevator and is hence also known as the elevator algorithm. As a result, requests in the midrange are serviced more often, and those arriving just behind the disk arm have to wait.

Example:

Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}

Initial head position = 50

Direction = left (We are moving from right to left)

The following chart shows the sequence in which requested tracks are serviced using SCAN.

Note:
Head starts at one end and moves towards the other end, servicing the requests on its way. At
the end the head movement direction is reversed and servicing continues.

Therefore, the total seek count is calculated as:


= (50-41)+(41-34)+(34-11)
+(11-0)+(60-0)+(79-60)
+(92-79)+(114-92)+(176-114)
= 226 tracks.

Advantages of SCAN (Elevator) algorithm


1. This algorithm is simple and easy to understand.
2. The SCAN algorithm has no starvation.
3. This algorithm is better than FCFS Scheduling algorithm.


Disadvantages of SCAN (Elevator) algorithm

1. It is a more complex algorithm to implement.
2. This algorithm is not fair because it causes a long waiting time for the cylinders just visited by the head.
3. It causes the head to move till the end of the disk even when no requests remain in that direction; requests arriving just ahead of the arm position get almost immediate service, but requests that arrive just behind the arm position have to wait until the head sweeps back.

C-SCAN Disk Scheduling Algorithm:

The C-SCAN algorithm, also known as the Circular Elevator algorithm, is a modified version of the SCAN algorithm. In this algorithm, the head pointer starts from one end of the disk and moves towards the other end, serving all requests in between. After reaching the other end, the head returns to the starting point without serving any requests on the way back, and then satisfies the remaining requests, moving in the same direction as before.

Example :
Consider a disk with 200 tracks (0-199)
and the disk queue having I/O requests in the following order as follows:
98, 183, 40, 122, 10, 124, 65
The current head position of the Read/Write head is 53
and will move in Right direction.
The following chart shows the sequence in which requested tracks are serviced using C-
SCAN.


Total head movements

= (65 - 53) + (98 - 65)
+ (122 - 98)
+ (124 - 122) + (183 - 124)
+ (199 - 183) + (199 - 0)
+ (10 - 0) + (40 - 10)
= 385
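
As a cross-check, the sketch below (not part of the original answer) simulates both sweeps: SCAN on the Q6 queue (head at 50, moving left, travelling down to track 0) and C-SCAN on the queue above (head at 53, moving right on a 0-199 disk, with the full jump back to track 0 counted):

def scan_seek(requests, head):
    # Move left to track 0 first, then reverse and sweep right.
    left = sorted((r for r in requests if r <= head), reverse=True)
    right = sorted(r for r in requests if r > head)
    total, pos = 0, head
    for r in left + [0] + right:
        total += abs(r - pos)
        pos = r
    return total

def cscan_seek(requests, head, max_track=199):
    # Move right to the last track, jump to 0, then sweep right again.
    right = sorted(r for r in requests if r >= head)
    left = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in right + [max_track]:
        total += abs(r - pos)
        pos = r
    total += max_track                    # jump from track 199 back to track 0
    pos = 0
    for r in left:
        total += abs(r - pos)
        pos = r
    return total

print(scan_seek([176, 79, 34, 60, 92, 11, 41, 114], 50))     # 226
print(cscan_seek([98, 183, 40, 122, 10, 124, 65], 53))       # 385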

8. Write a short note on Directory implementation


Ans:
Directory Implementation in Operating System:

Directory implementation in the operating system can be done using Singly Linked List and Hash
table. The efficiency, reliability, and performance of a file system are greatly affected by the
selection of directory-allocation and directory-management algorithms. There are numerous ways
in which the directories can be implemented. But we need to choose an appropriate directory
implementation algorithm that enhances the performance of the system.


Directory Implementation using Singly Linked List

The implementation of directories using a singly linked list is easy to program but is time-
consuming to execute. Here we implement a directory by using a linear list of filenames with
pointers to the data blocks.

Directory Implementation Using Singly Linked List

 To create a new file, the entire list has to be searched to make sure that a file with the same name does not already exist.
 The new entry can then be added at the end of the list or at the beginning of the list.
 In order to delete a file, we first search the directory for the name of the file to be deleted. After finding it, we delete the file by releasing the space allocated to it.
 To reuse the directory entry, we can mark the entry as unused or append it to a list of free directory entries.
 For deletion, a linked list is a good choice as it takes less time.

Disadvantage

The main disadvantage of using a linked list is that finding a file requires a linear search. Directory information is used quite frequently, and a linked-list implementation results in slow access to a file. So the operating system maintains a cache to store the most recently used directory information.

Directory Implementation using Hash Table

An alternative data structure that can be used for directory implementation is a hash table. It
overcomes the major drawbacks of directory implementation using a linked list. In this method,
we use a hash table along with the linked list. Here the linked list stores the directory entries, but
a hash data structure is used in combination with the linked list.
In the hash table, a key-value pair is generated for each entry in the directory. A hash function applied to the file name determines the key, and this key points to the corresponding file entry stored in the directory. This method efficiently decreases the directory search time, as the entire list will not be searched on every operation: using the keys, the hash table entries are checked, and when the file is found, it is fetched.

Directory Implementation Using Hash Table

Disadvantage:
The major drawback of using a hash table is that it generally has a fixed size, and the hash function depends on that size. Even so, this method is usually faster than a linear search through an entire directory using a linked list.
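
A minimal sketch of the idea in Python (the built-in dict plays the role of the hash table; a real implementation would use a fixed-size table with collision handling, and the file names and block numbers are assumed):

entries = []     # linear list of directory entries: (file_name, first_block)
index = {}       # hash table mapping file name -> its entry

def create(name, first_block):
    if name in index:                 # whole-list search replaced by one probe
        raise FileExistsError(name)
    entry = (name, first_block)
    entries.append(entry)
    index[name] = entry

def lookup(name):
    return index[name]                # one hash probe, no list traversal

create("mail", 19)
create("jeep", 9)
print(lookup("jeep"))                 # ('jeep', 9)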

9. Explain the file system structure and implementation.


Ans: File System Implementation in Operating System:

A file is a collection of related information. The file system resides on secondary storage and
provides efficient and convenient access to the disk by allowing data to be stored, located, and
retrieved.
File system implementation in an operating system refers to how the file system manages the
storage and retrieval of data on a physical storage device such as a hard drive, solid-state drive, or
flash drive. The file system implementation includes several components, including:
1. File System Structure: The file system structure refers to how the files and directories are
organized and stored on the physical storage device. This includes the layout of the file system's data structures, such as the directory structure, file allocation table, and inodes.
2. File Allocation: The file allocation mechanism determines how files are allocated on the
storage device. This can include allocation techniques such as contiguous allocation, linked
allocation, indexed allocation, or a combination of these techniques.


3. Data Retrieval: The file system implementation determines how the data is read from and
written to the physical storage device. This includes strategies such as buffering and caching
to optimize file I/O performance.
4. Security and Permissions: The file system implementation includes features for managing
file security and permissions. This includes access control lists (ACLs), file permissions, and
ownership management.
5. Recovery and Fault Tolerance: The file system implementation includes features for
recovering from system failures and maintaining data integrity. This includes techniques such
as journaling and file system snapshots.
File system implementation is a critical aspect of an operating system as it directly impacts the
performance, reliability, and security of the system. Different operating systems use different file
system implementations based on the specific needs of the system and the intended use cases.
Some common file systems used in operating systems include NTFS and FAT in Windows, and
ext4 and XFS in Linux.
The file system is organized into many layers:

1. I/O Control level – Device drivers act as an interface between devices and OS, they help to
transfer data between disk and main memory. It takes block number as input and as output, it
gives low-level hardware-specific instruction.
2. Basic file system – It issues general commands to the device driver to read and write physical blocks on disk. It manages the memory buffers and caches. A block in the buffer can hold the contents of a disk block, and the cache stores frequently used file-system metadata.
3. File organization module – It has information about files, the location of files, and their logical and physical blocks. Since physical block addresses do not match the logical block numbers (numbered 0 to N), it translates between the two. It also manages the free space that tracks unallocated blocks.
4. Logical file system – It manages metadata, i.e., all details about a file except its actual contents. It maintains file structure via file control blocks. A File Control Block (FCB) holds information about a file – owner, size, permissions, and location of the file contents.

10. Explain the following with relevant diagrams:


a) Two-Level directory structure b) Acyclic –Graph directory structure.
Ans:


Two-Level directory structure:


A single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user files directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a user logs in; the MFD is indexed by user name or account number, and each entry points to that user's UFD.

Advantages:
 The main advantage is that different users can have files with the same name, which is very helpful when there are multiple users.
 Security is provided, preventing one user from accessing another user's files.
 Searching for files becomes very easy in this directory structure.
Disadvantages:
 The flip side of the security advantage is that a user cannot share a file with other users.
 Users can create their own files, but they do not have the ability to create subdirectories.

Acyclic Graph Structure:

Other directory structures (single-level, two-level and tree-structured) do not have the capability to access one file from multiple directories. A file or subdirectory can be accessed only through the directory it is present in, not from any other directory.
This problem is solved in the acyclic-graph directory structure, where a file in one directory can be accessed from multiple directories. In this way, files can be shared between users.
It is designed in a way that multiple directories point to a particular directory or file with the help of links.
In the figure below, this can be observed where a file is shared between multiple users. If any user makes a change, it is reflected for all the users sharing the file.

Advantages:
 Sharing of files and directories is allowed between multiple users.
 Searching becomes too easy.
 Flexibility is increased as file sharing and editing access is there for multiple users.
Disadvantages:
 Because of its complex structure, this directory structure is difficult to implement.
 The user must be very cautious when editing or even deleting a file, as the file may be accessed by multiple users.
 If we need to delete the file, then we need to delete all the references of the file in order to
delete it permanently.

…………………………….UNIT-5 END ………………………………………
