
DIPLOMA CSE

III SEMESTER / II YEAR


OPERATING SYSTEM (DICS-320)
Some Important Guidelines for the Question Bank Setter:
1. The question bank must cover the Course (subject) Learning Outcomes (CLOs) and Bloom's taxonomy
levels (L1: Remember, L2: Understand, L3: Apply, L4: Analyse); details in this regard are attached with
the mail.
2. The question bank should be prepared in the given format, which is also attached.
3. Each question and its parts should be written in clear language. Break long questions into
relatively shorter sentences.
4. Repetition of a question is not allowed.
5. The file should be sent in MS-Word format.
6. The font size of the content should be Arial (font size 12) for English & Kruti Dev 010 (font size 14) for
Hindi.
7. Wherever the question papers have been prepared in both Hindi and English languages, the Hindi
version of the question should be written immediately after English version of each question.
8. In case of MBA Course, Section-C must contain Case studies (one case study per unit or
numerical type questions as per the format).
9. If the syllabus contains more than 5 units or less than 5 units then update the format accordingly.
10. A question bank moderation committee will be formed by the Dean of the concerned college
under the supervision of the concerned department HOD. The committee will check and ensure that the
question bank is prepared according to the guidelines. After that, they will make a folder according
to the Program, Branch and semester/year, and will ensure that all question banks are
available according to the evaluation scheme. HODs will submit all folders to the CoE Office.
SECTION-A (Very Short Answer Type Questions)
UNIT-I
S.No.    Question    CO    Bloom's Taxonomy
a) What is an operating system? (CO1, L1)
ऑपरेटिंग सिस्टम क्या है?
An operating system (OS) is a fundamental piece of
system software that manages computer hardware, software
resources, and provides a platform for applications to run. It
serves as an intermediary between users and the computer
hardware, ensuring that users and programs can efficiently
interact with the machine. The operating system is essential
for the operation of any modern computing device, whether
it's a personal computer, a smartphone, a server, or even an
embedded device like a smart TV.
Core Functions of an Operating System
The operating system performs several critical functions,
including:
1. Process Management:
o A process is a program in execution. The OS
manages processes by ensuring that each
process gets the necessary resources (like CPU
time and memory) to run.
o It schedules processes, controls their execution,
and handles multitasking by managing the
switching between multiple processes, ensuring
that each gets its fair share of system resources.
2. Memory Management:
o The OS is responsible for managing the
computer's memory (RAM). It allocates memory
to different processes, ensuring that they don't
interfere with each other.
o Memory management includes functions such as
memory allocation, deallocation, and handling
virtual memory, which allows a system to use
disk storage as extra "RAM" when the physical
RAM is full.
3. File System Management:
o The OS manages the file system, which is how
data is stored, retrieved, and organized on
storage devices like hard drives, SSDs, or
network drives.
o It provides a hierarchical structure for organizing
files into directories, ensures data integrity, and
controls file permissions for security.
4. Device Management:
o The operating system manages the input/output
(I/O) devices, such as keyboards, mice, printers,
displays, and network interfaces.
o Through device drivers (specialized programs),
the OS communicates with hardware
components, sending and receiving data in
formats the hardware can understand.
5. User Interface (UI):
o The OS provides a user interface, which can be
either command-line-based (CLI) or graphical
(GUI).
o A GUI allows users to interact with the computer
using graphical icons, buttons, and windows,
while a CLI requires users to type text
commands.
6. Security and Access Control:
o The OS provides security by implementing user
authentication (like passwords) and ensuring that
unauthorized users cannot access the system.
o It enforces access control mechanisms to restrict
which resources or files can be accessed by
specific users or programs, helping to protect the
system from malicious activities and ensuring
data privacy.
7. Networking:
o The OS facilitates communication between
computers over a network. It handles network
protocols, data transmission, and network
interface management.
o For example, it manages how data is sent and
received over Wi-Fi or Ethernet, ensuring that
applications can access network resources
seamlessly.
8. System Performance Monitoring:
o The OS monitors and optimizes system
performance. It tracks system resources like CPU
usage, memory usage, and storage utilization,
and may provide tools or utilities for users to
view this data.
o Performance management can include load
balancing, prioritization of tasks, and resource
allocation to ensure optimal performance.
Types of Operating Systems
There are different types of operating systems, each suited
to specific environments or use cases:
1. Batch Operating Systems:
o These systems process jobs in batches without
user interaction. Early mainframes used batch
OS for efficient processing of large amounts of
data.
2. Time-sharing Operating Systems:
o These systems allow multiple users to share the
same system simultaneously. Time-sharing
enables the OS to switch between tasks quickly
so that users perceive the system as being
responsive and interactive. Modern desktop OSes
(like Windows or macOS) are examples of time-
sharing systems.
3. Real-time Operating Systems (RTOS):
o These OSs are designed for systems where
timely and predictable responses are critical,
such as embedded systems, robotics, or medical
devices. RTOS ensures that tasks are completed
within strict time constraints.
4. Single-user, Single-task Operating Systems:
o These are designed for use by a single user
performing one task at a time. An example would
be early versions of mobile OSes or simple
embedded systems.
5. Multi-user, Multi-tasking Operating Systems:
o Modern OSes like Windows, Linux, and macOS
allow multiple users to log in simultaneously,
while each user can run multiple applications
concurrently.
6. Distributed Operating Systems:
o These OSes manage a collection of separate
physical computers that appear to users as a
single cohesive system. They coordinate tasks,
resource allocation, and communication across a
network of machines.
7. Mobile Operating Systems:
o These are specifically designed for mobile
devices like smartphones and tablets. Popular
examples include Android, iOS, and HarmonyOS.
These OSes are optimized for touch interfaces,
low power consumption, and connectivity.
Examples of Popular Operating Systems
1. Microsoft Windows:
o One of the most widely used operating systems
for personal computers and business
environments. It provides a graphical user
interface, multitasking capabilities, and
extensive support for software and hardware.
2. macOS:
o The OS developed by Apple for Mac computers. It
is known for its sleek user interface, integration
with Apple's hardware, and a strong ecosystem
of apps for creative professionals.
3. Linux:
o An open-source operating system that powers
everything from personal computers to servers
and embedded systems. Linux is highly
customizable and is used by developers, system
administrators, and hobbyists. Popular
distributions (or "distros") include Ubuntu,
Debian, and Fedora.
4. Android:
o A mobile OS based on the Linux kernel, Android
is the most popular OS for smartphones and
tablets. It offers extensive app support and
flexibility, allowing manufacturers to customize
the OS for their devices.
5. iOS:
o Apple's mobile operating system for iPhones and
iPads. It is known for its seamless integration
with Apple hardware, security features, and a
controlled app ecosystem.
Conclusion
The operating system is the backbone of any computing
device, responsible for managing hardware resources,
executing programs, and providing users with an interface to
interact with the system. Its importance cannot be
overstated as it enables the operation of complex hardware,
ensures security and efficiency, and allows users and
software to coexist and operate effectively on the same
system. Without an OS, the hardware would be nearly
impossible to use, as there would be no management of
resources, processes, or communication with users.

b) What is a multiprocessor? (CO1, L1)
मल्टीप्रोसेसर क्या है?
A multiprocessor refers to a computer system that has more than
one central processing unit (CPU) or processor, which can work
simultaneously to perform tasks. The main advantage of a
multiprocessor system is that it can handle multiple processes at
once, improving the overall performance and speed of computation,
especially for complex or large-scale tasks.
Key Characteristics of Multiprocessor Systems:
1. Multiple CPUs: A multiprocessor system contains two or more
processors (CPUs) that can perform tasks concurrently.
2. Shared Memory: In many multiprocessor systems, all
processors may have access to a common memory space,
which allows them to share data efficiently. This is called
shared-memory multiprocessing.
3. Parallel Processing: With multiple processors, a
multiprocessor system can divide large tasks into smaller
subtasks and run them simultaneously, thus speeding up the
execution time for certain applications. This is called parallel
processing.
4. Coordination: The processors in a multiprocessor system
often need to communicate and synchronize their actions to
ensure the correct execution of tasks. This can be managed by
a specialized operating system.
Types of Multiprocessor Systems:
1. Symmetric Multiprocessing (SMP): In an SMP system, all
processors have equal access to the memory and can work
independently. The operating system treats all processors as
peers, and they share the same resources.
2. Asymmetric Multiprocessing (AMP): In an AMP system, one
processor (called the master processor) controls the others
(called slave processors). The slave processors handle
specific tasks and communicate with the master processor for
coordination.
3. Clustered Multiprocessing: This involves a collection of
independent systems (or nodes) working together as a single
system. Each node has its own memory and processor, and
communication happens through a network.
Benefits of Multiprocessors:
 Increased Performance: By running tasks in parallel,
multiprocessor systems can significantly reduce the time
required for computation-intensive processes.
 Reliability and Fault Tolerance: If one processor fails, others
can take over the work, making the system more fault-tolerant.
 Scalability: Multiprocessor systems can often be scaled by
adding more processors to handle larger workloads.
Applications:
 Scientific Computing: Tasks like simulations and modeling
benefit greatly from the parallel processing capabilities of
multiprocessor systems.
 Servers and Datacenters: Multiprocessor systems are
commonly used in servers to handle multiple user requests
simultaneously.
 Real-time Systems: Certain real-time applications, such as
video rendering or gaming, require the power of
multiprocessors to achieve smooth performance.
Overall, multiprocessor systems are crucial for handling modern
computing tasks that require high processing power and speed.
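The parallel-processing idea described above can be sketched in Python's `multiprocessing` module: a large task is divided into subtasks that run simultaneously on separate worker processes. This is a minimal illustrative sketch, not part of the syllabus; the `square` function and the pool size of 4 are arbitrary choices.

```python
# Minimal sketch of parallel processing on a multiprocessor system:
# split a large task into subtasks and run them on worker processes.
from multiprocessing import Pool

def square(n):
    # Each worker process computes one subtask independently.
    return n * n

def run_parallel():
    # Up to 4 worker processes execute the subtasks concurrently;
    # map() collects results back in the original order.
    with Pool(processes=4) as pool:
        return pool.map(square, range(8))

if __name__ == "__main__":
    results = run_parallel()
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `if __name__ == "__main__"` guard is required on platforms that spawn worker processes by re-importing the script.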

c) Explain a real-time operating system. (CO2, L2)
रियल टाइम ऑपरेटिंग सिस्टम के बारे में बताएं?
A Real-Time Operating System (RTOS) is an operating system
designed to handle tasks within a specific time constraint, often
referred to as deadlines. It ensures that critical tasks are processed
in a timely and predictable manner, making it suitable for applications
where timing is crucial, such as embedded systems, industrial control,
automotive systems, and robotics.
Key Features of an RTOS:
1. Predictability: RTOS ensures that tasks are completed within
a defined time frame (real-time).
2. Multitasking: It can manage multiple tasks simultaneously
while meeting their deadlines.
3. Task Prioritization: RTOS assigns priorities to tasks, ensuring
that higher-priority tasks are executed first.
4. Interrupt Handling: It can quickly respond to external events
through interrupts to handle time-sensitive operations.
Types of RTOS:
 Hard RTOS: Guarantees that critical tasks will always meet
deadlines.
 Soft RTOS: Ensures tasks are completed in a timely manner
but with some flexibility in meeting deadlines.
Examples:
 Automotive control systems
 Medical devices
 Telecommunications systems
In summary, an RTOS is essential for environments where timing,
reliability, and precision are crucial.

d) What is a distributed operating system? (CO1, L1)
डिस्ट्रीब्यूटेड ऑपरेटिंग सिस्टम क्या है?
A Distributed Operating System (DOS) is an operating
system that manages a group of independent computers or
nodes and makes them appear as a single unified system to
users and applications. It enables resources, such as
memory, processing power, and storage, to be shared and
accessed across multiple machines, often over a network.
Key Features:
1. Transparency: Users and applications are unaware of
the underlying distribution of resources (e.g., location
of files or processes).
2. Concurrency: Multiple processes can run
simultaneously across different machines.
3. Fault Tolerance: The system can continue functioning
even if one or more nodes fail.
4. Resource Sharing: Distributed systems allow
resources like CPU, memory, and storage to be shared
among multiple nodes.
Examples:
 Cloud computing environments
 Large-scale web applications
 Distributed databases
In summary, a distributed operating system allows multiple
computers to work together efficiently, presenting
themselves as a single cohesive system to end users.
e) Explain the advantages and disadvantages of operating systems. (CO3, L2)
ऑपरेटिंग सिस्टम के फायदे और नुकसान बताएं?
Operating systems (OS) are crucial for managing hardware
and software resources in a computer system. While they
offer several advantages, they also have certain limitations.
Here’s an overview of the advantages and disadvantages
of operating systems:
Advantages of Operating Systems:
1. Resource Management:
o An OS efficiently manages the computer's
hardware resources (CPU, memory, storage,
etc.), ensuring that each application and user
gets the necessary resources without conflict.
2. User Interface:
o Provides a user interface (UI), such as command-
line or graphical user interface (GUI), making it
easier for users to interact with the system and
run programs.
3. Multitasking and Multithreading:
o Modern operating systems enable multitasking
(running multiple applications at once) and
multithreading (executing parts of a program
simultaneously), improving system efficiency.
4. Security and Access Control:
o OS provides security features like user
authentication, encryption, and access control,
ensuring that only authorized users can access
the system and its resources.
5. File Management:
o OS handles file storage, retrieval, and
organization, allowing users to save, modify, and
organize files in a structured manner.
6. Error Detection and Handling:
o Detects and manages hardware or software
errors, ensuring the system runs smoothly and
recovers from unexpected issues.
7. Networking:
o Allows computers to connect to networks,
facilitating data sharing, communication, and
access to remote resources (e.g., the internet).

Disadvantages of Operating Systems:


1. Complexity:
o Operating systems can be complex to design and
manage, especially in systems with multiple
users or complex hardware. This complexity can
make the OS harder to maintain and
troubleshoot.
2. Resource Consumption:
o Operating systems require system resources
(memory, CPU) to run, which can reduce the
resources available for running applications,
especially in resource-constrained devices.
3. Vulnerabilities and Security Risks:
o While OS provides security, they are still
vulnerable to attacks such as malware, viruses,
and hacking. Flaws in OS security features can
be exploited by malicious actors.
4. Overhead:
o The OS introduces overhead due to tasks like
managing processes, memory, and hardware.
This can reduce overall system performance,
especially if the OS is not optimized.
5. Compatibility Issues:
o Some software applications may not be
compatible with certain OS versions, or there
may be compatibility issues between hardware
and OS, leading to difficulties in running specific
programs.
6. Updates and Maintenance:
o OS requires regular updates and patches to fix
bugs, enhance security, and improve
performance. These updates can be time-
consuming and may require system reboots,
which can interrupt normal operations.

Summary:
Operating systems provide essential services for managing
hardware and enabling applications to run efficiently, but
they also come with complexities, overhead, and security
challenges that need to be carefully managed.

f) Explain the advantages and disadvantages of real-time operating systems. (CO2, L3)
रियल टाइम ऑपरेटिंग सिस्टम के फायदे और नुकसान बताएं?
Advantages of Real-Time Operating Systems (RTOS):
1. Timeliness and Predictability:
o RTOS ensures that tasks are executed within a
specific time constraint, which is crucial for time-
sensitive applications (e.g., medical devices,
automotive systems).
2. Reliability:
o Provides high reliability and fault tolerance,
making them ideal for critical systems where
failures can be catastrophic.
3. Task Prioritization:
o RTOS supports priority-based scheduling,
ensuring that the most critical tasks are given
higher priority and executed first.
4. Concurrency:
o Allows multiple tasks to run concurrently while
maintaining real-time performance, improving
system efficiency and responsiveness.
5. Resource Management:
o Efficiently manages system resources such as
memory, processing power, and peripherals to
ensure optimal performance under real-time
constraints.

Disadvantages of Real-Time Operating Systems (RTOS):
1. Complexity:
o RTOS are more complex to design and configure
compared to general-purpose operating systems,
often requiring specialized knowledge.
2. Limited Flexibility:
o RTOS are optimized for specific tasks and may
not be as versatile or capable of handling
general-purpose computing as efficiently as
other operating systems.
3. Resource Intensive:
o Due to their strict timing and reliability
requirements, RTOS can be more resource-
intensive in terms of memory and processing
power.
4. Cost:
o Developing and maintaining an RTOS can be
expensive, especially for specialized or
embedded applications.
5. Limited Software Support:
o RTOS may not support as many applications or
have as large a software ecosystem as general-
purpose operating systems.

In summary, RTOS offers high performance for time-sensitive
applications but comes with challenges such as complexity,
resource demands, and limited flexibility.

g) What is a mainframe? (CO1, L1)
मेनफ्रेम क्या है?
A mainframe is a powerful, large-scale computer designed
to handle and process vast amounts of data and support
many simultaneous users. It is used by large organizations
for critical applications like bulk data processing, enterprise
resource planning (ERP), and transaction processing
systems. Mainframes are known for their reliability,
scalability, and security.
They are typically used in industries like banking, insurance,
and government, where high-volume, mission-critical
workloads need to be managed.

h) Write an example of an operating system. (CO3, L4)
ऑपरेटिंग सिस्टम का उदाहरण लिखें?
Microsoft Windows, macOS, and Android are some of the most
popular operating systems on the market. Linux distributions such
as Ubuntu are oriented more toward tech-savvy users, while
ChromeOS and macOS are more accessible to general users.
i) Explain the functions of the operating system. (CO2, L2)
ऑपरेटिंग सिस्टम के कार्य बताएं?
The functions of an operating system (OS) include:
1. Process Management: The OS manages running
processes, ensuring they are executed efficiently and
in an orderly manner, including task scheduling and
multitasking.
2. Memory Management: It controls the computer's
memory, allocating and deallocating memory to
processes, and ensuring optimal usage.
3. File Management: The OS manages files, directories,
and storage, providing access, organization, and
security of data.
4. Device Management: It controls hardware devices
(like printers, disk drives, and keyboards) by providing
appropriate drivers and interfaces.
5. Security and Access Control: The OS ensures that
the system is secure, managing user authentication,
permissions, and protecting data from unauthorized
access.
6. User Interface: It provides a user interface (like GUI
or command-line) for interaction with the system.
In short, the OS acts as an intermediary between the user
and hardware, managing resources and ensuring smooth,
secure operation.

j) Write the components of the operating system. (CO3, L3)
ऑपरेटिंग सिस्टम के घटक लिखें?
The 8 components of an operating system are: Process
Management, File Management, Network Management, Main
Memory Management, Secondary Storage Management, I/O
Device Management, Security Management, and the Command
Interpreter System.
UNIT-II

S.No.    Question    CO    Bloom's Taxonomy
a) Define a process. (CO2, L1)
प्रक्रिया परिभाषित करें?
A process in an operating system is essentially a program in
execution. The execution of a process must progress in a
sequential manner. A process can be considered an entity that
represents the basic unit of work in a system. In simpler words,
we write computer programs in the form of a text file, and when
we run them, they become processes that perform the tasks
described in the program.
b) Define process synchronization. (CO2, L1)
प्रक्रिया तुल्यकालन को परिभाषित करें?
Process synchronization is the coordination of multiple
processes to ensure that they operate in a correct and
predictable manner, especially when they share resources. It
prevents conflicts and ensures that processes access shared
resources in a way that avoids errors, such as race
conditions. This is typically achieved using synchronization
mechanisms like mutexes, semaphores, and locks.
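The mutex idea above can be sketched in Python: two threads increment a shared counter, and a `threading.Lock` serializes access to the critical section so no updates are lost to a race condition. The function and variable names here are illustrative, not from the syllabus.

```python
# Sketch: two threads update a shared counter; a Lock (mutex)
# makes each increment a critical section, preventing a race
# condition on the shared variable.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread at a time enters here
            counter += 1

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=increment, args=(100_000,))
t1.start()
t2.start()
t1.join()
t2.join()
print(counter)  # 200000 — with the lock, no increments are lost
```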

c) What is process scheduling? (CO1, L2)
प्रोसेस शेड्यूलिंग क्या है?
Process scheduling is the task of selecting which process runs on
the CPU next and removing the running process from the CPU when
required. The operating system maintains scheduling queues (such
as the ready queue) and allocates CPU time among competing
processes. It is a critical part of operating system design, as it
ensures that CPU utilization and system responsiveness are
maintained in a multiprogramming environment.

d) Define a thread. (CO2, L1)
थ्रेड को परिभाषित करें?
A thread is an execution unit within a process that has its own
program counter, stack, and set of registers. Threads cannot exist
outside a process, and each thread belongs to exactly one
process.

e) What is CPU scheduling? (CO1, L2)
सीपीयू शेड्यूलिंग क्या है?
CPU scheduling is a process management technique used
by the operating system to decide which process or thread
will be assigned to the CPU for execution. It aims to optimize
CPU utilization, system responsiveness, and fairness.
Scheduling is crucial in a multitasking environment, where
multiple processes are competing for CPU time.
The operating system uses different scheduling
algorithms (such as First-Come-First-Serve (FCFS),
Round Robin (RR), Shortest Job First (SJF), or Priority
Scheduling) to determine the order in which processes are
executed. The choice of algorithm affects system
performance, particularly in terms of CPU efficiency and
response time.

f) Explain the advantages of threads. (CO3, L3)
थ्रेड के लाभ बताएं?
Advantages of threads include:
1. Improved Performance: Threads within the same
process can run concurrently on multiple CPU cores,
leading to better utilization of system resources and
faster execution.
2. Resource Sharing: Threads share the same memory
and resources of their parent process, which makes
communication between threads more efficient
compared to inter-process communication.
3. Responsiveness: Threads enable better
responsiveness in applications, especially in multi-
threaded programs like web browsers, where one
thread handles user input while others load content.
4. Lower Overhead: Creating and managing threads
typically involves less overhead compared to
processes, as threads share resources with their parent
process.
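The resource-sharing advantage above can be sketched in Python: because threads share their parent process's memory, a result computed by a worker thread is directly visible to the main thread with no inter-process communication. The `load` function and the worker names are illustrative placeholders.

```python
# Sketch: threads share the memory of their parent process, so
# workers can write results into a shared dictionary directly.
import threading

results = {}  # shared memory: visible to all threads in the process

def load(name):
    # Worker thread writes straight into the shared structure.
    results[name] = f"content of {name}"

workers = [threading.Thread(target=load, args=(n,)) for n in ("a", "b")]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)  # both workers' results, with no message passing needed
```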

g) What are the types of schedulers? (CO1, L3)
शेड्यूलर कितने प्रकार के होते हैं?
There are three main types of schedulers in an operating
system:
1. Long-term scheduler (Job scheduler):
o Decides which processes are admitted into the
system (new processes).
o Controls the degree of multiprogramming (how
many processes are in the system at once).
o Runs less frequently, typically when a new
process is created.
2. Short-term scheduler (CPU scheduler):
o Decides which process in the ready queue gets
CPU time next.
o Operates frequently, typically on the order of
milliseconds, as it selects processes for
immediate execution.
o Aims to optimize CPU usage and ensure fair
distribution of CPU time.
3. Medium-term scheduler (Swapper):
o Manages the swapping of processes in and out of
memory (often between main memory and disk).
o Helps balance the number of processes in
memory to avoid overloading the system.
o Runs less frequently and is used to control the
number of processes in the ready queue and
memory.

h) Define FCFS and SJF. (CO1, L1)
एफसीएफएस और एसजेएफ को परिभाषित करें?
FCFS (First-Come, First-Served):
 Definition: FCFS is a non-preemptive CPU scheduling
algorithm where processes are executed in the order
they arrive in the ready queue.
 How it works: The process that arrives first is
executed first. Once a process starts, it runs to
completion without interruption.
 Advantages: Simple to implement and understand.
 Disadvantages: Can lead to poor performance (e.g.,
long waiting times) especially when short processes
are delayed by long processes (convoy effect).
SJF (Shortest Job First):
 Definition: SJF is a CPU scheduling algorithm where
the process with the shortest burst time (execution
time) is selected for execution next.
 How it works: The process with the smallest expected
runtime is given CPU time first, leading to shorter
average waiting time.
 Types:
o Preemptive SJF: If a new process with a shorter
burst time arrives, it preempts the current
running process.
o Non-preemptive SJF: Once a process starts, it
runs to completion, and new shorter processes
are queued until the current process finishes.
 Advantages: Minimizes average waiting time.
 Disadvantages: Requires knowledge of the process's
burst time, which is often not known in advance, and
can cause starvation of longer processes.
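The difference in average waiting time between FCFS and non-preemptive SJF can be worked through with a short sketch. It assumes all jobs arrive at time 0; the burst times (24, 3, 3) are illustrative values chosen to show the convoy effect.

```python
# Sketch: average waiting time under FCFS vs non-preemptive SJF,
# assuming all jobs arrive at time 0.

def avg_wait(bursts):
    # A job's waiting time = total burst time of jobs scheduled before it.
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed
        elapsed += b
    return wait / len(bursts)

bursts = [24, 3, 3]             # arrival order, so also the FCFS order
fcfs = avg_wait(bursts)         # waits 0, 24, 27 -> average 17.0
sjf = avg_wait(sorted(bursts))  # shortest first: waits 0, 3, 6 -> average 3.0
print(fcfs, sjf)  # 17.0 3.0
```

Running the short jobs first cuts the average waiting time from 17.0 to 3.0 time units, which is why SJF minimizes average waiting time while FCFS suffers from the convoy effect.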

i) Define Round Robin scheduling. (CO2, L2)
राउंड रॉबिन शेड्यूलिंग को परिभाषित करें?
Round Robin (RR) Scheduling is a preemptive CPU
scheduling algorithm where each process is assigned a fixed
time slice or quantum.
 How it works: Processes are executed in a cyclic
order. Each process gets a small, equal time slice
(quantum) to execute. If a process doesn't complete
within its quantum, it is preempted and moved to the
back of the ready queue, and the CPU is given to the
next process. This continues until all processes are
finished.
 Advantages:
o Fair allocation of CPU time.
o Suitable for time-sharing systems where many
processes need to be executed.
o Simple and easy to implement.
 Disadvantages:
o The choice of time quantum is crucial; too large
can make it similar to FCFS, while too small can
lead to excessive context switching, affecting
performance.
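The cyclic behaviour described above can be sketched with a queue: each process runs for at most one quantum, and an unfinished process is preempted and moved to the back. The burst times and quantum below are illustrative values, not from the syllabus.

```python
# Sketch: Round Robin scheduling with a fixed time quantum.
# Each process runs for at most `quantum` units; if unfinished,
# it is preempted and re-queued at the back.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))  # (pid, remaining burst time)
    order = []                        # order in which processes get the CPU
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # preempted
    return order

print(round_robin([5, 3, 1], quantum=2))  # [0, 1, 2, 0, 1, 0]
```

With bursts of 5, 3, and 1 and a quantum of 2, process 0 is preempted twice and process 1 once, illustrating the extra context switches a small quantum causes.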

j) What do you mean by preemptive scheduling? (CO1, L2)
प्रीमेप्टिव शेड्यूलिंग से आपका क्या तात्पर्य है?
Preemptive scheduling is a type of CPU scheduling where
the operating system can forcibly interrupt and suspend a
currently running process in order to give CPU time to
another process. This is done before the process completes
its execution, typically based on certain conditions like a
higher-priority process becoming ready to run or a time slice
expiring.
Key Features:
 Interrupts Process: A running process can be paused
or preempted, allowing other processes to be
executed.
 Improved Responsiveness: Helps ensure that high-
priority or time-sensitive processes are given CPU time
promptly.
 Fairness: Allows the system to allocate CPU time more
equitably among processes, reducing the chances of a
single process monopolizing the CPU.
Examples of Preemptive Scheduling Algorithms:
 Round Robin (RR)
 Shortest Remaining Time First (SRTF)
 Priority Scheduling (preemptive version)
Advantages:
 Better system responsiveness and fairness.
 Prevents low-priority processes from starving high-
priority processes.
Disadvantages:
 Can lead to increased context switching overhead.

UNIT-III
S.No.    Question    CO    Bloom's Taxonomy
a) Define deadlock. (CO2, L2)
गतिरोध को परिभाषित करें.
Deadlock is a situation in a computer system where two or
more processes are unable to proceed because each is
waiting for the other to release resources. In other words, the
processes are in a state of perpetual waiting, and none of
them can complete their execution.
Conditions for Deadlock:
Deadlock occurs when the following four necessary
conditions are met simultaneously:
1. Mutual Exclusion: At least one resource is held in a
non-shareable mode (only one process can use it at a
time).
2. Hold and Wait: A process holding one resource is
waiting to acquire additional resources held by other
processes.
3. No Preemption: Resources cannot be forcibly taken
from a process; they must be released voluntarily.
4. Circular Wait: A set of processes are waiting for
resources in a circular chain, where each process is
waiting for a resource held by the next process in the
chain.
Example:
If Process A holds Resource 1 and waits for Resource 2, while
Process B holds Resource 2 and waits for Resource 1, a
deadlock occurs, as neither can proceed.
Consequences:
 System resources are wasted.
 Processes remain stuck, leading to reduced system
performance or total system halt.
Deadlock prevention, avoidance, detection, and recovery are
common strategies for handling deadlock in operating
systems.
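The Process A / Process B example above can be sketched with two locks. If each thread took the locks in opposite orders, a circular wait (and hence deadlock) could occur; the sketch below instead acquires them in one fixed global order, which breaks the circular-wait condition, so both threads complete. The names are illustrative placeholders.

```python
# Sketch of the two-process example: both threads need Resource 1
# and Resource 2. Acquiring the locks in a single fixed order
# removes the circular wait, so this version cannot deadlock.
import threading

resource1 = threading.Lock()
resource2 = threading.Lock()
log = []

def worker(name):
    with resource1:        # every thread takes the locks in the same order
        with resource2:
            log.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # ['A', 'B'] — both finished; no deadlock
```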

b) What are the necessary conditions for deadlock? (CO1, L1)
गतिरोध की आवश्यक शर्तें क्या हैं?
The necessary conditions for deadlock to occur in a
system are four:
1. Mutual Exclusion:
o At least one resource must be held in a non-
shareable mode, meaning only one process can
use the resource at a time. If other processes
request that resource, they must wait until it is
released.
2. Hold and Wait:
o A process that is holding at least one resource is
waiting to acquire additional resources that are
currently being held by other processes.
3. No Preemption:
o Resources cannot be forcibly taken from a
process. A resource can only be released
voluntarily by the process holding it after it has
completed its task.
4. Circular Wait:
o A set of processes exist such that each process
in the set is waiting for a resource held by the
next process in the set, forming a closed loop.
All four conditions must be present simultaneously for a
deadlock to occur. If any one of these conditions is not met,
deadlock cannot happen.

c) What is deadlock prevention? (CO1, L2)
गतिरोध निवारण क्या है?
Deadlock prevention is a set of strategies used to ensure
that at least one of the necessary conditions for deadlock is
never satisfied, thereby preventing deadlock from occurring
in the system.
There are several approaches to deadlock prevention by
eliminating one or more of the four necessary conditions for
deadlock:
1. Eliminating Mutual Exclusion:
 Approach: This condition can be avoided by allowing
resources to be shared among multiple processes,
making resources "shareable" where possible (e.g.,
read-only files can be accessed by multiple processes).
 Limitations: Not all resources can be shared,
especially non-shareable resources like printers, CPUs,
or memory.
2. Eliminating Hold and Wait:
 Approach: A process must request all the resources it
needs at once, before it starts execution. This way, it
does not hold any resource while waiting for others.
 Limitations: This could lead to inefficient resource
utilization, as processes may have to wait
unnecessarily for resources they might not
immediately need.
3. Eliminating No Preemption:
 Approach: If a process holding a resource is waiting
for another resource that is currently being held by
another process, the system can forcibly take the
resource away from the first process (preemption) and
allocate it to the process that needs it.
 Limitations: Preemption may cause processes to be
interrupted frequently, leading to increased overhead
and possible system instability.
4. Eliminating Circular Wait:
 Approach: This can be done by defining a strict
ordering of resource types. Processes are required to
request resources in a specific order (e.g., Resource 1,
then Resource 2, and so on). This prevents circular
waiting because processes can only hold resources in a
linear chain.
 Limitations: The ordering can complicate resource
management and increase the chance of starvation,
where some processes may never get the resources
they need.
By preventing one or more of these conditions, the operating
system can avoid situations where deadlock might occur.
However, deadlock prevention strategies often introduce
trade-offs in efficiency and complexity.
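Circular-wait elimination is the approach most often used in application code. A minimal sketch (Python; the numeric resource ids and helper names are illustrative, not from the source) that grants locks only in ascending id order:

```python
import threading

locks = {1: threading.Lock(), 2: threading.Lock()}  # resources, globally ordered by id

def acquire_in_order(ids):
    # Always lock in ascending id order, whatever order the caller asked for.
    ordered = sorted(ids)
    for i in ordered:
        locks[i].acquire()
    return ordered

def release_all(held):
    for i in reversed(held):
        locks[i].release()

done = []

def worker(name, ids):
    held = acquire_in_order(ids)   # no circular wait is possible now
    done.append(name)
    release_all(held)

# The two threads *request* the resources in opposite orders, which would
# risk deadlock without the ordering rule.
t1 = threading.Thread(target=worker, args=("A", (1, 2)))
t2 = threading.Thread(target=worker, args=("B", (2, 1)))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # ['A', 'B'] -- both finish
```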

d) What is deadlock avoidance? (CO1, L2)
गतिरोध परिहार क्या है?
Deadlock avoidance is a technique used to ensure that a
system never enters a state where deadlock is possible.
Unlike deadlock prevention, which tries to eliminate one of
the necessary conditions for deadlock, deadlock avoidance
dynamically checks the resource allocation state and ensures
that no circular wait or unsafe states can arise.
Key Concept: Safe vs. Unsafe States
 A system is in a safe state if there is a sequence of
processes that allows each process to eventually
complete, even if resources are allocated in a specific
order.
 An unsafe state is one where there is no such
sequence, and a deadlock could potentially occur.
How Deadlock Avoidance Works:
Deadlock avoidance typically uses resource allocation
algorithms that track the system's state and make decisions
based on the current situation to prevent deadlock. The
system evaluates the potential outcomes of resource
allocation and determines if granting a request would lead to
an unsafe state.
Common Deadlock Avoidance Algorithms:
1. Banker's Algorithm:
o Developed by Edsger Dijkstra, the Banker's
algorithm is a well-known deadlock avoidance
technique.
o How it works: The system checks if granting a
process's resource request will leave the system
in a safe state. If yes, the request is allowed;
otherwise, the process must wait.
o The algorithm considers:
 Available resources: Resources not
currently allocated.
 Max claim: The maximum resources each
process may need.
 Current allocation: The resources
currently allocated to a process.
 Need: The remaining resources a process
still requires to complete its task.
o Safe State: The algorithm ensures that
processes can always be completed by checking
if resources can be allocated in such a way that
all processes will eventually finish without
causing a deadlock.
Advantages of Deadlock Avoidance:
 Prevents Deadlock: By ensuring the system never
enters an unsafe state, it guarantees that deadlock
cannot occur.
 Efficient Resource Utilization: Since the system can
still allow resource allocation in safe situations, it can
optimize the use of resources.
Disadvantages of Deadlock Avoidance:
 Overhead: Constantly checking if resource requests
can lead to a safe state can add significant overhead,
especially in systems with many processes and
resources.
 Resource Allocation Complexity: Algorithms like the
Banker's algorithm require knowledge of each
process's maximum resource requirements in advance,
which may not always be feasible or practical.
In summary, deadlock avoidance ensures that a system
remains in a safe state at all times by carefully analyzing the
potential effects of resource requests, preventing any
allocation that could lead to deadlock.
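The safe-state test at the heart of the Banker's algorithm can be sketched directly from the four quantities listed above. The sketch below is a minimal Python version; the numbers are a classic textbook-style example chosen for illustration, not taken from this document.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: return a safe sequence of process indices,
    or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)
    # Need = Max claim - Current allocation, per process and resource type.
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 4, 0, 2] -- a safe sequence
```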

e) Define deadlock recovery. (CO2, L1)
डेडलॉक पुनर्प्राप्ति को परिभाषित करें।
Deadlock recovery is a set of strategies used to recover
from a deadlock situation once it has occurred. Unlike
deadlock prevention and deadlock avoidance, which
aim to prevent deadlock from happening, deadlock
recovery focuses on how to deal with deadlock after it has
already occurred and the system has become stuck.
Key Methods for Deadlock Recovery:
1. Process Termination:
o Abort one or more processes involved in the
deadlock to break the circular wait.
 Abort all deadlocked processes: This is
a straightforward approach but can be
resource-intensive as all processes
involved are terminated.
 Abort one process at a time:
Continuously terminate processes involved
in the deadlock until the system is no
longer in a deadlocked state.
o Advantages: Simple and effective at breaking
the deadlock.
o Disadvantages: Can lead to wasted resources,
and processes may need to be restarted or rolled
back, leading to potential data loss.
2. Resource Preemption:
o Preempt resources from one or more
processes involved in the deadlock and allocate
them to other processes, breaking the circular
wait.
 The preempted process may be rolled back
or restarted, or the preemption can occur
until the deadlock is resolved.
o Advantages: Can resolve deadlock without
needing to terminate processes.
o Disadvantages: Resource preemption can lead
to significant overhead, and rolling back
processes can be complicated, especially if
processes have made progress.
3. Rollback:
o Rollback processes to a safe state: If a
process has been preempted, it may need to be
rolled back to a previous safe state to continue
execution without causing deadlock.
o Advantages: This prevents the process from
being terminated entirely and allows it to
continue later from a checkpoint.
o Disadvantages: Rollback can be resource-
intensive and may cause the process to lose
progress.
Summary:
 Deadlock recovery methods focus on breaking the
deadlock after it has happened.
 Common techniques include terminating processes,
preempting resources, and rolling back processes to a
previous state.
 While effective, recovery strategies often come with
overhead, potential data loss, and a significant impact
on system performance.
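Rollback amounts to restoring a saved checkpoint of process state. A toy sketch (Python; the Process class and its fields are hypothetical, not an OS interface):

```python
import copy

class Process:
    def __init__(self):
        self.state = {"step": 0, "data": []}
        self._checkpoint = copy.deepcopy(self.state)

    def checkpoint(self):
        # Save the last known safe state.
        self._checkpoint = copy.deepcopy(self.state)

    def run_step(self, value):
        self.state["step"] += 1
        self.state["data"].append(value)

    def rollback(self):
        # Recovery preempted this process: discard progress made since
        # the checkpoint and restore the safe state.
        self.state = copy.deepcopy(self._checkpoint)

p = Process()
p.run_step("a")
p.checkpoint()                    # safe state after one step
p.run_step("b"); p.run_step("c")  # progress that will be lost
p.rollback()
print(p.state)  # {'step': 1, 'data': ['a']}
```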

f) What is starvation? (CO1, L2)
भुखमरी क्या है?
Starvation is a situation in which a process is perpetually
denied access to the resources it needs to execute, because
other processes are continually given priority over it. This
can happen in systems where resources are allocated based
on certain scheduling algorithms or priorities, and low-
priority processes may never get the CPU time or resources
required to complete their execution.
Key Points:
 Cause: Starvation typically occurs when a process is
repeatedly preempted or blocked by higher-priority
processes, preventing it from getting the resources it
needs.
 Effect: A process may wait indefinitely for resources,
causing delays or failure to complete its task, even if
the system as a whole is functioning normally.
 Example: In priority-based scheduling, a low-priority
process may never get executed because higher-
priority processes keep coming and are always chosen
over it.
Solutions to Prevent Starvation:
1. Aging: Gradually increase the priority of a process the
longer it waits, ensuring that even low-priority
processes eventually get CPU time.
2. Fair Scheduling Algorithms: Use algorithms like
Round Robin or Fair Share Scheduling that allocate
resources to all processes more evenly, reducing the
chances of starvation.
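Aging can be sketched as a small scheduler loop (Python; the priority values and aging step are illustrative). Without the aging line, "low" would wait behind "high" forever.

```python
def run_with_aging(procs, quanta, aging_step=1):
    """procs: {name: base_priority}, higher number = higher priority.
    Each quantum the highest-priority process runs; every waiting
    process ages, so its effective priority keeps rising."""
    effective = dict(procs)
    order = []
    for _ in range(quanta):
        chosen = max(effective, key=lambda p: (effective[p], p))
        order.append(chosen)
        for p in effective:
            if p == chosen:
                effective[p] = procs[p]     # reset to base after running
            else:
                effective[p] += aging_step  # waiting processes age
    return order

order = run_with_aging({"high": 5, "low": 1}, quanta=6)
print(order)  # ['high', 'high', 'high', 'high', 'low', 'high']
```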

g) What is Deadlock Detection and Recovery? (CO3, L2)
डेडलॉक डिटेक्शन और रिकवरी क्या है?
Deadlock Detection and Recovery refers to methods
used by an operating system to detect when a deadlock has
occurred and then take steps to recover from it. This is
different from deadlock prevention and avoidance, which aim
to avoid deadlock altogether. In deadlock detection and
recovery, the system allows deadlock to occur, but it
identifies when it happens and then resolves it.
1. Deadlock Detection:
 Goal: To identify if the system has entered a deadlock
state.
 How it works: The system periodically checks
whether any set of processes is deadlocked. This is
typically done using a deadlock detection
algorithm.
Common methods for detection:
 Resource Allocation Graph (RAG): A directed graph
that represents processes and resources. If there is a
cycle in the graph, a deadlock is detected.
 Wait-for Graph: A simplified version of the RAG,
where only processes are represented, and directed
edges indicate that one process is waiting for another
to release a resource. A cycle in this graph indicates a
deadlock.
 Detection Algorithms:
o The system checks the allocation and request
states of processes to find cycles or circular
dependencies, which indicate deadlock.
Challenges:
 Detecting deadlock can be computationally expensive,
especially in large systems with many processes and
resources.
 The system needs to periodically monitor the state,
which can introduce overhead.
2. Deadlock Recovery:
 Once deadlock is detected, recovery strategies are
employed to resolve it.
Common recovery methods:
 Process Termination:
o Abort all deadlocked processes: This
approach terminates all processes involved in
the deadlock.
o Abort one process at a time: In this method,
processes involved in the deadlock are
terminated one by one until the deadlock is
resolved.
o Selective termination: Choose processes to
terminate based on factors like priority or
resources consumed.
 Resource Preemption:
o Preempt resources from one or more
processes involved in the deadlock and allocate
them to other processes.
o This may involve rolling back processes to a
previous safe state or forcing them to release
resources they are holding.
 Rollback:
o In some systems, processes may be rolled back
to a previous checkpoint, allowing the system to
restart without deadlock. This is useful in
systems where processes have made progress
but are blocked by deadlock.
Summary of Deadlock Detection and Recovery:
 Deadlock Detection involves monitoring the system
to identify when a deadlock occurs, typically using
graphs or detection algorithms.
 Deadlock Recovery involves taking actions to break
the deadlock, such as terminating processes,
preempting resources, or rolling back processes.
 This method is used in systems that allow deadlocks to
happen and then handle them later, rather than
preventing them upfront.
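A wait-for graph cycle check can be sketched in a few lines (Python; this assumes each process waits on at most one other process, the single-request case, and the process names are illustrative):

```python
def find_deadlock(wait_for):
    """wait_for maps a process to the process it waits on (absent/None if not waiting).
    Returns a deadlocked cycle as a list, or None if there is no cycle."""
    for start in wait_for:
        seen, path = set(), []
        node = start
        while node is not None and node not in seen:
            seen.add(node)
            path.append(node)
            node = wait_for.get(node)   # follow the wait-for edge
        if node in seen:                # revisited a node on this walk: a cycle
            return path[path.index(node):]
    return None

# P1 -> P2 -> P3 -> P1 is a cycle; P4 waits on the cycle but is outside it.
graph = {"P1": "P2", "P2": "P3", "P3": "P1", "P4": "P1"}
print(find_deadlock(graph))                     # ['P1', 'P2', 'P3']
print(find_deadlock({"P1": "P2", "P2": None}))  # None -- no cycle
```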

h) Explain the advantages and disadvantages of deadlock. (CO3, L3)
गतिरोध के फायदे और नुकसान बताएं।
Deadlock itself is a problematic situation, but discussing the
advantages and disadvantages helps to understand its
impact on system behavior. Typically, deadlock is something
that needs to be avoided or handled, but there are some
theoretical aspects where it may provide certain benefits in
specific contexts.
Disadvantages of Deadlock:
1. Resource Wastage:
o Deadlock results in processes being stuck in a
state of waiting, which leads to resources (CPU,
memory, etc.) being wasted. These resources
cannot be used by other processes, reducing
overall system efficiency.
2. Performance Degradation:
o When deadlock occurs, the system must take
additional steps (like detection, recovery, or
prevention) to handle the situation, which
increases overhead and can lead to performance
degradation.
3. System Unresponsiveness:
o In extreme cases, deadlock can make the system
unresponsive, as multiple processes are waiting
indefinitely. This can halt essential operations
and disrupt normal system functioning.
4. Complexity in Handling:
o Resolving deadlock involves complex algorithms
for detection, prevention, or recovery. This adds
complexity to system design and requires extra
computational resources, which can lead to
delays and complications in system
management.
5. Starvation:
o Processes involved in a deadlock may also
experience starvation, where they never get
the resources they need because other
processes keep taking precedence.
Advantages of Deadlock (In Specific Contexts):
While deadlock is generally undesirable, there are some
cases where deadlock may have advantages in very specific
situations:
1. Simplifies Resource Allocation in Some Cases:
o In some specialized systems (e.g., very
controlled environments), allowing deadlock may
simplify resource management. If processes are
guaranteed to use resources in a strictly defined
sequence (or some other predictable manner),
deadlock detection can help resolve issues only
when necessary.
o For example, a system with highly predictable
tasks might allow for deadlock detection and
recovery without severe consequences, thereby
simplifying scheduling decisions upfront.
2. Enforces Process Isolation:
o In some scenarios, deadlock might be an
unintended consequence of process isolation,
where processes are not allowed to interfere with
each other. This may be useful in systems where
process independence is highly prioritized (e.g.,
in a security context). In this case, deadlock
becomes a natural mechanism to prevent
processes from affecting one another.
3. Avoids Interleaving:
o Deadlock can be seen as an extreme form of
avoiding unnecessary process interleaving. By
preventing processes from acquiring the same
resources concurrently, deadlock ensures that
processes do not step on each other's toes
(though in practice, this is a flawed approach
compared to other synchronization mechanisms
like locks).
Conclusion:
Deadlock generally has more disadvantages than
advantages. It leads to resource wastage, system
unresponsiveness, and performance degradation, which are
undesirable in most systems. However, in very specific and
controlled scenarios, allowing deadlock may simplify design
or enforce strict isolation between processes. This makes it
crucial for systems to avoid deadlock using prevention,
detection, or recovery techniques.

i) Explain the advantages and disadvantages of Banker's algorithm. (CO4, L4)
बैंकर एल्गोरिदम के फायदे और नुकसान बताएं।
The Banker's Algorithm is a deadlock avoidance algorithm
used in operating systems to allocate resources to processes
in such a way that deadlock is avoided. It works by
determining whether the system will remain in a "safe state"
after a resource request is granted.
Advantages of the Banker's Algorithm:
1. Prevents Deadlock:
o The primary advantage of the Banker's Algorithm
is that it prevents deadlock by ensuring that the
system only enters a safe state after a resource
allocation. This avoids the possibility of circular
wait conditions.
2. Safe Resource Allocation:
o It checks whether resource requests can be
safely granted without leading to an unsafe
state, where no process could complete its
execution. By ensuring this, the system avoids
resource starvation and ensures fair distribution.
3. Efficient in Predicting Resource Needs:
o The algorithm works well when the maximum
resource requirements of each process are
known in advance. This allows the system to
manage resources effectively and prevent unsafe
allocations.
4. Fairness:
o By considering each process's maximum needs
and the resources available, the Banker's
algorithm ensures that all processes get their fair
share of resources and are not starved.
5. Helps in Dynamic Resource Allocation:
o It is useful in systems where processes require
dynamic resource allocation. The algorithm
ensures that as resources are allocated and
released, the system remains in a safe state.
Disadvantages of the Banker's Algorithm:
1. High Overhead:
o The Banker's algorithm requires checking the
safety of the system after every resource
request. This involves recalculating the potential
future state of the system, which can be
computationally expensive, especially in systems
with a large number of processes and resources.
2. Requires Knowledge of Maximum Resource
Needs:
o The algorithm requires that each process's
maximum resource needs be known in advance.
In real-world scenarios, it may be difficult or
impractical to determine these maximum needs
precisely, making the algorithm less useful in
certain dynamic systems.
3. Limited Scalability:
o The algorithm can become inefficient as the
number of processes or resources increases. For
large-scale systems, the time complexity of
checking whether a system is in a safe state can
become a bottleneck, leading to performance
degradation.
4. Requires Static Allocation Information:
o For the Banker's Algorithm to work, the system
must have a clear and static understanding of
resource types and allocation patterns. In highly
dynamic systems where processes can
unpredictably change resource requirements, the
algorithm may be less effective or impractical.
5. Complex Implementation:
o The algorithm involves maintaining data
structures to track resource allocation, maximum
requirements, and available resources. This adds
complexity to the system's implementation and
increases the likelihood of bugs or errors in
managing resource states.
6. Not Suitable for All Systems:
o The Banker's Algorithm works well in systems
with predictable and relatively static resource
allocation patterns. It is not suitable for systems
with highly dynamic resource needs or where
resources are frequently requested and released.
Conclusion:
The Banker's Algorithm is a robust tool for deadlock
avoidance, ensuring that resource allocation occurs in a
safe state to prevent deadlock. It is particularly effective in
environments with known, predictable resource
requirements. However, its high overhead, complexity,
and dependence on static resource knowledge make it
less suitable for large, dynamic systems where resource
needs change unpredictably.

j) Explain the advantages and disadvantages of deadlock recovery. (CO4, L4)
गतिरोध पुनर्प्राप्ति के फायदे और नुकसान बताएं।
Deadlock recovery is a technique used to resolve deadlock
situations that have already occurred in a system. It involves
detecting deadlock and then taking corrective actions such
as terminating processes or preempting resources to resolve
the deadlock. While this strategy ensures the system can
recover from a deadlock, it also comes with certain
advantages and disadvantages.
Advantages of Deadlock Recovery:
1. System Continuity:
o Prevents System Halt: Deadlock recovery
ensures that the system does not remain stuck
indefinitely. Instead of allowing the system to
crash or become unresponsive due to deadlock,
recovery mechanisms can restore normal
operation.
2. Allows Deadlock to Occur:
o Less Restrictive: Unlike deadlock prevention or
avoidance, which aim to eliminate deadlock
scenarios before they happen, deadlock recovery
allows deadlock to occur but then addresses it.
This can result in fewer constraints on resource
allocation, making it suitable for systems that
don't have strict requirements for deadlock-free
operation.
3. Flexible:
o Dynamic Approach: The system can adapt to
changing conditions and deal with deadlock only
when it happens, which may be preferable in
dynamic, unpredictable environments where it is
difficult to predict resource needs in advance.
4. Does Not Require Complete Resource
Knowledge:
o No Need for Maximum Resource
Information: Unlike deadlock avoidance
algorithms (like the Banker's algorithm),
deadlock recovery does not require knowing the
maximum resource requirements of each
process ahead of time. This makes it easier to
manage in situations where process behaviors
and resource needs are unpredictable.
5. Minimal Overhead for Resource Allocation:
o Simpler Allocation: Since deadlock recovery
allows processes to execute and only intervenes
once a deadlock is detected, the overhead of
managing resource allocation is generally lower
than that of deadlock prevention or avoidance
techniques.
Disadvantages of Deadlock Recovery:
1. Potential for System Instability:
o Interruptions and Inconsistent States: The
process of recovering from deadlock—whether
by terminating processes, preempting resources,
or rolling back—can result in system instability or
inconsistent states, especially if the recovery
process is not carefully managed.
o Loss of Progress: When processes are
terminated or rolled back, the progress made by
those processes may be lost. This can cause
significant delays and frustration for users or
other processes.
2. Performance Overhead:
o High Overhead: Recovery mechanisms,
particularly resource preemption, and process
rollback, can be computationally expensive. The
system may experience performance
degradation due to the time spent checking for
deadlocks, handling rollbacks, or reallocating
resources.
3. Resource Wastage:
o Resource Reallocation: Preempting resources
from processes to resolve a deadlock can lead to
the waste of resources. For instance, preempting
a process and rolling it back may require
reinitializing or re-requesting resources, leading
to inefficiencies.
4. Complex Implementation:
o Difficult to Implement: Deadlock recovery
algorithms require careful management of
system state, resource allocation, and process
execution. This can increase the complexity of
the system and the likelihood of errors or bugs in
the recovery mechanism.
5. Starvation of Certain Processes:
o Risk of Starvation: After a recovery action like
terminating or preempting processes, some
processes may never get a chance to execute if
they are repeatedly chosen for termination or
preemption. This can result in starvation, where
certain processes are indefinitely delayed.
6. Unpredictable Effects:
o Unintended Consequences: Recovering from
deadlock, particularly by aborting processes or
preempting resources, can have unintended side
effects, such as corrupting data or causing
further deadlocks. The recovery process itself
may need to be carefully monitored to ensure
that it doesn’t exacerbate the problem.
Conclusion:
Deadlock recovery is a useful strategy for ensuring that a
system can resolve deadlock after it occurs, rather than
trying to avoid or prevent it. While it is flexible, allows for
dynamic resource management, and avoids the overhead of
constant monitoring, it comes with the risk of system
instability, resource wastage, and increased
complexity. The choice of using deadlock recovery depends
on the specific system requirements, such as whether the
system can afford to recover from deadlock or needs to
prevent it altogether.

UNIT-IV
S.No.  Question  CO  Bloom's Taxonomy
a) What is paging? (CO1, L2)
पेजिंग क्या है?
Paging is a memory management scheme that eliminates
the need for contiguous allocation of physical memory. In
paging, the physical memory is divided into fixed-size blocks
called frames, and the logical memory (or process address
space) is divided into blocks of the same size called pages.
The process of paging involves mapping the logical pages to
the physical frames in memory, allowing for non-contiguous
allocation of memory, which helps in efficient memory
utilization and minimizes fragmentation.
Key Concepts of Paging:
1. Page:
o A page is a fixed-size block of logical memory. It
is the smallest unit of data for memory
management in the paging system. The size of a
page is typically a power of 2 (e.g., 4 KB, 8 KB).
2. Frame:
o A frame is a fixed-size block of physical memory.
It corresponds to a page and holds the data from
the logical memory. The size of a frame is the
same as the size of a page.
3. Page Table:
o The page table is a data structure used to store
the mapping between logical pages and physical
frames. Each process has its own page table,
which keeps track of where each page is stored
in physical memory.
4. Logical Address (Virtual Address):
o The logical address refers to the address
generated by the CPU, which is used by the
process. The logical address is divided into two
parts:
 Page Number: Identifies the page in the
logical address space.
 Page Offset: Identifies the specific
location within the page.
5. Physical Address:
o The physical address refers to the actual location
in the physical memory (RAM). It consists of the
frame number (from the page table) and the
frame offset (from the page offset).
Paging Process:
1. Address Translation: When a process generates a
logical address, it is divided into two parts:
o The page number is used to index into the page
table, finding the corresponding frame number.
o The page offset is combined with the frame
number to produce the physical address.
2. Page Table Lookup:
o The page table is used to convert the logical
page number into the corresponding physical
frame number.
o If a page is not in memory (a page fault), the
operating system will load it into a free frame
from secondary storage (like a disk).
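The translation step above can be shown numerically (Python; the 4 KB page size and the page-table contents are illustrative):

```python
PAGE_SIZE = 4096  # 4 KB pages (a power of two)

# Hypothetical per-process page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    page   = virtual_addr // PAGE_SIZE   # high part: page number
    offset = virtual_addr %  PAGE_SIZE   # low part: offset within the page
    if page not in page_table:
        # The page is not resident: the OS would handle a page fault here.
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 2 -> physical 2*4096 + 4
print(translate(4100))  # 8196
```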
Advantages of Paging:
1. Eliminates Fragmentation:
o Paging eliminates external fragmentation
because pages and frames are of fixed size, and
the system can allocate non-contiguous blocks of
physical memory to processes.
o Internal fragmentation is also reduced because
pages can be allocated precisely according to the
required size.
2. Efficient Memory Use:
o Paging allows for better utilization of available
physical memory by allocating only the required
number of frames to each process.
3. Simplifies Memory Allocation:
o Since pages and frames are of the same size,
memory management is simplified, and the
operating system does not have to deal with
complex memory allocation schemes.
4. Supports Virtual Memory:
o Paging is a fundamental technique for
implementing virtual memory, allowing
processes to use more memory than is physically
available by swapping pages in and out of disk
storage.
Disadvantages of Paging:
1. Overhead of Page Table:
o Maintaining a page table for each process
introduces overhead, as the page table itself
consumes memory. For large processes, this can
become significant.
2. Page Faults:
o Frequent page faults, where pages need to be
loaded from disk to memory, can lead to high
latency and degrade system performance. This is
known as thrashing when excessive paging
occurs.
3. Internal Fragmentation:
o While paging reduces external fragmentation,
internal fragmentation can still occur within a
page if the process does not use all of the space
within a page.
4. Increased CPU Overhead:
o Address translation (from logical to physical
addresses) requires an extra lookup into the
page table, adding some overhead to each
memory access, which can slightly reduce
performance.
Summary:
Paging is an essential memory management technique that
enables non-contiguous memory allocation, reducing
fragmentation and enabling efficient memory use. It is widely
used in modern operating systems to support virtual
memory, allowing processes to use more memory than
physically available. However, paging comes with some
overhead in terms of memory management (page tables)
and potential performance costs (page faults).

b) What is virtual memory? (CO1, L2)
वर्चुअल मेमोरी क्या है?
Virtual memory is a memory management technique that
creates the illusion of a large, continuous block of memory
for processes, even if the physical memory (RAM) is smaller
or fragmented. It allows programs to access more memory
than what is physically available by using a combination of
RAM and secondary storage (typically a hard disk or SSD).
Key Concepts of Virtual Memory:
1. Virtual Address Space:
o Each process in a system is given its own virtual
address space, which is the range of memory
addresses that the process can use. The virtual
address space is typically larger than the actual
physical memory available.
2. Physical Memory (RAM):
o The actual physical memory (RAM) is used by the
operating system to store data and instructions
that are actively being used by processes.
3. Paging and Segmentation:
o Paging: Virtual memory is often implemented
using paging, where the virtual address space is
divided into fixed-size pages, and the physical
memory is divided into frames. The operating
system manages the mapping between virtual
pages and physical frames.
o Segmentation: In some systems, segmentation
may also be used, where memory is divided into
variable-sized segments (e.g., code, data, stack)
rather than fixed-size pages.
4. Swap Space (Page File):
o When the system runs out of physical memory,
less frequently used pages of memory are
swapped out to a reserved space on the disk
called the swap space or page file. This
process is called paging out. When those pages
are needed again, they are swapped back into
RAM, a process known as paging in.
5. Address Translation:
o When a process generates a virtual address, the
memory management unit (MMU) translates this
virtual address into a physical address using a
page table or other mechanisms, depending on
the system's memory management scheme.
Advantages of Virtual Memory:
1. Increased Address Space:
o Virtual memory allows processes to use more
memory than is physically available by providing
each process with a large virtual address space.
This is particularly useful for large applications or
when multiple processes are running
simultaneously.
2. Isolation Between Processes:
o Each process operates in its own virtual address
space, which helps protect it from the memory
space of other processes. This isolation improves
security and stability, as one process cannot
directly access or corrupt the memory of another
process.
3. Efficient Memory Utilization:
o By swapping pages in and out of physical
memory, the system can ensure that active
processes get the memory they need, while less
frequently used data can be moved to secondary
storage, making better use of available physical
memory.
4. Simplified Memory Management:
o Virtual memory simplifies memory management
for both the operating system and the
programmer. The programmer does not need to
worry about the physical memory layout, and the
operating system can handle memory allocation
dynamically.
5. Running Larger Programs:
o Virtual memory allows programs to run even if
their memory requirements exceed the available
physical memory. This is essential for handling
large programs, databases, or running multiple
applications at the same time.
Disadvantages of Virtual Memory:
1. Performance Overhead:
o Paging/Swapping: The process of moving data
between physical memory and disk (paging or
swapping) introduces significant performance
overhead. When a system constantly needs to
swap pages in and out of RAM (called
thrashing), it can cause severe degradation in
performance.
2. Disk I/O Bottleneck:
o Since virtual memory relies on secondary storage
(e.g., a hard disk or SSD) to hold data that
doesn’t fit in physical memory, the speed of disk
I/O operations becomes a limiting factor. Disk
access is much slower than RAM access, so if the
system frequently accesses data in the swap
space, performance can suffer.
3. Complexity in Implementation:
o Implementing virtual memory requires complex
hardware (like the Memory Management Unit
(MMU)) and software (like page tables or
segment tables) to manage the mapping
between virtual addresses and physical
addresses, making the system more
complicated.
4. Limited by Disk Space:
o The size of virtual memory is limited by the
available disk space for swap files or paging
areas. While disk storage is large compared to
RAM, it is still finite and slower than memory,
limiting the effectiveness of virtual memory.
How Virtual Memory Works:
1. Paging:
o When a program accesses a memory address,
the Memory Management Unit (MMU)
translates the virtual address into a physical
address using a page table. If the required page
is not in physical memory (a page fault), the
operating system retrieves it from disk.
2. Swapping:
o When the system needs more memory than is
physically available, it swaps less active pages of
memory out to disk. When those pages are
needed again, they are swapped back into
physical memory, and other pages may be
swapped out.
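The paging step described above can be sketched in a few lines of Python. The page size and page-table contents below are invented for illustration; a missing page-table entry models a non-resident page (a page fault):

```python
PAGE_SIZE = 4096  # 4 KB pages (an illustrative choice)

# Illustrative page table: virtual page number -> physical frame number.
# A missing entry models a page that is not in physical memory.
page_table = {0: 5, 1: 2, 3: 7}

def translate(virtual_address):
    """Split a virtual address into (page, offset) and map it to a
    physical address, or report a page fault if the page is absent."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        return None  # page fault: the OS would fetch the page from disk
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
print(translate(8192))   # page 2 is not resident -> None (page fault)
```

In real hardware this translation is performed by the MMU on every memory reference, usually with a TLB cache to avoid the table lookup cost.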
Summary:
Virtual memory is a crucial technique in modern operating
systems that allows programs to access more memory than
physically available by using disk space as an extension of
RAM. It provides the benefits of increased address space,
process isolation, and efficient memory utilization but comes
with performance trade-offs due to the overhead of paging
and swapping. Virtual memory is essential for running large
applications, multitasking, and handling more complex
workloads in modern systems.

c) Define main memory. CO2 L1
मुख्य मेमोरी को परिभाषित करें।
Main memory (also known as primary memory or RAM
- Random Access Memory) is the primary storage area
used by a computer's processor to store data and
instructions that are actively being used or processed.
It provides fast and direct access to data for the CPU,
making it a crucial component for overall system
performance.
Key Characteristics of Main Memory:
1. Volatility:
o Main memory is volatile, meaning that it
loses all stored data when the power is
turned off.
2. Speed:
o Main memory is much faster than
secondary storage (like hard drives or
SSDs) and is designed to allow the
processor to quickly access data for
execution.
3. Direct Access:
o Data in main memory can be directly
accessed by the CPU using memory
addresses, allowing for rapid data retrieval
and processing.
4. Temporary Storage:
o Main memory stores data temporarily while
a program is running or while the CPU is
processing instructions. Once the program
or process terminates, the data is typically
discarded unless it has been saved to
secondary storage.
Types of Main Memory:
1. RAM (Random Access Memory):
o This is the most common type of main
memory and is used for storing data and
instructions temporarily while the
computer is in use.
o Dynamic RAM (DRAM): Requires periodic
refreshing of its contents to maintain data.
o Static RAM (SRAM): Faster and more
reliable than DRAM but more expensive;
doesn't need refreshing.
2. Cache Memory:
o A smaller, faster type of memory located
closer to the CPU to store frequently
accessed data and instructions, improving
performance by reducing access times to
the main memory.
Functions of Main Memory:
 Temporary Data Storage: Stores data that the
CPU is currently processing or instructions that
are actively being executed.
 Program Execution: Loads programs and their
data into memory for execution.
 Data Communication: Provides a communication
bridge between the CPU and secondary storage
(disk, SSD, etc.).
Summary:
Main memory is a critical component of a computer
system, providing fast and efficient storage for active
data and program instructions that the CPU needs to
process. It helps improve system performance but is
volatile and temporary, losing all stored information
when the system is powered down.

d) Explain demand paging. CO3 L3
डिमांड पेजिंग को समझाइये।
Demand Paging is a memory management technique used
in operating systems to implement virtual memory. It is a
type of paging where pages of a program are only loaded
into main memory (RAM) when they are needed or
requested by the process, rather than being loaded all at
once when the program starts. This approach helps optimize
memory usage and reduces unnecessary memory
consumption by loading only the portions of a program that
are actively used.
Key Concepts of Demand Paging:
1. Lazy Loading:
o In demand paging, the system does not load the
entire program or process into memory at once.
Instead, it loads only those pages that are
needed by the program during execution. This is
often referred to as "lazy loading."
2. Page Fault:
o A page fault occurs when a program tries to
access a page that is not currently in physical
memory. When this happens, the operating
system must retrieve the required page from
secondary storage (e.g., hard disk or SSD) and
load it into memory. This retrieval process
introduces some latency but allows the system to
efficiently manage memory usage.
3. Page Table:
o The operating system maintains a page table to
map the virtual memory pages to physical
memory pages. When a page fault occurs, the
page table helps the system identify where the
requested page is located on the disk and
manage its loading into memory.
4. Swap Space:
o Pages that are not currently in use may be
swapped out of physical memory and stored in a
reserved space on disk (called swap space or
page file). When a page is needed but not in
memory, it is fetched from swap space.
How Demand Paging Works:
1. Program Starts: When a program starts, the
operating system doesn't load the entire program into
memory.
2. Page Access: The program executes and requests
access to pages in memory.
3. Page Fault: If the requested page is not currently in
memory, a page fault occurs. The operating system
checks the page table and finds that the page is not
loaded into physical memory.
4. Page Loading: The operating system fetches the
required page from secondary storage (disk) and loads
it into a free frame in physical memory.
5. Page Table Update: The page table is updated to
reflect that the page is now in memory.
6. Program Resumes: The program resumes execution
with the page now available in memory.
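The six steps above can be simulated in miniature. This sketch assumes FIFO page replacement (one simple policy among several) and an invented page-reference string:

```python
def demand_paging(references, num_frames):
    """Count page faults for a reference string under demand paging
    with FIFO replacement: a page is loaded only when first needed."""
    frames = []       # pages currently resident in physical memory
    faults = 0
    for page in references:
        if page not in frames:          # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)           # evict the oldest page (FIFO)
            frames.append(page)         # load the page from "disk"
    return faults

# 3 frames; pages are brought in only when first referenced:
print(demand_paging([1, 2, 3, 1, 4, 2], 3))   # 4 page faults
```

Note how the repeat references to pages 1 and 2 are hits while they remain resident; only first touches and post-eviction touches fault.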
Advantages of Demand Paging:
1. Efficient Memory Usage:
o Only the pages that are actually used are loaded
into memory, which means memory is used more
efficiently, and processes do not consume
unnecessary memory.
2. Reduces Load Time:
o Since not all pages need to be loaded at once,
the program's initial loading time is reduced. The
program starts running sooner and only fetches
the required pages when needed.
3. Support for Larger Programs:
o Demand paging allows programs to run even if
their total size exceeds the available physical
memory. As long as the working set of the
program fits into memory, the program can
execute efficiently.
4. Better Utilization of RAM:
o Pages that are not currently needed can be
swapped out to disk, freeing up memory for
other tasks or processes.
Disadvantages of Demand Paging:
1. Page Fault Overhead:
o Every time a page is not in memory and must be
fetched from disk, it introduces a page fault.
Disk access is much slower than RAM, so
frequent page faults can significantly degrade
performance (called thrashing when it happens
excessively).
2. Latency in Access:
o When a page fault occurs, there is a delay while
the page is loaded from disk into memory. This
delay can cause noticeable lag in program
execution, especially if the disk access time is
long.
3. Complexity in Management:
o The operating system must efficiently manage
the page table, disk I/O, and memory allocation.
Handling page faults, managing swap space, and
ensuring data consistency between disk and
memory can be complex.
4. Fragmentation:
o Although demand paging reduces external
fragmentation, it can still cause internal
fragmentation if a process doesn't fully utilize
the allocated page.
Summary:
Demand paging is a technique in virtual memory
management where pages are loaded into memory only
when they are needed, rather than all at once. This optimizes
memory usage and enables programs to run with more
memory than is physically available. However, it introduces
some overhead due to page faults and disk access, which
can affect performance if not managed efficiently.

e) What is segmentation? CO2 L1
Segmentation is an operating system (OS) memory management technique that divides memory into segments, or sections, of different sizes:
Segmentation
Purpose: divides memory into segments to improve memory management and system performance.
How it works: assigns each segment to a process and allocates segments in a non-contiguous manner in physical memory.
Benefits: improves memory use, protects different parts of memory, and handles memory sharing between processes.
Segments: each segment represents a distinct part of a program, such as code, data, or stack.
Segment table: the OS maintains a segment table that contains the base address and length of each segment.
Segmentation was originally invented to isolate software processes and data, and to increase the reliability of systems running multiple processes simultaneously.
Some disadvantages of segmentation are external fragmentation and a costly memory-management algorithm: allocating contiguous memory for variable-sized segments is expensive and challenging.
In segmentation, a process is divided into segments: the chunks into which a program is divided, which are not necessarily all of the same size. Segmentation provides the user's view of the process, which paging does not; this user view is mapped onto physical memory.
Types of Segmentation in Operating Systems
Virtual Memory Segmentation: Each process is divided into a
number of segments, but the segmentation is not done all at
once. This segmentation may or may not take place at the run
time of the program.
Simple Segmentation: Each process is divided into a number
of segments, all of which are loaded into memory at run time,
though not necessarily contiguously.
What is Segment Table?
It maps a two-dimensional Logical address into a one-
dimensional Physical address. It’s each table entry has:
Base Address: It contains the starting physical address where
the segments reside in memory.
Segment Limit: Also known as segment offset. It specifies the
length of the segment.
The address generated by the CPU is divided into:
Segment number (s): Number of bits required to represent the
segment.
Segment offset (d): Number of bits required to represent the
position of data within a segment.
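The segment-table translation just described can be sketched as follows; the table contents (base, limit pairs) are invented for illustration:

```python
# Segment table: segment number s -> (base address, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Map a logical (segment, offset) address to a physical address,
    trapping when the offset exceeds the segment's limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("segmentation fault: offset beyond limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

The limit check is what gives segmentation its protection property: a process cannot address beyond the end of any of its segments.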

f) Explain the advantages and disadvantages of main memory. CO3 L3
मुख्य मेमोरी के फायदे और नुकसान बताएं।
g) Explain swapping in memory management. L4
मेमोरी प्रबंधन में स्वैपिंग को समझाइये।
Swapping in an operating system is a process that moves
data or programs between the computer’s main memory
(RAM) and a secondary storage (usually a hard
disk or SSD). This helps manage the limited space in RAM
and allows the system to run more programs than it could
otherwise handle simultaneously.
Swapping is used only when the data a process needs is not already in RAM. Although swapping degrades system performance, it allows larger and multiple processes to run concurrently. The CPU scheduler determines which processes are swapped in and which are swapped out. Consider a multiprogramming environment that employs a priority-based scheduling algorithm: when a high-priority process enters the input queue, a low-priority process is swapped out so that the high-priority process can be loaded and executed. When the high-priority process terminates, the low-priority process is swapped back into memory to continue its execution (this variant of swapping is sometimes called roll out, roll in). The figure below shows the swapping process in the operating system:
Swapping comprises two operations: swap-in and swap-out.
 Swap-out is a technique for moving a process from RAM
to the hard disc.
 Swap-in is a method of transferring a program from a hard
disc to main memory, or RAM.
Process of Swapping
 When the RAM is full and a new program needs to run,
the operating system selects a program or data that is
currently in RAM but not actively being used.
 The selected data is moved to the secondary storage,
making space in RAM for the new program.
 When the swapped-out program is needed again, it can
be swapped back into RAM, replacing another inactive
program or data if necessary.
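The swap-out/swap-in cycle described above can be sketched as a toy simulation. The RAM capacity, the eviction order (oldest resident first), and the process names are all illustrative assumptions:

```python
RAM_CAPACITY = 2          # how many processes fit in RAM (illustrative)
ram, swap_space = [], []  # resident processes, swapped-out processes

def run(process):
    """Bring a process into RAM. If RAM is full, swap out the oldest
    resident process to disk (swap-out), then load the requested
    process, fetching it back from disk if it was swapped (swap-in)."""
    if process in ram:
        return                      # already resident, nothing to do
    if len(ram) == RAM_CAPACITY:
        victim = ram.pop(0)         # swap-out: oldest process to disk
        swap_space.append(victim)
    if process in swap_space:
        swap_space.remove(process)  # swap-in: bring it back from disk
    ram.append(process)

for p in ["P1", "P2", "P3", "P1"]:
    run(p)
print(ram, swap_space)
```

After the four requests, P1 has been swapped out and back in, and P2 sits in swap space waiting to be needed again.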
Real Life Example of Swapping
Imagine you have a disk (RAM) that is too small to hold all
your books and papers (programs). You keep the most
important items on the desk and store the rest in a cabinet
(secondary storage). When you need something from the
cabinet, you swap it with something on your desk. This way,
you can work with more items than your desk alone could
hold.
Advantages
 When main memory is scarce, processes do not have to wait as long for execution on the CPU; swapping keeps them moving through the system.
 It improves the utilization of main memory.
 With only a single main memory, the CPU can run multiple processes by using a swap partition.
 Swapping is the starting point of virtual memory, which builds on it to use memory more effectively.
 It is useful in priority-based scheduling, where swapping lets a high-priority process take memory from a low-priority one.
Disadvantages
 If main memory is scarce, many processes are being swapped, and the system suddenly loses power, the data of the processes involved in swapping may be lost.
 The number of page faults may increase.
 Processing performance is reduced by the extra disk traffic.
In a single-tasking operating system, only one process occupies the user program area of memory and remains there until it completes. In a multitasking operating system, when all of the active processes cannot fit in main memory, a process is swapped out of main memory so that other processes can enter it.

In summary, swapping is a memory management technique that moves processes between a computer's main memory (RAM) and secondary storage (disk) to improve memory utilization:
Swap-out: moves a process from RAM to the disk.
Swap-in: moves a process from the disk to RAM.
Swapping is used when there is not enough RAM to run all the processes, or when a process must be temporarily removed from RAM to make room for other processes. When a process is swapped back in, the operating system finds a free block of physical memory to hold it. Swapping can help a computer run multiple large processes at the same time, but it can also hurt performance. It is also the basis of virtual memory, a management technique that combines a computer's disk space with its RAM to create a larger virtual address space.
h) What is Contiguous Memory Allocation? CO L2
सन्निहित मेमोरी आवंटन क्या है?
Contiguous memory allocation is a memory management
technique in an operating system (OS) where a single,
uninterrupted block of memory is assigned to a process or
program:
How it works
When a process requests memory, the OS assigns it a contiguous
section of memory blocks that meets its needs.
Benefits
This method allows for efficient read/write operations because of the
continuous storage structure. It also enables fast and direct access
to data.
Contrast with non-contiguous memory allocation
In non-contiguous memory allocation, a process can be scattered
across various locations in the memory.
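A first-fit search is one common way to implement contiguous allocation; this sketch uses invented hole positions and sizes:

```python
# Free holes in memory as (start_address, size) pairs (illustrative).
holes = [(0, 100), (300, 50), (500, 200)]

def first_fit(request):
    """Allocate `request` contiguous units from the first hole that is
    large enough, shrinking that hole; return the start address, or
    None if no single hole can satisfy the request."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            holes[i] = (start + request, size - request)
            return start
    return None  # no single hole is big enough (external fragmentation)

print(first_fit(120))   # skips the 100- and 50-unit holes -> 500
```

A request of 120 fails against the first two holes even though together they hold 150 free units, which is exactly the external-fragmentation problem contiguous allocation suffers from.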

i) What are the advantages and disadvantages of using virtual memory? CO4 L1
Virtual memory allows a computer to use more memory than is
physically available, but it has some advantages and disadvantages:
Advantages
Memory protection: Virtual memory gives each process its own address space, protecting processes' memory from one another.
Data/code sharing: Virtual memory allows data and code to be shared between processes.
Disadvantages
Slower speed: Virtual memory is slower than physical memory.
Stability problems: Opening and storing large applications can reduce the system's stability and performance.
Applications may run slower: Applications may run slower because accessing disk storage is slow.
External storage lifespan: Using virtual memory on external storage can shorten the lifespan of the device.
Hard drive space: Virtual memory reduces the amount of hard drive space available to the user.
Storage space: Virtual memory takes up storage space that could be used for long-term data.

वर्चुअल मेमोरी का उपयोग करने के क्या फायदे और नुकसान हैं?
j) What are the advantages and disadvantages of using paging? CO4 L2
Paging has both advantages and disadvantages, including:
Advantages
Simplified memory management: Paging simplifies
memory management so that programs don't need to
worry about physical memory addresses.
Efficient memory usage: Paging allows for efficient
memory usage.
Memory protection: Paging prevents unauthorized
access to memory.
Simple mapping: The mapping between virtual and
physical addresses is simple.
Supports large programs: Paging allows large programs
to execute, even if they don't fit entirely in physical
memory.

Disadvantages
Internal fragmentation: Paging can cause internal fragmentation because the last page of a process may be only partly used.
Longer memory access time: Memory access can take longer because of the page-table lookup; each reference may require an extra memory access.
Memory overhead: The page tables themselves consume memory, and this overhead grows with the size of the address space.
Page-fault overhead: Moving pages in and out of memory adds disk I/O costs.

पेजिंग का उपयोग करने के क्या फायदे और नुकसान हैं?

UNIT-V
S.No. Question CO Bloom's Taxonomy
a) What is a File? CO1 L1
फ़ाइल क्या है?
File
A file is a named collection of related information that is
recorded on secondary storage such as magnetic disks,
magnetic tapes and optical disks. In general, a file is a
sequence of bits, bytes, lines or records whose meaning is
defined by the files creator and user.

File Structure
A File Structure should be according to a required format that
the operating system can understand.

 A file has a certain defined structure according to its


type.
 A text file is a sequence of characters organized into
lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks
that are understandable by the machine.
 When the operating system defines different file structures, it also contains the code to support them. Unix and MS-DOS support a minimum number of file structures.
File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files, and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −

Ordinary files
 These are the files that contain user information.
 These may have text, databases or executable program.
 The user can apply various operations on such files like
add, modify, delete or even remove the entire file.

Directory files
 These files contain a list of file names and other information related to these files.

Special files
 These files are also known as device files.
 These files represent physical device like disks, terminals,
printers, networks, tape drive etc.

These files are of two types −

 Character special files − data is handled character by


character as in case of terminals or printers.
 Block special files − data is handled in blocks as in the
case of disks and tapes.

File Access Mechanisms


File access mechanism refers to the manner in which the
records of a file may be accessed. There are several ways to
access files −

 Sequential access
 Direct/Random access
 Indexed sequential access

Sequential access
Sequential access is the method in which records are accessed in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.

Direct/Random access
 Random access file organization provides direct access to the records.
 Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.
 The records need not be in any sequence within the file
and they need not be in adjacent locations on the storage
medium.

Indexed sequential access


 This mechanism is built up on base of sequential access.
 An index is created for each file which contains pointers
to various blocks.
 Index is searched sequentially and its pointer is used to
access the file directly.
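The difference between sequential and direct access can be demonstrated with Python's standard file API, where `seek` jumps straight to a byte offset; the file name and the fixed-size 4-byte "records" are illustrative:

```python
import os
import tempfile

# Create a small file of three fixed-size 4-byte "records" to read back.
path = os.path.join(tempfile.gettempdir(), "demo_records.bin")
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCC")

with open(path, "rb") as f:
    first = f.read(4)    # sequential access: records come in order
    f.seek(8)            # direct access: jump straight to record 2
    third = f.read(4)

print(first, third)
os.remove(path)
```

With fixed-size records, the offset of record n is simply n times the record size, which is what makes direct access possible here.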

Space Allocation
Files are allocated disk spaces by operating system. Operating
systems deploy following three main ways to allocate disk
space to files.

 Contiguous Allocation
 Linked Allocation
 Indexed Allocation

Contiguous Allocation
 Each file occupies a contiguous address space on disk.
 Assigned disk address is in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of
allocation technique.

Linked Allocation
 Each file carries a list of links to disk blocks.
 Directory contains link / pointer to first block of a file.
 No external fragmentation
 Effectively used in sequential access file.
 Inefficient in case of direct access file.

Indexed Allocation
 Provides solutions to problems of contiguous and linked
allocation.
 An index block is created that holds all the pointers to the file's blocks.
 Each file has its own index block which stores the
addresses of disk space occupied by the file.
 Directory contains the addresses of index blocks of files.
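The trade-off between linked and indexed allocation for direct access can be sketched in a few lines; the block numbers are invented:

```python
# Linked allocation: each block stores the number of the next block.
next_block = {4: 7, 7: 2, 2: None}   # the file starts at block 4

def nth_block_linked(start, n):
    """Follow the chain n times: direct access costs O(n) disk reads."""
    block = start
    for _ in range(n):
        block = next_block[block]
    return block

# Indexed allocation: one index block lists all the data blocks.
index_block = [4, 7, 2]

def nth_block_indexed(n):
    """One lookup in the index block: direct access is O(1)."""
    return index_block[n]

print(nth_block_linked(4, 2), nth_block_indexed(2))  # both resolve block 2
```

Both schemes find the same physical block, but linked allocation must walk the chain from the start, which is why it suits sequential access and not direct access.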

b) What are the advantages of Linked Allocation? CO L1


लिंक्ड आवंटन के क्या लाभ हैं? 2
Linked allocation has several advantages in operating systems, including:
No external fragmentation
Linked allocation allows files to be stored in non-contiguous blocks,
so there's no external fragmentation.
Easy file resizing
New blocks can be added to the existing file chain to increase the
file size.
Judicious memory use
Linked allocation uses memory judiciously.
Suitable for systems with frequent file creation, deletion, or
resizing
Linked allocation is well-suited for systems where files are frequently
created, deleted, or resized, such as certain database systems and
file storage with many small files.
Fragmentation is an unwanted problem that occurs in the OS when processes are loaded into and unloaded from memory, leaving the free memory space broken into small pieces. Processes cannot be assigned to these memory blocks because of their small size, so the blocks remain unused.

c) List the various File Attributes. CO3 L2
Attributes of the File
1.Name

Every file carries a name by which the file is recognized in the file
system. One directory cannot have two files with the same name.

2.Identifier

Along with the name, each file has a unique identifier, typically a number, by which the file system recognizes it internally. The file name's extension (for example, .txt for a text file or .mp4 for a video file) indicates the file's type to the user.

3.Type

In a File System, the Files are classified in different types such as video
files, audio files, text files, executable files, etc.

4.Location

In the File System, there are several locations on which, the files can be
stored. Each file carries its location as its attribute.

5.Size

The Size of the File is one of its most important attribute. By size of the
file, we mean the number of bytes acquired by the file in the memory.

6.Protection

The Admin of the computer may want the different protections for the
different files. Therefore each file carries its own set of permissions to
the different group of Users.

7.Time and Date

Every file carries a time stamp which contains the time and date on
which the file is last modified.
विभिन्न फ़ाइल विशेषताओं की सूची बनाएं।
d) What are the various File Operations? CO2 L1
Operations on the File


A file is a collection of logically related data that is recorded on the
secondary storage in the form of sequence of operations. The content of
the files are defined by its creator who is creating the file. The various
operations which can be implemented on a file such as read, write, open
and close etc. are called file operations. These operations are performed
by the user by using the commands provided by the operating system.
Some common operations are as follows:

1.Create operation:

This operation is used to create a file in the file system. It is the most
widely used operation performed on the file system. To create a new file
of a particular type the associated application program calls the file
system. This file system allocates space to the file. As the file system
knows the format of directory structure, so entry of this new file is made
into the appropriate directory.

2. Open operation:

This operation is the common operation performed on the file. Once the
file is created, it must be opened before performing the file processing
operations. When the user wants to open a file, it provides a file name to
open the particular file in the file system. It tells the operating system to
invoke the open system call and passes the file name to the file system.

3. Write operation:

This operation is used to write information into a file. A write system call is issued that specifies the name of the file and the data to be written. The file length is increased by the amount written, and the file pointer is repositioned after the last byte written.

4. Read operation:
This operation reads the contents from a file. A Read pointer is
maintained by the OS, pointing to the position up to which the data has
been read.

5. Re-position or Seek operation:

The seek system call re-positions the file pointers from the current
position to a specific place in the file i.e. forward or backward depending
upon the user's requirement. This operation is generally performed with
those file management systems that support direct access files.

6. Delete operation:

Deleting a file not only removes all the data stored inside it but also frees the disk space it occupied. To delete the specified file, the directory is searched. When the directory entry is located, all the associated file space and the directory entry are released.

7. Truncate operation:

Truncating deletes a file's contents while keeping its attributes. The file itself is not removed; the information stored inside it is discarded, leaving the file in place.

8. Close operation:

When the processing of a file is complete, it should be closed so that all changes are made permanent and all occupied resources are released. On closing, the OS deallocates all the internal descriptors that were created when the file was opened.

9. Append operation:

This operation adds data to the end of the file.

10. Rename operation:

This operation is used to rename the existing file.

विभिन्न फ़ाइल संचालन क्या हैं?
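Several of these operations map directly onto ordinary system calls. A short Python sketch (the file name is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "ops_demo.txt")

with open(path, "w") as f:       # create + open
    f.write("hello")             # write
with open(path, "a") as f:       # append to the end of the file
    f.write(" world")

with open(path, "r") as f:       # open for reading
    f.seek(6)                    # re-position (seek) to byte 6
    tail = f.read()              # read from there to the end

new_path = path + ".renamed"
os.rename(path, new_path)        # rename
os.remove(new_path)              # delete
print(tail)
```

Each `with` block also performs the close operation implicitly when it exits, releasing the underlying file descriptor.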


e) What is Directory? CO1 L2
A directory is a hierarchical structure that organizes files and other resources on a computer or network. It is a special type of file that contains the information needed to access other files or directories. In computing, directories are also known as folders; they help users find specific data, applications, or services within a system.
निर्देशिका क्या है?


f) What are the operations that can be performed on a Directory? CO4 L4
Here are some operations that can be performed on a
directory:
Create: Create a new directory with a unique name
Search: Find a specific file or directory within a directory
Delete: Remove unwanted files or empty directories
List: Get a list of files in a directory
Rename: Change the name of a directory while keeping its contents
and attributes
Link: Link files so they appear in multiple directories
Unlink: Remove links from files in multiple directories
Copy: Duplicate an existing directory to create a new directory with
the same content and attributes
Move: Transfer a directory from one location to another within the file
system
Traverse: Traverse the file system by reading and searching the
directory
Directories contain files and metadata about those files, such
as permissions, timestamps, and file sizes.
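Several of these operations (create, list, search, rename, delete) can be sketched with Python's os module; the directory and file names below are hypothetical:

```python
# A small sketch of common directory operations.
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Create: a new directory with a unique name.
    os.mkdir(os.path.join(root, "docs"))
    # Put one file inside so List and Search have something to find.
    open(os.path.join(root, "docs", "a.txt"), "w").close()
    # List: enumerate the directory's entries.
    entries = os.listdir(os.path.join(root, "docs"))
    # Search: check whether a specific file exists in the directory.
    found = "a.txt" in entries
    # Rename: the directory keeps its contents under the new name.
    os.rename(os.path.join(root, "docs"), os.path.join(root, "papers"))
    renamed_entries = os.listdir(os.path.join(root, "papers"))
    # Delete: remove the file, then the now-empty directory.
    os.remove(os.path.join(root, "papers", "a.txt"))
    os.rmdir(os.path.join(root, "papers"))
    assert found and renamed_entries == ["a.txt"]
```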

वे कौन से ऑपरेशन हैं जो किसी निर्देशिका पर किए जा सकते


हैं?
g) Define UFD and MFD. CO L1
In the two-level directory structure, each user has his or her own 1
user file directory (UFD). Each UFD has a similar structure, but lists
only the files of a single user. When a user job starts, the system's
master file directory (MFD) is searched. The MFD is indexed by user
name or account number, and each entry points to the UFD for that
user.
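The two-level lookup can be modelled as a toy dictionary of dictionaries, with the MFD mapping user names to UFDs; the user names and block numbers below are made up for illustration:

```python
# Toy model of the two-level directory structure: MFD -> UFD -> file.
mfd = {
    "alice": {"report.txt": 101, "notes.txt": 102},  # UFD for alice
    "bob":   {"report.txt": 201},                    # UFD for bob
}

def lookup(user: str, filename: str):
    """Search the MFD for the user's UFD, then the UFD for the file."""
    ufd = mfd.get(user)
    if ufd is None:
        return None           # unknown user
    return ufd.get(filename)  # None if the file is absent

# Two users may hold files with the same name without conflict.
assert lookup("alice", "report.txt") == 101
assert lookup("bob", "report.txt") == 201
assert lookup("bob", "notes.txt") is None
```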
यूएफडी और एमएफडी को परिभाषित करें
h) Define Equal Allocation. CO L1
Equal allocation, in an operating system, is a frame-allocation 2
scheme used in memory management:
 Definition
Equal allocation divides the available memory frames equally
among all processes.
 Explanation
If there are m frames and n processes, each process receives
m/n frames; any leftover frames can form a free-frame buffer
pool. For example, splitting 93 frames among 5 processes gives
each process 18 frames, with 3 frames left over.
 Disadvantage
The scheme ignores the actual memory needs of processes: a
small process may get more frames than it requires, while a
large process gets too few.
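In memory management, equal allocation splits the m available frames evenly among the n processes; a minimal sketch (process names and frame counts are illustrative):

```python
# Equal allocation of frames: m frames split evenly among n processes;
# any remainder stays in the free-frame pool.
def equal_allocation(m_frames: int, processes: list) -> dict:
    share = m_frames // len(processes)
    return {p: share for p in processes}

alloc = equal_allocation(93, ["P1", "P2", "P3", "P4", "P5"])
assert alloc == {"P1": 18, "P2": 18, "P3": 18, "P4": 18, "P5": 18}
# 93 // 5 = 18 frames each, leaving 3 frames as a free-frame buffer.
```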

समान आवंटन को परिभाषित करें।


i) What are the different methods for allocation in a File CO L3
System? 3
The allocation methods define how the files are stored in the
disk blocks. There are three main disk space or file allocation
methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All three methods have their own advantages and
disadvantages, as discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks
on the disk. For example, if a file requires n blocks and is
given a block b as the starting location, then the blocks
assigned to the file will be: b, b+1, b+2, …, b+n−1. This
means that given the starting block address and the length
of the file (in terms of blocks required), we can determine the
blocks occupied by the file.
The directory entry for a file with contiguous allocation
contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts at block 19
with length = 6 blocks. Therefore, it occupies blocks 19, 20,
21, 22, 23, and 24.
Advantages:
 Both the Sequential and Direct Accesses are supported
by this. For direct access, the address of the kth block
of the file which starts at block b can easily be
obtained as (b+k).
 This is extremely fast, since the number of seeks is
minimal because the file blocks are allocated contiguously.
Disadvantages:
 This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of
memory utilization.
 Increasing file size is difficult because it depends on
the availability of contiguous memory at a particular
instance.
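The (start, length) bookkeeping and the b+k direct-access rule can be sketched as follows, reusing the 'mail' example from the text:

```python
# Contiguous allocation: a directory entry stores (start, length),
# and the k-th block of a file is simply start + k.
directory = {"mail": (19, 6)}  # start block 19, length 6 -> blocks 19..24

def blocks_of(name: str) -> list:
    start, length = directory[name]
    return list(range(start, start + length))

def kth_block(name: str, k: int) -> int:
    start, length = directory[name]
    if not 0 <= k < length:
        raise IndexError("block index outside the file")
    return start + k  # direct access: address of the k-th block is b + k

assert blocks_of("mail") == [19, 20, 21, 22, 23, 24]
assert kth_block("mail", 3) == 22
```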
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks
which need not be contiguous. The disk blocks can be
scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the
ending file block. Each block contains a pointer to the next
block occupied by the file.
The file ‘jeep’ in the following image shows how its blocks are
scattered across the disk. The last block (25) contains −1, a
null pointer, indicating that it does not point to any other
block.
Advantages:
 This is very flexible in terms of file size. File size can be
increased easily since the system does not have to
look for a contiguous chunk of memory.
 This method does not suffer from external
fragmentation. This makes it relatively better in terms
of memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the
disk, a large number of seeks are needed to access
every block individually. This makes linked allocation
slower.
 It does not support random or direct access. We cannot
directly access the blocks of a file; block k of a file
can be reached only by traversing k blocks sequentially
(sequential access) from the starting block of the file
via block pointers.
 Pointers required in the linked allocation incur some
extra overhead.
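The pointer chain can be modelled as a toy lookup table; apart from the final block 25 mentioned above, the block numbers here are assumed for illustration:

```python
# Linked allocation: each block stores a pointer to the next block;
# -1 marks end-of-file. Reaching block k requires k sequential hops.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}  # file 'jeep', start = 9

def kth_block(start: int, k: int) -> int:
    block = start
    for _ in range(k):            # sequential traversal: no direct access
        block = next_block[block]
        if block == -1:
            raise IndexError("past end of file")
    return block

assert kth_block(9, 0) == 9
assert kth_block(9, 4) == 25      # last block of the chain
```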
3. Indexed Allocation
In this scheme, a special block known as the Index
block contains the pointers to all the blocks occupied by a
file. Each file has its own index block. The ith entry in the
index block contains the disk address of the ith file block.
The directory entry contains the address of the index block
as shown in the image:
Advantages:
 This supports direct access to the blocks occupied by
the file and therefore provides fast access to the file
blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater
than linked allocation.
 For very small files, say files that span only 2–3
blocks, indexed allocation still dedicates one entire
block (the index block) to pointers, which is inefficient
in terms of memory utilization. In linked allocation, by
contrast, we lose the space of only one pointer per
block.
For very large files, a single index block may not be
able to hold all the pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index
blocks together for holding the pointers. Every index
block would then contain a pointer or the address to
the next index block.
2. Multilevel index: In this policy, a first-level index
block points to second-level index blocks, which in
turn point to the disk blocks occupied by the file.
This can be extended to three or more levels
depending on the maximum file size.
3. Combined Scheme: In this scheme, a special block
called the inode (index node) contains all the
information about the file, such as its name, size,
permissions, etc., and the remaining space of the inode
stores the disk-block addresses that hold the actual
file, as shown in the image below. The first few of
these pointers in the inode point to direct blocks,
i.e. they contain the addresses of the disk blocks that
hold the file's data. The next few pointers point to
indirect blocks, which may be single indirect, double
indirect or triple indirect. A single indirect block
does not contain file data but rather the disk addresses
of the blocks that do. Similarly, a double indirect
block contains the disk addresses of blocks that in turn
hold the addresses of the blocks containing the file
data.
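The basic index-block idea can be modelled as a toy lookup: the directory entry holds the index block's address, and the i-th entry of the index block gives the i-th data block. All block numbers here are illustrative:

```python
# Indexed allocation: one index block per file holds the addresses
# of all its data blocks, so any block is reachable in one step.
directory = {"jeep": 19}                  # directory stores the index block
index_blocks = {19: [9, 16, 1, 10, 25]}   # index block 19 lists data blocks

def ith_block(name: str, i: int) -> int:
    index = index_blocks[directory[name]]
    return index[i]   # direct access: no chain traversal needed

assert ith_block("jeep", 0) == 9
assert ith_block("jeep", 4) == 25
```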

फ़ाइल सिस्टम में आवंटन की विभिन्न विधियाँ क्या हैं?


j) What are the advantages of Contiguous Allocation? CO L3
Contiguous memory allocation refers to a memory management 2
technique in which whenever there occurs a request by a user
process for the memory, one of the sections of the contiguous
memory block would be given to that process, in accordance with its
requirement.
As the illustration above shows, there are three files in the directory.
The starting block and the length of each file are listed in the table;
contiguous blocks are assigned to each file according to its need.

Types of Partitions
Contiguous memory allocation can be achieved when we divide the
memory into the following types of partitions:

1. Fixed-Sized Partitions
Another name for this is static partitioning. Here, memory is divided
into multiple fixed-sized partitions, and each partition may hold
exactly one process. This limits the degree of multiprogramming, since
the total number of partitions determines the maximum number of
processes.

2. Variable-Sized Partitions
Dynamic partitioning is another name for this scheme. Here, partitions
are allocated dynamically: the size of a partition is not declared in
advance and becomes known only once the process size is known. Because
each partition exactly matches the size of its process, internal
fragmentation is prevented.

By contrast, when a process is smaller than its partition, part of the
partition is wasted (internal fragmentation). This occurs in static
partitioning; dynamic partitioning avoids the issue.
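The internal-fragmentation difference between the two partition types can be sketched numerically; the partition and process sizes below are made up:

```python
# Internal fragmentation under fixed-sized partitions vs. none under
# variable-sized (dynamic) partitions.
PARTITION_SIZE = 100  # fixed partition size in KB (illustrative)

def internal_fragmentation_fixed(process_sizes):
    # Each process occupies one whole partition; the unused tail is wasted.
    return sum(PARTITION_SIZE - s for s in process_sizes)

def internal_fragmentation_dynamic(process_sizes):
    # Partition size equals process size, so nothing is wasted internally.
    return 0

procs = [60, 90, 75]
assert internal_fragmentation_fixed(procs) == 75   # (40 + 10 + 25) KB wasted
assert internal_fragmentation_dynamic(procs) == 0
```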

Pros of Contiguous Memory Allocation


1. It supports a user’s random access to files.

2. The user gets excellent read performance.

3. It is fairly simple to implement.

Cons of Contiguous Memory Allocation


1. Having a file grow might be somewhat difficult.

2. The disk may become fragmented.

सन्निहित आवंटन के क्या लाभ हैं?

SECTION-B (Short Answer Type Questions)


UNIT-I
S.No Question CO Bloom's
. Taxono
my
a) What are the various objectives and functions of Operating CO L1
systems? 1
ऑपरेटिंग सिस्टम के विभिन्न उद्देश्य और कार्य क्या हैं?
b) What is a real time operating system and explain different CO L1
types of real time operating system? 1
रियल टाइम ऑपरेटिंग सिस्टम क्या है और विभिन्न प्रकार के
रियल टाइम ऑपरेटिंग सिस्टम के बारे में बताएं?
c) Explain the components of an operating system? CO L2
ऑपरेटिंग सिस्टम के घटकों को समझाइये? 2
d) Differentiate between distributed systems from CO L2
multiprocessor systems? 2
मल्टीप्रोसेसर सिस्टम से वितरित सिस्टम के बीच अंतर
बताएं?
e) Differentiate between distributed operating systems from CO L3
real time operating systems? 3
वितरित ऑपरेटिंग सिस्टम और रियल टाइम ऑपरेटिंग सिस्टम के
बीच अंतर बताएं?
f) What is an operating system and explain the characteristics CO L3
of an operating system? 3
ऑपरेटिंग सिस्टम क्या है और ऑपरेटिंग सिस्टम की विशेषताएँ
समझाइये?
g) What is a distributed operating System and explain the CO L4
advantages and disadvantages of operating systems? 1
डिस्ट्रीब्यूटेड ऑपरेटिंग सिस्टम क्या है और ऑपरेटिंग
सिस्टम के फायदे और नुकसान बताएं?
h) What are the major activities of operating systems with CO L4
regard to 3
Process management?
ऑपरेटिंग सिस्टम की प्रमुख गतिविधियाँ किसके संबंध में
हैं?प्रक्रिया प्रबंधन?
i) What are different types of operating systems? Explain them CO L1
in detail. 3
ऑपरेटिंग सिस्टम के विभिन्न प्रकार क्या हैं? उन्हें
विस्तार से बताएं.
j) What is a real-time operating system? Explain it in CO L2
detail. 5
रियल टाइम ऑपरेटिंग सिस्टम क्या हैं? उन्हें विस्तार से
बताएं.
UNIT-II
S.No Question CO Bloom's
. Taxono
my
a) Define process and Explain process states in detail with CO L1
diagrams. 1
प्रक्रिया को परिभाषित करें और प्रक्रिया की स्थिति को
चित्र सहित विस्तार से समझाएं।
b) Differentiate between process and thread. CO L1
प्रक्रिया और धागे के बीच अंतर बताएं. 2
c) What is process Synchronisation and how does it work? CO L2
प्रोसेस सिंक्रोनाइज़ेशन क्या है और यह कैसे काम करता है? 3
d) What is the solution to a critical section problem? CO L2
गंभीर अनुभाग की समस्या का समाधान क्या है? 1
e) Consider following processes with length of CPU burst time in CO L3
milliseconds 4
Process Burst time

P1 24

P2 3

P3 3

All processes arrive in the order P1, P2, P3, all at time zero.


a) Draw a Gantt chart illustrating execution of the processes
for First Come First Serve (FCFS).
b) Calculate the waiting time for each process under the
scheduling algorithm.
मिलीसेकंड में सीपीयू बर्स्ट समय की लंबाई के साथ
निम्नलिखित प्रक्रियाओं पर विचार करें
प्रक्रिया समय
विस्फोट

पी1 24

पी2 3

पी3 3

सभी प्रक्रियाएँ क्रम p1,p2,p3 में आ गईं, हर समय शून्य


ए) पहले आओ पहले पाओ (एफसीएफएस) के लिए प्रक्रियाओं के
निष्पादन को दर्शाने वाले गैंट चार्ट बनाएं
बी) प्रत्येक शेड्यूलिंग एल्गोरिदम के लिए प्रत्येक
प्रक्रिया के लिए प्रतीक्षा समय की गणना करें
f) Consider following processes with length of CPU burst time in CO L3
milliseconds 3
Process Burst time

P1 5

P2 10
P3 2

P4 1

All processes arrive in the order P1, P2, P3, P4, all at time zero.
a) Draw Gantt charts illustrating execution of the processes for SJF
and Round robin (given time quantum=1)
b) Calculate waiting time for each process for each Scheduling
algorithm
मिलीसेकंड में सीपीयू बर्स्ट समय की लंबाई के साथ
निम्नलिखित प्रक्रियाओं पर विचार करें
प्रक्रिया समय
विस्फोट

पी1 5

पी2 10

पी3 2

पी4 1

सभी प्रक्रियाएँ क्रम p1,p2,p3,p4 में आ गईं, हर समय शून्य


ए) एसजेएफ और राउंड रॉबिन के लिए प्रक्रियाओं के निष्पादन
को दर्शाने वाले गैंट चार्ट बनाएं (समय क्वांटम = 1 दिया
गया है)
बी) प्रत्येक शेड्यूलिंग एल्गोरिदम के लिए प्रत्येक
प्रक्रिया के लिए प्रतीक्षा समय की गणना करें|
g) What are Preemptive scheduling and Non-Preemptive CO L4
scheduling? Explain their types. 1
प्रीमेप्टिव शेड्यूलिंग और नॉनप्रीमेप्टिव शेड्यूलिंग
क्या है? इसके प्रकार बताएं?
h) Difference between process scheduling and CPU scheduling? CO L4
प्रोसेस शेड्यूलिंग और सीपीयू शेड्यूलिंग के बीच अंतर? 2
i) Explain the process memory used for efficient operation? CO L2
कुशल संचालन के लिए उपयोग की जाने वाली प्रोसेस मेमोरी की 1
व्याख्या करें?
j) What is Waiting Time and response time in CPU scheduling? CO L1
सीपीयू शेड्यूलिंग में प्रतीक्षा समय और प्रतिक्रिया समय 2
क्या है?

UNIT-III
S.No. Question CO Bloom's
Taxono
my
a) Explain deadlock prevention in detail. CO L1
गतिरोध निवारण को विस्तार से समझाइये। 3
b) Discuss deadlock detection with one resource of each type. CO L3
प्रत्येक प्रकार के एक संसाधन के साथ गतिरोध का पता लगाने 3
पर चर्चा करें।
c) Explain deadlock avoidance. CO L1
गतिरोध निवारण को समझाइये। 2
d) What is a resource-allocation graph? Explain in detail. CO L2
संसाधन-आवंटन ग्राफ क्या है? विस्तार से समझाइए। 2
e) Explain wait for graph with example. CO L2
वेट फॉर ग्राफ़ को उदाहरण सहित समझाइये। 2
f) What is a deadlock? Write the advantages and CO L1
disadvantages of deadlock. 1
गतिरोध क्या है? गतिरोध के लाभ एवं हानियाँ लिखिए।
g) Explain Deadlock detection (Banker’s Algorithm) with CO L3
Example? 4
डेडलॉक डिटेक्शन (बैंकर्स एल्गोरिथम) को उदाहरण सहित
समझाएं?
h) Give the condition necessary for a deadlock situation to CO L4
arise? 3
गतिरोध की स्थिति उत्पन्न होने के लिए आवश्यक शर्त बताइए?
i) Difference between Deadlock and Starvation. CO L2
गतिरोध और भुखमरी के बीच अंतर. 2
j) Explain recovery from deadlock. CO L1
गतिरोध से उबरने के बारे में बताएं? 1

UNIT-IV
S.No Question CO Bloom's
. Taxono
my
a) What is Segmentation? Explain with Example. CO L1
विभाजन क्या है? उदाहरण सहित समझाइये। 1
b) What are Pages and Frames? CO L1
पेज और फ़्रेम क्या हैं? 1
c) Difference between paging and demand paging. CO L2
पेजिंग और डिमांड पेजिंग के बीच अंतर. 2
d) Write a page replacement algorithm in detail. CO L1
पृष्ठ प्रतिस्थापन एल्गोरिथम को विस्तार से लिखें। 3
e) Explain Contiguous memory allocation in detail. CO L2
सन्निहित मेमोरी आवंटन को विस्तार से समझाइये। 3
f) Explain the concept of Paging. CO L2
पेजिंग की अवधारणा को समझाइयेI 3
g) Explain the types of Page Table Structure. CO L3
पेज टेबल संरचना के प्रकार बताइये। 2
h) Explain about Segmentation in detail. CO L2
सेग्मेंटेशन के बारे में विस्तार से बताएं। 3
i) What is paging? Write the advantages and disadvantages of CO L2
paging? 4
पेजिंग क्या है? पेजिंग के फायदे और नुकसान लिखें?
j) Explain demand paging and write the advantages and disadvantages CO L3
of demand paging? 4
डिमांड पेजिंग को समझाइए और डिमांड पेजिंग के फायदे और
नुकसान लिखिए?

UNIT-V

S.No Question CO Bloom's


. Taxono
my
a) What is the information associated with an Open File? CO L1
ओपन फ़ाइल से जुड़ी जानकारी क्या है? 1
b) Explain the file system. Write its advantages and CO L2
disadvantages in detail. 2
फाइल सिस्टम के बारे में विस्तार से बताएं, इसके फायदे और
नुकसान के बारे में विस्तार से बताएं?
c) What is the file allocation method? Explain in detail. CO L1
फ़ाइल आवंटन विधि क्या है? विस्तार से व्याख्या। 1
d) Difference between UFD and MFD? CO L2
यूएफडी और एमएफडी को परिभाषित करें? 1
e) Which are the typical operations performed on a directory? CO L3
किसी निर्देशिका पर किए जाने वाले विशिष्ट ऑपरेशन कौन से 2
हैं?
f) Explain linked list allocation of file in detail. CO L4
फ़ाइल की लिंक्ड सूची आवंटन को विस्तार से समझाएँ। 3
g) Compare file organization methods. CO L4
फ़ाइल संगठन विधियों की तुलना करें. 3
h) What are methods of free space management of Disk? CO L2
डिस्क के मुक्त स्थान प्रबंधन के तरीके क्या हैं? 2
i) What is a directory? Explain directory operation in detail. CO L2
निर्देशिका क्या है? डायरेक्ट्री ऑपरेशन को विस्तार से 4
समझाइये।
j) What criteria are important in choosing a file organization? CO L4
फ़ाइल संगठन चुनने में कौन से मानदंड महत्वपूर्ण हैं? 4
SECTION-C [Descriptive Answer Type Questions / Case Study (for MBA COURSES
only)]
UNIT-I
S.No Question CO Bloom's
. Taxono
my
a) Write a short note- CO L1
Operating system 1
Real time Operating system
Distributed Operating System
एक संक्षिप्त नोट लिखें-
ऑपरेटिंग सिस्टम
रियल टाइम ऑपरेटिंग सिस्टम
वितरित ऑपरेटिंग सिस्टम
b) What is a distributed operating System and explain the CO L1
advantages and disadvantages of operating systems? 2
डिस्ट्रीब्यूटेड ऑपरेटिंग सिस्टम क्या है और ऑपरेटिंग
सिस्टम के फायदे और नुकसान बताएं?
c) What is an operating system? Explain multiprogramming and CO L1
time sharing systems. 2
एक ऑपरेटिंग सिस्टम क्या है? मल्टीप्रोग्रामिंग और टाइम
शेयरिंग सिस्टम की व्याख्या करें।
d) What are different types of operating system? Explain them CO L2
in detail. 1
ऑपरेटिंग सिस्टम के विभिन्न प्रकार क्या हैं? उन्हें
विस्तार से बताएं I
e) Explain User Operating-System Interface in detail. CO L3
यूजर ऑपरेटिंग-सिस्टम इंटरफ़ेस को विस्तार से समझाइये 3
f) Explain operating system functions and services in detail. CO L3
ऑपरेटिंग सिस्टम के कार्यों और सेवाओं को विस्तार से 3
समझाइये।
g) What are the various objectives and functions of Operating CO L4
systems? 4
ऑपरेटिंग सिस्टम के विभिन्न उद्देश्य और कार्य क्या हैं?
h) What is a real time operating system and explain different CO L2
types of real time operating system? 2
रियल टाइम ऑपरेटिंग सिस्टम क्या है और विभिन्न प्रकार के
रियल टाइम ऑपरेटिंग सिस्टम के बारे में बताएं?
i) Explain the components of an operating system? CO L3
ऑपरेटिंग सिस्टम के घटकों को समझाइये? 2
j) What is an operating system and explain the characteristics CO L2
of an operating system? 1
ऑपरेटिंग सिस्टम क्या है और ऑपरेटिंग सिस्टम की विशेषताएँ
समझाइये?

UNIT-II
S.No Question CO Bloom's
. Taxono
my
a) Explain the following process scheduling algorithm CO L2
a) Priority scheduling 2
b) Shortest job first scheduling.

निम्नलिखित प्रक्रिया शेड्यूलिंग एल्गोरिदम की व्याख्या


करें
ए) प्राथमिकता शेड्यूलिंग
बी) सबसे छोटा काम पहला शेड्यूलिंग I
b) Define process and Explain process states in detail with CO L1
diagrams. 2
प्रक्रिया को परिभाषित करें और प्रक्रिया की स्थिति को
चित्र सहित विस्तार से समझाएं I
c) Evaluate FCFS, SJF CPU Scheduling algorithm for given CO L4
Problem 4
Process P1 P2 P3 P4
Process Time 8 4 9 5
Arrival Time 0 1 2 3
a) Draw Gantt charts illustrating execution of the processes
for Short job First(SJF), First come first serve(FCFS).
b)Calculate waiting time for each process for each
Scheduling algorithm.
दी गई समस्या के लिए एफसीएफएस, एसजेएफ सीपीयू शेड्यूलिंग
एल्गोरिदम का मूल्यांकन करें
प्रक्रिया P1 P2 P3 P4
प्रक्रिया समय 8 4 9 5
आगमन समय 0 1 2 3
ए) शॉर्ट जॉब फर्स्ट (एसजेएफ), पहले आओ पहले पाओ (एफसीएफएस)
के लिए प्रक्रियाओं के निष्पादन को दर्शाने वाले गैंट
चार्ट बनाएं।
बी) प्रत्येक शेड्यूलिंग एल्गोरिदम के लिए प्रत्येक
प्रक्रिया के लिए प्रतीक्षा समय की गणना करें।
d) Evaluate the Round Robin CPU scheduling algorithm for the CO L4
given problem. Time quantum = 3 ms. 4
Process P1 P2 P3 P4
Process Time 10 5 18 6
Arrival Time 5 3 0 4
a) Draw Gantt charts illustrating execution of the processes
for Round robin.
b) Calculate waiting time for each process for each
Scheduling algorithm.
दी गई समस्या के लिए राउंड सीपीयू शेड्यूलिंग एल्गोरिदम
का मूल्यांकन करें
समय की मात्रा =3 एमएस.
प्रक्रिया P1 P2 P3 P4
प्रक्रिया समय 10 5 18 6
आगमन समय 5 3 0 4
ए) राउंड रॉबिन के लिए प्रक्रियाओं के निष्पादन को दर्शाने
वाले गैंट चार्ट बनाएं।
बी) प्रत्येक शेड्यूलिंग एल्गोरिदम के लिए प्रत्येक
प्रक्रिया के लिए प्रतीक्षा समय की गणना करें I
e) a) Define Process? Explain process State diagram? CO L3
b) Explain about process schedulers? 2
क) प्रक्रिया को परिभाषित करें? प्रक्रिया राज्य आरेख
समझाइये?
ख) प्रक्रिया अनुसूचियों के बारे में बताएं?
f) a) Define process synchronization. CO L3
b) What is a process? Explain Process Control Block. 3

ए) प्रक्रिया सिंक्रनाइज़ेशन को परिभाषित करें।


ख) एक प्रक्रिया क्या है? प्रोसेस कंट्रोल ब्लॉक को
समझाइये।
g) What are Preemptive scheduling and Non-Preemptive CO L2
scheduling? Explain their types in detail. 2
प्रीमेप्टिव शेड्यूलिंग और नॉनप्रीमेप्टिव शेड्यूलिंग
क्या है? इसके प्रकार विस्तार से समझाएं I
h) List the main differences and similarities between threads CO L4
and process. 3
थ्रेड और प्रक्रिया के बीच मुख्य अंतर और समानताएं
सूचीबद्ध करें।
i) Explain the difference between long term and short term CO L3
schedulers. 3
दीर्घकालिक और अल्पकालिक अनुसूचियों के बीच अंतर स्पष्ट
करें।
j) Consider 3 processes P1, P2 and P3, which require 5, 7 and 4 CO L4
time units and arrive at time 0, 1 and 3. Draw the Gant chart, 4
process completion sequence and average waiting time for.
(i) Round robin scheduling with CPU quantum of 2 time units.
(ii) FCFS

तीन प्रक्रियाओं P1, P2 और P3 पर विचार करें, जिनके लिए 5, 7


और 4 समय इकाइयों की आवश्यकता होती है
और समय 0, 1 और 3 पर पहुंचें। गैंट चार्ट बनाएं, प्रक्रिया
पूरी करें
अनुक्रम और औसत प्रतीक्षा समय।
(i) 2 समय इकाइयों के सीपीयू क्वांटम के साथ राउंड रॉबिन
शेड्यूलिंग।
(ii) एफसीएफएस

UNIT-III
S.No Question CO Bloom's
. Taxono
my
a) What are the conditions for deadlock? Explain deadlock CO L1
detection and recovery in detail. 1
गतिरोध की स्थितियाँ क्या हैं? गतिरोध का पता लगाने और
पुनर्प्राप्ति को विस्तार से समझाएं
b) Explain banker's algorithm for multiple resources to avoid CO L1
deadlock. 2
गतिरोध से बचने के लिए अनेक संसाधनों के लिए बैंकर्स
एल्गोरिदम की व्याख्या करें।
c) Explain different methods to handle deadlocks. CO L2
गतिरोधों से निपटने के विभिन्न तरीकों की व्याख्या करें। 2
d) Explain the methods for deadlock prevention. CO L2
गतिरोध निवारण के उपाय बताएं I 3
e) Explain Deadlock detection (Banker’s Algorithm) with CO L3
Example? 3
डेड लॉक डिटेक्शन (बैंकर्स एल्गोरिथम) को उदाहरण सहित
समझाएं?
f) a) Explain about Deadlock Avoidance? CO L3
b) Explain how recovery from a deadlock? 1

क) गतिरोध निवारण के बारे में बताएं?


ख) बताएं कि गतिरोध से कैसे उबरें?
g) Why is the deadlock state more critical than starvation? CO L4
Describe resource allocation graph with a deadlock, with a 2
cycle but no deadlock.
गतिरोध की स्थिति भुखमरी से अधिक गंभीर क्यों है? एक
गतिरोध के साथ संसाधन आवंटन ग्राफ़ का वर्णन करें, एक चक्र
के साथ लेकिन कोई गतिरोध नहीं।
h) Define deadlock and starvation. Explain Difference between CO L4
Deadlock and starvation. 2
गतिरोध एवं भुखमरी को परिभाषित करें। गतिरोध और भुखमरी के
बीच अंतर स्पष्ट करें।
i) Explain wait for graph with example in detail. CO L2
वेट फॉर ग्राफ़ को उदाहरण सहित विस्तार से समझाइये। 3
j) a) What are the methods for handling deadlock? CO L1
b) Write about deadlock and starvation? 3
क) गतिरोध से निपटने के तरीके क्या हैं?
ख) गतिरोध और भुखमरी के बारे में लिखें?

UNIT-IV
S.No Question CO Bloom's
. Taxono
my
a) Mention the merits and demerits of the FIFO, LRU and CO L2
Optimal page replacement algorithms. 3
FIFO, LRU और ऑप्टिमल पेज रिप्लेसमेंट एल्गोरिदम के गुण और
दोषों का उल्लेख करें।
b) Consider the reference stream CO L4
1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. How many page faults 4
while using FCFS and LRU using 2 frames?
संदर्भ धारा 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6 पर विचार
करें। 2 फ़्रेमों का उपयोग करते हुए एफसीएफएस और एलआरयू का
उपयोग करते समय कितने पृष्ठ दोष हैं?
c) Explain how paging supports virtual memory. With a neat CO L2
diagram explain in detail. 1
बताएं कि पेजिंग वर्चुअल मेमोरी को कैसे सपोर्ट करती है।
एक साफ़ चित्र के साथ विस्तार से समझाइये।
d) a) What is Segmentation? Explain with Example. CO L2
b) Explain about Paging.? 1
क) विभाजन क्या है? उदाहरण सहित समझाइये।
ख) पेजिंग के बारे में बताएं?
e) a) What is virtual memory? Discuss the benefits of virtual CO L3
memory techniques. 2
b) Write a short note on Disk management.
क) वर्चुअल मेमोरी क्या है? वर्चुअल मेमोरी तकनीकों के
लाभों पर चर्चा करें।
ख) डिस्क प्रबंधन पर एक संक्षिप्त नोट्स लिखें।
f) Given page reference string: CO L4
1,2,3,2,1,5,2,1,6,2,5,6,3,1,3,6,1,2,4,3. Compare the number 4
of page faults for LRU, FIFO and Optimal page replacement
algorithm.
दी गई पृष्ठ संदर्भ स्ट्रिंग:
1,2,3,2,1,5,2,1,6,2,5,6,3,1,3,6,1,2,4,3। एलआरयू, फीफो और
ऑप्टिमल पेज रिप्लेसमेंट एल्गोरिदम के लिए पेज दोषों की
संख्या की तुलना करें।
g) Explain the basic concepts of segmentation in detail. CO L3
विभाजन की मूल अवधारणाओं को विस्तार से समझाइए। 2
h) Write a short note- CO L2
● demand paging 1
● virtual memory
● paging
एक संक्षिप्त नोट लिखें-
● पेजिंग की मांग करें
● आभासी मेमोरी
● पेजिंग
i) What is contiguous memory allocation? Explain it in detail. CO L1
सन्निहित स्मृति आवंटन क्या है? इसे समझाओ। 2
j) Write short notes on CO L2
a) Demand paging 1
b) Thrashing
c) Page replacement
पर संक्षिप्त नोट्स लिखें
ए) डिमांड पेजिंग
ख) पिटाई
ग) पृष्ठ प्रतिस्थापन

UNIT-V
S.No Question CO Bloom's
. Taxono
my
a) Explain about single-level, two-level directory structure? CO L1
एकल-स्तरीय, दो-स्तरीय निर्देशिका संरचना के बारे में 1
बताएं?
b) Discuss the objectives for file management systems. CO L2
फ़ाइल प्रबंधन प्रणालियों के उद्देश्यों पर चर्चा करें। 1
c) Mention the different file attributes and file types. CO L1
विभिन्न फ़ाइल विशेषताओं और फ़ाइल प्रकारों का उल्लेख 2
करें।
d) What are the different disk scheduling algorithms? CO L2
विभिन्न डिस्क शेड्यूलिंग एल्गोरिदम क्या हैं बताएं। 2
e) Explain different free space management techniques in CO L3
detail. 2
विभिन्न मुक्त स्थान प्रबंधन तकनीकों को विस्तार से
समझाइए।
f) Write about different types of operation performed on file. CO L3
फ़ाइल पर निष्पादित विभिन्न प्रकार के ऑपरेशन के बारे में 3
लिखें।
g) Write a short note- CO L1
1. file attributes 3
2. file operations
एक संक्षिप्त नोट लिखें-
1. फ़ाइल विशेषताएँ
2. फ़ाइल संचालन
h) Write a short note- CO L1
1. contiguous allocation 3
2. linked allocation
3. indexed allocation
एक संक्षिप्त नोट लिखें-
1. सन्निहित आवंटन
2. संबद्ध आवंटन
3. अनुक्रमित आवंटन
i) a) Explain the concept of file with Example. CO L2
b) Explain about the access method with Example. 4
क) फ़ाइल की अवधारणा को उदाहरण सहित समझाइए।
ख) उदाहरण सहित एक्सेस विधि के बारे में बताएं।
j) a) Discuss about File type. CO L3
b) Explain about File operation. 4
a) फ़ाइल प्रकार के बारे में चर्चा करें।
बी) फ़ाइल ऑपरेशन के बारे में बताएं।
