
Operating System

7 Years Question
Assignment

Quote of the day: “The greatest amount of wasted time is the time not getting started.”

1. What is scheduling? Define preemptive and non-preemptive scheduling with the help of an example.

Scheduling in operating systems refers to the method by which processes are given
access to system resources, particularly the CPU. The primary goal of scheduling is
to optimize CPU utilization and to ensure that each process gets its fair share of time
on the CPU to execute efficiently.

Types of Scheduling:

1. Preemptive Scheduling
2. Non-Preemptive Scheduling

1. Preemptive Scheduling

In preemptive scheduling, a process can be interrupted and moved to the "ready" state if a higher-priority process arrives or if the current process’s allowed time on the CPU is up. This method allows the operating system to handle time-sensitive tasks more efficiently by giving preference to higher-priority processes.

• Example: Suppose there are two processes, P1 and P2, where:

o P1 has a longer CPU burst time, but it starts first.
o P2 has a shorter burst time and arrives after P1 starts executing.

If the scheduling is preemptive and a time slice is given, P1 will start executing but may get interrupted once P2 arrives. This allows P2 to execute first since it has a shorter burst time, leading to quicker completion of smaller tasks.

• Common Algorithms: Round Robin, Priority Scheduling (preemptive version), Shortest Remaining Time First (SRTF).

2. Non-Preemptive Scheduling

In non-preemptive scheduling, once a process is allocated the CPU, it keeps the CPU
until it completes its CPU burst or voluntarily releases the CPU (e.g., by moving to
an I/O operation). This scheduling method is simpler but can lead to inefficiencies,
especially when short processes have to wait for longer processes to finish.

• Example: Consider processes P1 and P2 again with the same scenario as above. If the scheduling is non-preemptive, P1 will complete its entire burst time on the CPU before P2 is allowed to execute, regardless of the length of P2's burst time.
• Common Algorithms: First-Come, First-Served (FCFS), Shortest Job First
(SJF, non-preemptive), Priority Scheduling (non-preemptive).

2. Explain FCFS.

First-Come, First-Served (FCFS) is one of the simplest CPU scheduling algorithms used in
operating systems. In FCFS scheduling, the process that arrives first is served first, meaning
it follows a first-in, first-out (FIFO) order. The process that enters the ready queue earliest
is given the CPU first and runs until it completes or voluntarily yields control.

Key Characteristics of FCFS Scheduling

1. Non-preemptive: Once a process starts execution, it cannot be preempted by another process. It holds the CPU until it finishes.
2. Simple to implement: Since it follows the order of arrival, it does not require complex
priority calculations or context switching.
3. Can cause long waiting times: If a process with a long CPU burst arrives first, it can delay all
subsequent processes, a phenomenon known as the "convoy effect."

Advantages of FCFS Scheduling

• Simplicity: Easy to implement and understand, as it only requires tracking arrival times.
• Fairness: Every process is handled in the order of arrival, so there’s no favoritism.

Disadvantages of FCFS Scheduling

• Poor response time for short processes: Shorter tasks may have to wait a long time if a long
process arrives first, leading to inefficient CPU utilization.
• Convoy Effect: Long processes can hold up shorter processes, creating a bottleneck.
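
As an illustrative aid (not part of the original answer), here is a minimal Python sketch that computes FCFS waiting times; the process list is an assumed example chosen to show the convoy effect, where a long first job inflates everyone else's wait:

# Minimal FCFS waiting-time calculation (assumed example data).
# Each tuple is (name, arrival_time, burst_time).
processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]

processes.sort(key=lambda p: p[1])      # FCFS: serve in order of arrival
clock, waits = 0, {}
for name, arrival, burst in processes:
    start = max(clock, arrival)         # CPU may sit idle until arrival
    waits[name] = start - arrival       # time spent in the ready queue
    clock = start + burst               # run to completion (non-preemptive)

print(waits)                            # {'P1': 0, 'P2': 23, 'P3': 25}
print(sum(waits.values()) / len(waits)) # average waiting time = 16.0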

3. Define the term
4. Describe the term:

i. Scheduling queues

Scheduling Queues

In an operating system, processes pass through various stages in their lifecycle, and to manage this, the OS maintains different types of scheduling queues. Each queue holds processes at different stages, facilitating efficient management and execution. Here are the main types of scheduling queues:

1. Job Queue
o The job queue, sometimes called the submit queue, contains all
processes in the system.
o When a process is created, it is initially placed in the job queue.
o It represents the pool of jobs waiting to be executed.
2. Ready Queue
o The ready queue holds all processes that are in the "ready" state,
waiting to be assigned to the CPU for execution.
o Processes are moved to the ready queue once they are ready to run
but are waiting for CPU time.
o The CPU scheduler picks processes from the ready queue based on
the scheduling algorithm (e.g., FCFS, Round Robin).
3. Device Queue
o Device queues contain processes that are waiting for a specific I/O
device, such as a printer, disk, or network device.
o Each I/O device typically has its own device queue.
o When a process requests I/O, it’s moved to the device queue, and
when I/O is complete, it may go back to the ready queue.
4. Waiting Queue (Blocked Queue)
o This queue holds processes that cannot execute until certain events
occur, such as waiting for an I/O operation to complete or waiting
for access to a shared resource.
o Once the event occurs, the process is moved from the waiting
queue back to the ready queue.

ii. Scheduler

A scheduler in an operating system is a component responsible for managing process
scheduling. It decides which process runs on the CPU at a given time, optimizing the
use of system resources and ensuring a balanced and fair process execution
environment. The scheduler determines the order and timing of process execution
based on various criteria, such as priority, arrival time, or required CPU burst time.

Types of Schedulers

There are typically three types of schedulers in an operating system:

1. Long-Term Scheduler (Job Scheduler):
o Manages the admission of processes into the system.
o Controls the degree of multiprogramming by selecting which processes
should enter the ready queue.
o Less frequent; it operates when a new process is created.
o Found mostly in batch processing systems.
2. Short-Term Scheduler (CPU Scheduler):
o Selects a process from the ready queue and allocates the CPU to it.
o Operates frequently as it controls process execution in real time.
o Decides which process should run next when the CPU becomes idle or
when a preemption occurs.
o Plays a significant role in ensuring efficient CPU utilization and system
responsiveness.
3. Medium-Term Scheduler:
o Handles the suspension and resumption of processes.
o Manages processes in the suspended (or swapped out) state to optimize
memory usage and avoid overloading.
o Used mainly in systems that support swapping (temporarily removing a
process from memory to free space).

iii. Context switch.


A context switch is the process of saving and restoring the state (or "context") of a
CPU so that multiple processes can share a single CPU resource effectively. This
switching enables the operating system to interrupt a currently running process and
resume it later, allowing other processes to execute in between. Context switches are
fundamental in multitasking operating systems, where multiple processes need CPU
time.

Key Steps in Context Switching

During a context switch, the operating system performs several steps:

1. Save the State of the Current Process: The OS saves the current process’s state
(register values, program counter, etc.) in the process control block (PCB), so it can
resume from this point later.
2. Switch to the New Process: The OS selects the next process to run (based on
scheduling) and loads its saved state from its PCB.
3. Restore the Process State: The OS loads the saved state of the new process into the
CPU, allowing it to continue from where it left off.

5. Define the following terms:

• Deadline

• Response time

• Throughput

• Turnaround time

6. Explain the Round Robin algorithm with a suitable example.

The Round Robin (RR) scheduling algorithm is a preemptive scheduling method that
assigns a fixed time slice, known as a time quantum, to each process in the ready queue.
Each process executes for this allotted time, and if it does not complete within the time
quantum, it is preempted and placed at the end of the queue, allowing the next process to
run. This cycle continues until all processes are completed.

Key Features of Round Robin Scheduling

1. Preemptive: Processes are interrupted after a set time quantum, making it fair and
responsive, especially for time-sharing systems.
2. Time Quantum: A critical factor in RR scheduling. If the quantum is too short, it leads to frequent context switching (high overhead). If too long, it behaves like First-Come, First-Served (FCFS).
3. Fairness: Ensures that all processes get an equal share of CPU time, making it ideal for
interactive systems where response time is important.
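
As a hedged illustration (burst times assumed, all jobs arriving at time 0), the following Python sketch simulates Round Robin with a time quantum and records completion times:

from collections import deque

# Minimal Round Robin simulation (assumed data; all jobs arrive at t = 0).
remaining = {"P1": 5, "P2": 3, "P3": 8}   # name -> remaining burst time
quantum = 2
ready = deque(remaining)                  # FIFO ready queue (P1, P2, P3)
clock, completion = 0, {}

while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])   # run one quantum or until done
    clock += run
    remaining[name] -= run
    if remaining[name] == 0:
        completion[name] = clock          # finished: record completion time
    else:
        ready.append(name)                # preempted: go to back of queue

print(completion)   # {'P2': 9, 'P1': 12, 'P3': 16}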

7. Explain any four operations which can be performed on a file.


Files are an essential part of operating systems, used to store data and information
persistently. The operating system provides several operations that can be performed on
files, allowing users and applications to manage and manipulate data stored in them.

Here are four fundamental operations that can be performed on files:

1. Create

• The create operation is used to make a new file in the file system. When a file is created,
the operating system allocates space in the storage for the file and records its entry in the
directory.
• Example: In a text editor, when you create a new document and save it for the first time, a
new file is created in the specified directory.

2. Open

• The open operation allows access to a file’s content. When a file is opened, a file control
block (FCB) is created in memory to keep track of its attributes, such as file position,
permissions, and status.
• Example: When you open a document in a text editor, the file is loaded into memory, and
the editor displays its contents for viewing or editing.

3. Read

• The read operation enables the operating system or an application to access and retrieve
the content of a file without modifying it. This operation is commonly used for processing
data stored in files.
• Example: When a program reads a file, it loads data from the file into memory, allowing it
to process and use the information stored.

4. Write

• The write operation allows adding or modifying content within a file. When data is written
to a file, the existing content may be overwritten, or new data is appended, depending on
the mode in which the file was opened.
• Example: Saving changes in a text document is a write operation, where the modified data
replaces the original data in the file.

8. Describe the four conditions for deadlock.


Deadlock is a situation in an operating system where a set of processes becomes
permanently blocked because each process is waiting for a resource held by another process
in the same set. Deadlocks prevent processes from completing their tasks and can bring the
system to a halt.

Four Conditions for Deadlock

For a deadlock to occur, all of the following four conditions must hold simultaneously:

1. Mutual Exclusion
o At least one resource must be held in a non-shareable mode, meaning only one
process can use the resource at any given time. If another process requests this
resource, it must wait until the resource is released.
o Example: A printer can be used by only one process at a time. If a process has
control of the printer, no other process can use it until it is released.
2. Hold and Wait
o Processes already holding one or more resources can request additional resources
and wait for them while continuing to hold onto their existing resources.
o Example: A process holding a file lock requests access to a printer. If the printer is
occupied by another process, the first process waits, holding onto the file lock while
waiting for the printer.
3. No Preemption
o Resources cannot be forcibly taken from a process holding them; the process must
release the resource voluntarily once it completes its task.
o Example: If a process is using a CPU, it cannot be preempted and must complete its
task to release the CPU, even if other processes are waiting for CPU access.
4. Circular Wait

o A circular chain of processes exists, where each process holds at least one resource
needed by the next process in the chain.
o Example: Process P1 holds Resource R1 and waits for Resource R2, held by Process
P2; Process P2 waits for Resource R3, held by Process P3, and so on, until the last
process in the chain waits for a resource held by P1, creating a circular dependency.

9. Describe the types of schedulers used in scheduling.


In operating systems, schedulers are components that manage process scheduling, deciding
which process should execute at any given time. There are three main types of schedulers,
each serving a unique role in managing different aspects of process lifecycle and resource
allocation:

1. Long-Term Scheduler (Job Scheduler)

• The long-term scheduler controls the degree of multiprogramming, determining which processes are admitted into the system for processing.
• It selects processes from the job pool (processes on secondary storage) and loads them into
memory to enter the ready queue.
• It is typically invoked less frequently since it controls the overall load in the system.
• Purpose: To maintain a balanced system load by selecting the right mix of I/O-bound and
CPU-bound processes, thereby ensuring efficient CPU utilization.
• Example: In batch processing systems, the long-term scheduler decides which jobs to start
based on job priority, system capacity, or resource availability.

2. Short-Term Scheduler (CPU Scheduler)

• The short-term scheduler is responsible for selecting from the ready queue the process that
should execute on the CPU next.
• It operates frequently, as it makes decisions every time the CPU becomes idle or when a
time slice expires in a preemptive scheduling system.
• Purpose: To maximize CPU utilization and reduce waiting time by quickly switching between
processes.
• Example: In time-sharing systems, the short-term scheduler frequently switches between
processes to provide a responsive user experience.

3. Medium-Term Scheduler

• The medium-term scheduler manages the swapping of processes in and out of memory to
control the degree of multiprogramming. This process is known as swapping.
• When system memory is full or when a process is waiting for I/O operations, the medium-
term scheduler can temporarily remove (swap out) a process from memory, allowing other
processes to load.
• Once resources are available, the process is swapped back into memory to resume
execution.

• Purpose: To improve memory utilization and provide a flexible way to handle active and
suspended processes without overloading the system.
• Example: In a system with limited memory, the medium-term scheduler may swap out a
process waiting for an I/O operation to free up space for another process to execute.

10. Compare paging and segmentation.

11. What are the different file allocation methods? Explain any one in detail with example.
There are several file allocation methods used by operating systems to manage how files are
stored and accessed on storage devices like hard drives. The primary file allocation methods
include:

1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation

1. Contiguous Allocation

• In contiguous allocation, each file is stored in a single contiguous block of storage. This
means that all the blocks that make up the file are located next to each other on the disk.
The file system keeps track of the starting block and the length of the file, which allows for
quick access to the file’s content.
• Advantages:
o Simple to implement and fast for file access since the file is stored in a continuous
block.
o Good for sequential access to files, as there are no interruptions in accessing blocks.
• Disadvantages:
o External fragmentation: Over time, free space on the disk becomes fragmented,
which may lead to situations where no contiguous space is large enough for a new
file.
o Difficulty in resizing files: If a file needs more space, it may not fit in a contiguous
space on the disk, requiring complex file movements.
• Example: A simple example is storing a video file in a block of consecutive sectors on a hard
disk. If the file occupies 10 blocks, the blocks will be placed next to each other, and the
system just needs to know the starting block and the number of blocks used.
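
To make the "starting block plus length" idea concrete, here is a small Python sketch (an assumed directory layout, not a real file-system API): accessing logical block i of a file is just one addition.

# Contiguous allocation: the directory stores only (start, length) per file.
directory = {"video.mp4": {"start": 120, "length": 10}}   # assumed entry

def disk_block(filename, logical_block):
    """Map a logical block number within the file to a disk block number."""
    entry = directory[filename]
    if not 0 <= logical_block < entry["length"]:
        raise IndexError("block lies outside the file")
    return entry["start"] + logical_block   # one addition: why access is fast

print(disk_block("video.mp4", 7))   # 127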

2. Linked Allocation

• In linked allocation, each file is stored as a linked list of blocks scattered across the disk.
Each block contains a pointer to the next block in the file. There is no need for contiguous
space, so it avoids fragmentation issues.
• Advantages:
o Eliminates external fragmentation because blocks can be scattered across the disk.
o No need to know the file size in advance; files can grow dynamically by adding more
blocks.
• Disadvantages:
o Slower random access: To access a particular block, the system must traverse the
list from the first block, which makes access time slower compared to contiguous
allocation.
o Overhead for storing pointers: Each block must store a pointer to the next block,
using up space.
• Example: A file system stores a text file in separate blocks, with each block containing part
of the file and a pointer to the next block. If you want to access the middle of the file, the
system must traverse through the linked blocks to reach the desired position.

3. Indexed Allocation

• In indexed allocation, a special index block (or table) is created to store the addresses of all
the blocks used by a file. This index block contains pointers to all the file’s data blocks,
which can be scattered across the disk. The file system stores a separate index for each file.
• Advantages:
o No external fragmentation: The blocks can be scattered across the disk, but the
index keeps track of the file’s blocks.
o Fast random access: Accessing any block in the file is direct by using the index table.
• Disadvantages:
o Extra storage for the index table: The index table itself takes up space, especially for
large files.
o Can have overhead for large files, as the index table may need to be split into
multiple blocks if the file is too large.
• Example: If a file is made up of 10 blocks, the index block will store pointers to each of the
10 blocks, so when accessing the file, the operating system can use the index to directly
jump to the required block, improving access speed.

12. What is partitioning? Explain the concept of variable memory partitioning with example.
Partitioning in the context of memory management refers to the technique of dividing the
physical memory of a computer into separate sections or partitions. These partitions are
allocated to processes to ensure that each process has its own memory space, preventing one
process from interfering with another.

Partitioning helps in managing memory efficiently and isolates processes from each other.
There are two primary methods of partitioning memory:

1. Fixed Partitioning
2. Variable Partitioning

Variable Memory Partitioning

In variable memory partitioning, memory is divided into partitions that can vary in size
based on the requirements of the processes. Each partition is dynamically allocated based on
the size of the process, allowing the system to efficiently use the available memory. Unlike
fixed partitioning, where each partition has a predefined, fixed size, variable partitioning
allows more flexibility in the allocation of memory, accommodating processes of different
sizes.

How Variable Partitioning Works:

1. Memory Allocation: When a process arrives, the operating system searches for a
free memory block that is large enough to accommodate the process. The process is
assigned the exact amount of memory it needs, and the remaining free memory is left
as a smaller partition.
2. Dynamic Adjustment: When a process terminates, the memory it used is freed, and
the space can be used by another process. The free memory is not fixed but varies
depending on the processes that have been loaded and unloaded over time.
3. External Fragmentation: One of the main issues with variable partitioning is external fragmentation, where free memory becomes scattered in small blocks throughout the system. These small blocks may not be large enough to accommodate new processes, even though the total free memory is sufficient.
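
A minimal sketch of how variable partitioning might allocate memory with a first-fit search (the free list and sizes are assumed for illustration):

# First-fit allocation over a free list of holes; each hole is (start, size in MB).
free_list = [(0, 20), (35, 50), (90, 10)]

def first_fit(size):
    """Allocate `size` MB from the first hole big enough, splitting the hole."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                             # hole used exactly
            else:
                free_list[i] = (start + size, hole - size)   # shrink the hole
            return start
    return None   # external fragmentation: no single hole is large enough

print(first_fit(40))   # 35  (carved from the 50 MB hole)
print(free_list)       # [(0, 20), (75, 10), (90, 10)]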

13. Consider the following page reference string: 1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5. How many page faults occur for the FIFO replacement algorithm, assuming 3 frames?
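
A short Python sketch (added here as a worked aid, not part of the original paper) that counts the faults; running it on the string above gives 12 page faults:

from collections import deque

reference = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5]
frames, capacity, faults = deque(), 3, 0

for page in reference:
    if page not in frames:        # page fault: page not resident
        faults += 1
        if len(frames) == capacity:
            frames.popleft()      # FIFO: evict the oldest resident page
        frames.append(page)

print(faults)   # 12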

14. State and describe different memory management techniques.

(Follow the Technical Publications book.)

15. Describe the Shortest Remaining Time (SRTN) algorithm with the help of an example.

16. Describe the sequential file access method.


Sequential File Access Method refers to the way data is accessed in a file, where records
are read or written one after another, in a predefined sequence. In this method, the file is
processed from the beginning to the end, and all operations on the file (such as reading,
writing, or modifying data) occur in a linear order, typically starting from the first record and
proceeding sequentially until the last.

Key Characteristics of Sequential File Access:

1. Linear Access: Data in the file is accessed sequentially from the first record to the
last. You cannot randomly access a specific record without traversing all the previous
ones.
2. Simple to Implement: Sequential file access is simple to implement as it does not
require complex indexing or searching algorithms. Data is processed in the order it is
stored.
3. File Structure: In sequential access, data is usually stored in a continuous block or sequence, where each record or piece of data is written one after the other, often with a delimiter or record separator indicating the boundaries between records.
4. Read/Write Operations: In sequential access, reading or writing operations are
performed in a sequence. Once a record is accessed, the pointer (or file cursor) moves
to the next record, making sequential reading or writing mandatory.
5. Data Integrity: Since records are stored in a specific order, maintaining data
integrity is straightforward. Data is written and read in a predictable manner.

17. Explain the LRU page replacement algorithm by taking a suitable example.

LRU (Least Recently Used) Page Replacement Algorithm

The LRU (Least Recently Used) page replacement algorithm is a popular page replacement
strategy used in operating systems to manage the contents of the page table. It replaces the
page that has not been used for the longest period of time when a new page needs to be
loaded into memory, and there is no space available in memory to accommodate the new
page.

How LRU Works:

• The LRU algorithm maintains a record of the order in which pages are accessed. When a
new page must be loaded into memory, the page that has not been used for the longest
time (the least recently used page) is replaced.
• In this way, the LRU algorithm ensures that the pages most recently accessed are kept in
memory while the pages that haven't been accessed for a long time are replaced first.

Steps in the LRU Algorithm:

1. Keep Track of Page Access: Maintain a list or stack of pages in memory, ordered by their
most recent access. When a page is accessed (either for reading or writing), it is moved to
the front of the list.
2. Page Fault: When a page fault occurs (i.e., a page that is not in memory is requested), the
operating system checks if there is enough space in memory. If there is space, the page is
simply loaded into memory.
3. Page Replacement: If there is no space in memory (i.e., the memory is full), the page that
has not been used for the longest time (the least recently used page) is replaced by the new
page.
4. Repeat: This process continues for every page access.
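
These steps can be condensed into a few lines of Python; the sketch below (reference string and frame count assumed for illustration) uses an OrderedDict as the recency list, with the front holding the least recently used page:

from collections import OrderedDict

reference = [7, 0, 1, 2, 0, 3, 0, 4]    # assumed example string
capacity, faults = 3, 0
frames = OrderedDict()

for page in reference:
    if page in frames:
        frames.move_to_end(page)        # hit: now the most recently used
    else:
        faults += 1                     # page fault
        if len(frames) == capacity:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = True

print(faults)   # 6 page faults for this string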

Advantages of LRU:

1. Optimal in Many Cases: LRU often performs better than other algorithms because it
replaces the page that is least likely to be used in the near future.
2. Relatively Simple: Though LRU requires some bookkeeping, it is relatively simple to
implement compared to other algorithms like optimal page replacement.

Disadvantages of LRU:

1. Overhead: Maintaining the order of page accesses requires additional overhead in terms of
time and space. For example, a linked list or stack is needed to track the order of page
accesses, which can be costly in terms of performance.
2. Approximation: In practice, perfect LRU may not always be possible, and approximations
(like using counters or clocks) are often used, which can lead to slightly less optimal
behavior.

18. Explain static and dynamic memory partitioning with advantages and drawbacks.

Static and Dynamic Memory Partitioning

In memory management, partitioning refers to the way memory is divided into regions to
allocate space for processes. These partitions can be either static or dynamic, depending on
how the memory is allocated and managed.

1. Static Memory Partitioning

In static memory partitioning, the total memory available in the system is divided into
fixed-sized partitions at the time of system startup. Each partition is then allocated to
processes as they arrive, and the size of the partition cannot be changed during system
operation.

Key Characteristics of Static Partitioning:

• Fixed partitions: The memory is divided into a fixed number of partitions before the system
starts.
• Partition size: The size of each partition is predetermined and fixed. If a process does not fit
into a partition, it cannot be placed in memory.
• Simple implementation: It is easy to implement and does not require complex memory
management algorithms.

Example of Static Partitioning:

Imagine a system with 100 MB of physical memory divided into three partitions of 30 MB, 40 MB, and 30 MB. If a process that needs 45 MB arrives, it cannot be placed in any partition, as every partition is too small to fit it.

Advantages of Static Partitioning:

1. Simplicity: It is easy to implement and manage because the partitions are predefined.
2. Low overhead: No complex algorithms are required for partition management.

Drawbacks of Static Partitioning:

1. Internal Fragmentation: If a process is smaller than the partition size, the remaining unused
memory within the partition is wasted. For example, if a process only needs 25 MB but the
partition is 30 MB, the remaining 5 MB in the partition is unused (internal fragmentation).
2. External Fragmentation: If processes of different sizes are loaded and unloaded, free
memory areas may be scattered throughout the system, making it difficult to allocate
memory for larger processes (external fragmentation).
3. Inefficient memory usage: Some partitions may remain empty while others may be
overfilled, leading to inefficient use of available memory.

2. Dynamic Memory Partitioning

In dynamic memory partitioning, the memory is allocated to processes in variable-sized
partitions at the time of process arrival. The system divides the memory into partitions of
different sizes based on the process's requirements, and these partitions can change
dynamically during the operation of the system.

Key Characteristics of Dynamic Partitioning:

• Variable partition sizes: Memory is allocated in different sizes, based on the size of the
process.
• No predefined partitions: The system adjusts the partitions dynamically based on the needs
of the processes.
• Efficient memory allocation: The system tries to allocate only the memory that is required
by a process, reducing wastage.

Example of Dynamic Partitioning:

Imagine a system with 100 MB of physical memory, and processes that need 25 MB, 35
MB, and 40 MB. The system allocates 25 MB to the first process, 35 MB to the second
process, and 40 MB to the third process. If a new process arrives that needs 50 MB, the
system will dynamically adjust the memory allocation and allocate 50 MB from the
remaining free memory.

Advantages of Dynamic Partitioning:

1. No Internal Fragmentation: Since partitions are allocated based on the process size, there
is no internal fragmentation.
2. Efficient use of memory: Memory is allocated based on the actual size of the processes,
which leads to better memory utilization.
3. Flexibility: The system is more flexible and can adapt to processes of various sizes.

Drawbacks of Dynamic Partitioning:

1. External Fragmentation: As processes are loaded and unloaded, free memory spaces
become scattered, and it may be difficult to allocate larger contiguous blocks of memory.
Over time, this fragmentation may result in inefficient memory usage.
2. Complex memory management: Dynamic partitioning requires more complex algorithms to
manage the allocation and deallocation of memory, as well as to handle fragmentation.
3. Overhead: The system needs to track free memory spaces and perform additional
operations to allocate and deallocate memory dynamically.

19. With a suitable diagram, explain the contiguous allocation method.

20. Write steps for Banker's algorithm to avoid deadlock. Also give one example.

Steps for Banker's Algorithm to Avoid Deadlock

Banker's Algorithm is a deadlock-avoidance algorithm used in operating systems. It checks if resource allocation is safe by simulating the allocation for each process and ensuring the system can satisfy all current and future resource demands.

Here are the steps to implement Banker's Algorithm:

1. Initialize:
o Let Available be the total number of available resources of each type.
o Let Max be the maximum demand of each process for each resource.
o Let Allocation be the matrix that represents the number of resources currently
allocated to each process.
o Let Need be the matrix that shows how many more resources each process needs
to reach its maximum demand.
2. Check Request Validity:
o If a process requests resources, check if the request is less than or equal to the
Need for that process.
o If it is greater, the request is invalid (it exceeds the maximum claim).
3. Check Availability:
o Ensure that the requested resources are less than or equal to the Available
resources.
o If the requested resources are not available, the process must wait.
4. Pretend Allocation:
o Temporarily allocate the requested resources to the process by updating the
Available, Allocation, and Need matrices.
5. Safety Check:
o Perform a safety check to determine if the system is in a safe state.
o Define a Finish array for each process (initially set to false) and create a Work
vector equal to Available.
o Find a process Pi such that Need[i] <= Work and Finish[i] is false.
o If found, allocate resources to Pi, update Work = Work + Allocation[i], and mark Finish[i] as true.
o Repeat until all processes are marked Finish = true (safe state) or no such process
exists (unsafe state).
6. Decision:
o If the system is in a safe state, grant the request.
o If the system is in an unsafe state, deny the request and return the state to the
original allocation.
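
The safety check (steps 5 and 6) is the heart of the algorithm. Below is a minimal Python sketch; Available, Allocation, and Need are assumed example matrices (the classic textbook data set for resource types A, B, C):

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

def is_safe():
    work = available[:]                  # Work starts as a copy of Available
    finish = [False] * len(allocation)
    order = []
    progress = True
    while progress:                      # repeat until no process qualifies
        progress = False
        for i, done in enumerate(finish):
            # Find an unfinished process whose Need fits within Work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(f"P{i}")
                progress = True
    return all(finish), order

print(is_safe())   # (True, ['P1', 'P3', 'P4', 'P0', 'P2']) -> safe state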

21. Explain the concept of page replacement with a suitable diagram.


Page replacement is a memory management technique used in operating systems when a page fault occurs (i.e., the required page is not found in physical memory). When a process tries to access a page that is not currently in main memory, the operating system must bring it in from secondary storage (disk). If there is no free space in memory, the OS must replace one of the existing pages in memory with the needed page.

The goal of page replacement is to choose pages for replacement that minimize the
likelihood of future page faults, thereby optimizing memory usage and performance. There
are several page replacement algorithms designed for this purpose, including FIFO (First-In-
First-Out), LRU (Least Recently Used), and Optimal Page Replacement.

Key Page Replacement Algorithms

1. FIFO (First-In-First-Out) Page Replacement:
o The oldest loaded page is replaced first.
o Simple to implement but may lead to Belady’s anomaly, where increasing the number of frames results in more page faults.
2. LRU (Least Recently Used) Page Replacement:
o The page that has not been used for the longest period is replaced.
o Better than FIFO in most cases but requires more overhead for tracking the usage
order of pages.
3. Optimal Page Replacement:
o The page that will not be used for the longest time in the future is replaced.
o This is the best theoretical algorithm but is impractical because future knowledge is
required.

Diagram Example: Page Replacement Process

Let's use a simple example with the FIFO page replacement algorithm, assuming we have 3
frames (physical memory slots) and a page reference string:

22. Explain multilevel queue scheduling with an example.

23. Explain the two-level directory structure with the help of a diagram.
A two-level directory structure is a type of file system directory organization where each
user has a separate directory under the root directory. This structure is often used in multi-
user operating systems to organize files in a way that maintains privacy and prevents
conflicts in file names.

In a two-level directory structure:

1. The Root Directory contains subdirectories for each user. Each user has their own
unique directory under this root.
2. Each User Directory contains the files and folders specific to that user. Users have
their own private space, so file names in one user’s directory do not affect file names
in another user’s directory.

This structure is relatively simple and provides a basic level of isolation between users,
allowing them to have files with the same name as other users without conflict.

Advantages of Two-Level Directory Structure

1. Isolation: Each user has a unique directory, so they cannot access each other's files unless
permissions are explicitly granted.
2. No Naming Conflicts: Since each user directory is separate, files with the same name can
exist in different user directories without issue.
3. Simple to Implement: The structure is straightforward and easy to navigate, as users only
need to look within their own directory.

Disadvantages of Two-Level Directory Structure

1. Limited Organization: Users cannot create nested subdirectories within their own
directories, which limits organizational flexibility.
2. Scalability Issues: For systems with many users, this structure can become cumbersome, as
all user directories reside directly under the root directory.

24. List different types of files. Explain basic operations on files.

(Follow the Technical Publications book.)

25. Describe the working of sequential and direct access methods.


Files in an operating system can be accessed in different ways depending on how data is
stored and how frequently specific data points need to be retrieved. Two primary file access
methods are Sequential Access and Direct (or Random) Access. Here’s how each method
works:

1. Sequential Access

Sequential access is a method in which data is read or written in a fixed order, one record
after another, from the beginning of the file to the end. This is the simplest access method, as
data is processed in a specific sequence.

Working of Sequential Access:

• The file pointer starts at the beginning of the file.
• Data is accessed in order, one record at a time, and the pointer advances automatically after each read or write.
• To access data in the middle of the file, all preceding data must be read or skipped over
until the desired data is reached.
• Typical uses: Files that are primarily read or written from start to end, such as log files,
media files (audio or video streams), and text files.

Advantages:

• Simple to implement and efficient for files that are naturally processed in order.
• Requires minimal metadata to manage the file pointer.

Disadvantages:

• Inefficient for random or infrequent access to specific data points, as each access requires
reading through preceding records.

2. Direct (Random) Access

Direct (or random) access allows a program to directly reach any part of the file without needing to sequentially read through preceding data. This method relies on a file structure that allows jumping to specific data locations, usually using an index or byte offset.

Working of Direct Access:

• The file is conceptually divided into fixed-size logical blocks, records, or bytes.
• Using a seek operation, the file pointer can be positioned at any desired location in the file,
based on record numbers or byte offsets.
• Data can be accessed, read, or written directly at the specified location without processing
the entire file.
• Typical uses: Databases, index files, and large data files where frequent, non-sequential
access is required.

Advantages:

• Efficient for random or frequent access, as specific data can be accessed directly.
• Suitable for applications that need quick retrieval of specific records (e.g., databases).

Disadvantages:

• More complex to implement as it requires maintaining an index or structure to manage direct locations within the file.
• May lead to fragmentation or complex file management if not organized properly.
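
As a small illustration of direct access in practice (the file name and record size are assumed), Python's seek() positions the file pointer by byte offset, so record n of a fixed-size-record file is reached without reading records 0 to n-1:

RECORD_SIZE = 64   # assumed bytes per fixed-size record

with open("records.dat", "rb") as f:   # assumed existing data file
    f.seek(5 * RECORD_SIZE)            # jump straight to record 5
    record = f.read(RECORD_SIZE)       # read exactly one record
print(len(record))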

26. Describe the concept of virtual memory with a suitable example.


Virtual memory is a memory management technique used in operating systems to give
applications the illusion of having more memory than is physically available in the
computer. It allows the system to use both RAM (physical memory) and disk storage
(secondary storage) to manage larger applications and multiple processes efficiently. By
doing so, it provides each process with a large address space and improves multitasking
capabilities.

Key Components of Virtual Memory

1. Paging: The process of dividing the virtual memory into equal-sized blocks called
pages and physical memory into blocks of the same size called page frames. Each
page can be mapped to any available frame in physical memory.
2. Page Table: A data structure that maintains a mapping between virtual addresses and
physical addresses. Each process has its own page table.
3. Page Fault: When a process tries to access a page that is not currently in physical
memory, a page fault occurs. The operating system then retrieves the page from disk
(secondary storage) and loads it into a free page frame in RAM.

Working of Virtual Memory

When a program needs data, the system first checks if it is in physical memory (RAM):

1. If the data is in RAM, it is directly accessed, and this is called a page hit.
2. If the data is not in RAM, a page fault occurs, and the OS loads the required page from
secondary storage (usually the hard drive) into a page frame in RAM.

Since RAM is limited, if it is full, the OS must replace an existing page with the new one.
The page replacement algorithm decides which page to replace (e.g., Least Recently Used
(LRU) or First-In-First-Out (FIFO)).

Example of Virtual Memory

Let’s say we have:

• 4 KB of physical memory (RAM).
• 16 KB of virtual memory.
• Each page is 1 KB in size, so we have 4 physical frames in RAM and 16 virtual pages.

Assume a program requests data in the following sequence of virtual pages: 0, 1, 2, 3, 4, 0, 2, 5.

1. Initial Requests:
o Pages 0, 1, 2, and 3 are loaded into the 4 available physical frames.
2. Request for Page 4:
o Since RAM has only 4 frames and all are occupied, a page replacement must occur.
o If we use the FIFO replacement algorithm, the first page loaded (Page 0) will be
removed to make space for Page 4.
3. Subsequent Requests:
o If Page 0 is requested again, a page fault occurs as it has been swapped out. The OS
will bring it back, possibly replacing another page based on the page replacement
strategy.

27. State and explain the criteria in CPU scheduling.


CPU scheduling is a critical task in an operating system that aims to optimize system
performance by managing how processes access the CPU. The OS scheduler uses various
criteria to determine the most effective way to assign CPU resources to processes. Here are
the main criteria for CPU scheduling:

1. CPU Utilization:
o Definition: Measures the percentage of time the CPU is actively working on
processes rather than being idle.
o Goal: Maximize CPU utilization to ensure that the processor is being used
efficiently.
o Importance: High CPU utilization means better system performance, as
resources are not being wasted.
2. Throughput:
o Definition: Refers to the number of processes completed in a given period.
o Goal: Increase throughput to complete more tasks within a specific
timeframe.
o Importance: High throughput indicates that the system can handle more
processes in less time, making it responsive and productive.
3. Turnaround Time:
o Definition: The total time taken from the moment a process is submitted to
the time it completes execution.
o Calculation: Turnaround Time = Completion Time − Arrival Time.
o Goal: Minimize turnaround time to reduce the waiting period for processes.
o Importance: Low turnaround time improves the overall user experience,
especially for batch processing systems.
4. Waiting Time:
o Definition: The total time a process spends waiting in the ready queue before
it gets CPU time.
o Calculation: Sum of all time periods a process spends in the ready queue.
o Goal: Minimize waiting time to improve process efficiency.
o Importance: Reducing waiting time is essential for interactive and real-time
systems to maintain responsiveness.
5. Response Time:
o Definition: The time from when a request is submitted until the system
produces the first response.
o Calculation: Response Time = First Response Time − Arrival Time.
o Goal: Minimize response time to ensure the system responds quickly to user
or process input.
o Importance: Low response time is crucial in interactive systems (e.g.,
command-line applications, GUI applications) where immediate feedback is
necessary.
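
As a quick worked example of these formulas (numbers assumed for illustration): if a process arrives at time 2, first gets the CPU at time 4, and completes at time 10 after a 5 ms total burst, then Response Time = 4 − 2 = 2, Turnaround Time = 10 − 2 = 8, and Waiting Time = Turnaround Time − Burst Time = 8 − 5 = 3.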

28. Write in short on basic memory management.
Memory management is a fundamental function of an operating system that manages the
allocation and deallocation of memory resources to ensure efficient use of system memory
(RAM). It is essential for multitasking, as it allows multiple processes to run simultaneously
by managing their memory requirements. Basic memory management includes several key
functions:

1. Memory Allocation:
o Purpose: Assigns memory space to processes or programs.
o Types:
▪ Static Allocation: Memory is assigned at compile time and remains
fixed throughout program execution (e.g., global variables).
▪ Dynamic Allocation: Memory is assigned at runtime, allowing more
flexibility and efficient use of memory (e.g., heap allocation for
dynamically created objects).
2. Memory Deallocation:
o Purpose: Frees up memory that is no longer in use by a process so that it can
be reallocated to other processes.
o Proper deallocation is crucial to prevent memory leaks, where unused
memory remains occupied, leading to inefficient memory usage.
3. Memory Partitioning:
o Divides memory into fixed or variable-sized blocks to allocate to processes.
o Types:
▪ Fixed Partitioning: Memory is divided into fixed-size blocks. Simple
but can cause internal fragmentation (unused space within allocated
memory).
▪ Dynamic Partitioning: Memory is divided based on process needs,
reducing internal fragmentation but potentially causing external
fragmentation (unused space between allocated blocks).
4. Paging and Segmentation:
o Techniques to manage memory in a way that optimizes space and prevents
fragmentation.
o Paging: Divides memory into equal-sized pages and allocates memory in
fixed-size frames. Helps in implementing virtual memory.
o Segmentation: Divides memory based on logical segments (like code, data,
stack), each of varying size, which can be allocated and deallocated
independently.
5. Virtual Memory:
o Allows processes to use more memory than physically available by extending
RAM to disk storage.
o Paging is often used to implement virtual memory, where pages are swapped
between RAM and disk as needed, enabling efficient multitasking.

29. Describe the following terms:
1) Scheduling queues 2) Scheduler
3) Thread 4) Multithreading.

30. Describe the optimal page replacement algorithm with an example.

31. Define swapping. When is it used?

32. Explain the concept of variable memory partitioning with an example.

33. Explain the bitmap free-space management technique.

34. Write the use of the following system calls.

i. fork()

ii. exec()

iii. abort()

iv. end()

35. Consider the following set of processes, with the length of the CPU burst given in milliseconds.

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

Find the average waiting time by using:
1. Non-preemptive priority
2. Round-Robin (RR) (quantum = 1)


36. The jobs are scheduled for execution as follows. Solve the problem using:

i. SJF
ii. FCFS

Also find the average waiting time using a Gantt chart.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

37. Give the difference between external fragmentation and internal fragmentation (four points).

38. Consider the following four jobs.

Job   Burst Time
J1    8
J2    5
J3    5
J4    13

Find the average waiting time for:
(i) FCFS
(ii) SJF
39. Describe the indexed allocation method with advantages and disadvantages.

Indexed Allocation Method

In the indexed allocation method, each file has an index block that contains pointers to all
of the file's data blocks. Instead of linking data blocks in sequence (as in linked allocation) or
storing them in contiguous locations (as in contiguous allocation), indexed allocation uses an
index table to store the addresses of each block that a file occupies. This allows the OS to
access each block directly via the index, which serves as a table of contents for the file's
blocks.

Working of Indexed Allocation

1. Index Block Creation: When a file is created, the system allocates an index block specifically
for that file. The index block holds pointers to all the blocks of the file.
2. Data Block Allocation: For each part of the file, the system allocates separate data blocks
(non-contiguously) and stores pointers to these blocks in the index block.
3. Accessing Data: To access a particular block of data, the system uses the index block to
retrieve the location of the needed data block directly.
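
A minimal sketch of the lookup (the pointer values are assumed): the index block is simply a table of disk-block pointers, so reaching any logical block is one array access rather than a chain traversal.

index_block = [45, 102, 7, 88, 13]   # assumed pointers for a 5-block file

def data_block(logical_block):
    """Direct access: one index lookup, no traversal of earlier blocks."""
    return index_block[logical_block]

print(data_block(3))   # 88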

Advantages of Indexed Allocation

1. Direct Access:
o Since each data block’s address is stored in the index block, files can be accessed
randomly and directly, making it efficient for files that need frequent random
access.
2. No External Fragmentation:
o Data blocks can be stored non-contiguously, eliminating the need for continuous
free space and reducing external fragmentation issues.
3. Dynamic File Size Management:
o Files can grow dynamically by adding more pointers to data blocks in the index
block, making it easy to manage files of varying sizes.

Disadvantages of Indexed Allocation

1. Index Block Overhead:
o Each file requires an index block, which consumes additional disk space, especially for small files. If many files are stored, the overhead can become significant.
2. Limited Block Pointers:
o The size of the index block limits the maximum size of a file. If a file needs more
data blocks than the pointers an index block can hold, a more complex structure
(such as multilevel indexing) is required.
3. Pointer Storage Requirement:
o Each index block must store multiple pointers, which takes up additional space. This
overhead can be inefficient if pointers occupy more space than the actual data in
the case of small files.

40. Explain the single-level directory structure.

Single-Level Directory Structure

In a single-level directory structure, all files are stored in the same directory, or root
directory, without any hierarchical organization. This means there is only one directory
where all files created by all users are placed, and no subdirectories are allowed.

Characteristics of Single-Level Directory Structure

1. Flat Organization:
o All files are organized within a single, flat structure. This structure makes it simple
to understand and manage since all files are in one place.
2. Unique File Names:
o Each file must have a unique name within this directory since there’s no way to
group files into separate subdirectories. For example, two users cannot create files
with the same name, which can be restrictive.
3. Simple to Implement:
o The single-level structure is easy to implement as it requires only basic
management of file names and storage locations, making it suitable for simple
systems.

Advantages of Single-Level Directory Structure

1. Simplicity:
o The structure is easy to understand and navigate, as all files are stored in the same
directory. It is also simple for the OS to manage due to its flat organization.
2. Quick Access:
o With all files in one directory, access times can be faster as there are no
subdirectory levels to search through.

Disadvantages of Single-Level Directory Structure

1. Name Conflicts:
o Since all files are stored in one directory, each file must have a unique name, which
can be challenging to manage, especially in a multi-user environment where
different users may want to use the same file names.
2. Poor Organization:
o As the number of files grows, a single-level directory can become cluttered, making
it difficult to locate files. There is no way to group related files together, which
limits efficient file organization.
3. Not Scalable:
o This structure is not suitable for large systems or systems with many users, as it
becomes unwieldy and inefficient with many files.

41. Calculate the average waiting time for the SJF (Shortest Job First) and Round Robin (RR) algorithms for the following table.

Process   Burst Time
P1        10
P2        04
P3        09
P4        06

(Time slice = 4 ms)

42. Explain context switch with the help of a diagram.

43. Explain the following terms with respect to memory management:

• Compaction
• Swapping

44. Given a page reference string (arrival) with four page frames, calculate the page faults with the FIFO and LRU page replacement algorithms respectively: 1, 2, 3, 4, 5, 1, 2, 5, 1, 2, 3, 4, 5, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 4, 5, 4, 2.
45. Solve the given problem by using FCFS to calculate the average waiting time and turnaround time.

Process   Arrival Time   Burst Time
P1        0              7
P2        1              4
P3        2              9
P4        3              6
P5        4              8
46. Compare bitmap and linked list free space management techniques. (Any six points.)

47. Construct and explain the directory structure of a file system in terms of single-level, two-level, and tree structures.

48. Differentiate between the long-term scheduler and the short-term scheduler w.r.t. the following points:
i) Selection of job

ii) Frequency of execution

iii) Speed

iv) Accessing which part of system.
