Operating System ANSWERS

This document is a question bank on operating systems covering topics such as the need for an operating system, microkernel and monolithic architectures, real-time operating systems, process management, and synchronization problems such as the dining philosophers and producer-consumer problems. It includes questions and explanations on operating system services, process states, scheduling algorithms, critical sections, race conditions, and semaphores.

Operating System Question Bank

Q1. What is the need for an operating system?


Interface between the user and the computer:- An OS provides an easy way to interact with the
computer. It offers features and a GUI so that we can work on the computer simply by clicking the
mouse or typing on the keyboard. Thus an OS makes working with the computer easy and efficient.

Booting:- Booting is the process of starting the computer. When the CPU is first switched on, main
memory is empty, so to start the computer the operating system must be loaded into main memory.
Loading the OS into main memory to start the computer is called booting; the OS therefore helps
start the computer when the power is switched on.

Managing the input/output devices:- The OS operates the different input/output devices. It decides
which program or process may use which device, and for how long. In addition, it controls the
allocation and deallocation of devices.

Multitasking:- The OS allows more than one application to run at a time. It plays an important role
in multitasking because it manages memory and other devices while multiple programs run, thereby
providing smooth multitasking on the system.

Platform for other application software:- Users require different application programs to perform
specific tasks on the system. The OS manages and controls these applications so that they can work
efficiently. In other words, it acts as an interface between the user and the applications.

Q2. Explain the architecture of Microkernel based operating system.


Q3. Explain the features of Real Time Operating System.
A real-time system is a system in which the response must be obtained
within a specified timing constraint, i.e. the system must meet specified
deadlines. Real-time systems are of two types, hard and soft, and both are
used in different cases. Hard real-time systems are used where even a delay
of a few nano- or microseconds is not allowed. Soft real-time systems
provide some relaxation of the time constraint.
Characteristics of Real-time System:
Time Constraints: Time constraints related to real-time systems
simply mean the time interval allotted for the response of the
ongoing program. This deadline means that the task should be
completed within this time interval. Real-time system is responsible
for the completion of all tasks within their time intervals.
Correctness: Correctness is one of the prominent parts of real-time
systems. Real-time systems must produce the correct result within the
given time interval; a result obtained outside that interval is not
considered correct. In real-time systems, correctness means obtaining
the correct result within the time constraint.
Embedded: Most real-time systems today are embedded. An embedded
system is a combination of hardware and software designed for a
specific purpose. Real-time systems collect data from the environment
and pass it to other components of the system for processing.
Safety: Safety is necessary for any system, but real-time systems
provide critical safety. Real-time systems can run for long periods
without failure, and they recover quickly when a failure does occur,
without harming data or information.
Concurrency: Real-time systems are concurrent, meaning they can
respond to several processes at a time. Several different tasks run
within the system, and it responds to each of them within short
intervals. This makes real-time systems concurrent systems.
Distributed: In many real-time systems, the components of the system
are connected in a distributed way, possibly at different geographical
locations, so the operations of the system are carried out in a
distributed manner.
Stability: Even when the load is very heavy, real-time systems respond
within the time constraint; they do not delay the results of tasks even
when several tasks are running at the same time. This brings stability
to real-time systems.
Q4. Explain the architecture of Monolithic kernel-based operating
system.
A monolithic design of the operating system architecture makes no
special accommodation for the special nature of the operating system.
Although the design follows the separation of concerns, no attempt is
made to restrict the privileges granted to the individual parts of the
operating system. The entire operating system executes with maximum
privileges. The communication overhead inside the monolithic operating
system is the same as that of any other software, considered relatively
low. CP/M and DOS are simple examples of monolithic operating
systems. Both CP/M and DOS are operating systems that share a single
address space with the applications.
● In CP/M, the 16-bit address space starts with system variables and the
application area. It ends with three parts of the operating system, namely
CCP (Console Command Processor), BDOS (Basic Disk Operating
System), and BIOS (Basic Input/Output System).
● In DOS, the 20-bit address space starts with the array of interrupt
vectors and the system variables, followed by the resident part of DOS
and the application area and ending with a memory block used by the
video card and BIOS.
Advantages of Monolithic Architecture:
● Simple and easy to implement structure.
● Faster execution due to direct access to all the services.
Disadvantages of Monolithic Architecture:
● The addition of new features or removal of obsolete features is very
difficult.
● Security issues are always present because there is no isolation among
the various components inside the kernel.
Q5. Explain in brief structure of operating System.

Q6. What are the services to be provided by the Operating System?


An Operating System provides services to both the users and to the
programs.
• It provides programs an environment to execute.
• It provides users the services to execute programs in a convenient manner.
Following are a few common services provided by an operating system:
Program execution
• Loads a program into memory.
• Executes the program.
• Handles program's execution.
• Provides a mechanism for process synchronization, process
communication, and deadlock handling.
I/O operations
• An Operating System manages the communication between user
and device drivers.
• I/O operation means to read or write operation with any file or any
specific I/O device.
• Operating system provides the access to the required I/O device
when required.
File system manipulation
• Program needs to read a file or write a file.
• The operating system gives permission to the program for operation
on file.
• Permission varies from read-only, read-write, denied, and so on.
• Operating System provides an interface for the user to create/delete
files and directories.
• Operating System provides an interface to create the backup of file
system.
Communication
• Two processes often require data to be transferred between them
• Both the processes can be on one computer or on different
computers, but are connected through a computer network.
• Communication may be implemented by two methods, either by
Shared Memory or by Message Passing.
Error handling
• The OS constantly checks for possible errors.
• The OS takes appropriate action to ensure correct and consistent
computing.
Resource Management
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of CPU.
Protection
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid
access attempts.
• The OS provides authentication features for each user by means of
passwords
Q8. What is the difference between System Call and system program?
Q9. What is the difference between a schedular and a dispatcher?

Q10. What are the steps to be followed for process management when an interrupt comes to a
running process?
Ans-
If an interrupt occurs, it is indicated by an interrupt flag and the CPU transfers control to
an interrupt handler routine. The interrupt handler checks the type of interrupt and executes
the appropriate function. This involves overhead, but it is still better than the CPU
busy-waiting for I/O completion or other activities. The handler services the most urgent
part of the interrupt first; deferrable work is handled later.
Interrupt processing:
Step 1 − The device issues an interrupt to the CPU.
Step 2 − The CPU finishes execution of the current instruction.
Step 3 − The CPU tests for pending interrupt requests. If there is one, it sends an
acknowledgment to the device, which removes its interrupt signal.
Step 4 − The CPU saves the program status word onto the control stack.
Step 5 − The CPU loads the location of the interrupt handler into the PC register.
Step 6 − The contents of the processor registers are saved onto the control stack.
Step 7 − The cause (type) of the interrupt is determined and the appropriate routine is invoked.
Step 8 − The saved registers are restored from the stack.
Step 9 − The PC is restored to dispatch the original process.

Q11. Consider the following set of processes with the length of CPU burst time given in the
mili seconds.Draw the Gantt chart and calculate Average Turnaround Time , Average
Waiting Time for FCFS and RR(Q= 2)Scheduling Algorithm.

Process   CPU burst time   Arrival time
P1        5                0
P2        2                2
P3        3                3
P4        7                5
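As a cross-check for problems like Q11, the FCFS averages can be computed programmatically. The sketch below is illustrative (the function name `fcfs` and the tuple layout are my own choices); it assumes non-preemptive FCFS ordered by arrival time.

```python
# Hedged sketch: FCFS average turnaround and waiting time for Q11's data.
def fcfs(processes):
    """processes: list of (name, burst, arrival) tuples, in any order."""
    time, tat, wait = 0, {}, {}
    for name, burst, arrival in sorted(processes, key=lambda p: p[2]):
        time = max(time, arrival) + burst   # run the process to completion
        tat[name] = time - arrival          # turnaround = completion - arrival
        wait[name] = tat[name] - burst      # waiting = turnaround - burst
    n = len(processes)
    return sum(tat.values()) / n, sum(wait.values()) / n

avg_tat, avg_wait = fcfs([("P1", 5, 0), ("P2", 2, 2), ("P3", 3, 3), ("P4", 7, 5)])
print(avg_tat, avg_wait)  # 7.25 3.0
```

The Gantt chart for FCFS here is P1 (0–5), P2 (5–7), P3 (7–10), P4 (10–17); the round-robin case would additionally need a ready queue and a time slice.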
Q12. What is Process State? Explain different states of a process with various queues
generated at each stage.
Q13. Consider the following set of processes with the length of CPU burst time given in the
mili seconds. Draw the Gantt chart and calculate Average Turnaround Time , Average
Waiting Time for Preemptive and Non-preemptive based Priority Scheduling Algorithm.

Process   CPU burst time   Arrival time   Priority
P1        6                0              2
P2        8                2              1
P3        7                3              3
P4        3                5              4
Q14. Consider the following set of processes with the length of CPU burst time given in the
mili seconds. Calculate Average Turnaround Time , Average Waiting Time for Shortest
Job First and Round Robin(Time slice = 1ms) Scheduling Algorithm.

Process   CPU burst time   Arrival time
A         3                0
B         6                2
C         4                4
D         5                6
E         2                8
Q15. What do you know about Process Control Block?
Q16. What is CPU bound and I/O bound process?
Q17. What is a critical region? Explain with example.
Q18. What is Critical Section Problem? Explain requirements of critical Section problem.
Ans-
The critical section is the part of a program that accesses shared resources. The resource
may be any resource in a computer, such as a memory location, a data structure, the CPU, or
an I/O device. The critical section cannot be executed by more than one process at the same
time, so the operating system must control which processes may enter it. The critical
section problem is to design a set of protocols that ensure a race condition among the
processes can never arise.
Requirements of synchronization mechanisms:
The critical section problem needs a solution to synchronize the different processes. The
solution must satisfy the following conditions:
• Mutual Exclusion: Only one process can be inside the critical section at any time. If any
other process requires the critical section, it must wait until it is free.
• Progress: If a process is not using the critical section, it should not stop any other
process from accessing it. In other words, any process can enter a critical section if it
is free.
• Bounded Waiting: Each process must have a limited waiting time. It should not wait
endlessly to access the critical section.
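To make the mutual-exclusion requirement concrete, here is a minimal sketch using Python's threading.Lock as the entry/exit protocol around a critical section (the names `counter` and `increment` are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()          # guards the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # entry section: mutual exclusion
            counter += 1         # critical section: shared-resource access
                                 # exit section: lock released by 'with'

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 - always correct, because the lock serializes updates
```

Without the lock, interleaved read-modify-write sequences could lose updates, which is exactly the race condition the critical-section protocols are designed to prevent.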
Q19. Differentiated between Direct communication and Indirect communication.

Q20.What is Race Condition? Explain with example.


Ans- A race condition is an undesirable situation that occurs when a device or system
attempts to perform two or more operations at the same time, but because of the nature of the
device or system, the operations must be done in the proper sequence to be done correctly.
Race conditions are most commonly associated with computer science and programming.
They occur when two computer program processes, or threads, attempt to access the same
resource at the same time and cause problems in the system. Race conditions are considered a
common issue for multithreaded applications.
A simple example of a race condition is a light switch. In some homes, there are multiple light
switches connected to a common ceiling light. When these types of circuits are used, the
switch position becomes irrelevant. If the light is on, moving either switch from its current
position turns the light off. Similarly, if the light is off, then moving either switch from its
current position turns the light on.
With that in mind, imagine what might happen if two people tried to turn on the light using
two different switches at the same time. One instruction might cancel the other or the two
actions might trip the circuit breaker. In computer memory or storage, a race condition may
occur if commands to read and write a large amount of data are received at almost the same
instant, and the machine attempts to overwrite some or all of the old data while that old data is
still being read. The result may be one or more of the following:
• the computer crashes or identifies an illegal operation of the program
• errors reading the old data
• errors writing the new data
A race condition can also occur if instructions are processed in the incorrect order.
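The lost-update scenario described above can be shown deterministically by hand-interleaving the read and write steps of two hypothetical threads:

```python
# Deterministic simulation of a lost update: both "threads" read the
# shared value before either writes back, so one increment is lost.
shared = 0
t1_read = shared       # thread 1 reads 0
t2_read = shared       # thread 2 reads 0 (before thread 1 writes back)
shared = t1_read + 1   # thread 1 writes 1
shared = t2_read + 1   # thread 2 overwrites with 1 - thread 1's update is lost
print(shared)  # 1, not the expected 2
```

With real threads the same interleaving happens nondeterministically, which is why race conditions are so hard to reproduce and debug.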
Q21. What is semaphore? How can you solve the Dining-Philosophers problem using
semaphores?
The dining philosophers problem is a classical synchronization problem: five
philosophers sit around a circular table, and their job is to think and eat
alternately. A bowl of noodles is placed at the center of the table, along
with five chopsticks, one between each pair of philosophers. To eat, a
philosopher needs both the right and the left chopstick, and can eat only
if both immediate chopsticks are available. If they are not both available,
the philosopher puts down whichever chopstick is held and starts thinking
again.
For process synchronization, a semaphore employs two atomic operations:
1) wait and 2) signal. Depending on its value, a semaphore either allows
or denies access to the resource. A chopstick can be picked up by performing
a wait operation on its semaphore and put down by performing a signal
operation on it.
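A sketch of this scheme in Python, using one semaphore per chopstick plus a seating semaphore that admits at most four philosophers at once, one common way to prevent the circular wait that causes deadlock (the seating limit is an illustrative variant, not the only solution):

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
seated = threading.Semaphore(N - 1)  # at most N-1 philosophers reach for chopsticks

def philosopher(i, meals):
    for _ in range(meals):
        seated.acquire()                    # wait(): prevents circular wait
        chopsticks[i].acquire()             # wait() on left chopstick
        chopsticks[(i + 1) % N].acquire()   # wait() on right chopstick
        # eat
        chopsticks[(i + 1) % N].release()   # signal(): put down right chopstick
        chopsticks[i].release()             # signal(): put down left chopstick
        seated.release()                    # signal(): leave the table
        # think

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all philosophers finished without deadlock")
```

If all five philosophers were allowed to grab their left chopstick simultaneously, each would wait forever for the right one; admitting only four guarantees at least one can always eat.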

Q22. What is semaphore? How can you solve Producer- Consumer Problem using
semaphore?
Semaphore was proposed by Dijkstra in 1965 which is a very significant
technique to manage concurrent processes by using a simple integer value,
which is known as a semaphore. Semaphore is simply an integer variable
that is shared between threads. This variable is used to solve the critical
section problem and to achieve process synchronization in the
multiprocessing environment.

Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization
problem, in which we try to achieve synchronization between more than one
process. There is one Producer, which produces some items, and one Consumer,
which consumes the items produced by the Producer.
The same memory buffer is shared by both producers and consumers which is of
fixed size.
The task of the Producer is to produce the item, put it into the memory buffer,
and again start producing items. Whereas the task of the Consumer is to
consume the item from the memory buffer.
The problem involves the following constraints, which a correct solution
must enforce:
1. The producer should produce data only when the buffer is not full. In case it is
found that the buffer is full, the producer is not allowed to store any data
into the memory buffer.
2. Data can only be consumed by the consumer if and only if the memory buffer
is not empty. In case it is found that the buffer is empty, the consumer is not
allowed to use any data from the memory buffer.
3. Accessing memory buffer should not be allowed to producer and consumer at
the same time.
Solution to the Producer-Consumer problem using semaphores:
The problems above, which arise when a context switch produces inconsistent
results, can be solved with semaphores. To prevent the race condition we use
binary semaphores and counting semaphores:
Binary semaphore: takes only the values 0 and 1 and is used to enforce
mutual exclusion, so that only one process is inside the critical section
at any point in time.
Counting semaphore: can take arbitrary non-negative values and is used to
count available resources, here the number of empty and full buffer slots,
while mutual exclusion on the buffer itself is still preserved.
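A sketch of the standard three-semaphore solution (mutex, empty, full) in Python; the buffer capacity and item count are arbitrary choices for illustration:

```python
import threading
import collections

CAPACITY = 5
buffer = collections.deque()
mutex = threading.Semaphore(1)         # binary semaphore: mutual exclusion on buffer
empty = threading.Semaphore(CAPACITY)  # counting semaphore: free slots
full = threading.Semaphore(0)          # counting semaphore: filled slots
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()   # wait until a slot is free (blocks when buffer is full)
        mutex.acquire()   # enter critical section
        buffer.append(item)
        mutex.release()
        full.release()    # signal: one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()    # wait until an item exists (blocks when buffer is empty)
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()   # signal: one more free slot

p = threading.Thread(target=producer, args=(100,))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(100)))  # True
```

The empty/full semaphores enforce constraints 1 and 2 (no production into a full buffer, no consumption from an empty one), while the mutex enforces constraint 3 (no simultaneous buffer access).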

Q23. Explain the different schedulers with suitable queuing diagram.


Q24. Describe an algorithm with an example related to deadlock avoidance.
Q25.Discuss Peterson’s algorithm with its merits and demerits.

Advantages of Peterson's solution:
It ensures mutual exclusion, as only one process can access the critical section at a
time.
It ensures progress, as a process outside the critical section cannot block another
from entering.
It ensures bounded waiting, as every process gets a chance.
Disadvantages of Peterson's solution:
It works only for two processes, although within that restriction it is among the best
user-mode schemes for the critical section.
It is a busy-waiting solution, so CPU time is wasted while a process spins; this
"spin lock" problem arises in any busy-waiting solution.
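A sketch of Peterson's algorithm for two processes in Python. Note that the algorithm relies on sequentially consistent memory accesses, which CPython's interpreter effectively provides; on real hardware with weaker memory models, explicit memory fences would be needed:

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is to yield
counter = 0

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True        # announce intent to enter
        turn = other          # give priority to the other process
        while flag[other] and turn == other:
            pass              # busy wait: the "spin lock" cost noted above
        counter += 1          # critical section
        flag[i] = False       # exit section

threads = [threading.Thread(target=process, args=(i, 1_000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000 - no increments lost, so mutual exclusion held
```

The spin loop is exactly the disadvantage mentioned above: while one process is inside the critical section, the other burns CPU cycles doing nothing useful.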
Q26. Consider the following snapshot of a system:

            Allocation   Max        Available
            A B C D      A B C D    A B C D
P0          0 0 1 2      0 0 1 2    1 5 2 0
P1          1 0 0 0      1 7 5 0
P2          1 3 5 4      2 3 5 6
P3          0 6 3 2      0 6 5 2
P4          0 0 1 4      0 6 5 6
Answer the following questions using the Banker’s algorithm:
1) What is the content of the matrix Need?
2) Is the system in a safe state?
3) If a request from process P1 arrives for (0,4,2,0), can the request
be granted immediately?
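A sketch of the Banker's safety check applied to Q26's snapshot. Need is computed as Max − Allocation, and the scan order found below is one possible safe sequence, not necessarily the only one:

```python
# Hedged sketch of the Banker's safety algorithm on Q26's data.
def safe_sequence(available, allocation, need):
    """Return a safe sequence of process names, or None if the state is unsafe."""
    avail = list(available)
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= a for n, a in zip(nd, avail)):
                avail = [a + x for a, x in zip(avail, alloc)]  # Pi finishes, releases resources
                finished[i] = True
                order.append(f"P{i}")
                progress = True
    return order if all(finished) else None

allocation = [[0,0,1,2],[1,0,0,0],[1,3,5,4],[0,6,3,2],[0,0,1,4]]
maximum    = [[0,0,1,2],[1,7,5,0],[2,3,5,6],[0,6,5,2],[0,6,5,6]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
print(safe_sequence([1,5,2,0], allocation, need))  # ['P0', 'P2', 'P3', 'P4', 'P1']
```

Part 3 of the question can then be answered by tentatively granting P1's request (subtract it from Available and Need, add to Allocation) and re-running the same safety check.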

Q27. Describe safe state and unsafe state?


A state is safe if the system can allocate resources to each process (up to its maximum
requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state
only if there exists a safe sequence. A safe state is not a deadlocked state; conversely, a
deadlocked state is an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting resources in
such a way that a deadlock occurs. Not all unsafe states are deadlocks, but an unsafe state
may lead to a deadlock.

Q28. Consider a system with 5 processes and 3 resource types A, B, C. Resource type A has
7 instances, resource type B has 3 instances, and resource type C has 6 instances.

       Allocation   Max
       A B C        A B C
P0     0 0 1        1 1 1
P1     2 0 0        3 2 3
P2     1 3 2        4 3 1
P3     1 0 1        0 0 1
P4     0 0 1        3 2 1
Calculate Available matrix, need of matrix using the Banker’s algorithm, Is the system in a
safe state? If yes, what is the safe sequence?
Q29. Explain the must satisfied conditions to dead locked system.

Q30. Describe User level threads and Kernel level threads with its advantages and
disadvantages.
Q31. Explain the different schedulers with suitable queuing diagram.
Q32. Give the comparison between paging and segmentation.

Q33. What is compaction? Explain with example.


Q34. In a paging system with TLB, it takes 30ns to search the TLB and 90ns to access the
memory. If the TLB hit ratio is 70%, find the effective memory access time.
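For Q34, assuming the TLB is searched before every memory access and a TLB miss costs one extra memory access for the page-table lookup, the effective access time works out as:

```python
# Effective access time with a TLB (assumed model: TLB search always happens;
# a miss adds one memory access for the page table before the actual access).
tlb, mem, hit = 30, 90, 0.70
eat = hit * (tlb + mem) + (1 - hit) * (tlb + mem + mem)
print(eat)  # approximately 147 ns: 0.70 * 120 + 0.30 * 210
```

If the hardware searched the TLB and memory in parallel, or cached page-table entries, the formula would differ; the serial model above is the usual textbook assumption.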
Q35.Discuss the implementation of LRU page-replacement algorithm using a counter, a stack
with example.
Q36. What is the principal of locality crucial to the use of virtual memory?
ANS-
The principle of locality states that program and data references within a process tend to
cluster. This validates the assumption that only a few pieces of a process are needed over a
short period of time. This also means that it should be possible to make intelligent guesses
about which pieces of a process will be needed in the near future, which avoids thrashing.
These two things mean that virtual memory is an applicable concept and that it is worth
implementing.

Q37. What is dynamic storage allocation problem? How will you provide the solution for this.
ANS-
Dynamic memory allocation occurs when an executing program requests that the operating system
give it a block of main memory, which the program then uses for some purpose. Usually the
purpose is to add a node to a data structure; in object-oriented languages, dynamic memory
allocation is used to obtain memory for a new object. Solutions to the dynamic storage
allocation problem:
First Fit
The first-fit approach allocates the first free partition (hole) large enough to
accommodate the process. The search finishes at the first suitable free partition.
Best Fit
The best-fit approach allocates the smallest free partition that meets the requirement of
the requesting process. This algorithm searches the entire list of free partitions and
chooses the smallest hole that is adequate, i.e. the one closest to the actual process
size needed.
Worst Fit
The worst-fit approach locates the largest available free portion, so that the portion left
over will be big enough to be useful. It is the reverse of best fit.
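The three placement strategies can be sketched as a single selection function; the hole sizes below are hypothetical illustration values:

```python
def allocate(holes, size, strategy):
    """Return the index of the chosen hole, or None. holes: free-block sizes."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # earliest suitable hole
    if strategy == "best":
        return min(candidates)[1]                      # smallest suitable hole
    if strategy == "worst":
        return max(candidates)[1]                      # largest hole

holes = [100, 500, 200, 300, 600]  # hypothetical free-partition sizes (KB)
print(allocate(holes, 212, "first"))  # 1 -> the 500 KB hole
print(allocate(holes, 212, "best"))   # 3 -> the 300 KB hole
print(allocate(holes, 212, "worst"))  # 4 -> the 600 KB hole
```

First fit is fastest; best fit minimizes the leftover fragment per allocation; worst fit leaves the largest usable remainder, as described above.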

Q38. What is the role of valid – invalid bits in a page table?

Q39. Explain Contiguous file allocation method.

Q40. Consider the following page reference string, Frame size = 3

7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

Calculate the page fault rate for the following algorithm


i) LRU replacement
ii) FIFO replacement
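For Q40, the fault counts can be checked with short FIFO and LRU simulations (the helper names are my own):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())  # evict the oldest-loaded page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0            # insertion order tracks recency
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                # hit: mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)       # evict the least recently used page
            mem[p] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 15 12
```

With 3 frames this reference string gives 15 faults under FIFO and 12 under LRU, so the fault rates are 15/20 and 12/20 respectively.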
Q42. What is the difference between Logical address (Vs) Physical addresses? Explain the
concept of swapping.

Q43. Explain different file operations in brief.


Q44.Differentiate between absolute and relative path name of a file.

Q45. What is disk partitioning?


Disk partitioning is one step of disk formatting. It is the process of dividing a disk into one or
more regions, the so-called partitions. When a partition is created, the disk stores the information
about the location and size of partitions in a partition table, which is usually located in the first
sector of the disk.
With the partition table, each partition can appear to the operating system as a logical disk, and
users can read and write data on the disk. Each partition can be managed separately.
Partitions are categorized as primary partitions and extended partitions. An MBR disk can be
divided into at most 4 primary partitions, or 3 primary partitions plus 1 extended partition, and
the extended partition can be divided into a number of logical partitions. A GPT disk, by contrast,
can be divided into as many as 128 primary partitions, with no extended partition.

Q46. Consider a disk queue with I/O request on the following cylinders in their arriving
order:

6,10,12,54,97,73,128,15,44,110,34,45

The disk head is assumed to be at cylinder 23 and moving in the direction of decreasing number
of cylinders. The disk consists of total 150 cylinders.

1) Draw track chart for LOOK, C-LOOK disk scheduling algorithm.


2) Determine total head movement in tracks in each case.
Q47. Consider a disk queue with I/O request on the following cylinders in their arriving
order:
6,10,12,54,97,73,128,15,44,110,34,45

The disk head is assumed to be at cylinder 23 and moving in the direction of decreasing number of
cylinders. The disk consists of total 150 cylinders.

1) Draw track chart for SCAN,C-SCAN disk scheduling algorithm.


2) Determine total head movement in tracks in each case.
Q48 . Consider a disk queue with I/O requests on the following cylinder in their arriving
order:
87, 171, 40, 150, 36, 72, 66, 15

The disk head is assumed to be at cylinder 60.

a) Draw track chart for FCFS, SSTF, SCAN, C-SCAN algorithms of disk
scheduling.

b) Determine total head movement in tracks in each case.
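For head-movement totals such as Q48's, FCFS and SSTF can be computed directly; SCAN and C-SCAN would additionally need the sweep direction and cylinder count:

```python
def fcfs_movement(start, requests):
    """Total head movement servicing requests in arrival order."""
    total, head = 0, start
    for cyl in requests:
        total += abs(cyl - head)  # seek distance for this request
        head = cyl
    return total

def sstf_movement(start, requests):
    """Total head movement always servicing the closest pending request."""
    total, head, pending = 0, start, list(requests)
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # shortest seek first
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

queue = [87, 171, 40, 150, 36, 72, 66, 15]
print(fcfs_movement(60, queue))  # 559 tracks
print(sstf_movement(60, queue))  # 255 tracks
```

The gap between the two totals illustrates why SSTF is preferred over FCFS despite its risk of starving far-away requests.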

Q49. What are the reasons to provide I/O buffering? mention its types.
Q50. Write notes on

a) Copy-on-write
Copy-on-write, or simply COW, is a resource-management technique. One of its main uses is in the
implementation of the fork system call, where the parent and child share the virtual memory pages of
the process. In UNIX-like operating systems, the fork() system call creates a duplicate of the parent
process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes
initially share the same pages in memory, and these shared pages are marked copy-on-write. If either
process tries to modify a shared page, only then is a copy of that page created, and the modification
is made on the copy by that process, so the other process is unaffected.

b) Rate – Monotonic scheduling


Rate monotonic scheduling is a priority algorithm that belongs to the static priority scheduling
category of real-time operating systems. It is preemptive in nature. The priority is decided
according to the cycle time (period) of the processes involved: the process with the shortest
period gets the highest priority. Thus if a process with the highest priority starts execution,
it will preempt the other running processes. The priority of a process is inversely proportional
to the period it runs with.
A set of n processes is guaranteed schedulable by RM (sufficient condition) if they satisfy
the following equation:

U = Σ (Ci / Ti) ≤ n(2^(1/n) − 1),  summing over i = 1 … n

where n is the number of processes in the process set, Ci is the computation time of process i,
Ti is the time period of process i, and U is the processor utilization.
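The utilization test above can be sketched as follows; the example task set of (Ci, Ti) pairs is hypothetical:

```python
def rm_schedulable(tasks):
    """tasks: list of (C_i, T_i) pairs. Liu-Layland sufficient utilization test."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)   # total processor utilization
    bound = n * (2 ** (1 / n) - 1)     # n(2^(1/n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_schedulable([(1, 4), (2, 6)])
print(round(u, 4), round(bound, 4), ok)  # 0.5833 0.8284 True
```

Note the test is sufficient but not necessary: a task set that fails it may still be schedulable under RM, which a full response-time analysis would reveal.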

c)Deadlock Detection Algorithm


If a system employs neither a deadlock prevention nor a deadlock
avoidance algorithm, then a deadlock situation may occur. In this case:
• Apply an algorithm to examine the state of the system and determine
whether a deadlock has occurred.
• Apply an algorithm to recover from the deadlock (deadlock recovery).
Deadlock detection algorithm:
The algorithm employs several time-varying data structures:

• Available –
A vector of length m indicating the number of available resources of
each type.
• Allocation –
An n×m matrix defining the number of resources of each type currently
allocated to each process. Columns represent resources and rows
represent processes.
• Request –
An n×m matrix indicating the current request of each process. If
Request[i][j] equals k, then process Pi is requesting k more instances
of resource type Rj.

d) Time Sharing Operating System


Multiprogrammed batch systems provided an environment where various
system resources were used effectively, but they did not provide for user
interaction with the computer system. Time sharing is a logical extension of
multiprogramming: the CPU executes multiple jobs by switching among them,
and the switches are so frequent that each user can interact with a program
while it is running. A time-shared operating system allows multiple users to
share a computer simultaneously. Because each action or command in a
time-shared system tends to be short, only a little CPU time is needed for
each user. As the system rapidly switches from one user to the next, each
user is given the impression that the entire computer system is dedicated
to their use, even though it is being shared among multiple users. A
time-shared operating system uses CPU scheduling and multiprogramming to
provide each user with a small portion of the shared computer. Each user has
at least one separate program in memory. A program loaded into memory
executes for a short period of time before either completing or needing to
perform I/O. This short period during which a user gets the attention of the
CPU is known as a time slice, time slot, or quantum, and is typically on the
order of 10 to 100 milliseconds. Time-shared operating systems are more
complex than multiprogrammed operating systems. In both, multiple jobs must
be kept in memory simultaneously, so the system must have memory management
and protection. To achieve a good response time, jobs may have to be swapped
in and out between main memory and disk, which then serves as a backing
store for main memory. A common method to achieve this goal is virtual
memory, a technique that allows the execution of a job that is not
completely in memory.
e) Earliest Deadline first Scheduling
Earliest Deadline First (EDF) is an optimal dynamic priority scheduling
algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.
EDF uses priorities for scheduling jobs: it assigns priorities to tasks
according to their absolute deadlines. The task whose deadline is closest
gets the highest priority, and the priorities are assigned and changed
dynamically. EDF is very efficient compared to other scheduling algorithms
in real-time systems; it can drive CPU utilization to about 100% while
still guaranteeing the deadlines of all the tasks.
Under EDF, if the CPU utilization is not more than 100%, all the tasks
will meet their deadlines. EDF finds an optimal feasible schedule, where
a feasible schedule is one in which all the tasks in the system execute
within their deadlines. If EDF cannot find a feasible schedule for all
the tasks in a real-time system, then no other task scheduling algorithm
can. All tasks that are ready for execution should announce their
deadlines to EDF when they become runnable.
EDF does not require tasks or processes to be periodic, nor does it
require them to have a fixed CPU burst time. In EDF, an executing task
can be preempted whenever another instance with an earlier deadline
becomes ready and active; preemption is allowed in Earliest Deadline
First scheduling.

f) Optimal page replacement algorithm


In operating systems, whenever a new page is referenced and is not present in
memory, a page fault occurs and the operating system replaces one of the
existing pages with the newly needed page. Different page replacement
algorithms suggest different ways to decide which page to replace; the goal
of all of them is to reduce the number of page faults. In the optimal
algorithm, the OS replaces the page that will not be used for the longest
period of time in the future.
g) Thrashing
h) Multiprogramming Operating System.

A multiprogramming operating system may run many programs on a
single-processor computer. If one program must wait for an input/output
transfer in a multiprogramming operating system, other programs are ready
to use the CPU, so various jobs may share CPU time. However, their jobs
are not defined to execute at the same instant.

When a program is being executed, it is known as a "task", "process",
or "job". Concurrent program execution improves system resource
utilization and throughput as compared to serial and batch processing
systems.

The primary goal of multiprogramming is to manage the entire system's
resources. The key components of a multiprogramming system are the file
system, command processor, transient area, and I/O control system.
Multiprogramming operating systems are designed to store different
programs in sub-segments of the transient area, and the resource
management routines are linked with the operating system's core functions.

You might also like