Os-Sol Bank
[NOV/DEC 2016]
5. DEFINE TIME SHARING SYSTEM.
[NOV/DEC 2017]
6. WHAT ARE THE MAIN FUNCTIONS OF OPERATING SYSTEM?
7. DIFFERENTIATE PROCESS AND PROGRAM.
[NOV/DEC 2018]
8. MENTION THE DIFFERENT OPERATING SYSTEM COMPONENTS.
9. WHAT IS CONCURRENT EXECUTION?
10. WHAT IS THE DIFFERENCE BETWEEN MULTI-TASKING AND MULTI-USER
SYSTEM?
[NOV/DEC 2015]
15. EXPLAIN THE TEST-AND-SET() SYNCHRONISATION HARDWARE.
16. MENTION THE METHODS USED TO HANDLE DEADLOCKS.
[NOV/DEC 2016]
17. WHAT IS AGING?
18. WHAT IS MONITOR?
19. DEFINE DEADLOCK WITH AN EXAMPLE.
20. DEFINE COMPACTION. [REPEATED IN NOV/DEC 2017]
[NOV/DEC 2017]
21. WHAT IS CONVOY EFFECT?
22. WHAT IS MUTUAL EXCLUSION?
23. WHAT ARE THE NECESSARY CONDITIONS FOR DEADLOCK?[REPEATED IN
2018]
[NOV/DEC 2018]
24. WHAT IS SEMAPHORE?
[NOV/DEC 2015]
29. WHAT IS DYNAMIC LOADING?
30. EXPLAIN OVERLAYS.[REPEATED IN 2018]
[NOV/DEC 2016]
31. DEFINE VIRTUAL MEMORY.[REPEATED IN NOV/DEC 2017]
32. DEFINE LOGICAL AND PHYSICAL ADDRESS.
[NOV/DEC 2017]
33. WHAT IS DEMAND PAGING?
[NOV/DEC 2018]
34. WHAT IS FRAGMENTATION?
UNIT-IV
[NOV/DEC 2015]
39. DEFINE THRASHING.
40. LIST DIFFERENT TYPES OF FILES.
41. WHAT IS DISK FORMATTING?[REPEATED IN NOV/DEC 2018]
[NOV/DEC 2016]
42. MENTION ANY FOUR ATTRIBUTES OF FILE.
43. WHAT IS A BIT VECTOR?
44. DEFINE SEEK TIME. [REPEATED IN NOV/DEC 2018]
[NOV/DEC 2017]
45. MENTION ANY FOUR FILE OPERATIONS. [REPEATED IN NOV/DEC 2018]
[NOV/DEC 2018]
46. WHAT IS THE DIFFERENCE BETWEEN ABSOLUTE PATH AND RELATIVE
PATH NAME?
47. EXPLAIN CONTIGUOUS MEMORY MANAGEMENT TECHNIQUES.
UNIT-V
[NOV/DEC 2015]
52. DEFINE ENCRYPTION.
[NOV/DEC 2016]
53.WHAT IS WORM?
[NOV/DEC 2017]
54.WRITE ANY TWO ANTIVIRUS SOFTWARE.
[NOV/DEC 2018]
55.WHAT IS AN ACCESS MATRIX?
[NOV/DEC 2015]
1. EXPLAIN SPOOLING WITH A DIAGRAM.
[NOV/DEC 2016]
2. EXPLAIN MULTIPROGRAMMING SYSTEM. MENTION ITS ADVANTAGES.
[NOV/DEC 2017]
3.EXPLAIN TIME SHARING SYSTEM.
4. WHAT IS SYSTEM CALL? EXPLAIN THE TYPES OF SYSTEM CALLS.
5. EXPLAIN DIFFERENT PROCESS STATES WITH A NEAT DIAGRAM.[REPEATED
IN NOV/DEC 2016 & 2018]
[NOV/DEC 2018]
6. What is an operating system? Give four functions of OS.
7. What is multiprogramming? Difference between multiprogramming,
multiprocessing and distributed processing.
UNIT II
[NOV/DEC 2015]
[NOV/DEC 2016]
11. Explain the critical section problem and requirements of a critical section
problem.[REPEATED IN NOV/DEC 2018]
[NOV/DEC 2017]
12. WHAT IS SEMAPHORE? EXPLAIN DIFFERENT TYPES OF SEMAPHORE.
13. EXPLAIN BANKER’S ALGORITHM.
UNIT III
[NOV/DEC 2015]
14. WRITE A NOTE ON FRAGMENTATION .
[NOV/DEC 2016]
15. EXPLAIN THE TERMS FIRST-FIT, BEST-FIT AND WORST-FIT .[REPEATED IN
NOV/DEC 2017 & 2018]
UNIT IV
[NOV/DEC 2015]
16. EXPLAIN LRU PAGE REPLACEMENT ALGORITHM[REPEATED IN NOV/DEC
2017]
17. EXPLAIN VARIOUS FILE ACCESSING METHOD[REPEATED IN NOV/DEC 2016]
[NOV/DEC 2016]
18. Describe the frame allocation algorithm.
UNIT V
[NOV/DEC 2015]
19. WHAT IS VIRUS? EXPLAIN DIFFERENT TYPES OF VIRUS.[REPEATED IN
NOV/DEC 2017 & NOV/DEC 2018]
[NOV/DEC 2016]
20. List any three goals of protection.
[NOV/DEC 2018]
SECTION C
UNIT I
[NOV/DEC 2015]
1. Explain time sharing and real time operating system.
2. Explain various services offered by an operating system.
[NOV/DEC 2016]
3. EXPLAIN DIFFERENT TYPES OF SCHEDULERS[REPEATED IN NOV/DEC 2017]
[NOV/DEC 2018]
4. Define, compare and contrast each of the following terms:
a. Batch processing
b. Time sharing
c. Real time processing
UNIT II
[NOV/DEC 2015]
5. EXPLAIN DIFFERENT PROCESS STATES WITH A NEAT DIAGRAM.
6. Consider the following processes with their CPU burst time in milliseconds.
PROCESS CPU BURST
P1 10
P2 1
P3 2
P4 10
P5 5
The processes arrive in the order P1,P2, P3,P4,P5. Draw the Gantt chart illustrating the
execution of these processes using FCFS.
Calculate
1. Average waiting time
2. Average turnaround time
7. WHAT IS SEMAPHORE? EXPLAIN DIFFERENT TYPES OF SEMAPHORE.
8. Explain different methods of deadlock prevention[REPEATED IN NOV/DEC 2016 &
2017]
[NOV/DEC 2016]
9. Explain FCFS and round robin scheduling with examples. [REPEATED IN NOV/DEC 2017]
10. EXPLAIN BANKER’S ALGORITHM.
[NOV/DEC 2017]
UNIT IV
[NOV/DEC 2015]
18. Explain various method used to allocate space to files[REPEATED IN
NOV/DEC 2017]
19. Explain any two disk scheduling algorithm[REPEATED IN NOV/DEC
2016 & 2017]
[NOV/DEC 2016]
20. Explain any two page replacement algorithm with an example.
21. EXPLAIN VARIOUS FILE ACCESSING METHOD[REPEATED IN
NOV/DEC 2017]
22. Explain single level and two level directory.
[NOV/DEC 2018]
23. Explain any three disk scheduling algorithms with example
UNIT V
[NOV/DEC 2016]
24. WHAT IS VIRUS? EXPLAIN DIFFERENT TYPES OF VIRUSES.
[NOV/DEC 2017]
25. EXPLAIN USER AUTHENTICATION IN DETAIL[REPEATED IN NOV/DEC
2018]
SECTION D
UNIT I
[NOV/DEC 2017]
1. WRITE A SHORT NOTE ON OPERATING SYSTEM COMPONENTS.
UNIT II
[NOV/DEC 2015]
2. WRITE A SHORT NOTE ON:
PRE-EMPTIVE AND NON-PREEMPTIVE SCHEDULING
[NOV/DEC 2016]
3.WRITE A SHORT NOTE ON:
i. PCB [REPEATED IN NOV/DEC 2018]
ii. SEMAPHORE
iii. DINING-PHILOSOPHER PROBLEM [REPEATED IN NOV/DEC 2018]
[NOV/DEC 2017]
4. WRITE A SHORT NOTE ON MULTI-LEVEL QUEUE SCHEDULING
UNIT III
[NOV/DEC 2016]
5. WRITE A SHORT NOTE ON OVERLAYS [REPEATED IN NOV/DEC 2017]
UNIT IV
[NOV/DEC 2015]
6. WRITE A SHORT NOTE ON SWAP SPACE MANAGEMENT
[NOV/DEC 2017]
7. WRITE A SHORT NOTE ON OPTIMAL PAGE REPLACEMENT TECHNIQUE
UNIT V
[NOV/DEC 2015]
8. WRITE A SHORT NOTE ON ANY FIVE OBJECTS OF WINDOWS EXECUTIVE
9. WRITE A SHORT NOTE ON SECURITY MECHANISM USED IN LINUX
[NOV/DEC 2018]
11.WHAT IS BUFFERING?
Ans: Buffering is a method of overlapping the input/output of a job with its own computation.
12.WHAT IS SPOOLING?
Ans: Spooling allows overlapping of input/output of one job with the computation of other
jobs thereby increasing the performance of CPU.
13.WHAT ARE THE FUNCTIONS OF DISTRIBUTED OPERATING SYSTEM?
Ans: i. Resource sharing
ii. Computation speedup
iii. Reliability
iv. Communication
Ans: A semaphore S is an integer variable with non-negative values, which can be accessed
only through two standard atomic operations: wait () and signal ().
25.WHAT IS RACE CONDITION?
Ans: When several processes access and manipulate the same data concurrently and the final
outcome of the execution depends on the particular order in which the access takes place, is
called race condition.
26.WHAT ARE THE STATES OF RESOURCE?
Ans: i. Request
ii. Use
iii. Release
27.GIVE THE CHARACTERISTICS OF A DEADLOCK SITUATION.
Ans: i. Necessary condition
ii. Resource-allocation graph
28.WHAT IS SAFE STATE?
Ans: A state is safe if the system can allocate resources to each process in some order and
still avoid a deadlock.
Ans: In dynamic loading, a program or routine is not loaded until it is called. All routines are
stored on the disk in a relocatable load format.
Ans: If the number of frames allocated to a process is not sufficient to hold its pages, the process
will quickly page fault. It must then replace some page; but since all the pages of the process are
active, it must replace a page that is in current use. As a result, it quickly faults again, and
again and again. The process thus continues to fault, replacing pages that it must bring back
immediately. This high paging activity, in which the process spends more time paging than
executing, is called thrashing.
46. WHAT IS THE DIFFERENCE BETWEEN ABSOLUTE PATH AND RELATIVE PATH
NAME?
Ans: An absolute path, also referred to as a file path or full path, refers to a specific location in
the file system, irrespective of the current working directory. It is the location of a file or
directory in a computer that contains the root directory and the complete directory list
required to locate the file or directory.
A relative path, on the contrary, refers to the location of a directory using the current directory as
a reference, which avoids the need to specify the full absolute path. Thus, a relative path is
also called a non-absolute path.
47. EXPLAIN CONTIGUOUS MEMORY MANAGEMENT TECHNIQUES.
Ans: Contiguous memory allocation is one of the oldest memory allocation schemes. When a
process needs to execute, memory is requested by the process. The size of the process is
compared with the amount of contiguous main memory available to execute the process. If
sufficient contiguous memory is found, the process is allocated memory to start its execution.
The contiguous memory allocation scheme can be implemented in operating systems with the
help of two registers, known as the base and limit registers.
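The base/limit check described above can be sketched in a few lines of Python (a minimal illustration; the function name and the example values are hypothetical, not from the source):

```python
def translate(logical_addr, base, limit):
    """Validate a logical address against the limit register and
    relocate it by the base register, as in contiguous allocation."""
    if logical_addr < 0 or logical_addr >= limit:
        # Real hardware would trap to the operating system here.
        raise MemoryError("addressing error: trap to operating system")
    return base + logical_addr

# A process loaded at physical address 3000 with a 500-byte partition:
print(translate(120, base=3000, limit=500))  # 3120
```

An address at or beyond the limit raises an error, mirroring the hardware trap that protects other processes' memory.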
48.WHAT IS A FILE?
Ans: File provides a mechanism for online storage and access to both data and programs for
the operating system and all the users of the computer system.
49.WHAT ARE THE VARIOUS FILE ACCESS METHODS.
Ans: i. Sequential access
ii. Direct access
iii. Sequential direct access
50. LIST THE FUNCTIONS OF FILE MANAGEMENT SYSTEM.
Ans: i. To provide storage of data
ii. Keeping track of all files in a system.
51.DEFINE TRACK.
Ans: Each disk surface is divided into concentric circles called tracks.
52. DEFINE ENCRYPTION.
Ans: Encryption is the conversion of plain text or data into unintelligible form by means of a
reversible mathematical computation.
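As a toy illustration of such a reversible computation (not a real cipher, and not from the source), XOR-ing data with a repeating key twice restores the plain text:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # operation again reverses it, so encryption = decryption here.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cipher = xor_crypt(b"attack at dawn", b"secret")
print(xor_crypt(cipher, b"secret"))  # b'attack at dawn'
```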
Advantages of spooling:
i. It overlaps I/O jobs with the computation of other jobs
ii. It can be used to process data at remote sites
iii. It increases the performance of the system.
Ans: The multiprogramming operating system can execute several jobs concurrently by
switching the attention of the CPU back and forth among them.
The primary reason multiprogramming operating systems were developed, and the reason they
are popular, is that they enable the CPU to be utilized more efficiently.
File management:
Create file, delete a file
Open, close a file
Read, write, reposition on file
Get a file attributes, set file attributes
Device management:
Request device, release device
Read, write, reposition
Get the device attributes, set device attributes
Logically attach, detach devices
Information maintenance:
Get time/date, set time/date
Get system data, set system data
Set process, file or device attributes
Get process, file or device attributes
Communication Management
Create, delete connection
Send receive message
Attach, detach remote devices
Transfer status information
5. EXPLAIN DIFFERENT PROCESS STATES WITH A NEAT DIAGRAM.
Ans:
New state: the process is being created; it is loaded from a secondary
storage device into main memory.
Running state: instructions are being executed.
Waiting state: the process is waiting for some event to occur, such as an I/O
completion.
Ready state: the process is waiting to be allocated to a processor. The process
generally comes to this state immediately after it is created.
Terminated: a process terminates when it finishes execution.
Ans: An operating system is system software that manages the computer hardware. It acts as an
interface between a user and the hardware of a computer.
Functions of OS:
User interface: All operating systems provide either command line interface or
graphical user interface (GUI)
Program execution: The operating system must be able to load a program into
memory and to run that program.
I/O operation: A running program may require I/O from a file or I/O device. The OS
provides an interface to perform I/O operations.
Error detection: OS needs to be constantly aware of errors like power failure, memory
error etc.
Accounting: It keeps track of resources used by different users.
Ans: Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It
organizes jobs in such a way that the CPU always has at least one job to execute. As there is only
one processor, different programs cannot be executed simultaneously.
Process state: The state may be new, ready, running, waiting and so on.
Program Counter: It holds the address of the next instruction to be executed for that process.
CPU Registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers.
CPU Scheduling information: This information includes a process priority and other
scheduling parameters.
Accounting Information: This information may include the amount of CPU and real time used.
I/O status information: This information includes the list of I/O device allocated to the
process and so on.
11. Explain the critical section problem and requirements of a critical section problem.
Ans: Each process has a section of code, called a critical section in which the process may be
changing common variables, updating a table, writing a file and so on. The important feature
of the system is that when one process is executing its critical section, no other process is to
be allowed to execute in its critical section. Thus the execution of critical sections by process
is mutually exclusive in time.
Ans: There are two techniques for allocation of frames to user process.
They are:
i. Equal allocation: The easiest way to split m frames among n processes is to give everyone
an equal share, m/n frames. This scheme is called equal allocation.
For example: If there are 93 frames and five processes, each process will get around 18
frames. The leftover three frames can be used as a free-frame buffer pool.
ii. Proportional allocation: An alternative method is to recognize that different
processes will need differing amounts of memory. For example, consider a system with a
1 KB frame size. If a small process of 10 KB and an interactive database of 127 KB are the
only two processes running in a system with 62 free frames, it does not make much sense to give
each process 31 frames. The smaller process does not need more than 10 frames, so the other
21 frames are wasted.
To solve this problem, proportional allocation can be used, in which we allocate available
memory to each process according to its size.
Let the size of the virtual memory for process p_i be s_i, and define
S = Σ s_i
Then, if the total number of available frames is m, we allocate a_i frames to process p_i, where
a_i is approximately
a_i = s_i / S × m
a_i must be adjusted to an integer that is greater than the minimum number of frames required.
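The formula above can be checked with a short Python sketch (the truncation to an integer follows the a_i ≈ s_i/S × m rule; a real allocator would also enforce the per-process minimum):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process sizes:
    a_i = s_i / S * m, truncated to an integer."""
    S = sum(sizes)
    return [int(s / S * m) for s in sizes]

# The example above: 10 KB and 127 KB processes sharing 62 frames.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```

The 10 KB process receives about 4 frames and the database about 57, instead of an even 31/31 split.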
Ans: We need to provide protection for several reasons. The most common goals are as
follows:
To prevent mischievous, intentional violation of access restriction by a user.
To ensure that each program component active in the system uses resources only in
the ways consistent with stated policies.
To improve reliability by detecting errors at the interfaces between component
subsystems. Early detection of errors can prevent contamination of a healthy
subsystem by a malfunctioning subsystem.
To provide a mechanism to enforce the policies governing the usage of resources.
To guard resources created and supported by an application programmer against
misuse.
21. UNIX system process scheduling algorithm
Ans: Linux has two separate process-scheduling algorithms.
One is a time-sharing algorithm for fair, pre-emptive scheduling among multiple
processes.
The second algorithm is designed for real-time tasks, where the priorities of tasks are more
important than fairness.
The Linux scheduler is a pre-emptive, priority-based algorithm with two separate
priority ranges:
a real-time range from 0 to 99 and a nice-value range from 100 to 140.
These ranges map into a global priority scheme in which numerically lower values
indicate higher priorities. A peculiarity of Linux is that it assigns higher-priority tasks
longer time quanta, and vice versa.
The relationship between priorities and time-slice length is shown in fig.
SECTION C
1. Explain time sharing and real time operating system.
Ans:
TIME SHARING OPERATING SYSTEM
Time sharing is a technique which enables many people, located at various terminals, to use a
particular computer's resources at the same time. The processor's time, which is shared among
multiple users simultaneously, is termed time sharing.
Its objective is to minimize response time.
Advantages:
Reduces CPU idle time
Provides the advantage of quick response
Disadvantage:
Problem of reliability
a. Batch processing
b. Time sharing
Batch processing is a technique in which the OS collects the programs and data together in a
batch before processing starts. The OS keeps a number of jobs in memory and executes them
without any manual intervention. Jobs are processed in the order of submission, i.e. first come,
first served.
Advantage:
i. Increased performance as a new job get started as soon as the previous job is finished.
Disadvantage:
i. Due to lack of protection, one batch job can affect pending jobs.
Time sharing
Time sharing is a technique which enables many people, located at various terminals, to use a
particular computer's resources at the same time. The processor's time, which is shared among
multiple users simultaneously, is termed time sharing. Its objective is to minimize response
time.
Advantage:
i. Provides quick response and reduces CPU idle time
Disadvantage:
i. Problem of reliability
Real-time processing
Real-time processing is the execution of data in a short time period. It is a data processing
that occurs as the user enters in the data or a command .
Advantage:
Disadvantage:
The processes arrive in the order P1,P2, P3,P4,P5. Draw the Gantt chart illustrating the execution
of these processes using FCFS.
Calculate
1. Average waiting time
2. Average turnaround time
Ans: Gantt chart (FCFS):
P1: 0-10, P2: 10-11, P3: 11-13, P4: 13-23, P5: 23-28
1. Average waiting time = (0 + 10 + 11 + 13 + 23) / 5 = 57 / 5 = 11.4 ms
2. Average turnaround time = (10 + 11 + 13 + 23 + 28) / 5 = 85 / 5 = 17 ms

A second FCFS example:
Process   Burst time
P1        8
P2        4
P3        1
Gantt chart: P1: 0-8, P2: 8-12, P3: 12-13
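The FCFS figures can be reproduced with a small simulation (a sketch assuming all processes arrive at time 0, as in the question):

```python
def fcfs(bursts):
    """First-come, first-served: each process starts when the
    previous one finishes; returns (avg waiting, avg turnaround)."""
    t, waits, tats = 0, [], []
    for burst in bursts:
        waits.append(t)      # waiting time = start time under FCFS
        t += burst
        tats.append(t)       # turnaround time = completion time here
    return sum(waits) / len(waits), sum(tats) / len(tats)

# P1..P5 with CPU bursts 10, 1, 2, 10, 5 ms:
print(fcfs([10, 1, 2, 10, 5]))  # (11.4, 17.0)
```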
Round Robin scheduling
The RR scheduling algorithm is designed especially for timesharing systems. It is
similar to FCFS scheduling but pre-emption is added.
Each and every process gets a small unit of CPU time called a time quantum or time
slice, which is usually 10-100 ms.
The ready queue is treated as a circular queue. The CPU scheduler goes around the
ready queue allocating CPU to each process for a time interval of up to 1 time
quantum. After the time elapses, the process is automatically pre-empted by the
operating system and added at the tail of the ready queue.
Example (time quantum = 5 ms):
Process   Burst time
P1        25
P2        3
P3        3
Gantt chart: P1: 0-5, P2: 5-8, P3: 8-11, P1: 11-16, P1: 16-21, P1: 21-26, P1: 26-31
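The Gantt chart above can be generated by simulating the circular ready queue (a sketch with all arrivals at time 0 and a 5 ms quantum, as in the example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin: run each process for up to one quantum, then
    pre-empt it and append it to the tail of the ready queue.
    Returns the Gantt chart as (process, start, end) segments."""
    ready = deque((f"P{i + 1}", b) for i, b in enumerate(bursts))
    t, chart = 0, []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        chart.append((name, t, t + run))
        t += run
        if remaining > run:              # not finished: re-queue it
            ready.append((name, remaining - run))
    return chart

for segment in round_robin([25, 3, 3], 5):
    print(segment)  # matches the Gantt chart above
```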
In the above diagram, resource 1 and resource 2 have single instances. There
is a cycle R1 → P1 → R2 → P2. So, Deadlock is Confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not a sufficient condition for deadlock;
in this case, the system may or may not be in deadlock, depending on the
situation.
Deadlock Recovery
A traditional operating system such as Windows does not deal with deadlock recovery,
as it is a time- and space-consuming process. Real-time operating systems use
deadlock recovery.
Recovery methods
1. Killing processes: kill the processes involved in the deadlock, one by one;
after killing each process, check for deadlock again, and repeat until the
system recovers from the deadlock.
2. Resource pre-emption: resources are pre-empted from the processes
involved in the deadlock, and the pre-empted resources are allocated to other
processes, so that there is a possibility of recovering the system from deadlock.
In this case, there is a risk of starvation.
Ans:
PAGING:
i. Main memory is divided into fixed-size partitions called pages.
ii. Pages are mapped directly to frames.
iii. Does not support the user's view of memory.
iv. Does not support growing and shrinking of pages.
v. Suffers from internal fragmentation.
SEGMENTATION:
i. Main memory is divided into variable-sized partitions called segments.
ii. Segments with a base address and limit are mapped to main memory.
iii. Supports the user's view of memory.
iv. Supports dynamic growing and shrinking of segments.
v. Suffers from external fragmentation.
Segments are formed at the time of program translation by grouping logically related entities
into a segment. The formation of segments depends on the compiler: when a program is
compiled, the compiler automatically creates segments of the input program. A logical address
space is a collection of segments.
Each segment has a name and a length. The components of a single segment reside in one
contiguous area of memory. Different segments of the same process may occupy non-contiguous
areas of physical memory.
To address a location, both the segment name and the offset within the segment must be given.
The user therefore specifies each address by two quantities: a segment name and an offset.
17. WRITE A NOTE ON FRAGMENTATION.
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks
are too small to satisfy any request. The different types of fragmentation are:
1. External fragmentation
2. Internal fragmentation
1. EXTERNAL FRAGMENTATION:
* In the case of multi-partitioned allocation, as processes are loaded into and removed from
memory, the free space gets divided into little pieces. In some situations, even though there is
enough total memory space to allocate to a process, the available space is not contiguous,
i.e., the storage space is fragmented into a number of small holes.
* In extreme cases there may be a block of free memory between every two processes. If all
the small holes could be combined into one big free block, it would accommodate several
more processes. This wastage of memory between partitions, due to the scattering of free
space into a number of discontiguous areas, is known as external fragmentation.
* Depending on the total size of memory and the average size of a program, external
fragmentation may be either a minor or a major problem.
* The selection of a first-fit or best-fit algorithm can affect the amount of fragmentation. For
example, statistical analysis of the first-fit algorithm shows that for N blocks of allocated
memory, another 0.5N blocks will be lost to fragmentation.
2. INTERNAL FRAGMENTATION:
* When memory is allocated in fixed-size blocks, the memory given to a process may be
slightly larger than the requested memory. The unused memory internal to an allocated
partition is known as internal fragmentation.
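A minimal first-fit allocator over a free-hole list illustrates how such fragments arise (the hole sizes here are hypothetical, not from the source):

```python
def first_fit(holes, request):
    """Scan the free-hole list and allocate from the first hole that
    is large enough; the remainder stays behind as a smaller hole."""
    for i, size in enumerate(holes):
        if size >= request:
            holes[i] = size - request   # leftover fragment (may be 0)
            return i
    return None  # no single hole fits, even if total free space would

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 (the 500 KB hole)
print(holes)                  # [100, 288, 200, 300, 600]
```

Best-fit would instead scan the whole list for the smallest adequate hole, and worst-fit for the largest.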
18. Explain various method used to allocate space to files.
Ans: There are three methods of file allocation:
i. Contiguous allocation
ii. Linked allocation
iii. Indexed allocation
i. Contiguous Allocation:
a. Contiguous allocation requires that each file occupy a set of contiguous blocks on the disk.
The word contiguous means continuous
b. The seek time is minimal over here. Consequently, access time of a file and the I/O
performance is greatly improved.
c. To access a file, we only need to know the starting location and the length of the file,
which are stored in the directory as shown in the figure.
Advantages:
i. Contiguous allocation is easy to implement.
ii. Minimal seek time gives fast access.
Disadvantages:
i. It suffers from external fragmentation.
ii. The file size must be known in advance, and it is difficult for a file to grow.
ii. Linked Allocation:
With the linked allocation approach, the disk blocks of a file are chained together in a linked
list. The directory entry of a file contains a pointer to the first block and a pointer to the last
block.
To create a file, we create a new directory entry and the pointers are initialized to nil. When a
write occurs, a new disk block is allocated and chained to the end of the list.
a. This method solves the problems associated with contiguous allocation.
b. Here the blocks of a single file can be scattered anywhere on the disk, because the entire
file is implemented as a linked list.
c. Here every file is an element of Linked List (similar to concept of Linked List in Data
Structures).
d. The directory maintained by the OS contains a pointer to the first and the last blocks of a
file.
e. Each block of a file contains a pointer to the next block after it in the list.
Advantages: no external fragmentation; the size of a file need not be declared at the start; a file
can grow as long as free blocks are available on the disk.
Disadvantages: it works well for sequential access only, and space must be reserved in each
block for the pointer.
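Reading a file under linked allocation can be sketched as follows (the block chain 9 → 16 → 1 → 25 is a hypothetical layout; -1 marks the end of the file):

```python
def read_linked_file(disk, start):
    """Follow the chain of blocks: each block holds its data plus a
    pointer to the next block of the file (-1 = end of file)."""
    data, ptr = [], start
    while ptr != -1:
        contents, nxt = disk[ptr]
        data.append(contents)
        ptr = nxt
    return data

# A file scattered over non-contiguous blocks 9 -> 16 -> 1 -> 25:
disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", 25), 25: ("D", -1)}
print(read_linked_file(disk, start=9))  # ['A', 'B', 'C', 'D']
```

Note how direct access to, say, the third block still requires walking the chain from the start, which is the sequential-access limitation mentioned above.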
iii. Indexed Allocation:
Indexed allocation brings all the pointers together into one location called the index block.
Each file has its own index block, which is an array of disk-block addresses; the i-th entry in
the index block points to the i-th block of the file.
Ans:
This page replacement technique has the lowest page-fault rate. The concept behind this
algorithm is to "replace the page that will not be used for the longest period of time". Using
this page replacement algorithm guarantees the lowest possible page-fault rate for a fixed
number of frames.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
Frame contents after each reference (3 frames):
Ref:     7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7
Frame 2:   0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
Frame 3:     1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
The first three references cause faults that fill three empty frames.
Reference to page 2 replaces page 7 because 7 will not be used until reference 18,
whereas page 0 will be used at 5, and page 1 at 14.
Next reference to 0 does not fault.
The reference to page 3 replaces page 1 as page 1 will be the last of three pages in
memory to be referenced again.
This algorithm results in 9 page faults.
Page fault rate can be calculated as:
Page fault rate = number of faults / number of references = 9 / 20 × 100 = 45%
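The 9-fault result can be verified with a short simulation of the optimal policy (a sketch; real systems cannot implement OPT, since it requires knowledge of future references):

```python
def optimal(ref, nframes):
    """OPT: on a fault with full frames, evict the page whose next
    use lies farthest in the future (or that is never used again)."""
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        future = ref[i + 1:]
        # Pages never referenced again get the largest distance.
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(optimal(ref, 3))  # 9
```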
This algorithm associates with each page the time of that page's last use. LRU
chooses the page that has not been used for the longest period of time. This strategy
is similar to optimal page replacement, except that LRU looks backward in time
whereas OPT looks forward in time.
Example:
Ref:     7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
Frame 2:   0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
Frame 3:     1 1 3 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7
In the example, the first 3 faults (7, 0, 1) are the same as those for optimal replacement.
The next reference to page 2 replaces 7, as it is the least recently used.
The next reference to page 0 does not need a replacement.
The next reference to page 3 replaces 1, which is the least recently used of (2, 0, 1).
When the reference to page 4 occurs, LRU sees that, of the three pages in memory,
page 2 was used least recently. Thus LRU replaces page 2, not knowing that page 2 is
about to be used.
When it then faults for page 2, it replaces page 3, as it is now the least recently used.
It results in 12 page faults.
Page fault rate = number of faults / number of references
= 12 / 20 × 100 = 60%
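The 12-fault LRU trace can likewise be verified by keeping the frames ordered from least to most recently used (a minimal sketch; real kernels only approximate LRU, e.g. with reference bits):

```python
def lru(ref, nframes):
    """LRU: keep frames ordered by recency; on a fault with full
    frames, evict the front (least recently used) page."""
    frames, faults = [], 0
    for page in ref:
        if page in frames:
            frames.remove(page)     # refresh: move to most-recent end
            frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:
            frames.pop(0)           # evict the least recently used
        frames.append(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru(ref, 3))  # 12
```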
Ans:
The simplest directory structure is the single-level directory, in which all files are
stored in the same directory.
A single-level directory has significant limitations when the number of files increases
or when the system has more than one user.
The files in the same directory must have unique names.
Even a single user on a single-level directory finds it difficult to remember the file
names as the number of files increases.
Two-level directory
Since a single-level directory leads to confusion of file names among different users, a
better solution is to use a separate directory for each user.
In this directory structure, each user has his/her own directory (the user file
directory, UFD). Whenever a user wants to access a file, the root directory
or master file directory (MFD) is first searched for the UFD of that user. Next, the file to
be opened is searched for only in that user's own UFD. Thus, different users can have the
same file names, as long as all the file names within each directory are unique.
SECTION D
1. WRITE A SHORT NOTE ON OPERATING SYSTEM COMPONENTS.
Ans:
Process management: A process needs certain resources like CPU time, memory, files and
I/O devices. These resources are either given to the process when it is created or allocated
while it is running.
Main memory management: main memory is a large array of words or bytes. Each byte has
its own address. The CPU fetches instruction fetch cycle, then reads and writes data from
main memory during data fetch cycle.
File management: this is visible component of an operating system. A file is a collection of
related information defined by its creator. Files may be of free form or formatted.
Input-output system management: one of the purposes of an operating system is to hide the
peculiarities of specific hardware devices from the user.
Secondary storage management: the main memory is too small to accommodate all data and
programs, and its data are lost when power is lost, the computer system must provide
secondary storage to backup main memory.
Networking: A distributed system is a collection of processors that do not share memory,
peripheral devices or a clock. Each processor has its own memory and clock.
Protection system: if a computer system has multiple users and allows the concurrent
execution of multiple processes then the various processes must be protected from each
other’s activities.
Command interpreter system: command interpreter is the interface between the user and the
operating system. Many commands are given to the operating system by control statement.
Ans:
NON-PRE-EMPTIVE SCHEDULING:
i. Once the CPU is given to a process, it cannot be taken away from that process until it
terminates or switches to the waiting state.
ii. Shorter jobs must wait for the completion of longer jobs.
iii. Overheads are low.
iv. Suitable for batch processing.
v. Scheduling is done only once for a process.
PRE-EMPTIVE SCHEDULING:
i. The CPU can be taken away from a running process, for example when an interrupt
occurs or its time slice expires.
ii. Shorter jobs need not wait for longer jobs to complete.
iii. Overheads are higher, due to context switching.
iv. Suitable for time-sharing systems.
v. A job executes according to its allocated time and is rescheduled repeatedly.
i. PCB
Ans:
Process state: The state may be new, ready, running, waiting and so on.
Program Counter: It holds the address of the next instruction to be executed for that process.
CPU Registers: The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers.
CPU Scheduling information: This information includes a process priority and other scheduling
parameters.
Accounting Information: This information may include the amount of CPU and real time used.
I/O status information: This information includes the list of I/O device allocated to the process and so
on.
ii. SEMAPHORE
Ans: A semaphore is a process synchronization tool that represents a data structure used by
the operating system kernel to synchronize processes. In other words, a semaphore is an
integer variable with non-negative values which can be accessed only through two standard
atomic operations: wait() and signal().
wait(S): the wait() operation was originally termed P. It decrements the semaphore value.
signal(S): the signal() operation increments the value of its semaphore S.
Types of semaphore
There are 2 types
Binary semaphore
Counting semaphore
Binary semaphore: a binary semaphore is a semaphore which can take only the values 0 and
1. It is also known as a mutex lock, since it provides mutual exclusion.
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Counting semaphore: counting semaphore is a semaphore whose value can range over an
unrestricted domain. It can be used to control access to a given resource containing a finite
number of instances.
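Both semaphore types map directly onto Python's threading module (a sketch; threading.Semaphore's acquire() and release() play the roles of wait() and signal(), and the worker logic here is only illustrative):

```python
import threading
import time

sem = threading.Semaphore(2)   # counting semaphore: 2 resource instances
lock = threading.Lock()
in_use = peak = 0

def worker():
    global in_use, peak
    with sem:                  # wait(S): blocks when the count is 0
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)       # simulate using the resource
        with lock:
            in_use -= 1
                               # leaving the block performs signal(S)

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 2
```

A threading.Semaphore(1), or threading.Lock, behaves as the binary semaphore / mutex described above.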
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. Each queue has its own scheduling algorithm. For example, separate queues might be
used for foreground and background processes: the foreground queue may be scheduled by an
RR algorithm and the background queue by an FCFS algorithm.
There must also be scheduling among the queues:
* One method is to assign a time slice to each queue, with which it can schedule the various
processes in its queue.
* Another method is to execute the high-priority queues first and then process the lower-priority
queues.
Swapping is the process of temporarily moving processes out of main memory to the swap
space (on the disk) when memory reaches a critically low point. This is done to free main
memory. Modern computers combine swapping with virtual memory techniques and swap
pages, thereby merging the two concepts of swapping and paging.
Swap space management is a low-level task of the OS. The main goal of the design and
implementation of swap space is to provide the best throughput for the virtual memory
system. However, swapping decreases system performance, since disk access is much slower
than memory access.
Swap space is used in various ways by different operating systems, depending on the
memory-management algorithms in use. For instance:
i. Some systems that implement swapping may use swap space to hold an entire process
image, including the code and data.
ii. Paging systems simply store pages that have been pushed out of main memory.
The first three references cause faults that fill the three empty frames.
The reference to page 2 replaces page 7 because 7 will not be used until reference
18, whereas page 0 will be used at 5, and page 1 at 14.
The next reference to page 0 does not fault.
The reference to page 3 replaces page 1, as page 1 will be the last of the three
pages in memory to be referenced again.
This algorithm results in 9 page faults.
Page fault rate can be calculated as:
Page fault rate = number of faults / number of references = 9 / 20 × 100 = 45%
The main advantage of this architecture is that interactions between modules are kept simple.