
OSY QB ANS

Chapter no 4: CPU Scheduling


Q1. Explain any four scheduling criteria.
• CPU utilization: In multiprogramming, the main objective is to keep the CPU
as busy as possible.
• CPU utilization can range from 0 to 100 percent.
• Throughput: - It is the number of processes that are completed per unit
time.
• Turnaround time: The time interval from the submission of a process to its
completion is called the turnaround time.
• It is calculated as: Turnaround Time = Waiting Time + Burst Time = End
Time – Arrival Time.
• Waiting time: It is the sum of the time periods a process spends waiting in
the ready queue. It is calculated as: Waiting Time = Turnaround Time – Burst
Time (for a non-preemptive schedule this equals Start Time – Arrival Time). A
small worked example follows this list.
• Response time: The time period from the submission of a request until the
first response is produced is called the response time.
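A small worked example (with made-up times) shows how these formulas fit together:

```python
# Hypothetical process: arrives at time 0, starts running at time 4,
# and runs for a burst of 3 time units, so it finishes at time 7.
arrival_time, start_time, burst_time = 0, 4, 3
end_time = start_time + burst_time             # 7 (non-preemptive case)

turnaround_time = end_time - arrival_time      # 7 - 0 = 7
waiting_time = turnaround_time - burst_time    # 7 - 3 = 4 (= start - arrival here)
print(turnaround_time, waiting_time)           # 7 4
```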

Q2. Explain Deadlock and necessary conditions for deadlock.


A deadlock is a situation where each process waits for a resource that is
assigned to another process. In this situation, none of the processes can
proceed, because the resource each one needs is held by some other process
that is itself waiting for a resource to be released.
Four necessary conditions for deadlock:
1. Mutual exclusion: Only one process at a time can use non-sharable
resource.
2. Hold and wait: A process is holding at least one resource and is waiting
to acquire additional resources held by other processes.
3. No pre-emption: A resource can be released only voluntarily by the
process holding it after that process completes its task.
4. Circular wait: A circular chain of processes exists, with each process
waiting for a resource held by the next process in the chain.
Q.3 Numerical based on FCFS, SJF, Round Robin
FCFS:
SJF:
Round Robin:
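The exact figures differ from question to question; as a generic aid, the sketch
below (with hypothetical arrival and burst times, all in the same time unit)
computes completion, turnaround, and waiting times for FCFS, and the comments
note how SJF and Round Robin differ:

```python
# Hypothetical workload: (process id, arrival time, burst time).
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

def fcfs(procs):
    """Non-preemptive FCFS: run processes strictly in order of arrival."""
    time = 0
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)                 # CPU may sit idle until arrival
        time = start + burst                       # completion time
        turnaround = time - arrival                # completion - arrival
        waiting = turnaround - burst               # turnaround - burst
        print(f"{pid}: completion={time} turnaround={turnaround} waiting={waiting}")

fcfs(processes)
# Non-preemptive SJF: each time the CPU becomes free, pick the shortest-burst
# process among those that have already arrived.
# Round Robin: run each ready process for one time quantum and re-queue it
# until its burst is exhausted.
```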
Q.4 Write a note on Deadlock prevention and avoidance
Deadlock Prevention
Deadlock prevention aims to eliminate at least one of the four necessary
conditions for deadlock. Common strategies include:
1. Mutual Exclusion Avoidance: If feasible, make resources shareable.
However, for resources like printers or file locks, this is often impractical.
2. Hold and Wait Prevention: Require processes to request all the
resources they need at once, before they start execution, or to release all
resources if they need to wait for additional resources. This limits
resource utilization flexibility but helps avoid deadlocks.
3. No Pre-emption: Break this condition by allowing pre-emption, meaning
resources can be forcibly taken from a process if necessary. This can be
complex and is mainly used for resources whose state can be saved and
restored, such as the CPU and memory.
4. Circular Wait Prevention: Impose an ordering of resource requests. If
processes request resources in a predefined order, circular wait cannot
occur, as each process can only request resources that are “higher” in
the order than any it currently holds.
Deadlock Avoidance
Deadlock avoidance differs from prevention by allowing the system to make
decisions dynamically based on current resource allocations, using algorithms
to assess whether a resource allocation request may lead to a deadlock.
Banker’s Algorithm (by Dijkstra) is a common deadlock avoidance algorithm
used in systems with multiple instances of each resource type. It ensures that
resources are allocated only if the resulting state remains safe. The system
checks for:
• Safe State: If at least one sequence of processes exists such that each
process can finish executing without leading to deadlock.
• Unsafe State: A state where it is possible to allocate resources in a way
that leads to a deadlock.

The system simulates resource allocation for each request to determine if it can
be safely granted without leading to an unsafe state.
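A minimal sketch of the safety check at the heart of the Banker's Algorithm (the
matrices below are hypothetical, not taken from any particular question):

```python
# Hypothetical snapshot: 3 processes, 2 resource types.
available = [3, 2]                      # free instances of each resource type
allocation = [[1, 0], [2, 1], [0, 1]]   # instances currently held by each process
need = [[2, 2], [1, 0], [3, 1]]         # maximum demand minus current allocation

def is_safe(available, allocation, need):
    """Return True if some completion order lets every process finish."""
    work = available[:]
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # Process i can finish if its remaining need fits in 'work'.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # it releases its resources
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)        # safe only if every process could finish

print(is_safe(available, allocation, need))   # True for this snapshot
```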
Chapter no 5: Memory Management
Q.1 Describe paging and segmentation
Paging

• Paging is a mechanism used to retrieve processes from secondary storage
into main memory in the form of pages.
• In this technique, each process is divided into fixed-size blocks called pages.
• The main memory is likewise divided into blocks of the same size called
frames.
• One page of the process is stored in one of the frames of memory.
• Pages of a process are brought into main memory only when they are
required; otherwise they reside in secondary storage.
• Because pages are mapped onto frames, the page size must be the same as
the frame size (a small address-translation sketch follows this list).
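A minimal sketch of the page-to-frame translation described above, using a
hypothetical page size and page table:

```python
PAGE_SIZE = 1024                 # bytes; the frame size must match (hypothetical value)
page_table = {0: 5, 1: 2, 2: 9}  # page number -> frame number (hypothetical mapping)

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

print(translate(2100))   # page 2, offset 52 -> frame 9 -> physical address 9268
```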
Segmentation
• Paging is closer to the operating system's view than to the user's.
• The operating system does not care about the user's view of the process: it
may split a single function across different pages, and those pages may or
may not be loaded into memory at the same time.
• Segmentation instead divides the process into segments that follow the
program's logical structure; for example, the main function can be placed
in one segment and the library functions in another.
• Pages are physical in nature, while segments are logical divisions of a
process.
• Segments are variable in size.
• Each program is divided into logical parts called segments.
• The details of each segment are stored in a table called the segment table
(a small lookup sketch follows this list).
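A corresponding sketch for segmentation, using a hypothetical segment table of
(base, limit) pairs:

```python
# Hypothetical segment table: segment number -> (base address, limit/length).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) pair to a physical address, checking the segment limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```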
Q.2 Explain partitioning techniques
Fixed Partitioning:

▪ The number of partitions in memory is fixed, but the size of each partition
may or may not be the same.
▪ As it is contiguous allocation, no spanning across partitions is allowed.
▪ Partitions are made before execution, at system configuration time.
▪ It is the simplest technique for keeping more than one process in main
memory.

Advantages:
▪ Easy to implement
▪ Little OS overhead

Disadvantages:
▪ Internal Fragmentation
▪ External Fragmentation
▪ Limited process size
▪ Limitation on Degree of Multiprogramming
Variable partitioning

▪ Initially main memory is empty; partitions are made at run time according
to each process's needs, instead of at system configuration time.
▪ The size of each partition equals the size of the incoming process.
▪ Internal fragmentation is avoided, ensuring efficient utilisation of memory.
▪ The number of partitions is not fixed; it depends on the number of incoming
processes and the size of main memory.

Advantages:
▪ No Internal Fragmentation
▪ No restriction on Degree of Multiprogramming
▪ No Limitation on the size of the process

Disadvantages:
▪ Difficult Implementation
▪ External Fragmentation
Q.3 Numerical based on Page replacement algorithms.
FIFO (First In, First Out) page replacement – a worked sketch is given below:
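A minimal FIFO sketch that counts page faults (the reference string and frame
count below are hypothetical):

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults when the oldest resident page is always evicted first."""
    frames = deque()               # resident pages, oldest on the left
    faults = 0
    for page in reference_string:
        if page not in frames:     # page fault: the page is not resident
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()   # evict the page that was loaded earliest
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))
```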
Q.4 Define: i)Page fault
ii)Virtual Memory
iii)Locality of reference
i) Page Fault:
• A page fault is a trap that occurs when a requested page is not loaded in
memory.
• It is raised when a program tries to access a chunk of memory that is not
present in physical memory (main memory).

ii) Virtual Memory:


• Virtual memory is a storage scheme that gives the user the illusion of having
a very large main memory. This is done by treating a part of secondary
memory as if it were main memory.
• Instead of loading one big process in the main memory, the Operating
System loads the different parts of more than one process in the main
memory
• Virtual memory concept can be implemented using Paging,
segmentation or combined techniques
• Degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased

iii) Locality of Reference:


Locality of reference is a concept that describes how a process tends to access
the same memory locations repeatedly over a short period of time. It can be
used to improve memory performance. This principle is divided into:
• Temporal locality: The same memory locations are accessed repeatedly
within a short period.
• Spatial locality: Memory locations close to each other are accessed
within a short time frame.
Q.5 Explain free space management techniques.
• Free space management is a critical aspect of operating systems as it
involves managing the available storage space on the hard disk or other
secondary storage devices.
• The operating system uses various techniques to manage free space and
optimize the use of storage devices
• The system keeps track of the free disk blocks so that space can be allocated
to files when they are created.
• Free space management is also crucial for reusing the space released when
files are deleted.
There are two commonly used free space management techniques:
1. Bitmap or Bit vector
2. Linked List

Bitmap or Bit Vector:

• A bitmap or bit vector is a series or collection of bits where each bit
corresponds to a disk block.

• Each bit can take one of two values: 0 indicates that the block is free and 1
indicates an allocated block.

• A given set of disk blocks can therefore be represented by a bitmap, for
example a 16-bit bitmap such as 1111000111111001.
Advantages:
• Simple to understand.

• Finding the first free block is efficient: the bitmap is scanned one word (a
group of bits, e.g. 8) at a time, skipping words whose bits are all 1 (fully
allocated); the first 0 bit in the first remaining word marks the first free
block (see the sketch below).

Disadvantages:
• To find a free block, the operating system may have to scan a large part of
the bitmap, which is time consuming.

• The efficiency of this method reduces as the disk size increases.
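A small sketch of the bitmap scan, using the 0 = free / 1 = allocated convention
stated above and the 16-bit example bitmap:

```python
bitmap = "1111000111111001"   # 0 = free block, 1 = allocated block (example above)

def first_free_block(bitmap, word_size=8):
    """Scan word by word, skip fully allocated words, then locate the first 0 bit."""
    for word_start in range(0, len(bitmap), word_size):
        word = bitmap[word_start:word_start + word_size]
        if "0" in word:                       # not all bits are 1, so a free block exists here
            return word_start + word.index("0")
    return -1                                 # no free block on the disk

print(first_free_block(bitmap))               # block 4 is the first free block
```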

Linked List:

• In this approach, the free disk blocks are linked together i.e. a free block
contains a pointer to the next free block.

• The block number of the very first disk block is stored at a separate
location on disk and is also cached in memory.

• The last free block would contain a null pointer indicating the end of free
list.
Advantages:
• The total available space is used efficiently using this method.

• Dynamic allocation is easy with a linked list: space can be added as required
(a small pointer-following sketch appears after this list).

Disadvantages:
• When the size of Linked List increases, managing the pointers becomes
tedious
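A sketch of the pointer-following idea, with a hypothetical in-memory table
standing in for the pointers stored in the free blocks on disk:

```python
# Hypothetical free list: block 2 is the head; each free block stores the number
# of the next free block, and None marks the end of the list.
free_list_head = 2
next_free = {2: 5, 5: 6, 6: 9, 9: None}

def allocate_block():
    """Take the first block off the free list and advance the head pointer."""
    global free_list_head
    if free_list_head is None:
        raise RuntimeError("disk full: no free blocks")
    block = free_list_head
    free_list_head = next_free.pop(block)   # follow the pointer stored in the block
    return block

print(allocate_block())   # 2
print(allocate_block())   # 5
```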
Chapter no 6: File Management
Q.1 Explain any four file operations
1.Create
Creation of the file is the most important operation on the file. Different types
of files are created by different methods for example text editors are used to
create a text file, word processors are used to create a word file and Image
editors are used to create the image files.
2.Write
Writing the file is different from creating the file. The OS maintains a write
pointer for every file which points to the position in the file from which, the
data needs to be written.
3.Read
A file can be opened in one of three modes: read, write, or append. A read
pointer is maintained by the OS, pointing to the position up to which the data
has been read.
4.Re-position
Re-positioning simply moves the file pointer forward or backward depending
upon the user's requirement. It is also called seeking.
5.Delete
Deleting a file removes not only all the data stored inside it but also all of the
file's attributes. The space that was allocated to the file becomes available
again and can be allocated to other files.
6.Truncate
Truncating deletes the contents of a file without deleting its attributes. The file
itself is not removed; only the information stored inside it is discarded. (A short
illustration of these operations follows.)
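These operations map directly onto ordinary file-handling calls; a short
illustration (the file name demo.txt is hypothetical):

```python
import os

# Create and write: the OS advances a write pointer as data is written.
with open("demo.txt", "w") as f:
    f.write("hello operating systems")

# Read and re-position (seek): the read pointer can be moved before reading.
with open("demo.txt", "r") as f:
    f.seek(6)                     # re-position the pointer past "hello "
    print(f.read())               # "operating systems"

# Truncate: the contents are discarded but the file and its attributes remain.
with open("demo.txt", "r+") as f:
    f.truncate(0)

# Delete: removes the data and the file's directory entry/attributes.
os.remove("demo.txt")
```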
Q.2 Write a note on RAID.
• RAID works by placing data on multiple disks and allowing input/output
(I/O) operations to overlap in a balanced way, improving performance.

• Because using multiple disks lowers the mean time between failures of the
array as a whole, data is also stored redundantly to increase fault tolerance.

• RAID arrays appear to the operating system (OS) as a single logical drive.

RAID 0:
• Data is distributed across the HDDs in the RAID set.

• Allows multiple blocks of data to be read or written simultaneously, and
therefore improves performance.

• Does not provide data protection or availability in the event of a disk
failure.

RAID 1:
• Data is stored on two different HDDs, yielding two copies of the same
data.

• In the event of HDD failure, access to data is still available from the
surviving HDD.

• When the failed disk is replaced with a new one, data is automatically
copied from the surviving disk to the new disk.

• Disadvantage: the required storage capacity is twice the amount of data
stored.

Mirroring is NOT the same as doing backup!


NESTED RAID:
• Combines the performance benefits of RAID 0 with the redundancy
benefit of RAID 1.

RAID 0+1 – Mirrored Stripe


• Data is striped across HDDs, then the entire stripe is mirrored.

• If one drive fails, the entire stripe is faulted.

• A rebuild operation requires data to be copied from each disk in the healthy
stripe, causing increased load on the surviving disks.

RAID 1+0 – Striped Mirror


• Data is first mirrored, and then both copies are striped across
multiple HDDs.

• When a drive fails, data is still accessible from its mirror.

• A rebuild operation only requires data to be copied from the surviving disk
into the replacement disk.

RAID 3 & 4:
• Stripes data for high performance and uses parity for improved
fault tolerance.

• One drive is dedicated for parity information.

• If a drive fails, data can be reconstructed using the remaining data and the
parity drive (a small XOR parity sketch appears at the end of this answer).

• For RAID 3, data reads/writes are done across the entire stripe.
▪ This provides good bandwidth for large sequential data access such
as video streaming.

• For RAID 4, data reads/writes can be done independently on a single disk.
RAID 5 & 6:

• RAID 5 is similar to RAID 4, except that the parity is distributed across all
disks instead of being stored on a dedicated disk.
❖ This overcomes the write bottleneck on the parity disk.

• RAID 6 is similar to RAID 5, except that it includes a second parity element
to allow survival in the event of two disk failures.

o The probability of two simultaneous failures increases as the number of
drives in the array increases.

o Calculates both horizontal parity (as in RAID 5) and diagonal parity.

o Has a higher write penalty than RAID 5.

Rebuild operation may take longer than on RAID 5.
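A tiny sketch of the parity idea used in RAID 3/4/5: the parity strip is the XOR
of the data strips, so any single missing strip can be recomputed from the
survivors (the byte values are hypothetical):

```python
from functools import reduce

# Hypothetical data strips (one byte each) on three data disks.
strips = [0b10110010, 0b01101100, 0b11000011]
parity = reduce(lambda a, b: a ^ b, strips)        # stored on the parity disk

# Simulate losing disk 1 and rebuilding its strip from the survivors plus parity.
surviving = [strips[0], strips[2]]
rebuilt = reduce(lambda a, b: a ^ b, surviving, parity)
print(rebuilt == strips[1])                        # True
```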


Q.3 Explain file allocation methods: i) Chained/Linked
ii) Indexed
i) Chained/Linked Allocation:

• In this scheme, each file is a linked list of disk blocks which need not
be contiguous.
• The disk blocks can be scattered anywhere on the disk
• The directory entry contains a pointer to the starting and the ending file
block
• Each block contains a pointer to the next block occupied by the file
ii) Indexed Allocation:

• In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file.
• Each file has its own index block.
• The ith entry in the index block contains the disk address of the ith file
block.
• The directory entry contains the address of the index block
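A minimal sketch of an indexed-allocation lookup, with a hypothetical index
block for one file:

```python
# Hypothetical index block: the i-th entry holds the disk address of the
# i-th block of the file.
index_block = [19, 7, 42, 3, 28]

def disk_block_of(logical_block):
    """Return the disk address of the file's i-th logical block via the index block."""
    return index_block[logical_block]

print(disk_block_of(2))   # logical block 2 is stored in disk block 42
```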
Q.4 Explain directory structure.
1. Single-level directory:
• The simplest method is to have one big list of all the files on the disk. The
entire system contains only one directory, which lists every file present in
the file system, with one entry for each file.

Advantages:
• Since there is only a single directory, its implementation is very easy.
• If the files are few in number, searching is faster.
• The operations like file creation, searching, deletion, updating are very easy
in such a directory structure.

Disadvantages:
• There is a chance of name collisions, because two files cannot have the same
name.
• Searching becomes time consuming if the directory grows large.
• Files of the same type cannot be grouped together.
2. Two-level directory:

• In a two-level directory system, a separate directory is created for each
user. There is one master directory which contains a separate directory
dedicated to each user. For each user there is a different directory at the
second level, containing that user's files. The system does not let a user
enter another user's directory without permission.

Advantages:
• We can give a full path, like /User-name/directory-name/.
• Different users can have the same directory and file names.
• Searching for files becomes easier due to path names and user grouping.

Disadvantages:
• A user is not allowed to share files with other users.
• It is still not very scalable.
• Files of the same type cannot be grouped together within a single user's
directory.
3. Tree-structured directory:
• In a tree-structured directory system, any directory entry can be either a
file or a subdirectory.
• The root directory contains a directory for each user.
• Users can create subdirectories and store files inside their own directories.
• A user does not have access to the root directory and cannot modify it.

Advantages:
• Very general, since full path names can be given.
• Very scalable; the probability of name collisions is lower.
• Searching becomes very easy, since both absolute and relative paths can be
used.

Disadvantages:
• Not every file fits into the hierarchical model; some files may need to be
saved in multiple directories.
• Files cannot be shared.
• It can be inefficient, because accessing a file may require traversing multiple
directories.
