
Q ] Explain the following terms: 1) Thrashing 2) File Directories 3) TLB 4) Effect of Page Size 5) I/O Buffering techniques.


Ans :- 4) Effect of Page Size :-
• Paging is a memory management mechanism used to bring processes from secondary storage into main memory in the form of pages.
• Paging divides each process into pages.
• The main memory is divided into small fixed-size blocks of physical memory, which are called frames.
• The basic idea behind paging is to remove external fragmentation and gain faster access to the data.
• The size of a frame should be kept the same as that of a page to have maximum utilization of the main memory and to avoid external fragmentation; the sketch below shows how the choice of page size affects internal fragmentation and the number of pages.
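
A minimal sketch in Python of the page-size trade-off mentioned above; the process size and candidate page sizes are assumed example values, not figures from the original answer:

    import math

    def page_size_effect(process_bytes, page_size):
        # frames (pages) needed for the process, and bytes wasted in its last page
        pages = math.ceil(process_bytes / page_size)
        internal_frag = pages * page_size - process_bytes
        return pages, internal_frag

    # assumed example: a 72,766-byte process with three candidate page sizes
    for size in (512, 4096, 65536):
        pages, waste = page_size_effect(72766, size)
        print(f"page size {size:6}: {pages:4} pages, {waste:6} bytes internal fragmentation")

Smaller pages mean less internal fragmentation but more page-table entries; larger pages mean a smaller page table but more wasted space per process.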
3) TLB (Translation Lookaside Buffer):-
A translation lookaside buffer (TLB) is a type of memory cache that stores recent translations of virtual memory pages to physical addresses to enable faster retrieval. This high-speed cache keeps track of recently used page table entries (PTEs). Also known as an address-translation cache, the TLB is part of the processor's memory management unit (MMU).
With a TLB, there is no need to keep PTEs in processor registers, which is impractical, or to consult the page table in main memory on every access, which would require two main memory references per access. Instead, the processor first examines the TLB for the PTE; on a hit it retrieves the frame number and forms the real address. On a miss, the page table is consulted; if the page is not in main memory, a page fault is issued, and the TLB is then updated with the new PTE.
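
A rough sketch in Python of this lookup path; the dictionaries stand in for the hardware TLB and the in-memory page table, and their contents are illustrative only:

    PAGE_SIZE = 4096
    page_table = {0: 5, 1: 9, 2: 3, 3: 7}    # page number -> frame number (assumed values)
    tlb = {}                                  # cache of recently used PTEs

    def translate(virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page in tlb:                       # TLB hit: no page-table reference needed
            frame = tlb[page]
        elif page in page_table:              # TLB miss: consult the page table
            frame = page_table[page]
            tlb[page] = frame                 # update the TLB with the new PTE
        else:
            raise KeyError("page fault: page not in main memory")
        return frame * PAGE_SIZE + offset     # form the real (physical) address

    print(translate(8200))    # page 2, offset 8 -> frame 3 -> physical address 12296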
2) File Directories:-
A directory can be defined as a listing of the related files on the disk. The directory may store some or all of the file attributes.
To get the benefit of different file systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which all the files of the partition can be listed. A directory entry is maintained for each file in the directory, and it stores all the information related to that file.
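
As an illustration only (the field names here are assumptions, not a fixed standard), a directory entry that records a file's attributes might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class DirectoryEntry:          # one entry per file in the directory
        name: str
        size: int                  # file size in bytes
        starting_block: int        # where the file's data begins on disk
        permissions: str
        created: str               # creation timestamp, kept as text for simplicity

    root_directory = [
        DirectoryEntry("mail", 2048, 19, "rw-", "2024-01-10"),
        DirectoryEntry("list", 512, 28, "r--", "2024-02-03"),
    ]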
1) Thrashing :-
Thrashing describes a situation in which the system spends more time swapping pages between main memory and secondary storage (such as a hard disk) than doing useful work. This leads to severe performance degradation: poor response times, a self-reinforcing cycle of excessive page faults, and decreased overall system efficiency.
Q ] Differentiate between Internal fragmentation and External fragmentation?
Ans:-
• Internal fragmentation occurs when memory is allocated in fixed-size blocks (such as pages or fixed partitions) and a process does not use the whole block; the unused space inside the allocated block is wasted.
• External fragmentation occurs with variable-size allocation (such as segmentation or contiguous allocation): enough total free memory may exist, but it is scattered in small non-contiguous holes, so a large request cannot be satisfied.
• Internal fragmentation is reduced by choosing smaller block sizes; external fragmentation is handled by compaction or by switching to paging.
Q ] Differentiate between Paging and Segmentation?
Ans:-
• Paging divides a process into fixed-size pages; segmentation divides it into variable-size segments that correspond to logical units such as code, data, and stack.
• Paging is transparent to the programmer; segmentation is visible, since a logical address is given as a segment number plus an offset.
• Paging can cause internal fragmentation; segmentation can cause external fragmentation.
• Paging uses a page table that maps page numbers to frame numbers; segmentation uses a segment table that stores the base address and limit of each segment.
Q ] Explain file allocation methods in detail?
Ans:- There are various methods which can be used to allocate disk space to files. Selection of an appropriate allocation method significantly affects the performance and efficiency of the system. The allocation method determines how the disk is utilized and how the files are accessed.
The following methods can be used for allocation.
1. Contiguous Allocation
If the blocks are allocated to a file in such a way that all the logical blocks of the file get contiguous physical blocks on the hard disk, then such an allocation scheme is known as contiguous allocation.

For example, a directory may list three files together with the starting block and the length of each; contiguous blocks are then assigned to each file as per its need, as in the sketch below.
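
A small Python sketch of contiguous allocation; the file names, starting blocks, and lengths are assumed for illustration, since the original figure is not reproduced here:

    # directory table: file name -> (starting block, length in blocks), values assumed
    directory = {"count": (0, 2), "mail": (19, 6), "list": (28, 4)}

    def physical_block(filename, logical_block):
        start, length = directory[filename]
        if logical_block >= length:
            raise IndexError("logical block outside the file")
        return start + logical_block      # contiguous: just an offset from the start

    print(physical_block("mail", 3))      # block 19 + 3 = 22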

2. Linked List Allocation

Linked list allocation solves the main problems of contiguous allocation. In linked list allocation, each file is treated as a linked list of disk blocks. The disk blocks allocated to a particular file need not be contiguous on the disk: each disk block allocated to a file contains a pointer to the next disk block allocated to the same file.
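
A minimal Python sketch of following such a chain; the block numbers are made up, and each dictionary entry plays the role of the pointer stored inside a disk block:

    # next_block[b] is the next disk block of the same file, or None at the end (assumed values)
    next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}

    def file_blocks(start):
        block = start
        while block is not None:
            yield block
            block = next_block[block]     # follow the pointer stored in the block

    print(list(file_blocks(9)))           # [9, 16, 1, 10, 25]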

3. File Allocation Table

The main disadvantage of linked list allocation is that random access to a particular block is not provided: in order to reach a block, we need to traverse all of its previous blocks.

The File Allocation Table (FAT) scheme overcomes this drawback of linked list allocation. In this scheme, a file allocation table is maintained which gathers all the disk block links. The table has one entry for each disk block and is indexed by block number.
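
A rough sketch of the same chain kept in a file allocation table and indexed by block number; the table contents are assumptions for illustration:

    EOF = -1
    fat = {217: 618, 618: 339, 339: EOF}      # one entry per disk block (illustrative subset)

    def nth_block(start, n):
        # walk the chain in the in-memory FAT rather than reading the disk blocks themselves
        block = start
        for _ in range(n):
            block = fat[block]
            if block == EOF:
                raise IndexError("block number past end of file")
        return block

    print(nth_block(217, 2))                  # 339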
Q ] Explain MFT with example?

Ans:- Managed file transfer (MFT) is a technology platform that allows organizations to reliably exchange electronic data between systems and people, within and outside the enterprise, securely and in compliance with applicable regulations.

MFT is a more reliable and efficient means for secure data and file
transfer, outpacing and outperforming applications such as file transfer
protocol (FTP), hypertext transfer protocol (HTTP), secure file transfer
protocol (SFTP) and other methods.

Organizations increasingly rely on MFT to support their business needs and goals in a way that FTP cannot. FTP presents many challenges, such as data security gaps, lack of visibility when a problem occurs, time-consuming manual recovery from failures, and costly SLA penalties due to poor performance.

There are several factors pushing companies to move to MFT:

• Data security
• Data growth
• Regulatory compliance
• Technology megatrends
• Visibility
Q ] Describe disk scheduling algorithms with example?

Ans:- Disk Scheduling Algorithms

Here are the fundamental disk scheduling algorithms:

1. First-Come, First-Served (FCFS) Scheduling

Description: Processes requests in the order they arrive, like a queue.


Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53.
o Order of servicing: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65
→ 67.

Advantage: Simple and easy to implement.

Disadvantage: Can cause long waiting times, especially if successive requests are far apart on the disk (high variance in seek time).
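
A short Python sketch that reproduces the FCFS example above and totals the head movement; the request queue and starting head position are the ones given in the example:

    requests = [98, 183, 37, 122, 14, 124, 65, 67]

    def fcfs(head, requests):
        total = 0
        for r in requests:                # serve strictly in arrival order
            total += abs(r - head)
            head = r
        return total

    print(fcfs(53, requests))             # 640 cylinders of head movement in total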

2. Shortest Seek Time First (SSTF) Scheduling

Description: Chooses the pending request that requires the least movement of the disk arm from its current position.
Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53.
o Order of servicing: 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 →
183.

Advantage: Reduces total seek time compared to FCFS.

Disadvantage: Can cause starvation of some requests.
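
A sketch of SSTF in Python; it greedily picks the pending request closest to the current head position and reproduces the servicing order given above:

    def sstf(head, requests):
        pending, order, total = list(requests), [], 0
        while pending:
            nearest = min(pending, key=lambda r: abs(r - head))   # least arm movement
            pending.remove(nearest)
            total += abs(nearest - head)
            head = nearest
            order.append(nearest)
        return order, total

    print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))
    # ([65, 67, 37, 14, 98, 122, 124, 183], 236)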

3. SCAN (Elevator) Scheduling

Description: Moves the disk arm in one direction, servicing requests until it reaches the end of the disk, then reverses direction.
Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53 and moves towards 0.
o Order of servicing: 53 → 37 → 14 → 0 → 65 → 67 → 98 → 122 →
124 → 183.

Advantage: Reduces variance in response time.

Disadvantage: Long waiting time for requests arriving for locations the disk arm has just passed.
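
A sketch of SCAN in Python, assuming the head first moves toward cylinder 0 (the lowest cylinder) before reversing, as in the example above:

    def scan(head, requests, lowest=0):
        down = sorted(r for r in requests if r <= head)   # served on the way toward 0
        up = sorted(r for r in requests if r > head)      # served after reversing
        order = list(reversed(down))
        if lowest not in down:
            order.append(lowest)                          # arm travels all the way to the end
        return order + up

    print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
    # [37, 14, 0, 65, 67, 98, 122, 124, 183]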

4. C-SCAN (Circular SCAN) Scheduling

Description: Moves the disk arm in one direction, servicing requests, and then returns to the beginning without servicing requests on the return trip.
Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53 and moves towards 0.
o Order of servicing: 53 → 37 → 14 → 0 → 183 → 124 → 122 → 98 →
67 → 65.

Advantage: Provides a more uniform wait time compared to SCAN.

Disadvantage: Can still have longer wait times than other, more sophisticated algorithms.

5. LOOK Scheduling

Description: Similar to SCAN, but the disk arm only goes as far as the furthest request in each direction before reversing.
Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53 and moves towards 0.
o Order of servicing: 53 → 37 → 14 → 65 → 67 → 98 → 122 → 124 →
183.

Advantage: More efficient than SCAN because it does not travel to the end of the disk unless necessary.

Disadvantage: Can still cause long waits for requests just missed by the disk arm.
6. C-LOOK Scheduling

Description: Similar to C-SCAN, but the disk arm only goes as far as the furthest request in its direction of travel before jumping directly back to the furthest pending request on the other side.
Example:
o Requests: [98, 183, 37, 122, 14, 124, 65, 67]
o Head starts at 53 and moves towards 0.
o Order of servicing: 53 → 37 → 14 → 183 → 124 → 122 → 98 → 67
→ 65.

Advantage: Provides uniform wait times and avoids unnecessary traversal of the disk.

Disadvantage: Can cause longer average wait times compared to algorithms like SSTF.
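
A sketch of C-LOOK in Python, assuming the head first moves toward 0 and then jumps directly to the furthest request on the other side, matching the example above:

    def c_look(head, requests):
        down = sorted((r for r in requests if r <= head), reverse=True)   # toward 0
        rest = sorted((r for r in requests if r > head), reverse=True)    # after the jump
        return down + rest    # jump from the lowest request up to the highest, keep moving down

    print(c_look(53, [98, 183, 37, 122, 14, 124, 65, 67]))
    # [37, 14, 183, 124, 122, 98, 67, 65]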

Q ] Explain the concept of paging with example?

Ans:- Paging is a memory management technique that helps in retrieving processes from secondary memory in the form of pages. It eliminates the need for contiguous allocation of memory to processes. In paging, processes are divided into equal parts called pages, and main memory is also divided into equal parts, each of which is called a frame.

Each page is stored in one of the frames of the main memory whenever required, so the size of a frame is equal to the size of a page. Pages of a process can be stored in non-contiguous locations in the main memory.
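
A small worked sketch in Python of how a logical address is split into a page number and an offset and then mapped to a frame; the page size and page-table contents are assumed example values:

    PAGE_SIZE = 1024                       # assumed page/frame size in bytes
    page_table = [7, 2, 9, 4]              # page i of the process lives in frame page_table[i]

    def logical_to_physical(logical_address):
        page, offset = divmod(logical_address, PAGE_SIZE)
        frame = page_table[page]           # frames need not be contiguous
        return frame * PAGE_SIZE + offset

    print(logical_to_physical(2050))       # page 2, offset 2 -> frame 9 -> physical 9218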
