
THEORY ASSIGNMENT

 INTERIM SEMESTER 2024-25

Submitted to: Dr. Suneet Joshi


Submitted by:
Registration No:
Q1: What is Memory Management? How is it classified?
Memory Management is the process of controlling how a computer's memory is used.
Think of it as a system that decides where and how different programs and files are
stored in the computer's memory and makes sure they do not interfere with one
another. It ensures every program gets the memory it needs and releases it when it is
no longer in use.
Types of Memory Management Techniques:
1. Contiguous Memory Allocation:
o Here, each program gets a continuous block of memory.
o It can be done in two ways:
 Single-Partition Allocation: The whole memory is given to one
program.
 Multiple-Partition Allocation: Memory is divided into multiple
parts (or "slots"), and each slot holds a program.
2. Non-Contiguous Memory Allocation:
o Memory is not given in one continuous block. Instead, it is broken into
smaller pieces and given to different programs.
o It can be done in two ways:
 Paging: Programs are split into fixed-sized blocks (called "pages")
that fit into memory "frames" of the same size.
 Segmentation: Here, programs are divided into logical parts (like
functions, arrays, or variables) instead of fixed-size blocks.
 Paging + Segmentation: A mix of both, where the system gets the
benefits of both methods.
3. Dynamic Memory Allocation:
o Memory is allocated as and when needed, instead of being fixed
beforehand.
o It has two methods:
 Stack Allocation: Memory is allocated in a "last-in, first-out" style,
like stacking plates.
 Heap Allocation: Memory is given as needed and can be freed up
when no longer required, much like booking and canceling hotel
rooms.
4. Virtual Memory:
o Here, your system creates the illusion of having more memory than it
actually does.
o It uses storage (like a hard disk) to act as temporary RAM, so larger
programs can still run.

Q2: What is Paging and Segmentation?


1. Paging:
o What it means: Imagine you have a huge book (a program) that you
want to store in a bookshelf (the computer's memory). Instead of
placing the whole book on one shelf, you tear it into equal-sized pages
and put them on any available shelves.
o How it works: Each "page" of the program is stored in a frame of the
same size in memory. When you need a page, you check the page
number and find where it is stored.
o Benefits: It makes it easier to use the memory space efficiently.
o Drawbacks: Sometimes, even if a page doesn't fully fill a frame, the extra
space in the frame goes to waste. This is called internal fragmentation.
2. Segmentation:
o What it means: Instead of breaking up a program into fixed-size pages, it
is broken into meaningful parts like "function A," "function B," or "data
array."
o How it works: Each part is stored in memory as a segment, and the
system keeps track of where each segment is stored.
o Benefits: It is more intuitive since it divides memory according to how
programs are naturally structured.
o Drawbacks: If one segment is too large, it might not fit into memory
properly, leaving gaps. This causes external fragmentation.
Feature                 Paging                      Segmentation
How it divides memory   Into equal-sized pages      Into logical parts (like functions)
Size                    Fixed-size                  Variable-size
Address type            Page number + offset        Segment number + offset
Fragmentation           Internal (wasted space)     External (gaps in memory)
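As a rough illustration of the "page number + offset" idea above, here is a minimal Python sketch of paging address translation. The page size (1 KB) and the page-table contents are made up for the example.

```python
# Minimal sketch of paging address translation (hypothetical page table and page size).
PAGE_SIZE = 1024                      # assume 1 KB pages
page_table = {0: 5, 1: 9, 2: 1}       # page number -> frame number (made-up mapping)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]   # raises KeyError if the page is not mapped
    return frame * PAGE_SIZE + offset

print(translate(2100))  # page 2, offset 52 -> frame 1 -> physical address 1076
```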

Q3: What is the difference between Physical and Logical Address Spaces?
When you run a program, the addresses (memory locations) you see and use are
different from the actual physical addresses in your computer’s memory. This
difference exists to keep things organized and make multitasking possible.
1. Logical Address Space:
o This is the address that your program thinks it’s using.
o It’s like you telling someone, "I live on 7th Street, Apartment 21." You
don’t care where the apartment building is physically located in the city.
o The logical address is created by the CPU and is sometimes called the
virtual address.
2. Physical Address Space:
o This is the real address where the data is stored in the computer’s
memory (RAM).
o It’s like the actual location of the apartment on Google Maps.
o This address is managed by the Memory Management Unit (MMU),
which translates logical addresses into physical addresses.
Example:
 Imagine a book (a program) that wants to store page 4 in memory.
 The logical address (that the program "sees") is Page 4, Line 5.
 But in reality, page 4 might be stored in memory location frame 9 at address
9005.
 The MMU translates "Page 4, Line 5" into "9005" in physical memory.

Aspect         Logical Address                Physical Address
Definition     Address seen by the program    Actual location in physical memory
Generated by   CPU                            MMU (Memory Management Unit)
Access         Used by programs               Used by hardware

Q.4) Consider the following segment table:


Segment Base Length
0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96
What are the physical addresses for the following logical addresses?
a. 0, 430
b. 1, 10
c. 2, 500
d. 3, 400
e. 4, 112
a. Logical Address: (0, 430)
 Segment 0 Base Address: 219
 Offset: 430
 Segment Length: 600
Since the offset 430 is less than the segment length 600, the address is valid.
Physical Address = Base Address + Offset
Physical Address = 219 + 430 = 649
Physical Address for (0, 430) = 649

b. Logical Address: (1, 10)


 Segment 1 Base Address: 2300
 Offset: 10
 Segment Length: 14
Since the offset 10 is less than the segment length 14, the address is valid.
Physical Address = Base Address + Offset
Physical Address=2300+10=2310
Physical Address for (1, 10) = 2310

c. Logical Address: (2, 500)


 Segment 2 Base Address: 90
 Offset: 500
 Segment Length: 100
Since the offset 500 is greater than the segment length 100, this address is invalid.
Physical Address for (2, 500): Invalid (Offset exceeds segment length)
d. Logical Address: (3, 400)
 Segment 3 Base Address: 1327
 Offset: 400
 Segment Length: 580
Since the offset 400 is less than the segment length 580, the address is valid.
Physical Address = Base Address + Offset
Physical Address=1327+400=1727
Physical Address for (3, 400) = 1727

e. Logical Address: (4, 112)


 Segment 4 Base Address: 1952
 Offset: 112
 Segment Length: 96
Since the offset 112 is greater than the segment length 96, this address is invalid.
Physical Address for (4, 112): Invalid (Offset exceeds segment length)

Summary of Results:
Logical Address (Segment, Offset) Physical Address
(0, 430) 649
(1, 10) 2310
(2, 500) Invalid (Offset > Length)
(3, 400) 1727
(4, 112) Invalid (Offset > Length)
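The same lookup can be expressed as a short Python sketch that reproduces the results above; the segment_table dictionary simply holds the (base, length) pairs from the question.

```python
# Sketch of segment-table translation for the Q.4 entries: valid if offset < length.
segment_table = {0: (219, 600), 1: (2300, 14), 2: (90, 100), 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:              # offset outside the segment -> invalid reference
        return None
    return base + offset

for seg, off in [(0, 430), (1, 10), (2, 500), (3, 400), (4, 112)]:
    print((seg, off), "->", translate(seg, off) or "invalid")
# (0, 430) -> 649, (1, 10) -> 2310, (2, 500) -> invalid, (3, 400) -> 1727, (4, 112) -> invalid
```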
Q.5) How Virtual Memory works? What are its advantages and
disadvantages?
Virtual Memory is a memory management technique that gives an application the
illusion of having a large, contiguous block of memory, even though it might be
physically fragmented or much smaller. It uses paging or segmentation to move parts
of programs or data between the RAM and disk storage (usually called the swap
space or page file). Virtual memory allows for larger addressable memory than
physically available, as it uses disk space to simulate extra memory.

How it works:
 The operating system divides memory into pages or segments.
 A page table keeps track of where virtual pages are stored in physical memory.
 When a program accesses data, the memory manager translates the virtual
address to the corresponding physical address.
 If the required page isn't in RAM, a page fault occurs, and the page is loaded
from disk.
Advantages:
1. More memory availability: Programs can use more memory than is physically
available by swapping pages to the disk.
2. Isolation: Virtual memory isolates processes, preventing them from directly
accessing each other’s memory.
3. Better memory utilization: Not all data needs to be in RAM simultaneously, so
memory is used more efficiently.
4. Security: Protection is provided, as each process cannot directly access
another's memory.
Disadvantages:
1. Slower performance: Disk access is much slower than RAM access, so if the
system relies too much on swapping, it can severely slow down the system
(known as thrashing).
2. Complexity: Managing virtual memory requires complex hardware and
software, such as page tables and memory management units (MMUs).
3. Overhead: The overhead involved in swapping pages in and out of memory can
reduce performance, especially when the system is low on physical memory.
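A minimal sketch of the demand-paging idea described in "How it works", in Python: a page is mapped only on its first access, and each first access counts as a page fault. The page_table dictionary and the assumption that free frames are always available are simplifications for illustration.

```python
# Rough sketch of demand paging: pages are loaded only when first touched.
page_table = {}          # virtual page -> frame, filled lazily
next_free_frame = 0
page_faults = 0

def access(page):
    global next_free_frame, page_faults
    if page not in page_table:               # page fault: bring the page in from "disk"
        page_faults += 1
        page_table[page] = next_free_frame   # no replacement shown; assumes free frames exist
        next_free_frame += 1
    return page_table[page]

for p in [0, 1, 0, 2, 1, 3]:
    access(p)
print("page faults:", page_faults)           # 4 (first touch of pages 0, 1, 2, 3)
```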

Q.6) Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
A. 120
B. 122
C. 124
D. 118
Given:
 TLB search time = 10 ms
 Physical memory access time = 80 ms
 TLB hit ratio = 0.6
Formula for Effective Memory Access Time (EMAT):
EMAT = (TLB hit ratio × (TLB access time + Memory access time)) + ((1 − TLB hit ratio) × (TLB access time + 2 × Memory access time))
Substituting the given values:
EMAT = (0.6 × (10 + 80)) + (0.4 × (10 + 2 × 80))
EMAT = (0.6 × 90) + (0.4 × 170)
EMAT = 54 + 68 = 122 ms
Answer: 122 ms (Option B)
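The same arithmetic as a short Python check, assuming (as above) that a TLB miss costs one extra memory access for the page-table lookup:

```python
# Reproduces the Q.6 calculation (times in ms); on a TLB miss the in-memory
# page table is read first, then the data, hence the extra memory access.
tlb_time, mem_time, hit_ratio = 10, 80, 0.6
emat = hit_ratio * (tlb_time + mem_time) + (1 - hit_ratio) * (tlb_time + 2 * mem_time)
print(emat)   # 122.0
```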
Q.7) What is Demand Paging, Page Fault and Thrashing?
 Demand Paging: A system in which pages are only loaded into memory when
they are needed, i.e., when a program accesses a page that isn't in memory, a
page fault occurs, and the page is then loaded from secondary storage (e.g.,
disk).
 Page Fault: This occurs when a program tries to access a page that is not in
memory. The operating system then loads the page from disk into RAM.
 Thrashing: This occurs when the system spends most of its time swapping data
between RAM and disk, due to a high page fault rate. It leads to a severe
slowdown as the system constantly swaps rather than performing useful work.

Q.8) Describe the Following Page Replacement Algorithms


a) Optimal Page Replacement Algorithm
 The Optimal Page Replacement Algorithm selects the page that will not be
used for the longest period of time in the future. It minimizes page faults but is
not practical because it requires knowing future references, which is not
possible in real life.
b) First In First Out (FIFO) Page Replacement Algorithm
 The FIFO Algorithm replaces the oldest page in memory. The page that has
been in memory the longest is replaced first.
 Drawback: It doesn't consider how frequently or recently a page is accessed,
leading to inefficient replacement in some cases (can cause Belady’s Anomaly).
c) Least Recently Used (LRU) Page Replacement Algorithm
 The LRU Algorithm replaces the page that has not been used for the longest
period of time. It keeps track of the order in which pages are accessed and
replaces the least recently used page.
 Advantage: It tends to perform better than FIFO in most cases because it takes
usage into account.
Q.9) Page Faults for the Following Replacement Algorithms
Page Reference String: 7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1


Steps

For each algorithm:

1. FIFO (First In, First Out): Replace the oldest page in the frame when a page fault occurs.
2. LRU (Least Recently Used): Replace the page that was least recently used.
3. Optimal: Replace the page that will not be used for the longest time in the future.

Table Structure

1. Page reference being accessed.


2. Frames after each reference.
3. Whether a Page Fault occurred.

Table

FIFO Algorithm

Reference  Frame 1  Frame 2  Frame 3  Page Fault

7          7        -        -        Yes
2          7        2        -        Yes
3          7        2        3        Yes
1          1        2        3        Yes
2          1        2        3        No
5          1        5        3        Yes
3          1        5        3        No
4          1        5        4        Yes
6          6        5        4        Yes
7          6        7        4        Yes
7          6        7        4        No
1          6        7        1        Yes
0          0        7        1        Yes
5          0        5        1        Yes
4          0        5        4        Yes
6          6        5        4        Yes
2          6        2        4        Yes
3          6        2        3        Yes
0          0        2        3        Yes
1          0        1        3        Yes
Total Page Faults (FIFO): 17

LRU Algorithm

Reference  Frame 1  Frame 2  Frame 3  Page Fault

7          7        -        -        Yes
2          7        2        -        Yes
3          7        2        3        Yes
1          1        2        3        Yes
2          1        2        3        No
5          1        2        5        Yes
3          3        2        5        Yes
4          3        4        5        Yes
6          3        4        6        Yes
7          7        4        6        Yes
7          7        4        6        No
1          7        1        6        Yes
0          7        1        0        Yes
5          5        1        0        Yes
4          5        4        0        Yes
6          5        4        6        Yes
2          2        4        6        Yes
3          2        3        6        Yes
0          2        3        0        Yes
1          1        3        0        Yes

Total Page Faults (LRU): 18

Optimal Algorithm

Reference  Frame 1  Frame 2  Frame 3  Page Fault

7          7        -        -        Yes
2          7        2        -        Yes
3          7        2        3        Yes
1          1        2        3        Yes
2          1        2        3        No
5          1        5        3        Yes
3          1        5        3        No
4          1        5        4        Yes
6          1        5        6        Yes
7          1        5        7        Yes
7          1        5        7        No
1          1        5        7        No
0          1        5        0        Yes
5          1        5        0        No
4          1        4        0        Yes
6          1        6        0        Yes
2          1        2        0        Yes
3          1        3        0        Yes
0          1        3        0        No
1          1        3        0        No

Total Page Faults (Optimal): 13
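The tables above can be reproduced with a short simulation. The following Python sketch implements FIFO, LRU and Optimal for 3 frames and prints the fault counts (17, 18 and 13); the function names are chosen for illustration only.

```python
# Sketch of the three page-replacement simulations behind the tables above (3 frames).
from collections import deque

ref = [7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1]
FRAMES = 3

def fifo(ref):
    frames, queue, faults = set(), deque(), 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == FRAMES:
                frames.discard(queue.popleft())   # evict the oldest loaded page
            frames.add(p); queue.append(p)
    return faults

def lru(ref):
    frames, faults = [], 0                        # list ordered from LRU to MRU
    for p in ref:
        if p in frames:
            frames.remove(p)
        else:
            faults += 1
            if len(frames) == FRAMES:
                frames.pop(0)                     # evict the least recently used page
        frames.append(p)
    return faults

def optimal(ref):
    frames, faults = [], 0
    for i, p in enumerate(ref):
        if p in frames:
            continue
        faults += 1
        if len(frames) == FRAMES:
            # evict the page whose next use is farthest away (or that is never used again)
            victim = max(frames, key=lambda q: ref.index(q, i + 1) if q in ref[i + 1:] else len(ref))
            frames.remove(victim)
        frames.append(p)
    return faults

print(fifo(ref), lru(ref), optimal(ref))          # 17 18 13
```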

Q.10) Describe Belady’s Anomaly with Example


Belady’s Anomaly is a phenomenon in which increasing the number of page frames
can actually increase the number of page faults for certain page reference strings.
This contradicts the usual expectation that adding more frames should reduce page
faults.
Example: With the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, FIFO causes 9 page
faults with 3 frames but 10 page faults with 4 frames, as the sketch below confirms.
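A quick way to see the anomaly is to simulate FIFO on that string with 3 and then 4 frames; the Python sketch below (helper name is illustrative) prints 9 and 10 faults respectively.

```python
# FIFO on the classic Belady string: more frames, yet more faults.
from collections import deque

def fifo_faults(ref, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict the oldest page
            frames.add(p); queue.append(p)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3), fifo_faults(ref, 4))   # 9 10
```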

Q.11) What is Segmentation? Why is it Required? What Are Its Advantages and Disadvantages? Differentiate Between Paging and Segmentation.
 Segmentation: It divides the memory into variable-sized segments based on
logical divisions (e.g., functions, arrays, stacks). Each segment has its own base
address and length.
 Why it's required: It allows for more flexible memory management, as
programs are divided into segments according to their structure (unlike paging,
which divides memory into fixed-size blocks).
Advantages:
 Logical division of programs.
 Easy sharing of segments between processes.
Disadvantages:
 External Fragmentation: Free memory may be fragmented into small blocks.
Difference between Paging and Segmentation:
Feature             Paging                          Segmentation
Size of divisions   Fixed-size pages                Variable-sized segments
Memory usage        No external fragmentation       External fragmentation may occur
Access              Simplified (using page table)   More logical (using segment table)

Q.12) What is Segmented Paging? What Are Its Advantages and Disadvantages?
Segmented Paging combines paging and segmentation. A program is divided into
segments, and each segment is further divided into pages.
Advantages:
 Reduces the problems of external fragmentation (from segmentation) and
internal fragmentation (from paging).
 Allows logical program structure (via segments) while still benefiting from the
efficiency of paging.
Disadvantages:
 More complex management due to two levels of memory translation (segment
table and page table).
 Can lead to higher overhead in managing both segmentation and paging.

Q.13) Describe Contiguous Memory Allocation


In Contiguous Memory Allocation, each process is allocated a single continuous block
of memory. The system keeps track of free memory spaces and assigns contiguous
blocks to processes.
Advantages:
 Simple to implement.
 Efficient for small processes.
Disadvantages:
 Can lead to external fragmentation.
 Memory allocation may be inefficient if there isn’t enough space for large
processes.

Q.14) What is Fragmentation? What Are the Two Types of Fragmentation?
Fragmentation is the phenomenon where memory is inefficiently utilized due to the
way it is allocated.
 Internal Fragmentation: Occurs when memory blocks are allocated but are not
fully used, leaving unused space inside allocated blocks.
 External Fragmentation: Occurs when there are enough total free memory
blocks, but they are scattered in small chunks that cannot be used to fulfil a
request.

Q.15) Describe the Following in the Context of Disk Scheduling


a.) Seek Time
b.) Rotational Latency
c.) Transfer Time
d.) Disk Access Time
e.) Disk Response Time
 a) Seek Time: The time it takes for the disk arm to move to the track where the
data is stored.
 b) Rotational Latency: The time it takes for the desired disk sector to rotate
into position under the disk head.
 c) Transfer Time: The time it takes to transfer data once the disk head is in
position.
 d) Disk Access Time: The sum of seek time, rotational latency, and transfer
time.
 e) Disk Response Time: The total time from issuing a disk I/O request to
receiving the data, including all the delays (seek time, rotational latency,
transfer time).

Q.16) Describe the following disk scheduling algorithms with examples:
a.) FCFS scheduling algorithm
b.) SSTF (shortest seek time first) algorithm
c.) SCAN scheduling
d.) C-SCAN scheduling
e.) LOOK scheduling
f.) C-LOOK scheduling

a) FCFS (First-Come, First-Served) Scheduling

 Description: This algorithm serves disk requests in the order they arrive, without
optimization for seek time.
 Advantage: Simple and fair.
 Disadvantage: Can lead to long seek times if requests are far apart.

Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53
Step   Head Movement   Distance Moved
1 53 → 98 45
2 98 → 183 85
3 183 → 37 146
4 37 → 122 85
5 122 → 14 108
6 14 → 124 110
7 124 → 65 59
8 65 → 67 2

Total Head Movement: 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640

b) SSTF (Shortest Seek Time First) Scheduling

 Description: The request closest to the current head position is served first.
 Advantage: Reduces total seek time.
 Disadvantage: May cause starvation for distant requests.

Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53

Step   Head Movement   Distance Moved
1 53 → 65 12
2 65 → 67 2
3 67 → 37 30
4 37 → 14 23
5 14 → 98 84
6 98 → 122 24
7 122 → 124 2
8 124 → 183 59

Total Head Movement: 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236

c) SCAN Scheduling

 Description: The head moves in one direction (toward one end of the disk), serving all
requests along the way, then reverses direction.
 Advantage: Prevents starvation and is fair.
 Disadvantage: Higher seek time compared to LOOK.
Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53, direction: upward

Step Head Movement


1 53 → 65
2 65 → 67
3 67 → 98
4 98 → 122
5 122 → 124
6 124 → 183
Reverse direction
7 183 → 37
8 37 → 14

Total Head Movement: (65−53) + (67−65) + (98−67) + (122−98) + (124−122) + (183−124) + (183−37) + (37−14) = 12 + 2 + 31 + 24 + 2 + 59 + 146 + 23 = 299
(Since no maximum track number is given, the head is assumed to reverse at 183, the last request in the upward direction.)

d) C-SCAN (Circular SCAN) Scheduling

 Description: Similar to SCAN, but the head moves in one direction only. When it reaches the
end, it jumps back to the start without serving requests during the return.
 Advantage: Provides uniform wait time for requests.
 Disadvantage: Slightly higher seek time compared to SCAN.

Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53, direction: upward

Step Head Movement


1 53 → 65
2 65 → 67
3 67 → 98
4 98 → 122
5 122 → 124
6 124 → 183
Jump back
7 0 → 14
8 14 → 37

Total Head Movement: (65−53) + (67−65) + (98−67) + (122−98) + (124−122) + (183−124) + (183−0) + (14−0) + (37−14) = 12 + 2 + 31 + 24 + 2 + 59 + 183 + 14 + 23 = 350
(This includes the jump from 183 back to track 0.)

e) LOOK Scheduling

 Description: Similar to SCAN, but the head only moves as far as the last request in the
direction before reversing (does not go to the end of the disk).
 Advantage: Reduces unnecessary head movement compared to SCAN.
 Disadvantage: May not fully optimize seek time.

Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53, direction: upward

Step Head Movement


1 53 → 65
2 65 → 67
3 67 → 98
4 98 → 122
5 122 → 124
6 124 → 183
Reverse direction
7 183 → 37
8 37 → 14

Total Head Movement: (183−53) + (183−14) = 130 + 169 = 299. Because the SCAN example above already reverses at the last request (183), LOOK produces the same total here.

f) C-LOOK Scheduling

 Description: Similar to C-SCAN, but the head only moves as far as the last request in the
direction before jumping back to the start.
 Advantage: Reduces unnecessary head movement compared to C-SCAN.
 Disadvantage: Slightly higher seek time compared to LOOK.

Example:

Disk requests: 98, 183, 37, 122, 14, 124, 65, 67


Initial head position: 53, direction: upward

Step Head Movement


1 53 → 65
2 65 → 67
3 67 → 98
4 98 → 122
5 122 → 124
6 124 → 183
Jump back
7 183 → 14
8 14 → 37

Total Head Movement: (183−53) + (183−14) + (37−14) = 130 + 169 + 23 = 322 (including the jump from 183 back to the lowest pending request at 14).

Comparison of Algorithms

Algorithm Total Head Movement (Approx.)


FCFS 640
SSTF 236
SCAN 299
C-SCAN 350
LOOK 299
C-LOOK 322

These algorithms balance seek time optimization and fairness differently, depending on use-case
priorities.
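As a quick check of the first two totals, the following Python sketch computes FCFS and SSTF head movement for the request list used in the examples (the function names are illustrative):

```python
# Head-movement totals for FCFS and SSTF on the example request list (head starts at 53).
requests = [98, 183, 37, 122, 14, 124, 65, 67]
start = 53

def fcfs(reqs, head):
    total = 0
    for r in reqs:                      # serve requests strictly in arrival order
        total += abs(head - r)
        head = r
    return total

def sstf(reqs, head):
    pending, total = list(reqs), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))   # closest request first
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

print(fcfs(requests, start), sstf(requests, start))   # 640 236
```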

Q.17) Consider the following disk request sequence for a disk with 100 tracks:
45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25
The head pointer starts at 50 and moves in the left direction. Find the number of head movements in cylinders using FCFS scheduling.

Given:

 Disk request sequence: 45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25
 Initial head position: 50
 Head direction: Left (but irrelevant for FCFS as it processes sequentially).
Steps:

1. Start at the initial head position (50).


2. Move the head to each request in the sequence, calculating the absolute difference between the
current position and the next request.

Calculation:

Step   Current Position   Next Request   Movement (in cylinders)

1      50                 45             |50 − 45| = 5
2      45                 21             |45 − 21| = 24
3      21                 67             |21 − 67| = 46
4      67                 90             |67 − 90| = 23
5      90                 4              |90 − 4| = 86
6      4                  50             |4 − 50| = 46
7      50                 89             |50 − 89| = 39
8      89                 52             |89 − 52| = 37
9      52                 61             |52 − 61| = 9
10     61                 87             |61 − 87| = 26
11     87                 25             |87 − 25| = 62

Total Head Movement:

5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62 = 403 cylinders

Answer:

The total head movement is 403 cylinders.

Q.18) What is RAID? How does RAID work? What are the various levels of RAID? What are the advantages and disadvantages of RAID?

What is RAID?
RAID (Redundant Array of Independent/Inexpensive Disks) is a data storage virtualization
technology that combines multiple physical hard drives into one logical unit. It enhances data
redundancy and performance by distributing data across drives or replicating data on multiple
drives.

How RAID Works

RAID works by dividing or duplicating data across multiple drives. Depending on the level, it
achieves:

 Redundancy: Ensuring data is not lost in case of disk failure.


 Performance: Improving read/write speeds by distributing operations across disks.

RAID Levels

1. RAID 0 (Striping): Splits data across multiple drives for performance but offers no redundancy.
2. RAID 1 (Mirroring): Duplicates data across drives, providing redundancy.
3. RAID 5 (Striping with Parity): Uses striping and stores parity information, allowing recovery from a
single disk failure.
4. RAID 6 (Double Parity): Similar to RAID 5 but can recover from two simultaneous failures.
5. RAID 10 (1+0): Combines RAID 1 and RAID 0, offering both redundancy and performance.

Advantages

 Improved performance (RAID 0, RAID 10).


 High data redundancy (RAID 1, RAID 5, RAID 6).
 Fault tolerance reduces downtime.

Disadvantages

 Expensive: Requires multiple drives.


 Complexity in setup and management.
 Limited improvement in performance with redundancy-based levels.
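As a toy illustration of how parity-based levels such as RAID 5 recover data, the XOR of the surviving data blocks and the parity block reconstructs a lost block. The block values below are made up for the example.

```python
# RAID 5-style parity in miniature: parity = XOR of the data blocks.
d0, d1, d2 = 0b1011, 0b0110, 0b1100     # data blocks on three drives (made-up values)
parity = d0 ^ d1 ^ d2                   # parity block stored on a fourth drive

lost = d1                               # suppose the drive holding d1 fails
rebuilt = d0 ^ d2 ^ parity              # XOR of the survivors recovers the lost block
print(rebuilt == lost)                  # True
```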

Q.19) Explain File System Structure. What happens when you turn on your computer?

The file system organizes how data is stored and retrieved on a storage device, such as a hard drive
or SSD. Its primary role is to ensure files are systematically managed, allowing users and
applications to locate and modify files efficiently. A typical file system includes the following
components:
1. Boot Block: This part is essential for starting the operating system. It contains the bootstrap code
that initializes the system during startup.
2. Superblock: The superblock holds critical metadata about the file system, such as its size, available
blocks, and information about the inode table.
3. Inode Table: Inodes are structures that store metadata for files and directories, such as permissions,
sizes, and pointers to data blocks.
4. Data Blocks: These blocks store the actual content of files.
5. Directory Structure: This maps file names to the corresponding inode entries, allowing quick access
to file locations.

Modern file systems, such as NTFS, ext4, and APFS, implement these components with added
features like journaling and encryption.

What Happens When You Turn On Your Computer?

When a computer powers up, several critical steps occur to load the operating system and make
the system ready for use:

1. Power-On Self-Test (POST): The computer’s hardware components are tested for functionality,
including RAM, CPU, and connected peripherals.
2. BIOS/UEFI Execution: The firmware initializes hardware and identifies bootable devices like hard
drives, USBs, or network interfaces.
3. Bootloader Activation: A program, such as GRUB or Windows Boot Manager, is loaded into memory.
It selects and loads the operating system kernel.
4. Kernel Initialization: The OS kernel is loaded into memory and begins managing hardware. It mounts
the root file system and starts system processes.
5. System Services Start: Background services, like networking and user interface modules, are
initialized.
6. User Login: The system presents a login screen, allowing access to the operating system.

These processes ensure that the computer transitions from a powered-off state to a fully
operational environment, ready to handle user input and tasks.

Q.20) How is disk space allocated to files? Describe the following methods which can be used for allocation.
1. Contiguous Allocation.
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode

Disk space allocation refers to the method by which a file system organizes the storage of file data
on physical or virtual disks. This process ensures efficient storage, quick access, and minimal
wastage. Below are the commonly used methods:

1. Contiguous Allocation

In this method, a file is stored in consecutive blocks of the disk.

 Advantages: Fast access as the blocks are sequential.


 Disadvantages: Causes fragmentation, requiring defragmentation over time.
 Example: A file requiring 5 blocks might occupy blocks 1-5.

2. Extents

Extents are an extension of contiguous allocation. A file begins with a contiguous set of blocks, and
if more space is needed, additional contiguous extents are allocated.

 Advantages: Reduces fragmentation compared to pure contiguous allocation.


 Disadvantages: Metadata management for extents increases complexity.

3. Linked Allocation

Each block of a file contains a pointer to the next block.

 Advantages: Eliminates external fragmentation.


 Disadvantages: Slower random access due to pointer traversal.
 Example: A file with blocks A, B, and C links them sequentially via pointers.

4. Clustering

Multiple blocks are grouped into clusters, treating them as single units.

 Advantages: Improves performance for large files.


 Disadvantages: Wastes space if files are smaller than a cluster.

5. File Allocation Table (FAT)

Maintains a table mapping each block to the next in a file’s sequence.


 Advantages: Simplifies locating files.
 Disadvantages: Accessing the FAT for every operation slows down performance.

6. Indexed Allocation

An index block contains pointers to the data blocks of a file.

 Advantages: Facilitates random access.


 Disadvantages: Index block overhead for small files.

7. Linked Indexed Allocation

Combines linked and indexed allocation. Index blocks are linked together if additional storage is
required.

 Advantages: Efficient for larger files.


 Disadvantages: Complexity in metadata.

8. Multilevel Indexed Allocation

Uses multiple levels of index blocks. For large files, the primary index points to secondary indices,
which point to data blocks.

 Advantages: Supports very large files.


 Disadvantages: Increased lookup time.

9. Inode

UNIX-based systems use inodes that store metadata and pointers to data blocks.

 Advantages: Efficiently handles both small and large files.


 Disadvantages: Fixed number of inodes can limit file creation.
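A toy Python sketch of the FAT idea from method 5: the table maps each block of a file to the next one, so reading a file means following the chain from its starting block. The block numbers and end-of-file marker are made up for illustration.

```python
# Toy FAT: the table maps each block to the next block of the same file (-1 = end of file).
fat = {4: 7, 7: 2, 2: -1}        # a file occupying blocks 4 -> 7 -> 2 (made-up layout)

def file_blocks(start_block):
    blocks, b = [], start_block
    while b != -1:               # follow the chain until the end-of-file marker
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(4))            # [4, 7, 2]
```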

Q.21) Describe the following things about I/O systems in an Operating System:
1. I/O devices
2. I/O system calls
3. DMA controllers
4. Memory-Mapped I/O
5. Direct Virtual Memory Address
1. I/O Devices

Input/Output devices interact with the system for data exchange. Examples include keyboards,
mice, printers, and disks. Each device communicates with the CPU through its own driver.

2. I/O System Calls

These are programming interfaces allowing applications to interact with I/O devices without direct
hardware control. Common system calls include read(), write(), and ioctl().

3. DMA Controllers

Direct Memory Access (DMA) controllers allow data transfer between memory and devices without
CPU involvement, reducing overhead and improving speed.

 Example: Transferring a large file from disk to RAM is faster using DMA.

4. Memory-Mapped I/O

In this technique, device data is mapped to a portion of the system’s memory space. The CPU
accesses devices as if interacting with memory.

 Advantages: Simplifies device communication.


 Disadvantages: Can lead to conflicts in the address space.

5. Direct Virtual Memory Address (DVMA)

DVMA provides a virtualized address space for I/O operations, enabling direct access to memory
without interrupting the CPU’s memory operations.

Q.22) Differentiate between Security and Protection in an Operating System. What is an Access Matrix in an Operating System? Describe the following methods of implementing the access matrix in the operating system.
1. Global Table
2. Access Lists for Objects
3. Capability Lists for Domains
4. Lock-Key Mechanism
Security vs. Protection

 Security: Ensures the system is safe from external threats like malware or unauthorized access.
 Protection: Focuses on internal system resource management, ensuring proper user access rights.

Access Matrix

An access matrix defines rights and permissions of subjects (users) over objects (resources). Rows
represent users, and columns represent resources.

Implementation Methods

1. Global Table: A single table stores all access rights.


o Advantages: Simple implementation.
o Disadvantages: Inefficient for large systems.

2. Access Lists for Objects: Each object maintains a list of authorized users and their rights.
o Advantages: Efficient for managing specific permissions.
o Disadvantages: Overhead in maintaining lists for each object.

3. Capability Lists for Domains: Each user maintains a list of accessible objects.
o Advantages: Efficient for users with many permissions.
o Disadvantages: Complex to implement.

4. Lock-Key Mechanism: Resources are paired with locks, and users are assigned keys to
access them.
o Advantages: Ensures security through pairing.
o Disadvantages: Management of locks and keys becomes cumbersome.
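A small Python sketch of the same matrix viewed both ways (users, objects and rights are hypothetical): slicing it by object gives an access list, slicing it by user gives a capability list.

```python
# One access matrix, two views (hypothetical users, files and rights).
matrix = {("alice", "file1"): {"read", "write"},
          ("bob",   "file1"): {"read"},
          ("alice", "file2"): {"read"}}

def access_list(obj):
    """Access list for an object: which users may do what to it."""
    return {user: rights for (user, o), rights in matrix.items() if o == obj}

def capability_list(user):
    """Capability list for a domain/user: which objects it may use, and how."""
    return {o: rights for (u, o), rights in matrix.items() if u == user}

print(access_list("file1"))      # {'alice': {...}, 'bob': {'read'}}
print(capability_list("alice"))  # {'file1': {...}, 'file2': {'read'}}
```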

Q.23) Describe about Multithread Programming Model.


Multithreading allows multiple threads to execute concurrently within a single process, sharing
resources like memory and files. This improves performance by utilizing CPU cores effectively.
Models

1. One-to-One: Each user thread maps to a kernel thread, enabling true parallelism but with high
overhead.
2. Many-to-One: All user threads map to a single kernel thread. Simpler but lacks parallelism.
3. Many-to-Many: Multiple user threads map to multiple kernel threads, balancing flexibility and
performance.

Multithreading finds applications in web servers, gaming, and data processing, where tasks can be
split for simultaneous execution.
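A small illustration of threads sharing one process's memory, using Python's threading module; the work being split here (partial sums) is arbitrary and only for demonstration.

```python
# Several threads run inside one process and write results into shared memory.
import threading

results = {}                               # shared memory: all threads write here

def partial_sum(name, numbers):
    results[name] = sum(numbers)           # each thread handles one slice of the work

threads = [threading.Thread(target=partial_sum, args=(i, range(i * 250, (i + 1) * 250)))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()                 # wait for every thread to finish
print(sum(results.values()))               # 499500 == sum(range(1000))
```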

Q.24) Describe how monitors can be applied to solve the dining philosophers problem, with an example.
The dining philosophers problem involves a group of philosophers sitting at a table, where each
philosopher alternates between thinking and eating. Each needs two forks to eat, but adjacent
philosophers share a fork.

Solution Using Monitors

Monitors provide mutual exclusion, ensuring only one philosopher accesses a fork at a time. A
monitor encapsulates:

1. Shared variables representing fork availability.


2. Synchronization methods to acquire or release forks.
3. Condition variables to handle waiting philosophers.

In this solution, philosophers pick up forks only if both are available, otherwise, they wait. This
avoids deadlocks and ensures fairness.
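A monitor-style sketch of this solution in Python, following the classic state/test approach: a single lock plays the role of the monitor, and one condition variable per philosopher handles waiting. The names and the use of Python's threading primitives are illustrative, not a definitive implementation.

```python
# Monitor-style dining philosophers: a philosopher eats only when neither neighbour is eating.
import threading

N = 5
THINKING, HUNGRY, EATING = 0, 1, 2
state = [THINKING] * N
monitor = threading.Lock()                          # mutual exclusion for the "monitor"
cond = [threading.Condition(monitor) for _ in range(N)]

def test(i):
    left, right = (i - 1) % N, (i + 1) % N
    if state[i] == HUNGRY and state[left] != EATING and state[right] != EATING:
        state[i] = EATING
        cond[i].notify()                            # wake philosopher i if they were waiting

def pickup(i):
    with monitor:
        state[i] = HUNGRY
        test(i)
        while state[i] != EATING:                   # both forks not free: wait inside the monitor
            cond[i].wait()

def putdown(i):
    with monitor:
        state[i] = THINKING
        test((i - 1) % N)                           # a neighbour may now be able to eat
        test((i + 1) % N)
```

A philosopher thread would simply loop: think, call pickup(i), eat, then call putdown(i).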

Q.25) Consider the following snapshot of a system:


       Allocation    Max
       A B C D       A B C D
P0     3 0 1 4       5 1 1 7
P1     2 2 1 0       3 2 1 1
P2     3 1 2 1       3 3 2 1
P3     0 5 1 0       4 6 1 2
P4     4 2 1 2       6 3 2 5

Using the banker’s algorithm, determine whether or not each of the following states is unsafe. If the state is safe, illustrate the order in which the processes may complete. Otherwise, illustrate why the state is unsafe.
a) Available = (0, 3, 0, 1)
b) Available = (1, 0, 0, 2)
The Banker's Algorithm is used to determine whether a system is in a safe state by simulating
resource allocation to avoid deadlock. It considers:

 Allocation: Resources currently allocated to each process.


 Max: Maximum resources a process may need.
 Need: Additional resources required by each process (Need = Max - Allocation).
 Available: Resources currently available in the system.

Given the matrices and availability vectors:

Allocation Matrix:

Process   A   B   C   D
P0 3 0 1 4
P1 2 2 1 0
P2 3 1 2 1
P3 0 5 1 0
P4 4 2 1 2

Max Matrix:

Process   A   B   C   D
P0 5 1 1 7
P1 3 2 1 1
P2 3 3 2 1
P3 4 6 1 2
P4 6 3 2 5

Need Matrix (Max - Allocation):

Process   A   B   C   D
P0 2 1 0 3
P1 1 0 0 1
P2 0 2 0 0
P3 4 1 0 2
P4 2 1 1 3

a) Available = (0, 3, 0, 1)

Step 1: Processes P0 and P1

 P0 needs (2, 1, 0, 3), but no units of A are available, so P0 cannot run.
 P1 needs (1, 0, 0, 1), but no units of A are available, so P1 cannot run yet.

Step 2: Process P2

 Need: (0, 2, 0, 0)
 Available: (0, 3, 0, 1)
 P2 can execute. Simulate allocation and release its resources:
o Available after release: (3, 4, 2, 2).

Step 3: Process P1

 Need: (1, 0, 0, 1)
 Available: (3, 4, 2, 2)
 P1 can now execute. Release resources:
o Available after release: (5, 6, 3, 2).

Step 4: Process P3

 Need: (4, 1, 0, 2)
 Available: (5, 6, 3, 2)
 P3 can execute. Release resources:
o Available after release: (5, 11, 4, 2).

Step 5: Processes P0 and P4

 P0 needs (2, 1, 0, 3) and P4 needs (2, 1, 1, 3); both require 3 units of D, but only 2 units of D are available, and no further releases are possible.

Conclusion: The state is unsafe because P0 and P4 can never complete.

b) Available = (1, 0, 0, 2)

Step 1: Process P0

 Need: (2, 1, 0, 3)
 Available: (1, 0, 0, 2)
 P0 cannot execute due to insufficient resources for A and D.

Step 2: Process P1

 Need: (1, 0, 0, 1)
 Available: (1, 0, 0, 2)
 P1 can execute. Release resources:
o Available after release: (3, 2, 1, 2).

Step 3: Process P2

 Need: (0, 2, 0, 0)
 Available: (3, 2, 1, 2)
 P2 can execute. Release resources:
o Available after release: (6, 3, 3, 3).

Step 4: Process P3

 Need: (4, 1, 0, 2)
 Available: (6, 3, 3, 3)
 P3 can execute. Release resources:
o Available after release: (6, 8, 4, 3).

Step 5: Process P4

 Need: (2, 1, 1, 3)
 Available: (6, 8, 4, 3)
 P4 can execute. Release resources:
o Available after release: (10, 10, 5, 5).

Step 6: Process P0

 Need: (2, 1, 0, 3)
 Available: (10, 10, 5, 5)
 P0 can now execute. Release resources:
o Available after release: (13, 10, 6, 9).

Conclusion: The system is safe.

Safe Sequence: P1 → P2 → P3 → P4 → P0.
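The same check can be run as a short Python sketch of the safety algorithm, using the Allocation and Need matrices above; it reports a safe completion order, or nothing if the state is unsafe.

```python
# Banker's safety check for the Q.25 snapshot (matrices copied from above).
allocation = [[3,0,1,4],[2,2,1,0],[3,1,2,1],[0,5,1,0],[4,2,1,2]]
need       = [[2,1,0,3],[1,0,0,1],[0,2,0,0],[4,1,0,2],[2,1,1,3]]

def safe_sequence(available):
    work, finished, order = list(available), [False] * len(need), []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]   # process i finishes, frees memory
                finished[i], progress = True, True
                order.append(f"P{i}")
    return order if all(finished) else None

print(safe_sequence([0, 3, 0, 1]))   # None -> unsafe (P0 and P4 can never finish)
print(safe_sequence([1, 0, 0, 2]))   # ['P1', 'P2', 'P3', 'P4', 'P0'] -> safe
```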
