theory
Summary of Results:
Logical Address (Segment, Offset)    Physical Address
(0, 430)                             649
(1, 10)                              2310
(2, 500)                             Invalid (offset exceeds segment length)
(3, 400)                             1727
(4, 112)                             Invalid (offset exceeds segment length)
Q.5) How does Virtual Memory work? What are its advantages and
disadvantages?
Virtual Memory is a memory management technique that gives an application the
illusion of having a large, contiguous block of memory, even though it might be
physically fragmented or much smaller. It uses paging or segmentation to move parts
of programs or data between the RAM and disk storage (usually called the swap
space or page file). Virtual memory allows for larger addressable memory than
physically available, as it uses disk space to simulate extra memory.
How it works:
The operating system divides memory into pages or segments.
A page table keeps track of where virtual pages are stored in physical memory.
When a program accesses data, the memory manager translates the virtual
address to the corresponding physical address.
If the required page isn't in RAM, a page fault occurs, and the page is loaded
from disk.
Advantages:
1. More memory availability: Programs can use more memory than is physically
available by swapping pages to the disk.
2. Isolation: Virtual memory isolates processes, preventing them from directly
accessing each other’s memory.
3. Better memory utilization: Not all data needs to be in RAM simultaneously, so
memory is used more efficiently.
4. Security: Protection is provided, as each process cannot directly access
another's memory.
Disadvantages:
1. Slower performance: Disk access is much slower than RAM access, so if the
system relies too much on swapping, it can severely slow down the system
(known as thrashing).
2. Complexity: Managing virtual memory requires complex hardware and
software, such as page tables and memory management units (MMUs).
3. Overhead: The overhead involved in swapping pages in and out of memory can
reduce performance, especially when the system is low on physical memory.
Given page reference string: 7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1
Steps
1. FIFO (First In, First Out): Replace the oldest page in the frame when a page fault occurs.
2. LRU (Least Recently Used): Replace the page that was least recently used.
3. Optimal: Replace the page that will not be used for the longest time in the future.
FIFO Algorithm
LRU Algorithm
Optimal Algorithm
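The step-by-step tables are not reproduced here, but all three policies can be simulated on the given reference string. The frame count is not stated above, so 3 frames is assumed:

```python
def fifo(refs, frames):
    """FIFO: evict the page that entered memory earliest."""
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))   # oldest page leaves
            mem.add(p)
            queue.append(p)
    return faults

def lru(refs, frames):
    """LRU: keep the list ordered from least to most recently used."""
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)                   # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                  # least recently used leaves
        mem.append(p)
    return faults

def optimal(refs, frames):
    """Optimal: evict the page whose next use lies farthest in the future."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            nxt = [refs.index(q, i + 1) if q in refs[i + 1:] else float("inf")
                   for q in mem]
            mem.pop(nxt.index(max(nxt)))
        mem.append(p)
    return faults

refs = [7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1]
```

With 3 frames this string produces 17 faults under FIFO, 18 under LRU, and 13 under Optimal, illustrating why Optimal is the (unattainable) lower bound.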
a) FCFS (First Come, First Served) Scheduling
Description: This algorithm serves disk requests in the order they arrive, without
optimizing for seek time.
Advantage: Simple and fair.
Disadvantage: Can lead to long seek times if requests are far apart.
b) SSTF (Shortest Seek Time First) Scheduling
Description: The request closest to the current head position is served first.
Advantage: Reduces total seek time.
Disadvantage: May cause starvation for distant requests.
c) SCAN Scheduling
Description: The head moves in one direction (toward one end of the disk), serving all
requests along the way, then reverses direction.
Advantage: Prevents starvation and is fair.
Disadvantage: Higher seek time compared to LOOK.
Example:
Total Head Movement:
(65−53)+(67−65)+(98−67)+(122−98)+(124−122)+(183−124)+(183−37)+(37−14) = 299 cylinders
(the head sweeps from 53 up to 183, then back down to 14).
d) C-SCAN Scheduling
Description: Similar to SCAN, but the head moves in one direction only. When it reaches the
end, it jumps back to the start without serving requests during the return.
Advantage: Provides uniform wait time for requests.
Disadvantage: Slightly higher seek time compared to SCAN.
Example:
Total Head Movement:
(65−53)+(67−65)+(98−67)+(122−98)+(124−122)+(183−124)+(183−0)+(14−0)+(37−14) = 350 cylinders
(an up-sweep of 130, a jump of 183 back to cylinder 0, then 37 more to serve 14 and 37).
e) LOOK Scheduling
Description: Similar to SCAN, but the head only moves as far as the last request in the
direction before reversing (does not go to the end of the disk).
Advantage: Reduces unnecessary head movement compared to SCAN.
Disadvantage: May not fully optimize seek time.
Example:
Total Head Movement: Same as SCAN without moving beyond the last request.
f) C-LOOK Scheduling
Description: Similar to C-SCAN, but the head only moves as far as the last request in the
direction before jumping back to the start.
Advantage: Reduces unnecessary head movement compared to C-SCAN.
Disadvantage: Slightly higher seek time compared to LOOK.
Example:
Total Head Movement: Similar to LOOK but includes the jump back from the last position to the
start.
Comparison of Algorithms
These algorithms balance seek time optimization and fairness differently, depending on use-case
priorities.
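The SCAN totals above can be reproduced with a short sketch. The request queue is inferred from the worked example (head at cylinder 53; requests 98, 183, 37, 122, 14, 124, 65, 67), and this variant reverses at the last request in each direction, matching the term-by-term sum shown:

```python
def scan_total(requests, head):
    """Sum of head movements: serve requests at/above the head moving up,
    then reverse and serve the remaining requests moving down."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(r - pos)   # distance of this seek
        pos = r
    return total
```

For the example queue this gives 299 cylinders of total head movement.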
Given:
Disk request sequence: 45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25
Initial head position: 50
Head direction: Left (but irrelevant for FCFS as it processes sequentially).
Steps: Serve requests strictly in arrival order, summing the seek distance of each move.
Calculation:
5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62 = 403 cylinders
Answer: Total head movement = 403 cylinders.
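The FCFS calculation above as a minimal sketch:

```python
def fcfs_total(requests, head):
    """Total head movement when requests are served strictly in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)   # seek distance for this request
        head = r
    return total

requests = [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]
print(fcfs_total(requests, 50))  # 403 cylinders
```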
Q.18) What is RAID? How does RAID work? What are the various
levels of RAID? What are the advantages and disadvantages of RAID?
What is RAID?
RAID (Redundant Array of Independent/Inexpensive Disks) is a data storage virtualization
technology that combines multiple physical hard drives into one logical unit. It enhances data
redundancy and performance by distributing data across drives or replicating data on multiple
drives.
RAID works by dividing or duplicating data across multiple drives. Depending on the level, it
achieves higher performance (striping), redundancy (mirroring or parity), or both.
RAID Levels
1. RAID 0 (Striping): Splits data across multiple drives for performance but offers no redundancy.
2. RAID 1 (Mirroring): Duplicates data across drives, providing redundancy.
3. RAID 5 (Striping with Parity): Uses striping and stores parity information, allowing recovery from a
single disk failure.
4. RAID 6 (Double Parity): Similar to RAID 5 but can recover from two simultaneous failures.
5. RAID 10 (1+0): Combines RAID 1 and RAID 0, offering both redundancy and performance.
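The parity idea behind RAID 5 can be illustrated with XOR over two hypothetical data blocks (the byte values are arbitrary):

```python
# Two data blocks in one stripe, plus a parity block on a third disk
d0 = bytes([1, 2, 3, 4])
d1 = bytes([9, 8, 7, 6])
parity = bytes(a ^ b for a, b in zip(d0, d1))   # parity = d0 XOR d1

# If the disk holding d1 fails, XOR of the survivors reconstructs it
rebuilt = bytes(a ^ p for a, p in zip(d0, parity))
assert rebuilt == d1
```

The same XOR relation generalizes to any number of data disks in the stripe, which is why RAID 5 survives exactly one failure (and RAID 6, with a second independent parity, survives two).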
Advantages
1. Improved performance: Striping lets reads and writes proceed on several drives in parallel.
2. Fault tolerance: Mirroring and parity levels keep data available after a disk failure.
3. Increased capacity: Several physical drives appear as one large logical unit.
Disadvantages
1. Higher cost: Redundancy requires extra drives.
2. Rebuild overhead: Recovering a failed disk can be slow and degrades performance.
3. Not a backup: RAID does not protect against accidental deletion, corruption, or malware.
The file system organizes how data is stored and retrieved on a storage device, such as a hard drive
or SSD. Its primary role is to ensure files are systematically managed, allowing users and
applications to locate and modify files efficiently. A typical file system includes the following
components:
1. Boot Block: This part is essential for starting the operating system. It contains the bootstrap code
that initializes the system during startup.
2. Superblock: The superblock holds critical metadata about the file system, such as its size, available
blocks, and information about the inode table.
3. Inode Table: Inodes are structures that store metadata for files and directories, such as permissions,
sizes, and pointers to data blocks.
4. Data Blocks: These blocks store the actual content of files.
5. Directory Structure: This maps file names to the corresponding inode entries, allowing quick access
to file locations.
Modern file systems, such as NTFS, ext4, and APFS, implement these components with added
features like journaling and encryption.
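A toy in-memory layout (all names and values are hypothetical) shows how these components cooperate to read a file:

```python
superblock = {"block_size": 512, "inode_count": 4}    # file-system-wide metadata
inode_table = {1: {"size": 11, "blocks": [7]}}        # inode -> metadata + data block pointers
directory = {"hello.txt": 1}                          # file name -> inode number
data_blocks = {7: b"hello world"}                     # actual file contents

def read_file(name):
    inode = inode_table[directory[name]]              # directory lookup, then inode
    data = b"".join(data_blocks[b] for b in inode["blocks"])
    return data[: inode["size"]]                      # trim to the real file size
```

The lookup order mirrors a real file system: name to inode via the directory, inode to blocks via its pointers, blocks to bytes on disk.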
When a computer powers up, several critical steps occur to load the operating system and make
the system ready for use:
1. Power-On Self-Test (POST): The computer’s hardware components are tested for functionality,
including RAM, CPU, and connected peripherals.
2. BIOS/UEFI Execution: The firmware initializes hardware and identifies bootable devices like hard
drives, USBs, or network interfaces.
3. Bootloader Activation: A program, such as GRUB or Windows Boot Manager, is loaded into memory.
It selects and loads the operating system kernel.
4. Kernel Initialization: The OS kernel is loaded into memory and begins managing hardware. It mounts
the root file system and starts system processes.
5. System Services Start: Background services, like networking and user interface modules, are
initialized.
6. User Login: The system presents a login screen, allowing access to the operating system.
These processes ensure that the computer transitions from a powered-off state to a fully
operational environment, ready to handle user input and tasks.
Disk space allocation refers to the method by which a file system organizes the storage of file data
on physical or virtual disks. This process ensures efficient storage, quick access, and minimal
wastage. Below are the commonly used methods:
1. Contiguous Allocation
Each file occupies a single contiguous run of disk blocks. Sequential access is fast, but
external fragmentation and difficulty growing files are drawbacks.
2. Extents
Extents are an extension of contiguous allocation. A file begins with a contiguous set of blocks, and
if more space is needed, additional contiguous extents are allocated.
3. Linked Allocation
Each file is a linked list of disk blocks; every block holds a pointer to the next. There is no
external fragmentation, but direct (random) access is slow.
4. Clustering
Multiple blocks are grouped into clusters, treating them as single units.
6. Indexed Allocation
Each file has an index block that holds pointers to all of its data blocks, allowing direct access.
7. Linked Indexed Allocation
Combines linked and indexed allocation. Index blocks are linked together if additional storage is
required.
8. Multilevel Indexed Allocation
Uses multiple levels of index blocks. For large files, the primary index points to secondary indices,
which point to data blocks.
9. Inode
UNIX-based systems use inodes that store metadata and pointers to data blocks.
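A sketch contrasting linked and indexed allocation on a toy disk (the block numbers are arbitrary):

```python
# Linked allocation: each block stores (data, pointer to next block)
disk = {2: ("A", 5), 5: ("B", 9), 9: ("C", None)}

def linked_read(start, i):
    """Reading block i requires walking the chain: O(i) disk accesses."""
    block = start
    for _ in range(i):
        block = disk[block][1]
    return disk[block][0]

# Indexed allocation: one index block gives direct O(1) access to any block
index_block = [2, 5, 9]

def indexed_read(i):
    return disk[index_block[i]][0]
```

The trade-off the text describes is visible directly: linked allocation pays a traversal cost for random access, while indexed allocation pays the space cost of an index block.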
1. I/O Devices
Input/Output devices interact with the system for data exchange. Examples include keyboards,
mice, printers, and disks. Each device communicates with the CPU through its own driver.
2. System Calls and APIs
These are programming interfaces allowing applications to interact with I/O devices without direct
hardware control. Common system calls include read(), write(), and ioctl().
3. DMA Controllers
Direct Memory Access (DMA) controllers allow data transfer between memory and devices without
CPU involvement, reducing overhead and improving speed.
Example: Transferring a large file from disk to RAM is faster using DMA.
4. Memory-Mapped I/O
In this technique, device data is mapped to a portion of the system’s memory space. The CPU
accesses devices as if interacting with memory.
5. Direct Virtual Memory Access (DVMA)
DVMA provides a virtualized address space for I/O operations, enabling direct access to memory
without interrupting the CPU's memory operations.
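A user-space analogy using Python's mmap module: writes to mapped memory become file I/O, similar in spirit to how memory-mapped I/O turns ordinary memory accesses into device accesses (the file stands in for a device here):

```python
import mmap
import tempfile

def mapped_write():
    with tempfile.TemporaryFile() as f:
        f.write(b"\x00" * 4096)
        f.flush()
        mm = mmap.mmap(f.fileno(), 4096)   # map the file into the address space
        mm[0:4] = b"DATA"                  # a plain memory write becomes I/O
        out = bytes(mm[0:4])
        mm.close()
        return out
```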
Security: Ensures the system is safe from external threats like malware or unauthorized access.
Protection: Focuses on internal system resource management, ensuring proper user access rights.
Access Matrix
An access matrix defines rights and permissions of subjects (users) over objects (resources). Rows
represent users, and columns represent resources.
Implementation Methods
1. Global Table: A single system-wide table of <user, object, rights> entries.
o Advantages: Conceptually simple.
o Disadvantages: The table is usually too large to keep in memory and is hard to manage.
2. Access Lists for Objects: Each object maintains a list of authorized users and their rights.
o Advantages: Efficient for managing specific permissions.
o Disadvantages: Overhead in maintaining lists for each object.
3. Capability Lists for Domains: Each user maintains a list of accessible objects.
o Advantages: Efficient for users with many permissions.
o Disadvantages: Complex to implement.
4. Lock-Key Mechanism: Resources are paired with locks, and users are assigned keys to
access them.
o Advantages: Ensures security through pairing.
o Disadvantages: Management of locks and keys becomes cumbersome.
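The access matrix and its two list views can be sketched with nested dictionaries (the users, objects, and rights below are hypothetical):

```python
access_matrix = {
    "alice": {"file1": {"read", "write"}, "printer": {"print"}},
    "bob":   {"file1": {"read"}},
}

def has_right(user, obj, right):
    """Check one cell of the matrix."""
    return right in access_matrix.get(user, {}).get(obj, set())

def access_list(obj):
    """Per-object view: one column of the matrix (access list)."""
    return {u: objs[obj] for u, objs in access_matrix.items() if obj in objs}

def capability_list(user):
    """Per-user view: one row of the matrix (capability list)."""
    return dict(access_matrix.get(user, {}))
```

Slicing the same matrix by column gives access lists, and by row gives capability lists, which is exactly the distinction between implementation methods 2 and 3 above.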
1. One-to-One: Each user thread maps to a kernel thread, enabling true parallelism but with high
overhead.
2. Many-to-One: All user threads map to a single kernel thread. Simpler but lacks parallelism.
3. Many-to-Many: Multiple user threads map to multiple kernel threads, balancing flexibility and
performance.
Multithreading finds applications in web servers, gaming, and data processing, where tasks can be
split for simultaneous execution.
Monitors provide mutual exclusion, ensuring only one philosopher accesses the forks at a time. A
monitor encapsulates the shared state (each philosopher's state), condition variables on which
philosophers wait, and the procedures (pickup, putdown) that operate on them.
In this solution, philosophers pick up forks only if both are available; otherwise, they wait. This
avoids deadlock and ensures fairness.
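A minimal sketch of the monitor solution using Python condition variables (5 philosophers assumed; a philosopher eats only when neither neighbour is eating):

```python
import threading

class DiningMonitor:
    THINKING, HUNGRY, EATING = 0, 1, 2

    def __init__(self, n=5):
        self.n = n
        self.state = [self.THINKING] * n
        self.lock = threading.Lock()                  # the monitor's mutual exclusion
        self.can_eat = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        """Let philosopher i eat if hungry and neither neighbour is eating."""
        left, right = (i - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == self.HUNGRY
                and self.state[left] != self.EATING
                and self.state[right] != self.EATING):
            self.state[i] = self.EATING
            self.can_eat[i].notify()

    def pickup(self, i):
        with self.lock:
            self.state[i] = self.HUNGRY
            self._test(i)
            while self.state[i] != self.EATING:       # wait until both forks are free
                self.can_eat[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = self.THINKING
            self._test((i - 1) % self.n)              # a neighbour may now eat
            self._test((i + 1) % self.n)
```

Because both forks are acquired atomically inside the monitor, the circular-wait condition for deadlock can never arise.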
Given snapshot (Allocation and Max):
Process   Allocation (A B C D)   Max (A B C D)
P0        3 0 1 4                5 1 1 7
P1        2 2 1 0                3 2 1 1
P2        3 1 2 1                3 3 2 1
P3        0 5 1 0                4 6 1 2
P4        4 2 1 2                6 3 2 5
a) Available = (0, 3, 0, 1)
b) Available = (1, 0, 0, 2)
The Banker's Algorithm is used to determine whether a system is in a safe state by simulating
resource allocation to avoid deadlock. It considers:
Allocation Matrix:
Process  A  B  C  D
P0       3  0  1  4
P1       2  2  1  0
P2       3  1  2  1
P3       0  5  1  0
P4       4  2  1  2
Max Matrix:
Process  A  B  C  D
P0       5  1  1  7
P1       3  2  1  1
P2       3  3  2  1
P3       4  6  1  2
P4       6  3  2  5
Need Matrix (Max − Allocation):
Process  A  B  C  D
P0       2  1  0  3
P1       1  0  0  1
P2       0  2  0  0
P3       4  1  0  2
P4       2  1  1  3
a) Available = (0, 3, 0, 1)
Step 1: Process P0
Need: (2, 1, 0, 3)
Available: (0, 3, 0, 1)
P0 cannot execute: it needs more of A and D than are available.
Step 2: Process P1
Need: (1, 0, 0, 1)
Available: (0, 3, 0, 1)
P1 cannot execute: it needs 1 unit of A, but none is available.
Step 3: Process P2
Need: (0, 2, 0, 0)
Available: (0, 3, 0, 1)
P2 can execute. Simulate allocation and release resources:
o Available after release: (3, 4, 2, 2).
Step 4: Process P1 (retry)
Need: (1, 0, 0, 1)
Available: (3, 4, 2, 2)
P1 can now execute. Release resources:
o Available after release: (5, 6, 3, 2).
Step 5: Process P3
Need: (4, 1, 0, 2)
Available: (5, 6, 3, 2)
P3 can execute. Release resources:
o Available after release: (5, 11, 4, 2).
Step 6: Processes P0 and P4
Both still need 3 units of D, but only 2 are available, so neither can ever finish.
Conclusion: With Available = (0, 3, 0, 1), no safe sequence exists; the system is not in a safe state.
b) Available = (1, 0, 0, 2)
Step 1: Process P0
Need: (2, 1, 0, 3)
Available: (1, 0, 0, 2)
P0 cannot execute due to insufficient resources for A and D.
Step 2: Process P1
Need: (1, 0, 0, 1)
Available: (1, 0, 0, 2)
P1 can execute. Release resources:
o Available after release: (3, 2, 1, 2).
Step 3: Process P2
Need: (0, 2, 0, 0)
Available: (3, 2, 1, 2)
P2 can execute. Release resources:
o Available after release: (6, 3, 3, 3).
Step 4: Process P3
Need: (4, 1, 0, 2)
Available: (6, 3, 3, 3)
P3 can execute. Release resources:
o Available after release: (6, 8, 4, 3).
Step 5: Process P4
Need: (2, 1, 1, 3)
Available: (6, 8, 4, 3)
P4 can execute. Release resources:
o Available after release: (10, 10, 5, 5).
Step 6: Process P0
Need: (2, 1, 0, 3)
Available: (10, 10, 5, 5)
P0 can now execute. Release resources:
o Available after release: (13, 10, 6, 9).
Conclusion: With Available = (1, 0, 0, 2), the safe sequence <P1, P2, P3, P4, P0> exists, so the
system is in a safe state.
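Both cases can be checked mechanically with a sketch of the Banker's safety algorithm. The matrices are copied from the tables above, and Need is recomputed as Max − Allocation:

```python
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]   # process finishes, releases
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

allocation = [[3, 0, 1, 4], [2, 2, 1, 0], [3, 1, 2, 1], [0, 5, 1, 0], [4, 2, 1, 2]]
maximum    = [[5, 1, 1, 7], [3, 2, 1, 1], [3, 3, 2, 1], [4, 6, 1, 2], [6, 3, 2, 5]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]
```

Running the check returns None for Available = (0, 3, 0, 1) (unsafe) and the sequence P1, P2, P3, P4, P0 for Available = (1, 0, 0, 2) (safe).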