Question Bank Answers

The document provides a comprehensive overview of operating systems, detailing their objectives, functions, types, and key concepts such as process management, memory management, and scheduling algorithms. It explains various operating system types, including batch and time-sharing systems, and discusses the differences between processes and programs, as well as thread models. Additionally, it covers scheduling algorithms like FCFS, SJF, and Round Robin, along with examples and calculations for average waiting times.


Operating System Question Bank Answers

Process Management
1. What is operating system? Explain the objectives and functions of operating system.
Answer:
An operating system (OS) is system software that manages computer hardware and software
resources and provides common services for computer programs.
Objectives of OS:
- Convenience: Makes the computer more convenient to use
- Efficiency: Allows efficient use of computer system resources
- Ability to evolve: Should permit effective development, testing, and introduction of new
system functions
Functions of OS:
1. Process Management: Creation, scheduling, termination of processes
2. Memory Management: Allocation and deallocation of memory space
3. File Management: Creation, deletion, manipulation of files
4. Device Management: Management of I/O devices
5. Security and Protection: Protecting system resources
6. User Interface: Providing interface between user and hardware
7. Networking: Managing network connections
8. Job Accounting: Keeping track of system resources used

2. List the types of operating system and explain any two in detail.
Types of Operating Systems:
1. Batch Operating System
2. Time-Sharing Operating System
3. Distributed Operating System
4. Network Operating System
5. Real-Time Operating System
6. Embedded Operating System
Detailed Explanation of Two Types:
1. Batch Operating System:
- Users prepare jobs (programs, data, and control instructions) and submit them to the
computer operator
- Similar jobs are batched together and executed as a group
- No direct interaction between user and computer
- Examples: Early mainframe systems
Advantages:
- Efficient for large volumes of similar jobs
- No idle time between jobs
Disadvantages:
- No user interaction
- Difficult to debug programs
- If one job fails, others behind it must wait
2. Time-Sharing Operating System:
- Multiple users can share computer resources simultaneously
- CPU time is divided among multiple users
- Each user gets a small time slice of CPU time
- Examples: UNIX, Multics
Advantages:
- Provides quick response time
- Reduces CPU idle time
- Allows multiple users to share resources
Disadvantages:
- More complex system
- Requires memory management and protection
- Security and integrity concerns
3. Explain monolithic and microkernel in operating system.
Monolithic Kernel:
- All OS services operate in kernel space
- Single large process running entirely in a single address space
- All functional components have access to all data and functions
- Examples: Traditional UNIX kernels, Linux
Advantages:
- Performance is good due to direct communication
- Simple design
Disadvantages:
- Less modular
- Difficult to maintain and debug
- Less secure (if one component fails, entire system may crash)
Microkernel:
- Minimal OS kernel providing only essential services
- Other services run in user space as separate processes
- Services communicate via message passing
- Examples: Mach, QNX, MINIX
Advantages:
- More modular and maintainable
- More secure (failures in services don't crash system)
- Easier to port to new architectures
Disadvantages:
- Performance overhead due to message passing
- More complex design

4. Explain in brief:
1) Kernel : Core component of the OS that manages system resources and the communication
between hardware and software. It handles memory management, process management, task
scheduling, and I/O management.
2) System Call : Interface between a process and the OS, allowing user-level processes to
request services from the kernel. Examples include file operations (open, read, write) and
process control (fork, exec).
3) Multitasking : OS feature that allows multiple tasks/processes to run concurrently by rapidly
switching between them, giving the illusion of parallel execution.
4) Multithreading : Execution model where a single process contains multiple threads of
execution that share resources but can execute independently.

5. Define Process. Differentiate between process and program.


Process Definition:
A process is an instance of a program in execution. It consists of the program code, current
activity (program counter, stack), data section, heap, and process control block.
Differences:
| Program | Process |
|---------|---------|
| Static entity (set of instructions) | Dynamic entity (program in execution) |
| Stored on disk | Resides in memory |
| Passive entity | Active entity |
| No resource requirements | Requires memory, CPU, I/O resources |
| Single instance | Multiple instances possible |
| No execution context | Has execution context (PC, registers, etc.) |

6. Explain the process state transition with neat diagram.


Process States:
1. New : Process is being created
2. Ready : Process is waiting to be assigned to a processor
3. Running : Instructions are being executed
4. Waiting : Process is waiting for some event (I/O completion, signal)
5. Terminated : Process has finished execution
State Transitions:
1. New → Ready: OS admits the process
2. Ready → Running: Scheduler selects the process
3. Running → Ready: Time slice expired or higher priority process arrived
4. Running → Waiting: Process requests I/O or event
5. Waiting → Ready: I/O completed or event occurred
6. Running → Terminated: Process completes or is aborted
```
 +-----+  admit  +-------+  dispatch  +---------+   exit   +------------+
 | New | ------> | Ready | ---------> | Running | -------> | Terminated |
 +-----+         +-------+ <--------- +---------+          +------------+
                    ^      time slice      |
                    |        expired       | I/O or event wait
                    |                      v
                    |  I/O or event   +---------+
                    +---- completion  | Waiting |
                                      +---------+
```
7. What is a thread?
A thread is a lightweight process that exists within a process and shares the process's resources
(memory, files). Threads have their own program counter, register set, and stack, but share code
section, data section, and OS resources with other threads of the same process.
Benefits:
- Responsiveness (one thread can run while another is blocked)
- Resource sharing (threads share memory and resources)
- Economy (creating threads is cheaper than processes)
- Scalability (can take advantage of multiple processors)
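The resource-sharing benefit above can be seen in a minimal sketch (the thread names, the shared `results` list, and the lock are illustrative, not from the source): two threads of the same process write directly into one shared list, which separate processes could not do without IPC.

```python
import threading

# Two worker threads share the process's memory: both append to the
# same list. A lock protects the shared structure from races.
results = []
lock = threading.Lock()

def worker(name, items):
    for i in items:
        with lock:                       # serialize access to the shared list
            results.append((name, i))

t1 = threading.Thread(target=worker, args=("t1", range(3)))
t2 = threading.Thread(target=worker, args=("t2", range(3)))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(results))   # 6 entries, contributed by both threads
```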
8. What is Process Control Block? Explain various entries of it.
Process Control Block (PCB):
Data structure maintained by OS for each process that contains all information needed to
manage and track the process.
PCB Entries:
1. Process Identification : Process ID, Parent process ID
2. Process State : Current state (new, ready, running, etc.)
3. CPU Registers : Program counter, stack pointer, general-purpose registers
4. CPU Scheduling Information : Priority, scheduling parameters
5. Memory Management Information : Page tables, memory limits
6. Accounting Information : CPU used, time limits, process number
7. I/O Status Information : List of I/O devices allocated, pending I/O requests
8. Process Privileges : Access rights to system resources
9. Pointers : To other PCBs (for scheduling queues)

9. Define terms:
1) Waiting Time : Total time a process spends waiting in the ready queue
2) Response Time : Time from submission of a request until first response is produced
3) Turnaround Time : Time taken to execute a process from submission to completion
4) Scheduler : OS component that selects which process should be executed next
5) Dispatcher : OS module that gives control of CPU to the process selected by scheduler

10. Explain the classical thread model with its implementation strategies.
Classical Thread Model:
- Threads exist within a process and share the process's resources
- Each thread has its own thread ID, program counter, register set, and stack
- Threads share code section, data section, and OS resources
Implementation Strategies:
1. User-Level Threads :
- Managed by user-level library (kernel unaware of threads)
- Fast creation/context switching (no kernel mode needed)
- If one thread blocks, entire process blocks
- Examples: POSIX Pthreads
2. Kernel-Level Threads :
- Managed directly by OS kernel
- Slower creation/context switching (requires system call)
- If one thread blocks, others can run
- Examples: Windows NT, Solaris
3. Hybrid Approaches :
- Combination of user and kernel level threads
- Multiple user threads mapped to smaller number of kernel threads
- Examples: Solaris 2, Windows NT/2000

11. Explain Round Robin, FCFS, Shortest Job First and Priority Scheduling Algorithms with
illustration.
1. First-Come, First-Served (FCFS):
- Simplest scheduling algorithm
- Processes are executed in order of arrival
- Non-preemptive
- Example: P1(24), P2(3), P3(3) arrive in that order
P1 runs for 24, then P2 for 3, then P3 for 3
Average waiting time = (0 + 24 + 27)/3 = 17
Advantages:
- Simple to implement
- Fair in some sense
Disadvantages:
- Poor performance (high waiting time)
- Convoy effect (short process behind long process)
2. Shortest Job First (SJF):
- Executes process with shortest CPU burst next
- Can be preemptive (Shortest Remaining Time First) or non-preemptive
- Example: P1(6), P2(8), P3(7), P4(3) all arrive at time 0
Non-preemptive order: P4(3), P1(6), P3(7), P2(8)
Waiting times: P4(0), P1(3), P3(9), P2(16)
Average waiting time = (0+3+9+16)/4 = 7
Advantages:
- Provably optimal for minimizing average waiting time
- Good for batch systems
Disadvantages:
- Difficult to predict burst times
- Starvation of long processes possible

3. Priority Scheduling:
- Each process assigned a priority
- Highest priority process executed next
- Can be preemptive or non-preemptive
- Example: P1(10, priority 3), P2(1, priority 1), P3(2, priority 4), P4(1, priority 5)
Order: P4, P3, P1, P2 (assuming lower number = higher priority)
Advantages:
- Flexible (priority can reflect importance)
- Can approximate SJF (priority = inverse of next CPU burst)
Disadvantages:
- Starvation of low priority processes
- Priority inversion problem
4. Round Robin (RR):
- Each process gets small unit of CPU time (time quantum)
- After time expires, process is preempted and added to end of ready queue
- Example: P1(24), P2(3), P3(3) with quantum=4
Execution order: P1(4), P2(3), P3(3), then P1 runs its remaining 20 in five more slices of 4
Waiting times: P1(6), P2(4), P3(7)
Average waiting time = (6+4+7)/3 = 5.67
Advantages:
- Fair allocation of CPU
- Good for time-sharing systems
- No starvation
Disadvantages:
- Performance depends heavily on time quantum
- High overhead from context switches
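The average waiting times quoted in the non-preemptive examples above can be checked with a short sketch (the helper name `avg_wait` is ours; the burst lists are taken from the examples):

```python
# Average waiting time when all jobs arrive at time 0 and run
# back-to-back in the given order: each job waits for the total
# burst time of everything scheduled before it.

def avg_wait(bursts):
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed       # this job waited for all earlier jobs
        elapsed += b
    return wait / len(bursts)

print(avg_wait([24, 3, 3]))            # FCFS example: 17.0
print(avg_wait(sorted([6, 8, 7, 3])))  # SJF example (shortest first): 7.0
```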

12. Find the average waiting time for Shortest Job First and Round Robin scheduling. CPU burst
times are given in milliseconds and the time quantum is 4.
| Process | CPU Burst Time |
|---------|----------------|
| P1 | 6 |
| P2 | 8 |
| P3 | 5 |
| P4 | 2 |
SJF (Non-preemptive):
Execution order: P4(2), P3(5), P1(6), P2(8)
Waiting times:
- P4: 0
- P3: 2
- P1: 2+5 = 7
- P2: 2+5+6 = 13
Average waiting time = (0 + 2 + 7 + 13)/4 = 22/4 = 5.5 ms

Round Robin (Quantum=4):


Timeline:
Time 0-4: P1 runs 4 (remaining 2)
Time 4-8: P2 runs 4 (remaining 4)
Time 8-12: P3 runs 4 (remaining 1)
Time 12-14: P4 runs 2 (completes)
Time 14-16: P1 runs 2 (completes)
Time 16-20: P2 runs 4 (completes)
Time 20-21: P3 runs 1 (completes)
Waiting time = Turnaround time - Burst time
Turnaround times:
P1: 16 - 0 = 16, WT = 16-6 = 10
P2: 20 - 0 = 20, WT = 20-8 = 12
P3: 21 - 0 = 21, WT = 21-5 = 16
P4: 14 - 0 = 14, WT = 14-2 = 12
Average waiting time = (10 + 12 + 16 + 12)/4 = 50/4 = 12.5 ms
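The Round Robin result can be reproduced with a small simulator sketch (the function name `rr_avg_wait` is ours; all processes arrive at time 0, as in this question):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Round Robin with all processes arriving at time 0.
    Waiting time = completion time - burst time."""
    q = deque((pid, b) for pid, b in enumerate(bursts))
    t, done = 0, {}
    while q:
        pid, rem = q.popleft()
        run = min(quantum, rem)
        t += run
        if rem - run == 0:
            done[pid] = t               # record completion time
        else:
            q.append((pid, rem - run))  # preempted: back of the queue
    waits = [done[p] - b for p, b in enumerate(bursts)]
    return sum(waits) / len(waits)

print(rr_avg_wait([6, 8, 5, 2], 4))   # 12.5
```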

13. Five jobs A through E arrive at a computer centre with the following details. Calculate the TAT &
Waiting Time for FCFS, SJF & RR (with Quantum=3) algorithms.
Given:
| Process | Arrival Time | CPU Time |
|---------|--------------|----------|
| A       | 0            | 9        |
| B       | 1            | 5        |
| C       | 2            | 2        |
| D       | 3            | 6        |
| E       | 4            | 8        |
FCFS:
Execution order: A(0-9), B(9-14), C(14-16), D(16-22), E(22-30)
Turnaround Time (TAT) = Completion - Arrival
Waiting Time = TAT - CPU Time
| Process | TAT | WT |
|---------|-------|-------|
|A | 9-0=9 | 9-9=0 |
|B | 14-1=13 | 13-5=8 |
|C | 16-2=14 | 14-2=12 |
|D | 22-3=19 | 19-6=13 |
|E | 30-4=26 | 26-8=18 |
Average TAT = (9+13+14+19+26)/5 = 81/5 = 16.2
Average WT = (0+8+12+13+18)/5 = 51/5 = 10.2
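The FCFS table above can be checked with a sketch that handles arrival times (the helper name `fcfs` is ours; jobs are processed in arrival order):

```python
# FCFS turnaround and waiting times for jobs with arrival times.

def fcfs(jobs):
    """jobs: list of (name, arrival, burst) in arrival order.
    Returns {name: (TAT, WT)} where WT = TAT - burst."""
    t, rows = 0, {}
    for name, arrival, burst in jobs:
        t = max(t, arrival) + burst      # completion time (idle until arrival)
        tat = t - arrival
        rows[name] = (tat, tat - burst)
    return rows

rows = fcfs([("A", 0, 9), ("B", 1, 5), ("C", 2, 2), ("D", 3, 6), ("E", 4, 8)])
print(sum(tat for tat, _ in rows.values()) / 5)  # average TAT = 16.2
print(sum(wt for _, wt in rows.values()) / 5)    # average WT = 10.2
```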
SJF (Non-preemptive):
At time 0: Only A available, runs for 9
At time 9: Available processes - B(5), C(2), D(6), E(8)
Choose shortest: C runs (9-11)
Then B (11-16)
Then D (16-22)
Then E (22-30)
| Process | TAT | WT |
|---------|-------|-------|
|A | 9-0=9 | 9-9=0 |
|C | 11-2=9 | 9-2=7 |
|B | 16-1=15 | 15-5=10 |
|D | 22-3=19 | 19-6=13 |
|E | 30-4=26 | 26-8=18 |
Average TAT = (9+9+15+19+26)/5 = 78/5 = 15.6
Average WT = (0+7+10+13+18)/5 = 48/5 = 9.6
RR (Quantum=3):
Execution order with remaining times:
Time 0-3: A runs 3 (remaining 6)
Time 3-6: B arrives at 1, runs 3 (remaining 2)
Time 6-8: C arrives at 2, runs 2 (completes)
Time 8-11: D arrives at 3, runs 3 (remaining 3)
Time 11-14: E arrives at 4, runs 3 (remaining 5)
Time 14-17: A runs 3 (remaining 3)
Time 17-19: B runs 2 (completes)
Time 19-22: D runs 3 (completes)
Time 22-25: E runs 3 (remaining 2)
Time 25-28: A runs 3 (completes)
Time 28-30: E runs 2 (completes)
Completion times:
A: 28, B: 19, C: 8, D: 22, E: 30
| Process | TAT | WT |
|---------|-------|-------|
|A | 28-0=28 | 28-9=19 |
|B | 19-1=18 | 18-5=13 |
|C | 8-2=6 | 6-2=4 |
|D | 22-3=19 | 19-6=13 |
|E | 30-4=26 | 26-8=18 |
Average TAT = (28+18+6+19+26)/5 = 97/5 = 19.4
Average WT = (19+13+4+13+18)/5 = 67/5 = 13.4

Memory Management
1. Explain the following allocation algorithms.
I. First-fit II. Best-fit III. Worst-fit
I. First-fit:
- Allocates the first hole that is big enough
- Searching can start at beginning or where previous search ended
- Fast (stops at first suitable block)
Advantages:
- Fastest search
- Tends to preserve large blocks at end
Disadvantages:
- May leave many small fragments at beginning
II. Best-fit:
- Allocates the smallest hole that is big enough
- Must search entire list unless ordered by size
- Produces smallest leftover hole
Advantages:
- Minimizes wasted memory
- Good for systems with small memory
Disadvantages:
- Slow (must search entire list)
- Leaves many small fragments
III. Worst-fit:
- Allocates the largest hole
- Must search entire list unless ordered by size
- Produces largest leftover hole
Advantages:
- Reduces number of small fragments
- Leaves large blocks available
Disadvantages:
- Slow (must search entire list)
- Tends to break up large blocks
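The three strategies can be sketched in a few lines. This assumes a simple list of free-hole sizes and the classic textbook request of 212 KB against holes of 100, 500, 200, 300, and 600 KB (the function names are ours):

```python
# Each strategy returns the index of the chosen hole, or None if no fit.

def first_fit(holes, size):
    # first hole big enough, scanning from the start
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # smallest hole that is still big enough
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # largest hole available
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free list in KB
print(first_fit(holes, 212))   # 1 (the 500 KB hole)
print(best_fit(holes, 212))    # 3 (the 300 KB hole)
print(worst_fit(holes, 212))   # 4 (the 600 KB hole)
```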

2. What is fragmentation? Explain the difference between internal and external fragmentation.
Fragmentation is wasted memory that cannot be used effectively; it occurs when memory is
allocated in small, non-contiguous blocks.
Internal Fragmentation:
- Occurs when memory is allocated in fixed-size blocks
- Process uses less memory than allocated, leaving unused memory inside the block
- Example: Paging systems where page size is fixed
External Fragmentation:
- Occurs when memory is allocated in variable-size blocks
- Free memory is broken into small pieces between allocated blocks
- Total free memory may be sufficient, but not contiguous
- Example: Variable partition memory allocation
Key Differences:
- Internal: Wasted space within allocated blocks
- External: Wasted space between allocated blocks
- Internal can be reduced by smaller allocation units
- External can be reduced by compaction or segmentation
3. Explain Virtual Memory Management with Paging in detail.
Virtual Memory:
- Technique that allows execution of processes not completely in memory
- Creates illusion of more memory than physically available
- Uses disk space to extend physical memory
Paging:
- Divides physical memory into fixed-size blocks (frames)
- Divides logical memory into same-size blocks (pages)
- Page table maps logical pages to physical frames
Key Concepts:
1. Page Table : Per-process data structure that maps virtual pages to physical frames
2. TLB (Translation Lookaside Buffer) : Cache for page table entries to speed up address
translation
3. Page Fault : Occurs when referenced page is not in memory
- OS handles by loading page from disk
4. Demand Paging : Pages loaded only when needed (on page fault)
Advantages:
- Allows processes larger than physical memory
- Efficient memory utilization
- Simplifies memory allocation (any page can go in any frame)
- Enables memory sharing between processes
Disadvantages:
- Page fault handling overhead
- Internal fragmentation (last page may not be full)
- Memory required for page tables
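Address translation under paging can be sketched as splitting the virtual address into a page number and an offset, then looking the page up in the page table (the 4 KB page size and the page-to-frame mapping below are illustrative assumptions):

```python
PAGE_SIZE = 4096  # 4 KB pages, i.e. a 12-bit offset

def translate(vaddr, page_table):
    """Map a virtual address to a physical address via the page table.
    A missing page would raise KeyError here, standing in for a page fault."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}           # hypothetical page -> frame mapping
print(translate(4100, page_table))  # page 1, offset 4 -> 2*4096 + 4 = 8196
```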

4. For the Page Reference String 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, calculate the
page faults applying (i) Optimal (ii) LRU and (iii) FIFO page replacement algorithms for a
memory with three frames.
Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
(i) Optimal:
Replace the page that will not be used for the longest time in the future
Frames: [ , , ]
7: [7, , ] PF=1
0: [7,0, ] PF=2
1: [7,0,1] PF=3
2: Replace 7 (not referenced again until ref 18), [2,0,1] PF=4
0: [2,0,1] (hit)
3: Replace 1 (next ref at 14; 0 is next at 7, 2 at 9), [2,0,3] PF=5
0: [2,0,3] (hit)
4: Replace 0 (next ref at 11; 2 is next at 9, 3 at 10), [2,4,3] PF=6
2: [2,4,3] (hit)
3: [2,4,3] (hit)
0: Replace 4 (never referenced again), [2,0,3] PF=7
3: [2,0,3] (hit)
2: [2,0,3] (hit)
1: Replace 3 (never referenced again), [2,0,1] PF=8
2: [2,0,1] (hit)
0: [2,0,1] (hit)
1: [2,0,1] (hit)
7: Replace 2 (never referenced again), [7,0,1] PF=9
0: [7,0,1] (hit)
1: [7,0,1] (hit)

Total Page Faults: 9


(ii) LRU:
Replace the least recently used page
Frames: [ , , ]
7: [7, , ] PF=1
0: [7,0, ] PF=2
1: [7,0,1] PF=3
2: Replace 7 (LRU), [2,0,1] PF=4
0: [2,0,1] (hit)
3: Replace 1 (LRU), [2,0,3] PF=5
0: [2,0,3] (hit)
4: Replace 2 (LRU), [4,0,3] PF=6
2: Replace 3 (LRU), [4,0,2] PF=7
3: Replace 0 (LRU; last used at ref 7), [4,3,2] PF=8
0: Replace 4 (LRU), [0,3,2] PF=9
3: [0,3,2] (hit)
2: [0,3,2] (hit)
1: Replace 0 (LRU), [1,3,2] PF=10
2: [1,3,2] (hit)
0: Replace 3 (LRU), [1,0,2] PF=11
1: [1,0,2] (hit)
7: Replace 2 (LRU), [1,0,7] PF=12
0: [1,0,7] (hit)
1: [1,0,7] (hit)

Total Page Faults: 12


(iii) FIFO:
Replace the oldest page in memory
Frames: [ , , ]
7: [7, , ] PF=1
0: [7,0, ] PF=2
1: [7,0,1] PF=3
2: Replace 7 (oldest), [2,0,1] PF=4
0: [2,0,1] (hit)
3: Replace 0 (oldest), [2,3,1] PF=5
0: Replace 1 (oldest), [2,3,0] PF=6
4: Replace 2 (oldest), [4,3,0] PF=7
2: Replace 3 (oldest), [4,2,0] PF=8
3: Replace 0 (oldest), [4,2,3] PF=9
0: Replace 4 (oldest), [0,2,3] PF=10
3: [0,2,3] (hit)
2: [0,2,3] (hit)
1: Replace 2 (oldest), [0,1,3] PF=11
2: Replace 3 (oldest), [0,1,2] PF=12
0: [0,1,2] (hit)
1: [0,1,2] (hit)
7: Replace 0 (oldest), [7,1,2] PF=13
0: Replace 1 (oldest), [7,0,2] PF=14
1: Replace 2 (oldest), [7,0,1] PF=15
Total Page Faults: 15
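The three policies can be checked with a short simulator sketch (the function names are ours; `REFS` is the reference string from the question, with three frames):

```python
from collections import OrderedDict

REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

def fifo_faults(refs, nframes):
    frames, order, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(order.pop(0))   # evict the oldest page
            frames.add(p); order.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)              # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)     # evict least recently used
            frames[p] = True
    return faults

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                # evict the page whose next use lies farthest in the future
                def next_use(q):
                    rest = refs[i + 1:]
                    return rest.index(q) if q in rest else len(refs)
                frames.discard(max(frames, key=next_use))
            frames.add(p)
    return faults

print(fifo_faults(REFS, 3), lru_faults(REFS, 3), opt_faults(REFS, 3))
```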

5. Explain Goals of I/O Software.


Goals of I/O Software:
1. Device Independence : Programs should be able to access any I/O device without specifying
device type
2. Uniform Naming : Files and devices should have consistent naming scheme
3. Error Handling : Handle errors at low level if possible, report to higher levels when needed
4. Synchronous vs Asynchronous : Support both blocking (synchronous) and non-blocking
(asynchronous) transfers
5. Buffering : Manage data transfer between devices with different speeds
6. Sharable vs Dedicated Devices : Support both shared and exclusive device access
7. Performance : Optimize I/O operations for speed and efficiency

6. Draw the block diagram for DMA. Explain the steps for DMA data transfer.
DMA Block Diagram:
```
 +-----+        System Bus        +--------+
 | CPU |------------+-------------| Memory |
 +-----+            |             +--------+
             +----------------+
             | DMA Controller |
             +----------------+
                     |
             +---------------+
             | I/O Device(s) |
             +---------------+
```
DMA Transfer Steps:
1. CPU sets up DMA controller:
- Specifies memory address
- Specifies transfer count
- Specifies direction (read/write)
- Starts I/O device
2. DMA controller requests bus from CPU (bus request)
3. CPU grants bus (bus grant), enters wait state
4. DMA controller performs transfer:
- Reads/writes directly to memory
- Increments address, decrements count
5. When transfer complete:
- DMA interrupts CPU
- CPU regains bus control
Advantages:
- Reduces CPU overhead
- Allows parallel operation (CPU can work while DMA transfers)
- Faster for bulk transfers

7. Write short note on Device driver and device controller.


Device Controller:
- Hardware component that interfaces with actual device
- Contains registers for command, status, data
- May have local buffer for temporary storage
- Examples: Disk controller, USB controller
Device Driver:
- Software module that controls a device controller
- Part of OS kernel or loaded as module
- Provides uniform interface to OS
- Handles device-specific commands
- Manages data transfer between device and memory
- Handles errors and exceptions
Relationship:
- OS makes generic I/O requests
- Driver translates to device-specific commands
- Controller executes commands on device
- Controller signals completion/interrupts
- Driver handles interrupts and notifies OS

8. Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently
serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of
pending requests, in FIFO order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130 Starting
from the current head position, what is the total distance (in cylinders) that the disk arm moves
to satisfy all the pending requests, for FCFS and SCAN disk scheduling.
Given:
Current: 143
Previous: 125 (moving upward since 143 > 125)
Queue: 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
FCFS:
Services requests in order: 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
Movement:
143 → 86: |143-86| = 57
86 → 1470: |86-1470| = 1384
1470 → 913: |1470-913| = 557
913 → 1774: |913-1774| = 861
1774 → 948: |1774-948| = 826
948 → 1509: |948-1509| = 561
1509 → 1022: |1509-1022| = 487
1022 → 1750: |1022-1750| = 728
1750 → 130: |1750-130| = 1620
Total distance = 57 + 1384 + 557 + 861 + 826 + 561 + 487 + 728 + 1620 = 7081 cylinders
SCAN (Elevator):
Moving upward initially (since 143 > 125)
Strict SCAN services all requests in the upward direction, continues to the last cylinder (4999),
then reverses and services the remaining requests.
Upward requests from 143, sorted: 913, 948, 1022, 1470, 1509, 1750, 1774
Downward requests: 130, 86
Movement:
143 → 1774 (servicing 913, 948, 1022, 1470, 1509, 1750 on the way): 1774 - 143 = 1631
1774 → 4999 (end of disk): 4999 - 1774 = 3225
4999 → 130: 4999 - 130 = 4869
130 → 86: 130 - 86 = 44
Total distance = 1631 + 3225 + 4869 + 44 = 9769 cylinders
(The LOOK variant reverses at the last pending request, 1774, instead of the end of the disk;
it would travel 1631 + 1644 + 44 = 3319 cylinders.)
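These totals can be checked with a sketch (the function names are ours). Note the answer for the sweep depends on the convention: strict SCAN travels to the disk's last cylinder before reversing, while LOOK reverses at the last pending request.

```python
QUEUE = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

def fcfs_distance(start, queue):
    total, pos = 0, start
    for cyl in queue:
        total += abs(pos - cyl)
        pos = cyl
    return total

def scan_up_distance(start, queue, max_cyl):
    """Strict SCAN moving upward: sweep to the last cylinder, then reverse."""
    lower = [c for c in queue if c < start]
    dist = max_cyl - start               # up to the end of the disk
    if lower:
        dist += max_cyl - min(lower)     # back down to the lowest request
    return dist

def look_up_distance(start, queue):
    """LOOK moving upward: reverse at the highest pending request."""
    upper = [c for c in queue if c >= start]
    lower = [c for c in queue if c < start]
    turn = max(upper) if upper else start
    dist = turn - start
    if lower:
        dist += turn - min(lower)
    return dist

print(fcfs_distance(143, QUEUE))            # 7081
print(scan_up_distance(143, QUEUE, 4999))   # 9769 (strict SCAN)
print(look_up_distance(143, QUEUE))         # 3319 (LOOK)
```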

9. Explain Elevator Algorithm.


Elevator Algorithm (SCAN):
- Disk arm moves in one direction (like elevator)
- Services all requests in that direction
- When reaches end of disk, reverses direction and services new requests
- Also called "lift algorithm"
Characteristics:
- Bi-directional movement
- Services requests in path of movement
- Fair to all requests (no starvation)
- Better performance than FCFS
Variations:
1. C-SCAN : Circular SCAN - returns to start without servicing on return
2. LOOK : Doesn't go to end, just goes to last request in direction
3. C-LOOK : Like C-SCAN but only goes to last request
Advantages:
- More efficient than FCFS
- Provides uniform wait time
- No starvation
Disadvantages:
- May cause delay for requests just missed by arm
10. Write short note: RAID levels.
RAID (Redundant Array of Independent Disks):
Technology that combines multiple disk drives into a logical unit for performance and/or
redundancy.
Common RAID Levels:
1. RAID 0 (Striping):
- Data split across disks with no redundancy
- High performance
- No fault tolerance
2. RAID 1 (Mirroring):
- Complete data duplication on two disks
- Good read performance
- 100% overhead (needs twice the storage)
3. RAID 5 (Striping with Distributed Parity):
- Data and parity striped across 3+ disks
- Can survive one disk failure
- Good balance of performance, capacity, redundancy
4. RAID 6 (Striping with Dual Parity):
- Like RAID 5 but with two parity blocks
- Can survive two disk failures
- Higher overhead than RAID 5
5. RAID 10 (1+0):
- Combination of mirroring and striping
- Mirrored pairs that are then striped
- High performance and redundancy
- 50% storage efficiency
Other Levels:
- RAID 2, 3, 4: Rarely used in practice
- RAID 50, 60: Nested RAID combinations
Benefits:
- Increased data reliability
- Improved I/O performance
- Increased storage capacity
