Operating System Solution
e) What is synchronization?
Synchronization ensures that multiple processes or threads execute in a coordinated manner,
avoiding conflicts.
h) What is a page?
A page is a fixed-size block of virtual memory that is mapped to physical memory.
j) What is booting?
Booting is the process of starting a computer by loading the operating system into memory.
1. Resource sharing
2. Improved performance
3. Reliability and fault tolerance
4. Scalability
b) Compare preemptive and non-preemptive scheduling.
Preemptive scheduling allows the operating system to interrupt and switch processes.
Non-preemptive scheduling ensures a process runs until completion or voluntarily yields
control.
Independent processes do not affect or interact with the execution of other processes.
Dependent processes rely on each other for execution and synchronization.
Many-to-One: Many user threads are mapped to a single kernel thread, but only one can
execute at a time.
One-to-One: Each user thread is mapped to a kernel thread, allowing multiple threads to run
in parallel.
Many-to-Many: Multiple user threads are mapped to a smaller or equal number of kernel
threads, balancing parallelism and efficiency.
b) Which three requirements must be satisfied while designing a solution to the critical section
problem?
1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, the selection of the next process cannot be
postponed indefinitely.
3. Bounded Waiting: A process must not wait indefinitely to enter the critical section.
1. Process ID
2. Program Counter
3. CPU registers
4. Memory limits
5. Process state
6. Scheduling information
7. I/O status information
c) Consider the following reference string and find out the total number of page faults using
OPT and FIFO. Assume number of frames are 3:
Reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3
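A worked trace can be generated with a short simulation. The Python sketch below is an illustrative addition (the function names are hypothetical); it counts faults for both policies on the reference string above, treating the initial loads as faults:

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, order, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:       # evict the page loaded earliest
                memory.discard(order.pop(0))
            memory.add(page)
            order.append(page)
    return faults

def opt_faults(refs, frames):
    """Count page faults under the optimal (farthest-future-use) policy."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        future = refs[i + 1:]
        # evict the resident page whose next use is farthest away (or never)
        victim = max(memory, key=lambda p: future.index(p) if p in future else len(future) + 1)
        memory[memory.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3]
print("FIFO faults:", fifo_faults(refs, 3), "OPT faults:", opt_faults(refs, 3))
```

Counting the initial loads as faults, this reports 15 faults under FIFO and 10 under OPT for this string with 3 frames.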
Client-Server: Centralized model where clients request services from a central server.
Peer-to-Peer: Decentralized model where each node can act as both client and server,
sharing resources directly.
1. Mutex locks
2. Semaphores
3. Monitors
4. Peterson’s solution
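As a brief illustration of the first of these, the Python sketch below (hypothetical names, not tied to any particular textbook solution) uses a mutex lock so that only one thread at a time updates a shared counter:

```python
import threading

counter = 0                      # shared resource
lock = threading.Lock()          # mutex guarding the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # entry section: acquire the mutex
            counter += 1         # critical section
        # exit section: the mutex is released when the with-block ends

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 400000: no updates are lost
```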
f) What is kernel?
The kernel is the core component of an operating system that manages system resources and
facilitates communication between hardware and software.
Preemptive scheduling: The OS can interrupt a currently running process to start or resume
another process.
Non-preemptive scheduling: Once a process is given CPU control, it runs to completion or
voluntarily yields control.
1. Many-to-One Model: Multiple user-level threads are mapped to a single kernel thread,
which can lead to performance issues as only one thread runs at a time.
2. One-to-One Model: Each user thread corresponds to a kernel thread, allowing multiple
threads to run in parallel, improving performance.
b) Write a short note on logical address and physical address binding with a diagram.
A logical address is the address generated by the CPU, while a physical address is the actual
address seen by the memory unit. Address binding, the mapping of logical addresses to physical
addresses, can occur at compile time, load time, or run time; run-time binding typically uses a
relocation (base) register, as in the sketch below.
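A minimal sketch of run-time binding with a relocation (base) register and a limit register, using illustrative values:

```python
def translate(logical_address, base, limit):
    """Run-time binding: physical address = relocation register + logical address."""
    if logical_address >= limit:
        raise MemoryError("trap: address outside this process's logical address space")
    return base + logical_address

# A process loaded at physical address 14000 with a 3000-byte logical address space.
print(translate(346, base=14000, limit=3000))   # prints 14346
```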
c) Consider the following set of processes with the length of CPU burst time and arrival time
given in milliseconds. Calculate waiting time, turnaround time for each process. Also calculate
the average waiting time and average turnaround time using preemptive priority scheduling.
Process Burst Time Arrival Time Priority
P1 14 4 3
P2 5 2 1
P3 6 9 2
P4 5 5 3
P5 9 0 4
Assuming a smaller priority number indicates higher priority, with ties broken by arrival order:
Gantt Chart: P5 (0-2) | P2 (2-7) | P1 (7-9) | P3 (9-15) | P1 (15-27) | P4 (27-32) | P5 (32-39)
Waiting times:
P1 = 9, P2 = 0, P3 = 0, P4 = 22, P5 = 30
Turnaround times:
P1 = 23, P2 = 5, P3 = 6, P4 = 27, P5 = 39
Average Waiting Time = (9 + 0 + 0 + 22 + 30) / 5 = 12.2 ms
Average Turnaround Time = (23 + 5 + 6 + 27 + 39) / 5 = 20.0 ms
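The schedule above can be checked with a small simulation. The sketch below is illustrative only; it assumes a smaller priority number means higher priority and breaks ties by arrival time, running the processes one time unit at a time:

```python
# process -> (burst, arrival, priority); smaller number = higher priority (assumed)
procs = {"P1": (14, 4, 3), "P2": (5, 2, 1), "P3": (6, 9, 2),
         "P4": (5, 5, 3), "P5": (9, 0, 4)}

remaining = {p: burst for p, (burst, _, _) in procs.items()}
finish, time = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][1] <= time]
    if not ready:                    # CPU idles until the next arrival
        time += 1
        continue
    # highest priority first; ties broken by earlier arrival
    p = min(ready, key=lambda q: (procs[q][2], procs[q][1]))
    remaining[p] -= 1                # run the chosen process for one time unit
    time += 1
    if remaining[p] == 0:
        finish[p] = time
        del remaining[p]

for p, (burst, arrival, _) in sorted(procs.items()):
    tat = finish[p] - arrival
    print(p, "turnaround =", tat, "waiting =", tat - burst)
```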
Process States:
New: Process is being created.
Ready: Process is waiting for CPU allocation.
Running: Process is currently being executed.
Waiting: Process is waiting for an event or I/O operation.
Terminated: Process has finished execution.
b) Explain the reader-writer problem in brief.
The reader-writer problem involves managing access to a shared resource where multiple
readers can read simultaneously but writers require exclusive access. Solutions include using
semaphores or mutexes to ensure synchronization while maximizing concurrency for readers.
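A common sketch of the readers-preference solution uses one semaphore for writer exclusion and a mutex around the reader count. The Python version below is illustrative only (variable and function names are hypothetical):

```python
import threading

shared_data = []
rw_mutex = threading.Semaphore(1)    # held by a writer, or by the group of readers
mutex = threading.Lock()             # protects read_count
read_count = 0

def reader(name):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:          # the first reader locks writers out
            rw_mutex.acquire()
    print(name, "reads", list(shared_data))   # many readers may be here at once
    with mutex:
        read_count -= 1
        if read_count == 0:          # the last reader lets writers back in
            rw_mutex.release()

def writer(value):
    with rw_mutex:                   # a writer needs exclusive access
        shared_data.append(value)

threads = [threading.Thread(target=writer, args=(1,)),
           threading.Thread(target=reader, args=("R1",)),
           threading.Thread(target=reader, args=("R2",)),
           threading.Thread(target=writer, args=(2,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```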
c) Consider a reference string 3,2,1,0,3,2,4,3,2,1,0,4. Number of frames = 3. Find out the number
of page faults using i) LRU, ii) OPT.
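The same simulation approach as the FIFO/OPT sketch earlier applies here; for LRU, an OrderedDict can serve as the recency queue (illustrative sketch, hypothetical function name):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU; the OrderedDict keeps pages in recency order."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used page
            memory[page] = True
    return faults

print(lru_faults([3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4], 3))
```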
b) Explain first fit, best fit, worst fit, next fit algorithm.
1. First Fit: Allocates the first block of memory that is large enough for the request.
2. Best Fit: Allocates the smallest available block that meets the requirements, minimizing
leftover space.
3. Worst Fit: Allocates the largest available block, aiming to leave the largest possible leftover
space.
4. Next Fit: Similar to first fit but continues the search from the last allocated block rather than
starting from the beginning.
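A compact way to contrast the four strategies is to see which free block each one picks for the same request. The sketch below is simplified (blocks are not split) and uses illustrative block sizes:

```python
def pick_block(free_blocks, request, strategy, last_index=0):
    """Return the index of the free block chosen for `request` KB, or None."""
    candidates = [i for i, size in enumerate(free_blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":     # first block that is large enough
        return candidates[0]
    if strategy == "best":      # smallest block that still fits
        return min(candidates, key=lambda i: free_blocks[i])
    if strategy == "worst":     # largest available block
        return max(candidates, key=lambda i: free_blocks[i])
    if strategy == "next":      # like first fit, but resume after the last allocation
        later = [i for i in candidates if i >= last_index]
        return later[0] if later else candidates[0]
    raise ValueError("unknown strategy")

free = [100, 500, 200, 300, 600]    # free block sizes in KB (illustrative)
for s in ("first", "best", "worst", "next"):
    print(s, "fit chooses block", pick_block(free, 212, s, last_index=2))
```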
i) What is Frame?
A frame is a fixed-size block of physical memory used to store a page in a paged memory
management system.
1. Customizability
2. Community support
3. Cost-effective
4. Transparency in security
5. Rapid development and innovation
1. Allows execution of processes that require more memory than physically available.
2. Provides memory isolation for processes, enhancing security.
3. Enables efficient use of physical memory by paging.
1. fork()
2. exec()
3. wait()
4. exit()
5. kill()
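These calls can be demonstrated through Python's os module on a POSIX system; the command run by the child below is purely illustrative:

```python
import os

pid = os.fork()                          # create a child process (POSIX only)
if pid == 0:
    # Child: replace its memory image with a new program, as exec() does in C.
    os.execvp("echo", ["echo", "hello from the child"])
    os._exit(1)                          # reached only if exec fails
else:
    # Parent: wait() blocks until the child terminates.
    child_pid, status = os.wait()
    print("child", child_pid, "exited with status", os.WEXITSTATUS(status))
```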
Internal Fragmentation: Wasted space within an allocated block (e.g., when memory blocks
are larger than required).
External Fragmentation: Wasted space outside allocated blocks (e.g., small free blocks
scattered throughout memory).
c) Consider the following set of processes with the length of CPU burst time and arrival time
given in milliseconds. Illustrate the execution of these processes using Round Robin (RR) CPU
scheduling algorithm considering the time quantum is 3. Calculate average waiting time and
average turnaround time. Also, draw the Gantt chart.
Process Burst Time Arrival Time
P1 4 2
P2 6 0
P3 2 1
Gantt Chart: P2 (0-3) | P3 (3-5) | P1 (5-8) | P2 (8-11) | P1 (11-12)
Waiting Times:
P1 = 6, P2 = 5, P3 = 2
Turnaround Times:
P1 = 10, P2 = 11, P3 = 4
Average Waiting Time = (6 + 5 + 2) / 3 = 4.33 ms
Average Turnaround Time = (10 + 11 + 4) / 3 = 8.33 ms
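The Gantt chart and times above can be reproduced with a short Round Robin simulation. This is an illustrative sketch; it assumes newly arrived processes join the ready queue before a preempted process is re-queued:

```python
from collections import deque

procs = {"P1": (4, 2), "P2": (6, 0), "P3": (2, 1)}   # process -> (burst, arrival)
quantum = 3

arrivals = sorted(procs, key=lambda p: procs[p][1])  # admission order by arrival time
remaining = {p: burst for p, (burst, _) in procs.items()}
queue, finish, time, i = deque(), {}, 0, 0

while len(finish) < len(procs):
    while i < len(arrivals) and procs[arrivals[i]][1] <= time:
        queue.append(arrivals[i])                    # admit newly arrived processes
        i += 1
    if not queue:
        time += 1
        continue
    p = queue.popleft()
    run = min(quantum, remaining[p])                 # run for at most one quantum
    time += run
    remaining[p] -= run
    while i < len(arrivals) and procs[arrivals[i]][1] <= time:
        queue.append(arrivals[i])                    # arrivals during the slice queue first
        i += 1
    if remaining[p]:
        queue.append(p)                              # preempted process re-enters the queue
    else:
        finish[p] = time

for p, (burst, arrival) in procs.items():
    tat = finish[p] - arrival
    print(p, "turnaround =", tat, "waiting =", tat - burst)
```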
b) Which are the different types of schedulers? Explain the working of the short-term
scheduler?
Different types of schedulers include:
Long-term Scheduler: Decides which processes are admitted to the system for processing.
Short-term Scheduler: Selects from among the processes that are ready to execute and
allocates CPU time. It operates frequently and must be efficient to ensure high CPU
utilization.
i) Page faults = 9
ii) LRU:
Page faults = 8
a) What is shell?
A shell is a command-line interface that allows users to interact with the operating system by
executing commands.
b) What is thread?
A thread is the smallest unit of processing that can be scheduled by an operating system,
allowing for concurrent execution within a process.
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
h) Define Semaphore.
A semaphore is a synchronization primitive that provides a way to control access to shared
resources by multiple processes or threads.
j) What is segmentation?
Segmentation is a memory management scheme that divides the process's memory into
segments based on the logical division of the program, such as functions or data structures.
1. Process management
2. Memory management
3. Device management
4. File management
5. Security and access control
1. LFU (Least Frequently Used): Replaces the page that has been used the least number of
times; it is efficient for workloads with consistent data access patterns.
2. MFU (Most Frequently Used): Replaces the page that has been used the most, based on the
argument that a page with a small reference count was probably brought in recently and has yet
to be used; it can lead to suboptimal performance if access patterns shift.
b) Consider the following set of processes with CPU time given in milliseconds. Illustrate
execution of processes using FCFS and preemptive SJF CPU scheduling algorithm and
calculate turnaround time, waiting time, average turnaround time, average waiting time.
Processes Burst time Arrival Time
P0 5 1
P1 3 0
P2 2 2
P3 4 3
P4 8 2
FCFS Scheduling:
Gantt Chart: P1 (0-3) | P0 (3-8) | P2 (8-10) | P4 (10-18) | P3 (18-22)
Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = TAT - Burst Time
TAT: P0 = 7, P1 = 3, P2 = 8, P3 = 19, P4 = 16; WT: P0 = 2, P1 = 0, P2 = 6, P3 = 15, P4 = 8
Average Turnaround Time = 53 / 5 = 10.6 ms; Average Waiting Time = 31 / 5 = 6.2 ms
Preemptive SJF (SRTF) Scheduling:
Gantt Chart: P1 (0-3) | P2 (3-5) | P3 (5-9) | P0 (9-14) | P4 (14-22)
TAT: P0 = 13, P1 = 3, P2 = 3, P3 = 6, P4 = 20; WT: P0 = 8, P1 = 0, P2 = 1, P3 = 2, P4 = 12
Average Turnaround Time = 45 / 5 = 9.0 ms; Average Waiting Time = 23 / 5 = 4.6 ms
1. Internal Fragmentation: Occurs when fixed-sized memory blocks are allocated, leading to
unused space within allocated blocks.
2. External Fragmentation: Occurs when free memory is split into small, non-contiguous
blocks, making it difficult to allocate larger blocks.
b) Which three requirements must be satisfied while designing a solution to the critical section
problem? Explain each in detail.
1. Mutual Exclusion: Only one process can execute in its critical section at any time,
preventing data inconsistency.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only processes not executing in their remainder sections can take
part in deciding which will enter next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There must be a limit on the number of times other processes are allowed
to enter their critical sections after a process has made a request to enter its critical
section.
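Peterson's solution for two processes is a classic construction that meets all three requirements using a flag array and a turn variable. The Python sketch below is for illustration only; real implementations rely on atomic instructions or memory barriers, and here CPython's interpreter merely makes the interleaving well behaved enough to demonstrate the idea:

```python
import threading

flag = [False, False]    # flag[i]: process i wants to enter its critical section
turn = 0                 # whose turn it is to defer to
counter = 0              # shared data protected by the critical section

def process(i, iterations):
    global turn, counter
    j = 1 - i                         # index of the other process
    for _ in range(iterations):
        flag[i] = True                # entry section: announce interest
        turn = j                      # and yield priority to the other process
        while flag[j] and turn == j:  # busy-wait only while the other is interested
            pass
        counter += 1                  # critical section
        flag[i] = False               # exit section
        # remainder section

threads = [threading.Thread(target=process, args=(i, 10_000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                        # 20000 when mutual exclusion holds
```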
a) Describe the term distributed operating system. State its advantages and disadvantages.
A distributed operating system manages a group of independent computers and makes them
appear to the user as a single coherent system.
Advantages:
1. Resource Sharing: Users can access shared resources across the network.
2. Scalability: The system can easily expand by adding more machines.
Disadvantages:
1. Complexity: Increased complexity in managing resources and communication between
nodes.
2. Security Issues: More vulnerability to attacks due to the distributed nature of the system.
Diagram: swapping of processes between main memory (RAM) and the backing store.
In this diagram, processes can be swapped in and out of RAM based on demand and resource
availability, allowing the system to manage memory efficiently.