Final OS
4. How can semaphores be used to create and resolve deadlocks? Provide an example.
Semaphores can both cause and resolve deadlocks depending on their use. Deadlocks occur when
processes hold resources while waiting for others, creating a cycle of dependency. To prevent this,
semaphores can enforce a strict order of resource acquisition. For example, if two resources, A and B,
are required, semaphores ensure that all processes request A before B, breaking potential cycles.
However, incorrect semaphore usage, such as not releasing a semaphore, can itself lead to deadlocks.
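As a minimal sketch of that ordering discipline (Python `threading`, with illustrative names — both workers acquire A before B, so no circular wait can form):

```python
import threading

# Two resources guarded by binary semaphores (names are illustrative).
sem_a = threading.Semaphore(1)
sem_b = threading.Semaphore(1)
finished = []

def worker(name):
    # Every thread acquires A before B, so no circular wait can form.
    with sem_a:
        with sem_b:
            finished.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Reversing the acquisition order in one worker (B before A) would reintroduce the possibility of deadlock.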
5. Explain the resource-allocation graph (avoidance algorithm) and its significance in deadlock handling.
A resource-allocation graph is a graphical representation used in operating systems to depict resource
allocation to processes, aiding in deadlock detection. It consists of nodes representing both processes
and resources, with directed edges indicating either a process's request for a resource or a resource's
assignment to a process. This graph helps identify deadlocks by revealing cycles: if a cycle exists and all
resources have only one instance, a deadlock is present. Analyzing these graphs allows for proactive
deadlock handling strategies. Additionally, they simplify the understanding of complex resource
allocation states in a system. If the graph contains no cycle, there is no deadlock. If the graph contains a cycle and each resource type has only one instance, a deadlock exists; if resource types have several instances, a cycle indicates only the possibility of deadlock.
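Cycle detection on such a graph is a simple depth-first search; a sketch (node names and graph encoding are illustrative):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [neighbors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / in progress / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1.
# Request edges go process -> resource, assignment edges resource -> process.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
```

Since each resource in this example has a single instance, the cycle found here means a deadlock is present.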
12. Recovery from Deadlock: Resource Preemption
• Selecting a victim – minimize cost.
• Rollback – return to some safe state and restart the thread from that state.
• Starvation – the same thread may always be picked as victim; include the number of rollbacks in the cost factor.
CH 9: Main Memory
The Memory-Management Unit (MMU) is responsible for mapping logical (virtual) addresses generated
by a program into physical addresses in memory. When a program accesses memory, the MMU uses
either a relocation register (for simple base and limit schemes) or a page table (in paging systems) to
map the virtual address to its corresponding physical address. In segmented or paged memory systems, it
also performs segment or page-level translations, adding base addresses or using page frame numbers.
The MMU also includes mechanisms for handling memory access violations and supporting virtual
memory, such as page faults. This translation is crucial for memory protection, efficient process
isolation, and the implementation of virtual memory.
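The base-and-limit case can be sketched in a few lines (the numbers in the usage note are illustrative):

```python
def translate(logical_addr, base, limit):
    """Base-and-limit MMU sketch: trap on out-of-range access, else relocate."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("addressing error: trap to the OS")
    return base + logical_addr   # physical address = base + logical address
```

For example, with base 14000 and limit 120, logical address 100 maps to physical address 14100, while a reference to 300 traps to the OS rather than touching another process's memory.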
25. Explain the concepts of fragmentation and methods to reduce it.
Fragmentation in computing occurs when memory is used inefficiently, leaving unusable small spaces.
It's of two types: external, where free memory is fragmented into small blocks, and internal, where
allocated memory has unused portions. To reduce external fragmentation, strategies like compaction
(rearranging memory contents to create continuous free blocks), segmentation, and paging (dividing
memory into segments or pages of fixed size) are used. Internal fragmentation can be minimized by
allocating memory that closely matches request sizes. Paging is particularly effective in addressing both
types by allocating and managing memory in fixed-size blocks.
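With fixed-size blocks the waste is confined to the unused tail of the last block; a quick sketch of that arithmetic (assuming 4 KiB pages):

```python
PAGE_SIZE = 4096  # 4 KiB pages; an assumed size for illustration

def internal_fragmentation(process_size):
    """Unused bytes in the last page when memory is allocated in whole pages."""
    remainder = process_size % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder
```

A 10,000-byte process needs three 4 KiB pages and wastes 2,288 bytes in the last one; a process that is an exact multiple of the page size wastes nothing.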
26. Describe the paging technique and how it addresses the issue of fragmentation (types).
External fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
Paging is a memory management
technique that divides physical memory into fixed-size blocks called frames and divides logical memory
into blocks of the same size called pages. When a process is loaded into memory, its pages are loaded
into available frames, which can be non-contiguous, thus effectively addressing external fragmentation
by eliminating the need for contiguous memory allocation. Since each page is a fixed size, paging can
lead to minimal internal fragmentation, limited to the last page of a process. The system uses a page
table to keep track of the mapping between a process's pages and the corresponding frames in physical
memory. This technique simplifies memory allocation and efficiently utilizes available memory space.
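The page-table lookup amounts to splitting the address and swapping the page number for a frame number; a sketch (page size and table contents are illustrative):

```python
PAGE_SIZE = 4096  # an assumed page size

def to_physical(vaddr, page_table):
    """Translate a virtual address via a page table mapping page -> frame."""
    page, offset = divmod(vaddr, PAGE_SIZE)   # split into page number + offset
    return page_table[page] * PAGE_SIZE + offset
```

For example, with page table {0: 5, 1: 2}, virtual address 4100 (page 1, offset 4) maps to physical address 2 × 4096 + 4 = 8196.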
27. Discuss the implementation of page tables and their role in memory management.
The page table is kept in main memory.
• The page-table base register (PTBR) points to the page table.
• The page-table length register (PTLR) indicates the size of the page table.
In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. This two-memory-access problem can be solved with a special fast-lookup hardware cache called the translation look-aside buffer (TLB), also called associative memory.
28. Describe the differences between compile-time, load-time, and execution-time address binding.
Compile-time address binding occurs when memory locations are determined during program
compilation, requiring the program's memory location to be known beforehand. Load-time binding
happens when the compiler generates relocatable code, and final memory locations are determined when
the program is loaded into memory; if the starting address changes, the program needs to be reloaded.
Execution-time binding, used in systems with virtual memory, allows processes to be moved between
memory and disk while running, with addresses bound during execution, allowing more flexibility and
efficient memory usage. Each method offers different levels of flexibility and efficiency, with execution-
time binding providing the most adaptability for dynamic memory management.
29. Describe the paging model in memory management and its implementation details.
In the paging model of memory management, physical memory is divided into fixed-size blocks called
frames, and logical memory is divided into blocks of the same size called pages. A process's address
space is broken into pages, which are mapped to frames in physical memory. This mapping is managed
by a page table, which maintains the correspondence between a process's virtual pages and the physical
frames. Paging allows for non-contiguous memory allocation, thus efficiently utilizing memory and
reducing external fragmentation. The system's Memory Management Unit (MMU) translates virtual
addresses to physical addresses using the page table, allowing for flexible and efficient memory use.
30. Explain the concept and use of Translation Look-Aside Buffers (TLBs) in memory management.
Translation Look-Aside Buffers (TLBs) are a type of cache used in memory management to speed up
the translation of virtual memory addresses to physical addresses. When a virtual address is translated,
the mapping from the page table is stored in the TLB. On subsequent accesses, the TLB is checked first;
if the address translation is found (a TLB hit), the time-consuming page table lookup is avoided. If the
translation is not in the TLB (a TLB miss), the page table must be consulted. TLBs significantly improve
performance by reducing the average time needed for address translation, essential in systems with
virtual memory.
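The performance benefit can be quantified with the effective access time (EAT); a sketch, treating the TLB lookup time as a parameter:

```python
def effective_access_time(hit_ratio, mem_time, tlb_time=0):
    """EAT: a TLB hit costs one memory access, a miss costs two
    (one for the page table, one for the data)."""
    hit_cost = tlb_time + mem_time
    miss_cost = tlb_time + 2 * mem_time
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost
```

With an 80% hit ratio, 100 ns memory accesses, and a negligible TLB lookup time, the EAT is 0.80 × 100 + 0.20 × 200 ≈ 120 ns, rather than the 200 ns every access would cost without a TLB.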
Memory Protection
Memory protection is implemented by associating a protection bit with each frame to indicate whether read-only or read-write access is allowed.
• More bits can be added to indicate execute-only pages, and so on.
A valid-invalid bit is attached to each entry in the page table:
• "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page.
• "invalid" indicates that the page is not in the process's logical address space.
• Alternatively, the page-table length register (PTLR) can be used.
Any violation results in a trap to the kernel.
CH 10 Virtual Memory
31. Define virtual memory and its benefits.
Virtual memory is a memory management capability of an operating system (OS) that uses hardware
and software to enable a computer to compensate for physical memory shortages, by temporarily
transferring data from random access memory (RAM) to disk storage. This process creates the illusion
of a very large (virtual) memory for users, even on a system with limited physical memory. The benefits
of virtual memory include increased program execution efficiency, memory abstraction, and improved
computer multitasking. It allows more applications to run simultaneously and handles large application
memory needs beyond actual physical memory limits. Virtual memory also provides isolation and
protection of memory spaces between different processes.
32. Explain the concept of demand paging.
Demand paging is a memory management technique where pages are loaded into RAM only when
requested by a program, minimizing initial loading time. It allows efficient use of memory by loading
only the necessary portions of a program into RAM as needed.
Lazy swapper – never swaps a page into memory unless that page will be needed. A swapper that deals with pages is called a pager.
35. Describe various page replacement algorithms, including FIFO, optimal, and LRU.
Page replacement algorithms decide which memory pages to swap out when a new page needs to be
loaded into full memory:
1. FIFO (First-In, First-Out): The oldest page in memory is replaced first. It's simple but can remove
important pages, potentially leading to suboptimal performance.
2. Optimal: This algorithm replaces the page that will not be used for the longest period in the future.
While it provides the best performance, it's impractical as it requires future knowledge of requests.
3. LRU (Least Recently Used): Replaces the page that has not been used for the longest time. It's more
realistic than optimal and often provides good performance, but requires tracking page usage history,
adding overhead.
Each algorithm balances factors like simplicity, memory utilization, and the need for predicting future
requests.
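The three policies can be compared by simulating them on a reference string; a compact sketch:

```python
def count_faults(refs, frames, policy):
    """Count page faults for a reference string under FIFO, LRU, or OPT."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            if policy == "LRU":              # refresh recency on a hit
                mem.remove(p)
                mem.append(p)
            continue
        faults += 1
        if len(mem) < frames:                # free frame available
            mem.append(p)
            continue
        if policy in ("FIFO", "LRU"):        # evict oldest arrival / least recent
            mem.pop(0)
        else:                                # OPT: evict page used farthest ahead
            future = refs[i + 1:]
            victim = max(mem, key=lambda q: future.index(q)
                         if q in future else float("inf"))
            mem.remove(victim)
        mem.append(p)
    return faults
```

For the classic reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames, this yields 15 faults for FIFO, 12 for LRU, and 9 for OPT, illustrating the ordering described above.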
Over-allocation of memory is prevented by modifying the page-fault service routine to include page replacement. The modify (dirty) bit reduces the overhead of page transfers – only modified pages are written back to disk. Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory.
36. Explain the allocation of frames in virtual memory.
37. Discuss the concept and impact of thrashing in a virtual memory system.
Thrashing in a virtual memory system occurs when a computer spends more time swapping pages in and
out of memory than executing instructions, severely degrading performance. This situation arises when
there's insufficient physical memory to satisfy the needs of all active processes, leading to excessive
page faults and disk I/O. As a result, CPU utilization drops, and the system throughput declines
dramatically. Thrashing can be caused by overcommitting system resources or poor page replacement
strategies. Addressing thrashing typically involves increasing physical memory, optimizing system load,
or improving the page replacement algorithm to reduce the rate of page faults.
Proportional allocation – allocate frames according to the size of each process.
• Allocation is dynamic, adjusting as the degree of multiprogramming and process sizes change.
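A minimal sketch of the proportional scheme (process names and sizes are illustrative; floor division may leave a few frames over to distribute):

```python
def proportional_allocation(total_frames, sizes):
    """Give each process frames in proportion to its size (floor division)."""
    total = sum(sizes.values())
    return {p: total_frames * s // total for p, s in sizes.items()}
```

With 62 frames and two processes of 10 and 127 pages, the split is 4 and 57 frames respectively.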
39. Global vs. Local Allocation
Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another.
• Process execution time can then vary greatly.
• But greater throughput, so more common.
Local replacement – each process selects from only its own set of allocated frames.
• More consistent per-process performance.
• But possibly underutilized memory.
Page-Fault Handling
1. The operating system verifies the fault is valid (i.e., the address is legal but not currently in memory).
2. The process causing the fault is paused, and its context (state) is saved.
3. The OS locates the needed page on secondary storage (like a hard drive).
4. A free frame in physical memory is located; if none are available, a page replacement algorithm
selects a page to evict.
5. The needed page is read into the chosen frame, the page table is updated to reflect this change, and the
process's context is restored.
The process then retries the instruction that caused the fault, which should now succeed with the
required data in physical memory. This process is fundamental to virtual memory systems, allowing
them to efficiently use available physical memory and handle larger process memory demands.
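The frame-selection part of these steps can be sketched as follows (hypothetical helper names; the disk I/O and context save/restore are elided):

```python
def handle_page_fault(vpage, page_table, free_frames, select_victim):
    """Sketch of the fault service path: find a frame, map the faulting page."""
    if free_frames:                        # step 4: use a free frame if any
        frame = free_frames.pop()
    else:                                  # ...otherwise evict a victim page
        victim = select_victim(page_table)
        frame = page_table.pop(victim)
    page_table[vpage] = frame              # step 5: (read page in,) update table
    return frame
```

The `select_victim` hook is where a replacement policy such as FIFO or LRU would plug in.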
CH 11: Mass-Storage Systems
43. Describe the physical structure of HDDs and NVM devices.
Hard Disk Drives (HDDs) consist of spinning platters coated with magnetic material, with read/write
heads on an actuator arm to access data. They store data magnetically in small areas called sectors,
usually grouped into tracks in concentric circles. HDDs are mechanical, making them slower compared
to solid-state devices but offer larger storage capacities at a lower cost.
Non-Volatile Memory (NVM) devices, like SSDs, use flash memory for data storage. They contain no
moving parts and store data in an array of memory cells made from floating-gate transistors. NVM
devices offer faster access times, lower latency, and are more resistant to physical shock, but generally
have higher costs per gigabyte and a limited number of write cycles compared to HDDs.
44. Discuss the importance and methods of storage device management.
Storage device management is crucial for efficient data access, reliability, and system performance. Key
methods include formatting, which prepares the storage device for use by defining a file system;
partitioning, which divides a physical storage device into separate regions (partitions) for better
organization and management; RAID (Redundant Array of Independent Disks) configurations, which
enhance data reliability and read/write speeds by distributing data across multiple disks;
defragmentation, particularly for HDDs, which consolidates fragmented data to improve access times;
and regular backups, ensuring data integrity and recovery in case of hardware failure. Efficient storage
management balances performance, reliability, and capacity utilization.
45. Explain the different types of storage attachment, including host-attached and network-attached.
Host-attached storage (HAS) directly connects to a computer or server, typically via interfaces like
SATA, SCSI, or USB. This direct connection offers high data transfer speeds and low latency but limits
accessibility to the connected host. Examples include internal hard drives and external USB drives.
Network-attached storage (NAS) connects to a network, providing file-based data storage services to
other devices on the network. NAS systems are accessible over a local area network (LAN) through
standard network protocols like NFS or SMB/CIFS, offering the advantage of shared storage among
multiple users and systems. NAS provides easier data sharing and better scalability but may have higher
latency compared to HAS.
CH 12: I/O Systems
46. Discuss the differences and applications of host-attached, network-attached, and cloud storage.
Host-attached storage (HAS), like internal HDDs or SSDs, connects directly to a single computer,
offering fast, low-latency access, ideal for operating systems and applications requiring quick data
retrieval.
Network-attached storage (NAS) connects to a local network, providing shared storage for multiple
clients. It's used for centralized file sharing within an organization, offering ease of data access and
management.
Cloud storage, provided by services like AWS S3 or Google Cloud Storage, offers data storage over the
internet. It's highly scalable, accessible from anywhere, and ideal for backup, archiving, and
collaborative projects. However, it depends on network speed and has potential concerns with data
security and ongoing costs.
47. Describe the process of direct memory access (DMA) and its importance.
Direct Memory Access (DMA) is a technology that allows certain hardware subsystems to access main
system memory (RAM) independently of the central processing unit (CPU). During DMA, a DMA
controller takes over from the CPU to transfer data directly between I/O devices and RAM, bypassing
the CPU to avoid its overhead. This process is crucial for high-speed data transfer, as it frees up the CPU
to perform other tasks, significantly increasing the computer's overall performance. DMA is essential for
efficient movement of large data blocks, like disk-to-memory transfers, and is commonly used in
applications involving audio, video, and network data transfer. The importance of DMA lies in its ability
to speed up data transfers and reduce CPU workload, optimizing system performance.
48. Explain how device drivers interact with I/O hardware.
Device drivers are specialized software components that provide an interface between the operating
system and I/O hardware devices. They translate general I/O instructions from the operating system into
device-specific commands. When an application requests I/O operations, the operating system relays
these requests to the appropriate device driver. The driver, understanding the technical specifications of
its device, communicates directly with the device hardware, sending commands and interpreting its
responses. This abstraction layer provided by drivers allows for standardized communication with
diverse hardware, ensuring compatibility and simplifying application development.
CH18: Virtual Machines
49. Define virtual machines and their primary purpose.
Virtual machines are software-based simulations of physical computers that run on a host system. Their
primary purpose is to enable the execution of multiple operating systems and applications on a single
physical machine, providing isolation and flexibility.
50. Describe the structure and function of a virtual machine manager (VMM) or hypervisor.
A Virtual Machine Manager (VMM) or hypervisor is software that creates and manages virtual machines
(VMs) by abstracting the hardware of a physical computer. It allows multiple VMs to run on a single
physical machine, each with its own operating system and applications. The hypervisor operates at a
layer above the hardware and below the guest operating systems, handling resource allocation and
ensuring isolation between VMs. It provides a virtual operating platform to the guest systems and
manages the execution of their processes. This setup is crucial for efficient utilization of hardware
resources, improved scalability, isolation for security and stability, and flexibility in managing different
operating systems and applications on the same physical hardware.
51. Explain the concept of host, guest, and VMM in the context of virtual machines.
In the context of virtual machines (VMs), the "host" is the physical machine that runs the virtualization
software (VMM or hypervisor). The "Virtual Machine Manager" (VMM) or hypervisor is the software
layer that enables virtualization, creating and managing multiple isolated VMs on the host hardware.
The "guest" refers to each VM running on the host, containing its own virtual operating system and
applications. Guests operate as if they were separate physical machines, although they share and are
managed by the underlying host hardware and VMM. This architecture allows multiple, diverse
operating systems and applications to run concurrently on a single physical machine, optimizing
resource utilization and providing flexibility.
52. Discuss the different types of hypervisors (Type 0, Type 1, and Type 2) and their key features.
Hypervisors, crucial for virtualization, come in several types:
1. Type 1 Hypervisors, also known as "bare metal" hypervisors, run directly on the host's hardware to
control the hardware and manage guest operating systems. They're highly efficient and secure, used in
enterprise environments. Examples include VMware ESXi and Microsoft Hyper-V.
2. Type 2 Hypervisors, known as "hosted" hypervisors, run on a conventional operating system just like
other computer programs. They're easier to set up and are suitable for testing and development
environments. Examples include VMware Workstation and Oracle VirtualBox.
The term "Type 0 Hypervisor" is less commonly used and not a standard industry classification. It
sometimes refers to hypervisors with minimal functionality embedded directly into the hardware or
firmware.
53. Discuss the fundamental idea behind virtual machines and how they abstract hardware.
Virtual machines (VMs) are software emulations of physical computers. They provide an abstraction
layer over physical hardware, allowing multiple operating systems (OS) to run concurrently on a single
physical machine. Each VM operates with its virtualized hardware resources (CPU, memory, storage,
network interface), which are mapped to the real hardware by the hypervisor or Virtual Machine
Manager (VMM). This abstraction allows each VM to function as a standalone system, unaware of
others on the same host, thereby maximizing hardware utilization and enabling flexibility in software
deployment, testing, and isolation. VMs facilitate efficient resource management, improve scalability,
and enhance security through isolation.
54. Discuss the implementation of VMMs.
Paravirtualization – the guest operating system is modified to collaborate with the Virtual Machine Manager (VMM) for optimized performance.
Programming-environment virtualization – VMMs do not virtualize real hardware but instead create an optimized virtual system.
Emulators – allow applications written for one hardware environment to run on a very different hardware environment.
Application containment – not virtualization at all, but provides virtualization-like features by segregating applications from the operating system, making them more secure and manageable.
55. Discuss the benefits and features of virtual machines.
1. Isolation and Security: Virtual machines ensure the host's protection from potential vulnerabilities,
while also isolating each VM to prevent the spread of viruses.
2. Freeze, Suspend, and Snapshot: Virtual machines allow for the freezing, suspending, and
snapshotting of running instances, facilitating easy relocation, copying, and state restoration.
3. Cloning and Multiple OS Support: VMs support cloning, enabling the concurrent operation of
original and copied instances, and the ability to run diverse operating systems on a single machine.
4. Templating for Efficiency: Templating in virtual machines streamlines deployment by creating
standardized OS + application combinations, facilitating rapid instance creation for customers.
5. Live Migration: Virtual machines support live migration, enabling the seamless movement of
running instances between hosts for enhanced flexibility and resource optimization.