V20PCA102 - Operating System
2. What is the kernel of an operating system?
a) The outermost layer of the operating system
b) The part of the operating system that manages file systems
c) The graphical user interface
d) The part of the operating system that interacts directly with hardware
Answer: d) The part of the operating system that interacts directly with hardware
9. What is the primary purpose of a file system in an operating system?
a) To manage input and output operations
b) To provide a user interface for accessing files
c) To organize and store data on secondary storage devices
Answer: c) To organize and store data on secondary storage devices
3. Which of the following two operations are provided by the IPC facility?
a) write & delete message
b) delete & receive message
c) send & delete message
d) receive & send message
Answer: d) receive & send message
5. What is the primary purpose of process termination in an operating system?
a) To allocate only the memory resources to other processes
b) To manage the execution of processes on the CPU
c) To release all resources associated with a terminated process
d) To create a new process with its own execution environment
Answer: c) To release all resources associated with a terminated process
8. Which of the following is NOT a characteristic of a thread?
a) Threads within the same process share the same address space.
b) Threads have their own separate memory space.
c) Threads share resources such as code section and data section.
Answer: b) Threads have their own separate memory space.
2. What is a race condition in the context of concurrent programming?
a) A situation where two or more processes are waiting indefinitely for an event that can never occur
b) A situation where multiple processes access a shared resource in a predefined order
c) A situation where a process holds resources while waiting for another process to release resources
d) A situation where multiple processes try to access and modify shared data concurrently, leading to unpredictable behavior
Answer: d) A situation where multiple processes try to access and modify shared data concurrently, leading to unpredictable behavior
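To make the answer concrete, the short C sketch below (an illustrative addition, assuming POSIX threads, not part of the original question bank) shows two threads incrementing a shared counter without synchronization; lost updates make the final value unpredictable.

```c
/* Minimal sketch of a race condition: two threads increment a shared
 * counter without synchronization, so updates can be lost and the
 * final value is unpredictable. Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;               /* shared data */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but often less because of the race. */
    printf("counter = %ld\n", counter);
    return 0;
}
```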
3. Which synchronization construct provides a mechanism for processes to enter and exit critical sections safely?
a) Mutex
b) Semaphore
c) Barrier
d) Condition variable
Answer: a) Mutex
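As a companion sketch (again illustrative, using POSIX threads rather than anything from the original material), the same shared counter can be protected with a mutex so that only one thread is inside the critical section at a time.

```c
/* Sketch: protecting a critical section with a POSIX mutex.
 * Compile with: gcc mutex.c -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* exit critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 2000000 */
    return 0;
}
```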
6. What is a deadlock in the context of process synchronization?
a) A situation where a process holds resources while waiting for another process to release resources
b) A situation where two or more processes are waiting indefinitely for an event that can never occur
c) A situation where multiple processes try to access and modify shared data concurrently
Answer: b) A situation where two or more processes are waiting indefinitely for an event that can never occur
7. What is the primary goal of deadlock prevention?
a) To identify and recover from deadlocks
b) To avoid the detection of deadlocks
c) To ensure that deadlocks do not occur by preventing one of the conditions necessary for deadlock
Answer: c) To ensure that deadlocks do not occur by preventing one of the conditions necessary for deadlock
8. What is the aim of deadlock detection?
a) To identify and recover from deadlocks
b) To ensure that deadlocks do not occur
c) To avoid the detection of deadlocks
Answer: a) To identify and recover from deadlocks
1. What is the primary goal of memory management in operating systems?
a) To ensure the security of system memory
b) To allocate memory resources efficiently to processes
c) To prevent unauthorized access to memory
Answer: b) To allocate memory resources efficiently to processes
8. What happens when a page fault occurs in demand paging?
a) A process accesses a page that is not present in memory, leading to a page replacement.
b) A process accesses a page that is read-only, leading to a segmentation fault.
c) A process accesses a page that has been corrupted, leading to a system crash.
Answer: a) A process accesses a page that is not present in memory, leading to a page replacement.
10. What is thrashing in the context of virtual memory management?
a) The process of swapping out pages that are not frequently accessed
b) The process of rapidly swapping pages in and out of memory, leading to excessive disk I/O
c) The process of optimizing page replacement to minimize memory overhead
d) The process of allocating memory resources to processes efficiently
Answer: b) The process of rapidly swapping pages in and out of memory, leading to excessive disk I/O
2. List and briefly explain five common services provided by operating systems.
Answer: Common services include process management, memory management, file system management, device management, and user interface services. Process management involves creating, scheduling, and terminating processes, while memory management deals with allocation and deallocation of memory resources.
3. Differentiate between real-time and time-sharing operating systems.
Answer: Real-time operating systems (RTOS) prioritize meeting strict timing requirements for critical tasks, ensuring deterministic response times crucial for applications such as industrial control or medical equipment. They employ deterministic scheduling algorithms and guarantee timely task completion, even under varying workloads. In contrast, time-sharing operating systems facilitate resource sharing among multiple users or processes concurrently, aiming to maximize CPU utilization and responsiveness in general-purpose computing environments.
4. Describe the boot process of an operating system.
Answer: The booting process involves loading the operating system kernel into memory, initializing hardware devices, and launching system services and user applications. It typically includes power-on self-test (POST), bootloader execution, kernel loading, and system initialization.
5. What is a file system, and what are its key components?
Answer: A file system is a method used by operating systems to organize and store data on storage devices. Its key components include files (data units), directories (containers for files and directories), file metadata (information about files), file system structure (organization of files and directories), allocation tables (to track file locations), and file system drivers (software for interaction with storage devices). These components work together to manage data storage, retrieval, and organization efficiently.
Sl No Question Answer
1. Explain the difference between preemptive and non-preemptive scheduling algorithms.
Answer: Preemptive scheduling allows the operating system to interrupt a running process to allocate the CPU to another process of higher priority. Example: Round Robin. Non-preemptive scheduling, on the other hand, lets a process run until it voluntarily gives up the CPU. Example: First-Come, First-Served (FCFS).
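As an illustrative sketch (the burst times are hypothetical values, not from the original answer), the following C program computes the average waiting time for non-preemptive FCFS scheduling: each process simply waits for all earlier bursts to finish.

```c
/* Sketch: average waiting time under non-preemptive FCFS scheduling.
 * Burst times are hypothetical example values. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* CPU bursts of three processes */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;            /* process i waits for all earlier bursts */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```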
2. What are system calls, and why are they important? Provide two examples.
Answer: System calls are interfaces provided by the operating system that allow user-level processes to request services from the kernel. They provide a way for applications to interact with the underlying operating system and access privileged resources that would otherwise be inaccessible from user space. System calls are crucial for performing various operations such as I/O operations, process management, memory management, and file manipulation. Examples include read() for reading data from a file and write() for writing data to a file.
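A minimal sketch of the two example system calls on a POSIX system (the file names are placeholders chosen for the example): the program copies one file to another by repeatedly calling read() and write().

```c
/* Sketch: using the read() and write() system calls to copy a file.
 * File names are placeholders. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    int in  = open("input.txt", O_RDONLY);
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    /* Each read()/write() call traps into the kernel to perform the I/O. */
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}
```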
3. Discuss the advantages and disadvantages of using shared memory for interprocess communication.
Answer: Shared memory offers advantages such as efficient data transfer due to direct memory access, making it suitable for high-performance applications. It also allows for flexible data exchange between processes, accommodating complex data structures and large buffers. However, shared memory communication requires careful synchronization to maintain data consistency, introducing overhead and complexity. Security risks are inherent, as any process with access to the shared memory region can potentially manipulate or corrupt data, necessitating stringent access control measures.
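The sketch below shows one common way to set up a shared-memory region with the POSIX shm_open()/mmap() interface; it is illustrative only, the region name "/demo_shm" is arbitrary, and, as the answer notes, real code would add synchronization before cooperating processes touch the region.

```c
/* Sketch: creating a POSIX shared-memory region with shm_open() and mmap().
 * The name "/demo_shm" is an arbitrary example. Link with -lrt on older glibc. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                       /* set the region size */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any cooperating process that maps the same name sees this data;
       real code must add synchronization (e.g. a semaphore). */
    strcpy(region, "hello from shared memory");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                          /* remove the region */
    return 0;
}
```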
4. Define a thread and discuss the benefits of using threads in multitasking environments.
Answer: A thread is the smallest unit of execution within a process. It represents a single sequence of instructions that can be scheduled for execution by the operating system's scheduler. Threads within the same process share the same memory space, allowing them to access the same data and resources. Threads offer improved responsiveness, efficient resource utilization, and simplified communication compared to processes.
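A minimal POSIX-threads sketch (illustrative, not taken from the original answer) showing that a thread shares the process's address space: a global variable written by the thread is directly visible in main().

```c
/* Sketch: a thread created with pthread_create() shares the process's
 * address space, so it can update a global variable that main() then reads.
 * Compile with: gcc thread.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_value = 0;                  /* visible to every thread */

void *set_value(void *arg) {
    shared_value = 42;                 /* same memory as in main() */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, set_value, NULL);
    pthread_join(tid, NULL);           /* wait for the thread to finish */
    printf("shared_value = %d\n", shared_value);
    return 0;
}
```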
5. Describe the steps involved in the termination of a process in a Unix-like operating system.
Answer: Process termination involves releasing resources allocated to the process, closing open files and sockets, and notifying parent and child processes about the termination. The exit() system call is used to terminate the process and return an exit status to the parent process.
Sl No Question Answer
1. What is process synchronization, and why is it necessary in operating systems?
Answer: Process synchronization is the coordination of activities among multiple processes to ensure orderly execution and prevent conflicts in accessing shared resources. It is necessary in operating systems to maintain data integrity by regulating access to shared resources like memory, files, or devices. Without synchronization, concurrent processes may interfere with each other, leading to data corruption, race conditions, or deadlock.
2. Define the critical section problem and discuss its significance in concurrent programming.
Answer: The critical section problem refers to the situation where multiple processes or threads need to access shared resources, but only one can execute the critical section at a time to prevent interference and maintain data consistency. It is significant in ensuring mutual exclusion and preventing race conditions.
3. Explain the difference between binary and counting semaphores.
Answer: A binary semaphore can take only two integer values, typically 0 and 1, and is used for binary synchronization problems like mutual exclusion. A counting semaphore can take any non-negative integer value and is used for resource counting and synchronization.
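An illustrative POSIX sketch (not part of the original answer): a semaphore initialized to 1 behaves as a binary semaphore enforcing mutual exclusion; initializing it to N instead would make it a counting semaphore guarding N identical resources.

```c
/* Sketch: a POSIX semaphore initialized to 1 acts as a binary semaphore.
 * Compile with: gcc sem.c -pthread */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t sem;
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);     /* P(): decrement, block if the value is 0 */
        shared++;           /* critical section */
        sem_post(&sem);     /* V(): increment, wake a waiter */
    }
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);   /* 0 = shared between threads, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* always 200000 */
    sem_destroy(&sem);
    return 0;
}
```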
4. List and explain the four necessary conditions for deadlock occurrence.
Answer: The four necessary conditions for deadlock are mutual exclusion (resources cannot be shared), hold and wait (processes hold resources while waiting for others), no preemption (resources cannot be forcibly released), and circular wait (a circular chain of processes waits for resources held by others).
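One common prevention technique is to break the circular-wait condition by imposing a global lock order. The sketch below (illustrative, using POSIX mutexes; not from the original material) has both threads acquire the two locks in the same order, so a cycle of waiting threads cannot form.

```c
/* Sketch: preventing deadlock by breaking circular wait.
 * Every thread acquires the locks in the same global order.
 * Compile with: gcc order.c -pthread */
#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads follow the same order: lock_a first, then lock_b. */
void *thread_one(void *arg) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread_two(void *arg) {
    pthread_mutex_lock(&lock_a);   /* same order, never lock_b first */
    pthread_mutex_lock(&lock_b);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```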
5. Describe the resource allocation graph technique for deadlock detection and recovery in operating systems.
Answer: The resource allocation graph (RAG) is an analysis technique for deadlock detection in which the system periodically examines the resource allocation graph to detect circular wait conditions. Upon detection, the system can recover from deadlock by preempting resources from one or more processes or by terminating processes involved in the deadlock.
Sl No Question Answer
1. Describe the term paging in memory management.
Answer: Paging is a memory management scheme that allows a computer to store and retrieve data from secondary storage (like a hard disk) in fixed-size blocks called pages. It eliminates the need for contiguous allocation of physical memory, enabling more efficient memory usage and facilitating virtual memory systems.
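A small sketch of the address arithmetic behind paging, assuming a hypothetical 4 KB page size: a logical address is split into a page number (which page table entry to consult) and an offset within that page.

```c
/* Sketch: splitting a logical address into page number and offset,
 * assuming a hypothetical 4 KB page size. */
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void) {
    unsigned int logical = 20000;                 /* example logical address */
    unsigned int page    = logical / PAGE_SIZE;   /* which page */
    unsigned int offset  = logical % PAGE_SIZE;   /* position inside the page */
    printf("address %u -> page %u, offset %u\n", logical, page, offset);
    return 0;
}
```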
2. What is the role of the TLB (Translation Lookaside Buffer) in virtual memory management?
Answer: The TLB is a hardware cache that stores recently accessed page table entries, providing faster access to frequently used pages. It helps in reducing the overhead of address translation in virtual memory systems by caching frequently accessed mappings between virtual and physical addresses.
3. Describe how demand paging helps in efficient memory utilization.
Answer: Demand paging allows the operating system to load only the required pages into memory, conserving memory resources by avoiding the unnecessary loading of pages that are not immediately needed. It helps in efficient memory utilization by loading pages on demand, reducing the initial loading time of programs and improving overall system performance.
4. Compare and contrast FIFO and LRU page replacement algorithms.
Answer: FIFO (First-In-First-Out) replaces the oldest page in memory when a page fault occurs, while LRU (Least Recently Used) replaces the page that has not been accessed for the longest time. FIFO is simpler to implement but may suffer from Belady's anomaly, while LRU provides better performance by retaining more frequently used pages in memory.
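For illustration, the sketch below counts page faults for FIFO replacement over a sample reference string with three frames; the reference string is invented for the example, and an LRU version would instead evict the frame that was least recently used rather than the oldest one.

```c
/* Sketch: counting page faults with FIFO replacement for a sample
 * reference string and 3 frames. The reference string is illustrative. */
#include <stdio.h>

int main(void) {
    int refs[]    = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int n         = sizeof(refs) / sizeof(refs[0]);
    int frames[3] = {-1, -1, -1};     /* -1 means the frame is empty */
    int next = 0, faults = 0;         /* next: index of the oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                    /* page fault: evict the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}
```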
5. How does thrashing impact system performance?
Answer: Thrashing occurs when a computer's operating system spends a significant amount of time swapping pages between memory and secondary storage (like a disk) due to excessive memory access requests. It leads to a decrease in system performance, as the majority of CPU time is spent on handling page faults rather than executing useful tasks.
Sl No Question Answer
1. How does linked allocation differ from contiguous allocation?
Answer: Linked allocation uses pointers to link together blocks of storage allocated to a file, facilitating dynamic storage allocation and minimizing fragmentation, whereas contiguous allocation allocates contiguous blocks of storage space, enhancing sequential access performance but potentially leading to fragmentation issues.
2. How is directory implementation done?
Answer: Directory implementation involves organizing and managing directories within a file system to store metadata about files and subdirectories. It includes data structures and algorithms for efficient directory lookup, creation, deletion, and navigation, such as hierarchical directory trees or hash tables.
3. List the different disk scheduling algorithms.
Answer: Disk scheduling algorithms control the order in which disk I/O requests are serviced to optimize disk performance and reduce access latency. Examples of disk scheduling algorithms include FCFS (First-Come, First-Served), SSTF (Shortest Seek Time First), and SCAN (Elevator) algorithms, each designed to minimize disk head movement and maximize throughput.
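A small sketch that totals the head movement for an FCFS schedule; the request queue and starting cylinder are illustrative example data, and the same loop could be compared against an SSTF or SCAN ordering of the queue.

```c
/* Sketch: total head movement for an FCFS disk schedule, using an
 * illustrative request queue and starting head position. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 53, movement = 0;              /* example starting cylinder */

    for (int i = 0; i < n; i++) {
        movement += abs(requests[i] - head);  /* distance for this seek */
        head = requests[i];
    }
    printf("total head movement (FCFS) = %d cylinders\n", movement);
    return 0;
}
```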
4. What role does access control play in file sharing protection?
Answer: Access control regulates user access to shared files by enforcing permissions and restrictions, ensuring that only authorized users or groups can read, write, or modify specific files, thus enhancing data security and integrity.
5. What is the purpose of free space management in a file system?
Answer: Free space management tracks and manages available storage space within a file system, optimizing storage utilization, reducing fragmentation, and ensuring efficient allocation of storage resources. When data is written to the storage system, free space management allocates contiguous blocks of free space to accommodate the data.
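One common free-space technique is a bit map with one bit per disk block. The sketch below is illustrative (the block count and the helper names are invented for the example): allocation clears a bit, and releasing a block sets it again.

```c
/* Sketch: free-space management with a bit map, one bit per disk block
 * (1 = free, 0 = allocated). Sizes are illustrative. */
#include <stdio.h>
#include <string.h>

#define NBLOCKS 64
unsigned char bitmap[NBLOCKS / 8];     /* 8 blocks tracked per byte */

int  is_free(int b)  { return  bitmap[b / 8] & (1 << (b % 8)); }
void set_free(int b) { bitmap[b / 8] |=  (1 << (b % 8)); }
void set_used(int b) { bitmap[b / 8] &= ~(1 << (b % 8)); }

/* Find and allocate the first free block, or return -1 if none. */
int alloc_block(void) {
    for (int b = 0; b < NBLOCKS; b++)
        if (is_free(b)) { set_used(b); return b; }
    return -1;
}

int main(void) {
    memset(bitmap, 0xFF, sizeof(bitmap));   /* initially every block is free */
    int b1 = alloc_block();
    int b2 = alloc_block();
    printf("allocated blocks %d and %d\n", b1, b2);
    set_free(b1);                           /* release a block back to the map */
    return 0;
}
```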
Sl No Question Answer
1. Discuss the different types of operating systems and their characteristics. How does system design vary based on the type of operating system?
Answer: Operating systems can be categorized into several types, including batch processing systems, time-sharing systems, distributed systems, real-time systems, and embedded systems.
Batch processing systems execute a series of jobs without user interaction, optimizing resource utilization.
Time-sharing systems allow multiple users to interact with the system simultaneously, sharing resources such as CPU and memory.
Distributed systems coordinate multiple interconnected computers to work together as a single system, enhancing scalability and reliability.
Real-time systems prioritize timely execution of tasks, critical for applications with strict timing requirements.
Embedded systems are specialized OSs designed for dedicated functions in devices like smartphones, automobiles, and appliances.
System design varies significantly based on the type of operating system, as each type has unique requirements and priorities. For example, real-time systems prioritize responsiveness and predictability, while distributed systems focus on communication and fault tolerance. Therefore, system designers must tailor the design and implementation of an OS to meet the specific needs of the target system type.
There are two options for the parent process after creating the child:
1. Wait for the child process to terminate before proceeding. The parent makes a wait() system call, for either a specific child or for any child, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
2. Run concurrently with the child, continuing to process without waiting. This is the operation seen when a UNIX shell runs a process as a background task. It is also possible for the parent to run for a while and then wait for the child later, which might occur in a parallel processing operation. (For example, the parent may fork off a number of children without waiting for any of them, then do a little work of its own, and then wait for the children.)
There are two possibilities for the address space of the child relative to the parent:
1. The child may be an exact duplicate of the parent, sharing the same program and data segments in memory. Each will have its own PCB, including program counter, registers, and PID. This is the behavior of the fork() system call in UNIX.
2. The child process may have a new program loaded into its address space, with all new code and data segments. This is the behavior of the spawn system calls in Windows. UNIX systems implement this as a second step, using the exec() system call.
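A minimal sketch of the first option described above (illustrative, assuming a Unix-like system): the parent fork()s a child, the child runs and exit()s with a status, and the parent blocks in wait() until the child terminates.

```c
/* Sketch: a parent creates a child with fork(), the child runs and exits,
 * and the parent blocks in wait() until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* duplicate the calling process */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                    /* child: its own copy of the address space */
        printf("child pid %d running\n", getpid());
        exit(7);                       /* exit status returned to the parent */
    }

    int status;
    wait(&status);                     /* parent blocks until the child exits */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```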
Sl No Question Answer
1. What are semaphores, and how do they facilitate process synchronization in operating systems? How can semaphores be implemented?
Answer: Semaphores are a synchronization mechanism used in operating systems to control access to shared resources and coordinate the execution of concurrent processes. Semaphores can be used to prevent race conditions, avoid deadlock situations, and ensure mutual exclusion between processes.
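One way a counting semaphore can be implemented in user space is with a mutex and a condition variable. The sketch below is illustrative rather than the actual kernel implementation; the my_sem_* names are invented for the example.

```c
/* Sketch: a counting semaphore built from a mutex and a condition variable.
 * Illustrative user-space implementation. Compile with: gcc mysem.c -pthread */
#include <pthread.h>

typedef struct {
    int count;                      /* number of available "permits" */
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;        /* signalled when count becomes > 0 */
} my_sem_t;

void my_sem_init(my_sem_t *s, int value) {
    s->count = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void my_sem_wait(my_sem_t *s) {     /* P(): block while no permit is available */
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void my_sem_post(my_sem_t *s) {     /* V(): release a permit and wake a waiter */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

int main(void) {
    my_sem_t s;
    my_sem_init(&s, 2);             /* counting semaphore with 2 permits */
    my_sem_wait(&s);
    my_sem_wait(&s);
    my_sem_post(&s);
    return 0;
}
```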
2. Explain deadlock recovery strategies employed in operating systems to restore system functionality after deadlock has occurred.
Answer: Deadlock recovery strategies are essential in operating systems to restore system functionality when deadlock occurs, which is a situation where two or more processes are unable to proceed because each is waiting for the other to release a resource. Deadlocks can lead to system instability and performance degradation if left unresolved.
1. Process Termination:
One straightforward approach to deadlock recovery is to terminate
one or more processes involved in the deadlock. This strategy frees
up resources held by the terminated processes, allowing other
processes to proceed. However, process termination may lead to
data loss or disrupt the functionality of the terminated processes. To
minimize the impact, the operating system may prioritize the
selection of processes for termination based on factors such as
priority, resource usage, or execution progress.
2. Resource Preemption:
Resource preemption involves forcibly reclaiming resources from
processes to break deadlocks. The operating system identifies a
process holding a resource that another process is waiting for and
preempts the resource from the holding process. The preempted
process may be temporarily suspended or have its execution
priority reduced to allow other processes to progress. Once the
deadlock is resolved, the preempted process can resume execution
and attempt to acquire the resource again. Resource preemption
can be complex and may require careful handling to avoid data
corruption or system instability.
3. Rollback:
Rollback is a technique used in distributed systems to recover from
deadlocks by undoing the effects of conflicting operations. When a
deadlock occurs, the system identifies the transactions or
operations contributing to the deadlock and rolls them back to a
consistent state before the deadlock occurred. Rollback involves
undoing changes made by transactions, releasing acquired
resources, and resetting the system to a known safe state. Rollback
recovery requires maintaining transaction logs or checkpoints to
track the state of transactions and support rollback operations.
While effective, rollback recovery may incur performance overhead
and delay system responsiveness.
Sl No Question Answer
1. Discuss internal and external fragmentation in memory management. Explain how fragmentation affects system performance and efficiency.
Answer: Fragmentation in memory management refers to the wastage of memory space due to inefficient allocation and deallocation of memory segments. There are two main types of fragmentation: internal fragmentation and external fragmentation.
Internal fragmentation occurs when a process is allocated more memory than it actually needs. As a result, the allocated memory contains unused space, which is wasted. Internal fragmentation typically occurs in fixed-size allocation schemes, where memory is allocated in fixed-size blocks or pages. The unused space within a block or page is referred to as internal fragmentation.
External fragmentation occurs when there is enough total memory space to satisfy a memory request, but the available space is fragmented into small, non-contiguous blocks. Even though the total free memory may be sufficient, individual memory requests cannot be satisfied because they cannot find a contiguous block of memory of the required size. External fragmentation can arise in variable-size allocation schemes, such as dynamic memory allocation, where memory segments are allocated and deallocated dynamically.
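A small sketch of how internal fragmentation arises with fixed-size blocks: a request is rounded up to a whole number of blocks, and the rounded-up space the process does not use is wasted. The block and request sizes are illustrative values.

```c
/* Sketch: internal fragmentation when memory is handed out in fixed-size
 * blocks; the request size and block size are illustrative. */
#include <stdio.h>

#define BLOCK_SIZE 4096  /* fixed allocation unit, e.g. one page */

int main(void) {
    int request = 10000;                                     /* bytes requested */
    int blocks  = (request + BLOCK_SIZE - 1) / BLOCK_SIZE;   /* round up */
    int wasted  = blocks * BLOCK_SIZE - request;             /* unused space inside blocks */
    printf("%d blocks allocated, %d bytes of internal fragmentation\n",
           blocks, wasted);
    return 0;
}
```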
2. Discuss the concept of virtual memory management. Explain the key components and techniques used for the implementation of virtual memory.
Answer: Virtual memory management is a key feature of modern operating systems that enables efficient utilization of physical memory resources and provides memory protection. It allows programs to use more memory than is physically available by transparently swapping data between main memory (RAM) and secondary storage (such as a hard disk or SSD).
Sl No Question Answer
1. Discuss any two file access methods used in operating systems along with their limitations.
Answer: File access methods in operating systems dictate how data is read from and written to files stored on secondary storage devices such as hard disks or SSDs.
Sequential Access:
Method: In sequential access, data is read from or written to a file sequentially, from the beginning to the end. Each subsequent read or write operation advances the file pointer to the next record or byte in the file.
Limitations:
Not suitable for random access or accessing data out of sequence.
Inefficient for large files or when accessing specific records in the middle of the file.
Requires scanning through all preceding records to reach a specific record.