OS Questions
2.1: What are the five major activities of an operating system with regard to file management?
Answer: The five major activities of an operating system with regard to file management are:
Creating and Deleting Files: Data cannot be stored efficiently unless it is arranged in some form of file structure. Conversely, permanent storage would quickly fill up if files were never deleted and the space they occupy reallocated to new files.
Creating and Deleting Directories: To store data in files, the files themselves need to be arranged in directories or folders that allow efficient storage and retrieval. Much like file deletion, unnecessary directories or folders need to be removed to keep the system uncluttered.
File Manipulation Instructions: Since operating systems allow application software to perform file manipulation using symbolic
instructions, the operating system itself needs to have a machine-level instruction set in order to interface with the hardware
directly. The application's symbolic instructions need to be translated into the machine-level instructions either by an interpreter or
by compiling the application code. The operating system contains provisions to manage this machine-level file manipulation.
Mapping to Permanent Storage: Operating systems need to map files and folders to their physical locations on permanent storage in order to store and retrieve them. This mapping is recorded in some form of disk directory, which varies according to the file system or systems the operating system uses. The operating system also includes a mechanism for locating the separate segments into which it has divided a file.
Backing up Files: Files represent a considerable investment of time, intellectual effort, and often money, so their loss can have a severe impact. A computer's permanent storage devices generally contain mechanical components that can fail, and the storage media themselves may degrade. A function of operating systems is to mitigate the risk of data loss by backing files up to additional secure and stable media in a redundant system.
2.2: What are the three major activities of an operating system with regard to memory management?
Answer: The three major activities of an operating system with regard to memory management are:
1. Keeping track of which parts of memory are currently being used and by whom.
2. Deciding which process is to be loaded into memory when space becomes available.
3. Allocating and deallocating memory space as needed.
2.3: What is a just-in-time (JIT) compiler, and why does a Java program run with a JIT typically run faster than one run with a traditional interpreter?
Answer: Java is an interpreted language: the JVM interprets the bytecode instructions one at a time. Most interpreted environments are slower than running native binaries, because the interpretation process requires converting each instruction into native machine code every time it executes.
A just-in-time (JIT) compiler compiles the bytecode for a method into native machine code the first time the method is encountered. This means that the Java program essentially runs as a native application (the JIT's conversion process takes time as well, but far less than repeated bytecode interpretation). Furthermore, the JIT caches compiled code so that it can be reused the next time the method is encountered. A Java program run by a JIT rather than a traditional interpreter therefore typically runs much faster.
2.4: The services and functions provided by an operating system can be divided into two main categories.
Briefly describe the two categories and discuss how they differ.
Answer: One class of services provided by an operating system is to enforce protection between different processes running
concurrently in the system. Processes are allowed to access only those memory locations that are associated with their address
spaces. Also, processes are not allowed to corrupt files associated with other users. A process is also not allowed to access devices
directly without operating system intervention.
The second class of services provided by an operating system is to provide new functionality that is not supported directly by the
underlying hardware. Virtual memory and file systems are two such examples of new services provided by an operating system.
2.5: Why is the separation of mechanism and policy desirable?
Answer: The separation of mechanism and policy is important for system flexibility. If the interface between mechanism and policy is well defined, a change of policy may require changing only a few parameters. If, on the other hand, the interface between the two is vague or poorly defined, a policy change might require much deeper changes to the system.
2.6: Would it be possible for the user to develop a new command interpreter using the system-call interface
provided by the operating system?
Answer: Yes, a user should be able to develop a new command interpreter using the system-call interface provided by the operating system.
The command interpreter allows a user to create and manage processes and to determine how they communicate (for example, through pipes and files). Since all of this functionality can be accessed by a user-level program via system calls, it should be possible for the user to develop a new command-line interpreter.
2.7: What is the purpose of the command interpreter? Why is it usually separate from the kernel?
Answer: It reads commands from the user or from a file of commands and executes them, usually by turning them into one or
more system calls. It is usually not part of the kernel since the command interpreter is subject to changes.
2.8: What is the main advantage for an operating-system designer of using virtual-machine architecture? What
is the main advantage for a user?
Answer: A virtual machine is a layer of abstraction placed over a program that provides it with an interface simplified for interacting with many different computer machines and the operating systems they run on.
Advantage to the designer (cross-platform support): a program can be written and compiled once and then run on a much wider variety of operating systems without being modified each time. This also makes virtual machines handy for cell phones, which often ship with one installed.
Advantage to the user: with a VM, we can run multiple operating systems while only needing to boot into one.
2.9: It is sometimes difficult to achieve a layered approach if two components of the operating system are
dependent on each other. Identify a scenario in which it is unclear how to layer two system components that
require tight coupling of their functionalities.
Answer: The virtual memory subsystem and the storage subsystem are typically tightly coupled and require careful design in a
layered system due to the following interactions. Many systems allow files to be mapped into the virtual memory space of an
executing process. On the other hand, the virtual memory subsystem typically uses the storage system to provide the backing store
for pages that do not currently reside in memory. Also, updates to the file system are sometimes buffered in physical memory
before they are flushed to disk, thereby requiring careful coordination of the usage of memory between the virtual memory subsystem
and the file system.
2.10: What is the main advantage of the layered approach to system design? What are the disadvantages of
using the layered approach?
Answer:
Advantage: In Layered architecture we separate the user interface from the business logic, and the business logic from the data
access logic. Separation of concerns among these logical layers and components is easily achieved with the help of layered
architecture. It increases flexibility, maintainability, and scalability.
Disadvantage:
There might be a negative impact on the performance as we have the extra overhead of passing through layers instead of calling a
component directly.
Development of user-intensive applications can sometimes take longer if the layering prevents the use of user-interface components that directly interact with the database.
2.11: What is the relationship between a guest operating system and a host operating system in a system like
VMware? What factors need to be considered in choosing the host operating system?
Answer: A host operating system is the operating system that is in direct communication with the hardware. It has direct hardware access, runs in kernel mode, and controls all of the devices on the physical machine. The guest operating system runs on top of a virtualization layer, and all of the physical devices it sees are virtualized.
A host operating system should be as modular and thin as possible, so that the virtualized hardware is as close to the physical hardware as possible and so that dependencies that exist in the host operating system don't restrict operation in the guest operating system.
2.12: Describe three general methods for passing parameters to the operating system.
Answer: Three general methods for passing parameters to the operating system:
a. Pass parameters in registers
b. Pass the starting addresses of blocks (tables) of parameters in registers
c. Parameters can be placed, or pushed, onto the stack by the program, and popped off the stack by the operating system
2.13: What is the main advantage of the microkemel approach to system design? How do user programs and
system services interact in microkernel architecture? What are the disadvantages of using the microkernel
approach?
(b) It is more secure as more operations are done in user mode than in kernel mode, and
(c) A simpler kernel design and functionality typically results in a more reliable operating system.
User programs and system services interact in microkernel architecture by using interprocess communication mechanisms such as
messaging. These messages are conveyed by the operating system.
The disadvantages of the microkernel architecture are the overheads associated with interprocess communication and the frequent
use of the operating system’s messaging functions in order to enable the user process and the system service to interact with each
other.
2.14: What system calls have to be executed by a command interpreter or shell in order to start a new process?
Answer: In UNIX systems, a fork system call followed by an exec system call need to be performed to start a new process.
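The fork-then-exec sequence can be sketched in Python, whose os module exposes these UNIX calls directly. This is a minimal sketch of what a shell does, assuming a POSIX system; run_command is an invented helper name, not part of any real shell:

```python
import os

def run_command(argv):
    """Sketch of how a shell starts a program: fork, exec in the child,
    wait in the parent (invented helper for illustration)."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)  # exec failed; exit without running parent code
    # Parent: wait for the child to finish and return its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    print(run_command(["true"]))  # → 0
```

Note that exec replaces the child's entire address space, which is why the shell must fork first: without the fork, the shell itself would be replaced by the program it launches.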
2.15: What are the two models of interprocess communication? What are the strengths and weaknesses of the
two approaches?
Answer: The two models of interprocess communication are message passing and shared memory.
Message-passing strengths and weaknesses: Messages can be exchanged between processes either directly or indirectly through a common mailbox. Message passing is useful for exchanging smaller amounts of data and is easier to implement for inter-computer communication. However, it is slower than the shared-memory model.
Shared-memory strengths and weaknesses: It allows maximum speed and convenience of communication. However, problems exist in the areas of protection and synchronization between the communicating processes.
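The trade-off between the two models can be illustrated with Python's multiprocessing module, which supports both; the helper names below are invented for the example. In the shared-memory version, synchronization (the lock) is the programmer's problem; in the message-passing version, the copy between address spaces is handled for us:

```python
import multiprocessing as mp

def _increment(counter):
    # Shared memory: both processes touch the same memory word, so
    # access must be synchronized explicitly by the programmer.
    with counter.get_lock():
        counter.value += 1

def shared_memory_demo():
    counter = mp.Value("i", 0)  # an int living in shared memory
    p = mp.Process(target=_increment, args=(counter,))
    p.start()
    p.join()
    return counter.value

def _send(q):
    # Message passing: the message is copied between address spaces;
    # there is no shared state to protect.
    q.put("hello")

def message_passing_demo():
    q = mp.Queue()
    p = mp.Process(target=_send, args=(q,))
    p.start()
    msg = q.get()  # blocks until the child's message arrives
    p.join()
    return msg
```

The extra copy in the message-passing version is exactly the overhead the answer above refers to; the shared-memory version avoids it but needs the explicit lock.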
2.16: The experimental Synthesis operating system has an assembler incorporated in the kernel. To optimize
system-call performance, the kernel assembles routines within kernel space to minimize the path that the
system call must take through the kernel. This approach is the antithesis of the layered approach, in which the
path through the kernel is extended to make building the operating system easier. Discuss the pros and cons of
the Synthesis approach to kernel design and system-performance optimization.
Answer: Synthesis is impressive due to the performance it achieves through on-the-fly compilation. Unfortunately, it is difficult to debug problems within the kernel due to the fluidity of the code. Also, such compilation is system specific, making Synthesis difficult to port (a new compiler must be written for each architecture).
2.17: In what ways is the modular kernel approach similar to the layered approach? In what ways does it differ
from the layered approach?
Answer: The modular kernel approach requires subsystems to interact with each other through carefully constructed interfaces
that are typically narrow (in terms of the functionality that is exposed to external modules). The layered kernel approach is similar in
that respect.
However, the layered kernel imposes a strict ordering of subsystems such that subsystems at the lower layers are not allowed to
invoke operations corresponding to the upper-layer subsystems. There are no such restrictions in the modular kernel approach,
wherein modules are free to invoke each other without any constraints.
2.18: How could a system be designed to allow a choice of operating systems from which to boot? What would
the bootstrap program need to do?
Answer: Consider a system that would like to run both Windows XP and three different distributions of Linux (e.g., Red Hat,
Debian, and Mandrake). Each operating system will be stored on disk. During system boot-up, a special program (which we will call
the boot manager) will determine which operating system to boot into. This means that rather than initially booting to an operating
system, the boot manager will first run during system startup. It is this boot manager that is responsible for determining which
system to boot into. Typically, boot managers must be stored at certain locations of the hard disk to be recognized during system
startup. Boot managers often provide the user with a selection of systems to boot into; boot managers are also typically designed to
boot into a default operating system if no choice is selected by the user.
2.19: What are the advantages and disadvantages of using the same system call interface for manipulating
both files and devices?
Answer: Each device can be accessed as though it were a file in the file system. Since most of the kernel deals with devices
through this file interface, it is relatively easy to add a new device driver by implementing the hardware-specific code to
support this abstract file interface. Therefore, this benefits the development of both user program code, which can be written to
access devices and files in the same manner, and device driver code, which can be written to support a well-defined API.
The disadvantage with using the same interface is that it might be difficult to capture the functionality of certain devices within
the context of the file access API, thereby resulting in either a loss of functionality or a loss of performance. Some of this could be
overcome by the use of the ioctl operation that provides a general-purpose interface for processes to invoke operations on
devices.
2.20: Describe how you could obtain a statistical profile of the amount of time spent by a program executing
different sections of its code. Discuss the importance of obtaining such a statistical profile.
Answer: One could issue periodic timer interrupts and monitor what instructions or what sections of code are currently
executing when the interrupts are delivered. A statistical profile of which pieces of code were active should be consistent with the
time spent by the program in different sections of its code. Once such a statistical profile has been obtained, the programmer
could optimize the sections of code that consume the most CPU resources.
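A minimal sampling profiler along these lines can be sketched with POSIX interval timers; here Python's signal module stands in for the kernel's timer-interrupt mechanism, and the names busy and profile_demo are invented for the example (this assumes a POSIX system that delivers SIGPROF):

```python
import signal
import time
from collections import Counter

samples = Counter()

def _sample(signum, frame):
    # On each timer interrupt, record which function was executing.
    samples[frame.f_code.co_name] += 1

def busy(deadline):
    # A CPU-bound section of code for the profiler to catch in the act.
    x = 0
    while time.process_time() < deadline:
        x += 1
    return x

def profile_demo():
    signal.signal(signal.SIGPROF, _sample)
    # Fire every 10 ms of consumed CPU time (not wall-clock time).
    signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)
    busy(time.process_time() + 0.3)  # burn roughly 300 ms of CPU
    signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop sampling
    return dict(samples)
```

Because the timer fires on CPU time, the histogram of sampled function names approximates where the program spends its cycles, which is the statistical profile described above.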
2.21: Why do some systems store the operating system in firmware, while others store it on disk?
Answer: For certain devices, such as handheld PDAs and cellular telephones, a disk with a file system may not be available
for the device. In this situation, the operating system must be stored in firmware.
----------------------------------------------------------------------------------------
3.1: What are the benefits and the disadvantages of each of the following? Consider both the system level and
the programmer level.
Answer:
a. Synchronous and asynchronous communication: A benefit of synchronous communication is that it allows a rendezvous
between the sender and receiver. A disadvantage of a blocking send is that a rendezvous may not be required and the message
could be delivered asynchronously. As a result, message-passing systems often provide both forms of synchronization.
b. Automatic and explicit buffering: Automatic buffering provides a queue with indefinite length, thus ensuring the sender will
never have to block while waiting to copy a message. There are no specifications on how automatic buffering will be provided; one
scheme may reserve sufficiently large memory where much of the memory is wasted. Explicit buffering specifies how large the
buffer is. In this situation, the sender may be blocked while waiting for available space in the queue. However, it is less likely that
memory will be wasted with explicit buffering.
c. Send by copy and send by reference: Send by copy does not allow the receiver to alter the state of the parameter; send by
reference does allow it. A benefit of send by reference is that it allows the programmer to write a distributed version of a centralized
application. Java's RMI provides both; however, passing a parameter by reference requires declaring the parameter as a remote
object as well.
d. Fixed-sized and variable-sized messages: The implications of this are mostly related to buffering issues; with fixed-size messages,
a buffer with a specific size can hold a known number of messages. The number of variable-sized messages that can be held by such
a buffer is unknown. Consider how Windows 2000 handles this situation: with fixed-sized messages (anything < 256 bytes), the
messages are copied from the address space of the sender to the address space of the receiving process. Larger messages (i.e.
variable-sized messages) use shared memory to pass the message.
3.2: Consider the RPC mechanism. Describe the undesirable consequences that could arise from not enforcing
either the "at most once" or "exactly once" semantic. Describe possible uses for a mechanism that has neither
of these guarantees.
Answer: If an RPC mechanism cannot support either the "at most once" or the "exactly once" semantic, then the RPC server cannot
guarantee that a remote procedure will not be invoked multiple times. Consider a remote procedure that withdraws money from a
bank account on a system that does not support these semantics: a single invocation of the remote procedure might lead to
multiple withdrawals on the server.
For a system to support either of these semantics generally requires that the server maintain some form of client state, such as the
timestamp described in the text.
If a system were unable to support either of these semantics, then it could safely provide only remote procedures that do not alter
data or provide time-sensitive results. Using our bank account as an example, we certainly require "at most once" or "exactly once"
semantics for performing a withdrawal (or deposit!). However, an inquiry into an account balance or other account information
such as name and address does not require these semantics.
3.3: With respect to the RPC mechanism, consider the "exactly once" semantic. Does the algorithm for
implementing this semantic execute correctly even if the ACK message back to the client is lost due to a
network problem? Describe the sequence of messages and discuss whether "exactly once" is still preserved.
Answer: The "exactly once" semantic is typically implemented by combining an acknowledgment (ACK) scheme with timestamps (or another counter that lets the server recognize duplicate requests). The client sends the RPC together with a timestamp and starts a timeout clock while waiting for the ACK. If the ACK is lost due to a network problem, the client's timer expires and it retransmits the same request with the same timestamp. The server, seeing a timestamp it has already serviced, recognizes the request as a duplicate: it does not execute the procedure a second time, but it does resend the ACK. Eventually the client receives the ACK and stops retransmitting. Thus the algorithm still executes correctly, and "exactly once" is preserved: the procedure runs exactly one time on the server, no matter how many times the request or the ACK must be retransmitted.
3.4: Palm OS provides no means of concurrent processing. Discuss three major complications that concurrent
processing adds to an operating system.
Answer: Three major complications that concurrent processing adds to an operating system are:
a. A method of time sharing must be implemented to allow each of several processes to have access to the system. This
method involves the preemption of processes that do not voluntarily give up the CPU (by using a system call, for instance)
and the kernel being reentrant (so more than one process may be executing kernel code concurrently).
b. Processes and system resources must be protected from each other. Any given process must be limited in the amount of
memory it can use and the operations it can perform on devices like disks.
c. Care must be taken in the kernel to prevent deadlocks between processes, so processes aren’t waiting for each other’s
allocated resources.
3.6: The Sun Ultra SPARC processor has multiple register sets. Describe what happens when a context switch
occurs if the new context is already loaded into one of the register sets. What happens if the new context is in
memory rather than in a register set and all the register sets are in use?
Answer: The CPU current-register-set pointer is changed to point to the set containing the new context, which takes very little
time. If the new context is in memory rather than in a register set, one of the contexts in a register set must be chosen and moved to memory, and the new
context must be loaded from memory into the set. This process takes a little more time than on systems with one set of registers,
depending on how a replacement victim is selected.
3.7: Construct a process tree similar to Figure. To obtain process information for the
UNIX or Linux system, use the command ps -ael. Use the command man ps to get
more information about the ps command. On Windows systems, you will have to use
the task manager.
Answer:
3.8: Give an example of a situation in which ordinary pipes are more suitable than named pipes and an
example of a situation in which named pipes are more suitable than ordinary pipes.
Answer:
Example 1: Ordinary pipes are more suitable than named pipes in the following situations.
If we want to establish communication between two specific related processes on the same machine, ordinary pipes are more
suitable because named pipes involve more overhead in this situation. Likewise, when the pipe should not remain accessible after
the communicating processes have finished, ordinary pipes are more suitable, since they cease to exist once the processes close
them.
Example 2: Named pipes are more suitable than ordinary pipes in the following situations.
Named pipes are more suitable when communication must be bidirectional and there is no parent-child relationship between the
processes. They are also more suitable when we want to communicate over a network rather than between processes residing on
the same machine. Named pipes can be used to listen for requests from other processes (similar to TCP/IP ports): if the calling
processes know the name, they can send requests to it. Unnamed pipes cannot be used for this purpose.
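The ordinary-pipe case in Example 1 can be sketched using the underlying UNIX calls via Python's os module (pipe_demo is an invented name; this assumes a POSIX system, since os.fork is unavailable on Windows). The pipe works only because the child inherits the file descriptors across fork, which is exactly why ordinary pipes require related processes:

```python
import os

def pipe_demo():
    r, w = os.pipe()        # anonymous pipe: a read end and a write end
    pid = os.fork()
    if pid == 0:
        os.close(r)         # child only writes
        os.write(w, b"hello from child")
        os.close(w)
        os._exit(0)
    os.close(w)             # parent only reads; close unused write end
    data = os.read(r, 1024) # blocks until the child's bytes arrive
    os.close(r)
    os.waitpid(pid, 0)      # reap the child
    return data
```

Once both processes close their descriptors, the pipe ceases to exist, matching the "no access after communication finishes" property noted above.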
3.9: Describe the differences among short-term, medium-term, and longterm scheduling.
Answer: Short-term scheduling is also called CPU scheduling. It selects among the processes that are ready to execute and
allocates the CPU to one of them. It selects a new process for the CPU frequently, executing at least once every 100 milliseconds,
so it must be fast.
Medium-term scheduling removes processes from memory to reduce the degree of multiprogramming. A removed process can
later be reintroduced into memory and its execution continued where it left off. This method is called swapping, and it is
performed by the medium-term scheduler.
Long-term scheduling is also called job scheduling. It selects processes from the job pool and loads them into memory for
execution. The long-term scheduler executes much less frequently and also controls the degree of multiprogramming. It should
select a good mix of I/O-bound and CPU-bound processes.
3.10: Including the initial parent process, how many processes are created by
the program shown in Figure?
Answer: The purpose of fork() is to create a new process, which becomes the child process of the caller. After a new child process
is created, both processes execute the next instruction following the fork() system call. In the program shown, there is originally
one parent process, and three fork() system calls are executed.
After the first fork() call, one new child process is created; including the parent process, there are now two processes. Both of
these processes then run the second fork() call, each creating a new child process and bringing the number of processes to four.
All four processes then run the third fork() call, each creating a new child process and bringing the number of processes to eight.
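The doubling can be checked empirically with a sketch along these lines (assuming a POSIX system; count_processes is an invented name): every process that exists after n successive fork() calls writes one byte into a pipe, so the byte count equals the total number of processes, 2**n:

```python
import os

def count_processes(n_forks=3):
    # Every existing process forks at each step, so after n steps there
    # are 2**n processes, including the original parent.
    r, w = os.pipe()
    am_child = False
    for _ in range(n_forks):
        if os.fork() == 0:
            am_child = True
    os.write(w, b"x")   # each process reports itself with one byte
    os.close(w)
    if am_child:
        os._exit(0)     # children exit; only the original process counts
    count = 0
    while True:         # EOF arrives once every write end is closed
        chunk = os.read(r, 4096)
        if not chunk:
            break
        count += len(chunk)
    os.close(r)
    while True:         # reap direct children to avoid zombies
        try:
            os.waitpid(-1, 0)
        except ChildProcessError:
            break
    return count
```

With three fork() calls this reports eight processes, matching the count derived above.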
Answer: When the fork function is called successfully, the return value in the parent process is the PID of the child process, and
the return value in the child process is 0. Thus, in A and B, it is the child process.
Answer: The result is still 5, as the child updates only its own copy of value. When control returns to the parent, the parent's
value remains 5.
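The copy-on-fork behavior behind this answer can be demonstrated directly (a minimal POSIX sketch; fork_value_demo is an invented name):

```python
import os

def fork_value_demo():
    value = 5
    pid = os.fork()
    if pid == 0:          # child: receives its own copy of the address space
        value += 15       # modifies only the child's copy
        os._exit(0)
    os.waitpid(pid, 0)    # parent: wait for the child, then look at its own copy
    return value          # unchanged by anything the child did
```

Because fork gives the child a separate copy of the parent's address space, no assignment in the child is visible to the parent, so the parent still sees 5.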