
OPERATING SYSTEM QUESTIONS

1. Define an Operating System, trace the developments in Operating Systems, and identify the functions of Operating Systems.

2. Describe the basic components of Operating Systems, and understand information storage and management systems.

3. List disk allocation and scheduling methods, identify the basic memory management strategies, list the virtual memory management techniques, and define a process and list the features of the process management system.

4. Identify the features of process scheduling, and list the features of inter-process communication and deadlocks.

5. Identify the concepts of parallel and distributed processing, and identify security threats to Operating Systems.

Q: What are the basic functions of an operating system?

A: The operating system controls and coordinates the use of the hardware among the various application programs run for various users. It acts as a resource allocator and manager: since there may be many, possibly conflicting, requests for resources, the operating system must decide which requests are granted so that the computer system operates efficiently and fairly. The operating system is also a control program that supervises user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

Q: Why paging is used?

A: Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available.

Q: While running DOS on a PC, which command would be used to duplicate the entire diskette?

A: diskcopy

Q: What resources are used when a thread created? How do they differ from those when a

process is created?

A: When a thread is created, it does not require any new resources of its own; it shares the resources, such as the memory, of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space. Process creation, by contrast, is heavyweight because it always requires a new address space to be created, and even if processes share memory, inter-process communication is expensive compared to the communication between threads.

Q: What is virtual memory?

A: Virtual memory is a technique in which the system appears to have more memory than it physically does. This is done by time-sharing the physical memory and keeping parts of memory on disk when they are not actively being used.

Q: What is Throughput, Turnaround time, waiting time and Response time?

A: Throughput – the number of processes that complete their execution per time unit. Turnaround time – the amount of time to execute a particular process. Waiting time – the amount of time a process has been waiting in the ready queue. Response time – the amount of time from when a request was submitted until the first response is produced, not the completed output (for a time-sharing environment).

Q: What is the state of the processor, when a process is waiting for some event to occur?

A: Waiting state

Q: What is the important aspect of a real-time system or Mission Critical Systems?

A: A real-time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail. An example is the operating system for the flight-control computer of an advanced jet airplane. Real-time systems are often used as control devices in dedicated applications such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems. Real-time systems may be either hard or soft real-time. Hard real-time: secondary storage is limited or absent, with data stored in short-term memory or read-only memory (ROM); this conflicts with time-sharing and is not supported by general-purpose operating systems. Soft real-time: of limited utility in industrial control or robotics, but useful in applications (multimedia, virtual reality) requiring advanced operating-system features.

Q: What is the difference between Hard and Soft real-time systems?

A: A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.

Q: What is the cause of thrashing? How does the system detect thrashing? Once it detects

thrashing, what can the system do to eliminate this problem?

A: Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously. The system can detect thrashing by evaluating the level of CPU utilization compared with the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

Q: What is multi tasking, multi programming, multi threading?

A: Multiprogramming: Multiprogramming is the technique of keeping several jobs in memory simultaneously and running them using time-sharing; it allows a computer to do several things at the same time and creates logical parallelism. The operating system selects a job from the job pool and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.

Multitasking: Multitasking is the logical extension of multiprogramming. The concept is quite similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. Such systems are also known as time-sharing systems: a time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the shared system.

Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous. It is therefore more efficient to have one process that contains multiple threads: the server creates a separate thread that listens for client requests, and when a request is made, rather than creating another process it creates another thread to service the request. Multithreading is used to obtain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.

Q: What is hard disk and what is its purpose?

A: A hard disk is a secondary storage device that holds data in bulk on the magnetic medium of the disk. Hard disks have hard platters that hold the magnetic medium; the medium can be easily erased and rewritten, and a typical desktop machine will have a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored on the disk in the form of files.

Q: What is fragmentation? Different types of fragmentation?

A: Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.

External fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be used effectively. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous.

Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

Q: What is DRAM? In which form does it store data?

A: DRAM is not the fastest memory, but it is cheap, does the job, and is available almost everywhere you look. DRAM stores each bit in a cell made of a capacitor and a transistor. The capacitor tends to lose its charge unless it is refreshed every couple of milliseconds, and this refreshing tends to slow down the performance of DRAM compared with speedier RAM types.

Q: What is Dispatcher?

A: The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

Q: What is CPU Scheduler?

A: The CPU scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU-scheduling decisions may take place when a process: 1. switches from the running to the waiting state; 2. switches from the running to the ready state; 3. switches from the waiting to the ready state; 4. terminates. Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.

Q: What is Context Switch?

A: Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).

Q: What is cache memory?

A: Cache memory is random access memory (RAM) that a computer microprocessor can access

more quickly than it can access regular RAM. As the microprocessor processes data, it looks first

in the cache memory and if it finds the data there (from a previous reading of data), it does not

have to do the more time-consuming reading of data from larger memory.

Q: What is a Safe State and what is its use in deadlock avoidance?

A: When a process requests an available resource, system must decide if immediate allocation

leaves the system in a safe state. System is in safe state if there exists a safe sequence of all

processes. Deadlock Avoidance: ensure that a system will never enter an unsafe state.

Q: What is a Real-Time System?

A: A real-time process is a process that must respond to events within a certain time period. A real-time operating system is an operating system that can run real-time processes successfully.

CHAPTER 1: Operating Systems Introduction

1.1 What are the three main purposes of an operating system?

Answer:

• To provide an environment for a computer user to execute programs on computer hardware in a


convenient and efficient manner.

• To allocate the separate resources of the computer as needed to solve the problem given. The
allocation process should be as fair and efficient as possible.

• As a control program it serves two major functions: (1) supervision of the execution of user
programs to prevent errors and improper use of the computer, and (2) management of the
operation and control of I/O devices.

1.2 What are the main differences between operating systems for mainframe computers and
personal computers?

Answer: Generally, operating systems for batch systems have simpler requirements than those for
personal computers. Batch systems do not have to be concerned with interacting with a user as
much as a personal computer does. As a result, an operating system for a PC must be concerned
with response time for an interactive user, whereas batch systems have no such requirement. A
pure batch system also may not have to handle time sharing, whereas a time-shared operating
system must switch rapidly between different jobs.

1.3 List the four steps that are necessary to run a program on a completely dedicated machine.

Answer:

a. Reserve machine time.

b. Manually load program into memory.

c. Load starting address and begin execution.

d. Monitor and control execution of program from console.

1.4 We have stressed the need for an operating system to make efficient use of the computing
hardware. When is it appropriate for the operating system to forsake this principle and to “waste”
resources? Why is such a system not really wasteful?

Answer: Single-user systems should maximize use of the system for the user. A GUI might
“waste” CPU cycles, but it optimizes the user’s interaction with the system.

1.5 What is the main difficulty that a programmer must overcome in writing an operating system
for a real-time environment?

Answer: The main difficulty is keeping the operating system within the fixed time constraints of
a real-time system. If the system does not complete a task in a certain time frame, it may cause a
breakdown of the entire system it is running. Therefore when writing an operating system for a
real-time system, the writer must be sure that his scheduling schemes don’t allow response time
to exceed the time constraint.

1.6 Consider the various definitions of operating system. Consider whether the operating system
should include applications such as Web browsers and mail programs. Argue both that it should
and that it should not, and support your answer.

Answer: Point. Applications such as web browsers and email tools are performing an
increasingly important role in modern desktop computer systems. To fulfill this role, they should
be incorporated as part of the operating system. By doing so, they can provide better
performance and better integration with the rest of the system. In addition, these important
applications can have the same look-and-feel as the operating system software.

Counter point. The fundamental role of the operating system is to manage system resources
such as the CPU, memory, I/O devices, etc. In addition, its role is to run software applications
such as web browsers and email applications. By incorporating such applications into the
operating system, we burden the operating system with additional functionality. Such a burden
may result in the operating system performing a less-than satisfactory job at managing system
resources. In addition, we increase the size of the operating system thereby increasing the
likelihood of system crashes and security violations.

1.7 How does the distinction between kernel mode and user mode function as a rudimentary
form of protection (security) system?

Answer: The distinction between kernel mode and user mode provides a rudimentary form of
protection in the following manner. Certain instructions could be executed only when the CPU is
in kernel mode. Similarly, hardware devices could be accessed only when the program is
executing in kernel mode. Control over when interrupts could be enabled or disabled is also
possible only when the CPU is in kernel mode. Consequently, the CPU has very limited
capability when executing in user mode, thereby enforcing protection of critical resources.

1.8 Which of the following instructions should be privileged?

a. Set value of timer.

b. Read the clock.



c. Clear memory.

d. Issue a trap instruction.

e. Turn off interrupts.

f. Modify entries in device-status table.

g. Switch from user to kernel mode.

h. Access I/O device.

Answer: The following operations need to be privileged: Set value of timer, clear memory, turn
off interrupts, modify entries in device-status table, access I/O device. The rest can be performed
in user mode.

1.9 Some early computers protected the operating system by placing it in a memory partition that
could not be modified by either the user job or the operating system itself. Describe two
difficulties that you think could arise with such a scheme.

Answer: First, the data required by the operating system (passwords, access controls, accounting
information, and so on) would have to be stored in or passed through unprotected memory and
thus be accessible to unauthorized users. Second, since the partition cannot be modified even by
the operating system itself, the operating system could never be updated or patched in place.

1.10 Some CPUs provide for more than two modes of operation. What are two possible uses of
these multiple modes?

Answer: Although most systems only distinguish between user and kernel modes, some CPUs
have supported multiple modes. Multiple modes could be used to provide a finer-grained security
policy. For example, rather than distinguishing between just user and kernel mode, you could
distinguish between different types of user mode. Perhaps users belonging to the same group
could execute each other’s code. The machine would go into a specified mode when one of these
users was running code. When the machine was in this mode, a member of the group could run
code belonging to anyone else in the group.

Another possibility would be to provide different distinctions within kernel code. For example, a
specific mode could allow USB device drivers to run. This would mean that USB devices could
be serviced without having to switch to kernel mode, thereby essentially allowing USB device
drivers to run in a quasi-user/kernel mode.

1.11 Timers could be used to compute the current time. Provide a short description of how this
could be accomplished.

Answer: A program could use the following approach to compute the current time using timer
interrupts. The program could set a timer for some time in the future and go to sleep. When it is
awakened by the interrupt, it could update its local state, which it is using to keep track of the
number of interrupts it has received thus far. It could then repeat this process of continually
setting timer interrupts and updating its local state when the interrupts are actually raised.

1.12 Is the Internet a LAN or a WAN?

Answer: The Internet is a WAN as the various computers are located at geographically different
places and are connected by long-distance network links.

CHAPTER 2: Operating-System Structures

2.1 What is the purpose of system calls?

Answer: System calls allow user-level processes to request services of the operating system.

2.2 What are the five major activities of an operating system in regard to process management?

Answer:

a. The creation and deletion of both user and system processes

b. The suspension and resumption of processes

c. The provision of mechanisms for process synchronization

d. The provision of mechanisms for process communication

e. The provision of mechanisms for deadlock handling

2.3 What are the three major activities of an operating system in regard to memory management?

Answer:

a. Keep track of which parts of memory are currently being used and by whom.

b. Decide which processes are to be loaded into memory when memory space becomes available.

c. Allocate and deallocate memory space as needed.

2.4 What are the three major activities of an operating system in regard to secondary-storage
management?

Answer:

• Free-space management.

• Storage allocation.

• Disk scheduling.

2.5 What is the purpose of the command interpreter? Why is it usually separate from the kernel?

Answer: It reads commands from the user or from a file of commands and executes them,
usually by turning them into one or more system calls. It is usually not part of the kernel since
the command interpreter is subject to changes.

2.6 What system calls have to be executed by a command interpreter or shell in order to start a
new process?

Answer: In UNIX systems, a fork system call followed by an exec system call need to be
performed to start a new process. The fork call clones the currently executing process, while the
exec call overlays a new process based on a different executable over the calling process.

2.7 What is the purpose of system programs?

Answer: System programs can be thought of as bundles of useful system calls. They provide
basic functionality to users so that users do not need to write their own programs to solve
common problems.

2.8 What is the main advantage of the layered approach to system design? What are the
disadvantages of using the layered approach?

Answer: As in all cases of modular design, designing an operating system in a modular way has
several advantages. The system is easier to debug and modify because changes affect only
limited sections of the system rather than touching all sections of the operating system.
Information is kept only where it is needed and is accessible only within a defined and restricted
area, so any bugs affecting that data must be limited to a specific module or layer.

2.9 List five services provided by an operating system. Explain how each provides convenience
to the users. Explain also in which cases it would be impossible for user-level programs to
provide these services.

Answer:

a. Program execution. The operating system loads the contents (or sections) of a file into
memory and begins its execution. A user level program could not be trusted to properly allocate
CPU time.

b. I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a
very low level. The user need only specify the device and the operation to perform on it, while
the system converts that request into device- or controller-specific commands. User-level
programs cannot be trusted to access only devices they should have access to and to access them
only when they are otherwise unused.

c. File-system manipulation. There are many details in file creation, deletion, allocation, and
naming that users should not have to perform. Blocks of disk space are used by files and must be
tracked. Deleting a file requires removing the name file information and freeing the allocated
blocks. Protections must also be checked to assure proper file access. User programs could
neither ensure adherence to protection methods nor be trusted to allocate only free blocks and
deallocate blocks on file deletion.

d. Communications. Message passing between systems requires messages to be turned into


packets of information, sent to the network controller, transmitted across a communications
medium, and reassembled by the destination system. Packet ordering and data correction must
take place. Again, user programs might not coordinate access to the network device, or they
might receive packets destined for other processes.

e. Error detection. Error detection occurs at both the hardware and software levels. At the
hardware level, all data transfers must be inspected to ensure that data have not been corrupted in
transit. All data on media must be checked to be sure they have not changed since they were
written to the media. At the software level, media must be checked for data consistency; for
instance, whether the number of allocated and unallocated blocks of storage matches the total
number on the device. There, errors are frequently process-independent (for instance, the
corruption of data on a disk), so there must be a global program (the operating system) that
handles all types of errors. Also, by having errors processed by the operating system, processes
need not contain code to catch and correct all the errors possible on a system.

2.10 What is the purpose of system calls?

Answer: System calls allow user-level processes to request services of the operating system.

2.11 What are the main advantages of the microkernel approach to system design?

Answer: Benefits typically include the following (a) adding a new service does not require
modifying the kernel, (b) it is more secure as more operations are done in user mode than in
kernel mode, and (c) a simpler kernel design and functionality typically results in a more reliable
operating system.

2.12 Why do some systems store the operating system in firmware and others on disk?

Answer: For certain devices, such as handheld PDAs and cellular telephones, a disk with a file
system may be not available for the device. In this situation, the operating system must be stored
in firmware.

2.13 How could a system be designed to allow a choice of operating systems to boot from? What
would the bootstrap program need to do?

Answer: Consider a system that would like to run both Windows XP and three different
distributions of Linux (e.g., RedHat, Debian, and Mandrake). Each operating system will be
stored on disk. During system boot-up, a special program (which we will call the boot manager)
will determine which operating system to boot into. This means that rather initially booting to an
Page 14 of 43

operating system, the boot manager will first run during system startup. It is this boot manager
that is responsible for determining which system to boot into. Typically boot managers must be
stored at certain locations of the hard disk to be recognized during system startup. Boot managers
often provide the user with a selection of systems to boot into; boot managers are also typically
designed to boot into a default operating system if no choice is selected by the user.

CHAPTER 3: Processes

3.1 Palm OS provides no means of concurrent processing. Discuss three major complications
that concurrent processing adds to an operating system.

Answer: a. A method of time sharing must be implemented to allow each of several processes to
have access to the system. This method involves the preemption of processes that do not
voluntarily give up the CPU (by using a system call, for instance) and the kernel being reentrant
(so more than one process may be executing kernel code concurrently).

b. Processes and system resources must have protections and must be protected from each other.
Any given process must be limited in the amount of memory it can use and the operations it can
perform on devices like disks.

c. Care must be taken in the kernel to prevent deadlocks between processes, so processes aren’t
waiting for each other’s allocated resources.

3.2 The Sun UltraSPARC processor has multiple register sets. Describe the actions of a context
switch if the new context is already loaded into one of the register sets. What else must happen if
the new context is in memory rather than in a register set and all the register sets are in use?

Answer: The CPU current-register-set pointer is changed to point to the set containing the new
context, which takes very little time. If the context is in memory, one of the contexts in a register
set must be chosen and be moved to memory, and the new context must be loaded from memory
into the set. This process takes a little more time than on systems with one set of registers,
depending on how a replacement victim is selected.

3.3 When a process creates a new process using the fork() operation, which of the following
states are shared between the parent process and the child process?

a. Stack

b. Heap

c. Shared memory segments

Answer: Only the shared memory segments are shared between the parent process and the newly
forked child process. Copies of the stack and the heap are made for the newly created process.

3.4 Again considering the RPC mechanism, consider the “exactly once” semantic. Does the
algorithm for implementing this semantic execute correctly even if the “ACK” message back to
the client is lost due to a network problem? Describe the sequence of messages and whether
"exactly once" is still preserved.

Answer: The “exactly once” semantics ensure that a remote procedure will be executed exactly
once and only once. The general algorithm for ensuring this combines an acknowledgment
(ACK) scheme with timestamps (or some other incremental counter that allows the
server to distinguish between duplicate messages). The general strategy is for the client to send
the RPC to the server along with a timestamp. The client will also start a timeout clock. The
client will then wait for one of two occurrences:

(1) It will receive an ACK from the server indicating that the remote procedure was performed,
or

(2) It will time out. If the client times out, it assumes the server was unable to perform the remote
procedure so the client invokes the RPC a second time, sending a later timestamp. The client
may not receive the ACK for one of two reasons:

(1) The original RPC was never received by the server, or

(2) The RPC was correctly received—and performed—by the server but the ACK was lost. In
situation (1), the use of ACKs allows the server ultimately to receive and perform the RPC. In
situation (2), the server will receive a duplicate RPC and it will use the timestamp to identify it
as a duplicate so as not to perform the RPC a second time. It is important to note that the server
must send a second ACK back to the client to inform the client the RPC has been performed.

3.5 Assume that a distributed system is susceptible to server failure. What mechanisms would be
required to guarantee the “exactly once” semantics for execution of RPCs?

Answer: The server should keep track in stable storage (such as a disk log) information
regarding what RPC operations were received, whether they were successfully performed, and
the results associated with the operations. When a server crash takes place and a RPC message is
received, the server can check whether the RPC had been previously performed and therefore
guarantee “exactly once” semantics for the execution of RPCs.

CHAPTER 4: Threads

4.1 Provide two programming examples in which multithreading provides better performance
than a single-threaded solution.

Answer:

(1) A web server that services each request in a separate thread.

(2) A parallelized application, such as matrix multiplication, where different parts of the matrix
may be worked on in parallel.

(3) An interactive GUI program, such as a debugger, where one thread is used to monitor user
input, another thread represents the running application, and a third thread monitors
performance.

4.2 What are two differences between user-level threads and kernel-level threads? Under what
circumstances is one type better than the other?

Answer:

(1) User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads.

(2) On systems using either M:1 or M:N mapping, user threads are scheduled by the thread
library and the kernel schedules kernel threads.

(3) Kernel threads need not be associated with a process whereas every user thread belongs to a
process. Kernel threads are generally more expensive to maintain than user threads as they must
be represented with a kernel data structure.

4.3 Describe the actions taken by a kernel to context switch between kernel level threads.

Answer: Context switching between kernel threads typically requires saving the value of the
CPU registers from the thread being switched out and restoring the CPU registers of the new
thread being scheduled.

4.4 What resources are used when a thread is created? How do they differ from those used when
a process is created?

Answer: Because a thread is smaller than a process, thread creation typically uses fewer
resources than process creation. Creating a process requires allocating a process control block
(PCB), a rather large data structure. The PCB includes a memory map, list of open files, and
environment variables. Allocating and managing the memory map is typically the most time-
consuming activity. Creating either a user or kernel thread involves allocating a small data
structure to hold a register set, stack, and priority.

4.5 Assume an operating system maps user-level threads to the kernel using the many-to-many
model and the mapping is done through LWPs. Furthermore, the system allows developers to
create real-time threads. Is it necessary to bind a real-time thread to an LWP? Explain.

Answer: Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is
not bound to an LWP, the thread may have to wait to be attached to an LWP before running.
Consider if a real-time thread is running (is attached to an LWP) and then proceeds to block (i.e.
must perform I/O, has been preempted by a higher-priority real-time thread, is waiting for a
mutual exclusion lock, etc.) While the real-time thread is blocked, the LWP it was attached to
has been assigned to another thread. When the real-time thread has been scheduled to run again,
it must first wait to be attached to an LWP. By binding an LWP to a real time thread you are
ensuring the thread will be able to run with minimal delay once it is scheduled.

4.6 A Pthread program that performs the summation function was provided in Section 4.3.1.
Rewrite this program in Java.

Answer: Please refer to the supporting Web site for source code solution.

CHAPTER 5: CPU Scheduling

5.1 A CPU scheduling algorithm determines an order for the execution of its scheduled
processes. Given n processes to be scheduled on one processor, how many possible different
schedules are there? Give a formula in terms of n.

Answer: n! (n factorial = n × (n − 1) × (n − 2) × ... × 2 × 1).

5.2 Define the difference between preemptive and non preemptive scheduling.

Answer: Preemptive scheduling allows a process to be interrupted in the midst of its execution,
taking the CPU away and allocating it to another process. Non preemptive scheduling ensures
that a process relinquishes control of the CPU only when it finishes with its current CPU burst.

5.3 Suppose that the following processes arrive for execution at the times indicated. Each process
will run the listed amount of time. In answering the questions, use non preemptive scheduling
and base all decisions on the information you have at the time the decision must be made.

Process  Arrival Time  Burst Time
P1       0.0           8
P2       0.4           4
P3       1.0           1

a. What is the average turnaround time for these processes with the FCFS scheduling algorithm?

b. What is the average turnaround time for these processes with the SJF scheduling algorithm?

c. The SJF algorithm is supposed to improve performance, but notice that we chose to run
process P1 at time 0 because we did not know that two shorter processes would arrive soon.
Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit and
then SJF scheduling is used. Remember that processes P1 and P2 are waiting during this idle
time, so their waiting time may increase. This algorithm could be known as future-knowledge
scheduling.

Answer:

a. 10.53

b. 9.53

c. 6.86

Remember that turnaround time is finishing time minus arrival time, so you have to subtract the
arrival times to compute the turnaround times. FCFS is 11 if you forget to subtract arrival time.
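The arithmetic above can be checked with a short simulator. This is a sketch, not part of the original solutions: processes run non-preemptively in a fixed dispatch order, with an optional initial idle period for the future-knowledge case.

```java
public class Turnaround {
    // Simulate non-preemptive execution in a fixed dispatch order.
    // order[] lists process indices in the sequence they are run;
    // startAt lets the CPU sit idle before the first dispatch.
    static double avgTurnaround(double[] arrival, double[] burst,
                                int[] order, double startAt) {
        double clock = startAt, total = 0;
        for (int i : order) {
            if (clock < arrival[i]) clock = arrival[i]; // wait for the process to arrive
            clock += burst[i];                          // run it to completion
            total += clock - arrival[i];                // turnaround = finish - arrival
        }
        return total / order.length;
    }

    public static void main(String[] args) {
        double[] arrival = {0.0, 0.4, 1.0};
        double[] burst   = {8, 4, 1};
        // FCFS order P1, P2, P3: (8 + 11.6 + 12) / 3 = 10.53
        System.out.println(avgTurnaround(arrival, burst, new int[]{0, 1, 2}, 0));
        // Non-preemptive SJF: only P1 exists at t=0, then P3, then P2: 9.53
        System.out.println(avgTurnaround(arrival, burst, new int[]{0, 2, 1}, 0));
        // Future-knowledge: idle until t=1, then P3, P2, P1: 6.86
        System.out.println(avgTurnaround(arrival, burst, new int[]{2, 1, 0}, 1));
    }
}
```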

5.4 What advantage is there in having different time-quantum sizes on different levels of a
multilevel queuing system?

Answer: Processes that need more frequent servicing, for instance, interactive processes such as
editors, can be in a queue with a small time quantum. Processes with no need for frequent
servicing can be in a queue with a larger quantum, requiring fewer context switches to complete
the processing, and thus making more efficient use of the computer.

5.5 Many CPU-scheduling algorithms are parameterized. For example, the RR algorithm
requires a parameter to indicate the time slice. Multilevel feedback queues require parameters to
define the number of queues, the scheduling algorithms for each queue, the criteria used to move
processes between queues, and so on. These algorithms are thus really sets of algorithms (for
example, the set of RR algorithms for all time slices, and so on). One set of algorithms may
include another (for example, the FCFS algorithm is the RR algorithm with an infinite time
quantum).What (if any) relation holds between the following pairs of sets of algorithms?

a. Priority and SJF

b. Multilevel feedback queues and FCFS

c. Priority and FCFS

d. RR and SJF

Answer:

a. The shortest job has the highest priority.

b. The lowest level of MLFQ is FCFS.

c. FCFS gives the highest priority to the job having been in existence the longest.

d. None.

5.6 Suppose that a scheduling algorithm (at the level of short-term CPU scheduling) favors those
processes that have used the least processor time in the recent past. Why will this algorithm favor
I/O-bound programs and yet not permanently starve CPU-bound programs?

Answer: It will favor the I/O-bound programs because of the relatively short CPU burst request
by them; however, the CPU-bound programs will not starve because the I/O-bound programs
will relinquish the CPU relatively often to do their I/O.

5.7 Distinguish between PCS and SCS scheduling.

Answer: PCS scheduling is done locally within the process: it is how the thread library schedules
threads onto available LWPs. SCS scheduling is the situation where the operating system
schedules kernel threads. On systems using either many-to-one or many-to-many, the two
scheduling models are fundamentally different. On systems using one-to-one, PCS and SCS are
the same.

5.8 Assume an operating system maps user-level threads to the kernel using the many-to-many
model where the mapping is done through the use of LWPs. Furthermore, the system allows
program developers to create real-time threads. Is it necessary to bind a real-time thread to an
LWP?

Answer: Yes, otherwise a user thread may have to compete for an available LWP prior to being
actually scheduled. By binding the user thread to an LWP, there is no latency while waiting for
an available LWP; the real-time user thread can be scheduled immediately.

CHAPTER 6: Process Synchronization

6.1 In Section 6.4 we mentioned that disabling interrupts frequently could affect the system’s
clock. Explain why it could and how such effects could be minimized.

Answer: The system clock is updated at every clock interrupt. If interrupts were disabled—
particularly for a long period of time—it is possible the system clock could easily lose the correct
time. The system clock is also used for scheduling purposes. For example, the time quantum for
a process is expressed as a number of clock ticks. At every clock interrupt, the scheduler
determines if the time quantum for the currently running process has expired. If clock interrupts
were disabled, the scheduler could not accurately assign time quantums. This effect can be
minimized by disabling clock interrupts for only very short periods.

6.2 The Cigarette-Smokers Problem. Consider a system with three smoker processes and one
agent process. Each smoker continuously rolls a cigarette and then smokes it. But to roll and
smoke a cigarette, the smoker needs three ingredients: tobacco, paper, and matches. One of the
smoker processes has paper, another has tobacco, and the third has matches. The agent has an
infinite supply of all three materials. The agent places two of the ingredients on the table. The
smoker who has the remaining ingredient then makes and smokes a cigarette, signaling the agent
on completion. The agent then puts out another two of the three ingredients, and the cycle
repeats. Write a program to synchronize the agent and the smokers using Java synchronization.

Answer: Please refer to the supporting Web site for source code solution.

6.3 Give the reasons why Solaris, Windows XP, and Linux implement multiple locking
mechanisms. Describe the circumstances under which they use spin locks, mutexes, semaphores,
adaptive mutexes, and condition variables. In each case, explain why the mechanism is needed.

Answer: These operating systems provide different locking mechanisms depending on the
application developers’ needs. Spin locks are useful for multiprocessor systems where a thread
can run in a busy-loop (for a short period of time) rather than incurring the overhead of being put
in a sleep queue. Mutexes are useful for locking resources. Solaris 2 uses adaptive mutexes,
meaning that the mutex is implemented with a spin lock on multiprocessor machines.
Semaphores and condition variables are more appropriate tools for synchronization when a
resource must be held for a long period of time, since spinning is inefficient for a long duration.

6.4 Explain the differences, in terms of cost, among the three storage types volatile, nonvolatile,
and stable.

Answer: Volatile storage refers to main and cache memory and is very fast. However, volatile
storage cannot survive system crashes or powering down the system. Nonvolatile storage
survives system crashes and powered-down systems. Disks and tapes are examples of
nonvolatile storage. Recently, USB devices using erasable programmable read-only memory (EPROM)
have appeared providing nonvolatile storage. Stable storage refers to storage that technically can
never be lost as there are redundant backup copies of the data (usually on disk).

6.5 Explain the purpose of the checkpoint mechanism. How often should checkpoints be
performed? Describe how the frequency of checkpoints affects:

• System performance when no failure occurs

• The time it takes to recover from a system crash

• The time it takes to recover from a disk crash

Answer: A checkpoint log record indicates that a log record and its modified data have been
written to stable storage and that the transaction need not be redone in case of a system crash.
Obviously, the more often checkpoints are performed, the less likely it is that redundant updates
will have to be performed during the recovery process.

• System performance when no failure occurs— If no failures occur, the system must incur the
cost of performing checkpoints that are essentially unnecessary. In this situation, performing
checkpoints less often will lead to better system performance.

• The time it takes to recover from a system crash— The existence of a checkpoint record means
that an operation will not have to be redone during system recovery. In this situation, the more
often checkpoints were performed, the faster the recovery time is from a system crash.

• The time it takes to recover from a disk crash— The existence of a checkpoint record means that
an operation will not have to be redone during system recovery. In this situation, the more often
checkpoints were performed, the faster the recovery time is from a disk crash.

6.6 Explain the concept of transaction atomicity.

Answer: A transaction is a series of read and write operations on some data, followed by a
commit operation. If the series of operations in a transaction cannot be completed, the transaction
must be aborted and the operations that did take place must be rolled back. It is important that
the series of operations in a transaction appear as one indivisible operation to ensure the integrity
of the data being updated. Otherwise, data could be compromised if operations from two (or
more) different transactions were intermixed.

6.7 Show that some schedules are possible under the two-phase locking protocol but not possible
under the timestamp protocol, and vice versa.

Answer: A schedule that is allowed in the two-phase locking protocol but not in the timestamp
protocol is:

Step  T0          T1          Precedence
1     lock-S(A)
2     read(A)
3                 lock-X(B)
4                 write(B)
5                 unlock(B)
6     lock-S(B)
7     read(B)                 T1 → T0
8     unlock(A)
9     unlock(B)

This schedule is not allowed in the timestamp protocol because at step 7, the W-timestamp of B
is 1. A schedule that is allowed in the timestamp protocol but not in the two-phase locking
protocol is:

Step  T0         T1         T2
1     write(A)
2                write(A)
3                           write(A)
4     write(B)
5                write(B)

This schedule cannot have lock instructions added to make it legal under two-phase locking
protocol because T1 must unlock (A) between steps 2 and 3, and must lock (B) between steps 4
and 5.

6.8 The wait() statement in all Java program examples was part of a while loop. Explain why
you would always need to use a while statement when using wait() and why you would never
use an if statement.

Answer: This is an important issue to emphasize! Java only provides anonymous notification—
you cannot notify a certain thread that a certain condition is true. When a thread is notified, it is
its responsibility to re-check the condition that it is waiting for. If a thread did not recheck the
condition, it might have received the notification without the condition having been met.
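A minimal sketch of the idiom (the class and method names are illustrative, not from the text): the condition is re-checked inside a while loop around wait(), so a waiter that receives an anonymous notification proceeds only when its condition actually holds.

```java
public class Gate {
    private boolean open = false;

    // Always wait inside a while loop: after any (anonymous) notification,
    // the waiter must re-check the condition before proceeding.
    public synchronized void awaitOpen() {
        while (!open) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status in this sketch
            }
        }
    }

    public synchronized void open() {
        open = true;
        notifyAll(); // anonymous notification: any waiting thread may wake
    }

    public static void main(String[] args) throws InterruptedException {
        Gate gate = new Gate();
        Thread waiter = new Thread(() -> {
            gate.awaitOpen();
            System.out.println("proceeded only after the condition became true");
        });
        waiter.start();
        Thread.sleep(50); // let the waiter block first
        gate.open();
        waiter.join();
    }
}
```

If the while were replaced by an if, a wakeup occurring before the condition was set would let the waiter run with open still false.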

CHAPTER 7: Deadlocks

7.1 List three examples of deadlocks that are not related to a computer system environment.

Answer:

• Two cars crossing a single-lane bridge from opposite directions.

• A person going down a ladder while another person is climbing up the ladder.

• Two trains traveling toward each other on the same track.

• Two carpenters who must pound nails. There is a single hammer and a single bucket of nails.
Deadlock occurs if one carpenter has the hammer and the other carpenter has the nails.

7.2 Suppose that a system is in an unsafe state. Show that it is possible for the processes to
complete their execution without entering a deadlock state.

Answer: An unsafe state may not necessarily lead to deadlock; it just means that we cannot
guarantee that deadlock will not occur. Thus, it is possible that a system in an unsafe state may
still allow all processes to complete without deadlock occurring. Consider the situation where a
system has 12 resources allocated among processes P0, P1, and P2. The resources are allocated
according to the following policy:

     Max  Current  Need
P0   10   5        5
P1   4    2        2
P2   9    3        6

for (int i = 0; i < n; i++) {
    // first find a thread that can finish
    for (int j = 0; j < n; j++) {
        if (!finish[j]) {
            boolean temp = true;
            for (int k = 0; k < m; k++)
                if (need[j][k] > work[k])
                    temp = false;
            if (temp) { // if this thread can finish
                finish[j] = true;
                // return the finished thread's allocation to the work pool
                for (int x = 0; x < m; x++)
                    work[x] += allocation[j][x];
            }
        }
    }
}

Figure 7.1 The safety algorithm of the banker's algorithm.

Currently there are two resources available. This system is in an unsafe state, as process P1
could complete, thereby freeing a total of four resources. But we cannot guarantee that
processes P0 and P2 can complete.

However, it is possible that a process may release resources before requesting any further. For
example, process P2 could release a resource, thereby increasing the total number of resources to
five. This allows process P0 to complete, which would free a total of nine resources, thereby
allowing process P2 to complete as well.

7.3 Prove that the safety algorithm presented in Section 7.5.3 requires an order of m × n²
operations.

Answer: Figure 7.1 provides Java code that implements the safety algorithm of the banker’s
algorithm (the complete implementation of the banker’s algorithm is available with the source
code download). As can be seen, the nested outer loops—both of which loop through n times—
provide the n² performance. Within these outer loops are two sequential inner loops which loop
m times. The big-oh of this algorithm is therefore O(m × n²).

7.4 Consider a computer system that runs 5,000 jobs per month with no deadlock-prevention or
deadlock-avoidance scheme. Deadlocks occur about twice per month, and the operator must
terminate and rerun about 10 jobs per deadlock. Each job is worth about $2 (in CPU time), and
the jobs terminated tend to be about half-done when they are aborted. A systems programmer has
estimated that a deadlock-avoidance algorithm (like the banker’s algorithm) could be installed in
the system with an increase in the average execution time per job of about 10 percent. Since the
machine currently has 30-percent idle time, all 5,000 jobs per month could still be run, although
turnaround time would increase by about 20 percent on average.

a. What are the arguments for installing the deadlock-avoidance algorithm?

b. What are the arguments against installing the deadlock-avoidance algorithm?

Answer: An argument for installing deadlock avoidance in the system is that we could ensure
deadlock would never occur. In addition, despite the increase in turnaround time, all 5,000 jobs
could still run. An argument against installing deadlock avoidance software is that deadlocks
occur infrequently and they cost little when they do occur.

7.5 Can a system detect that some of its processes are starving? If you answer “yes,” explain how
it can. If you answer “no,” explain how the system can deal with the starvation problem.

Answer: Starvation is a difficult topic to define as it may mean different things for different
systems. For the purposes of this question, we will define starvation as the situation whereby a
process must wait beyond a reasonable period of time—perhaps indefinitely—before receiving a
requested resource. One way of detecting starvation would be to first identify a period of time—
T—that is considered unreasonable. When a process requests a resource, a timer is started. If the
elapsed time exceeds T, then the process is considered to be starved. One strategy for dealing
with starvation would be to adopt a policy where resources are assigned only to the process that
has been waiting the longest. For example, if process Pa has been waiting longer for resource X
than process Pb , the request from process Pb would be deferred until process Pa ’s request has
been satisfied. Another strategy would be less strict than what was just mentioned. In this
scenario, a resource might be granted to a process that has waited less than another process,
providing that the other process is not starving. However, if another process is considered to be
starving, its request would be satisfied first.

7.6 Consider the following resource-allocation policy. Requests and releases for resources are
allowed at any time. If a request for resources cannot be satisfied because the resources are not
available, then we check any processes that are blocked, waiting for resources. If they have the
desired resources, then these resources are taken away from them and are given to the requesting
process. The vector of resources for which the process is waiting is increased to include the
resources that were taken away. For example, consider a system with three resource types and
the vector Available initialized to (4,2,2). If process P0 asks for (2,2,1), it gets them. If P1 asks
for (1,0,1), it gets them. Then, if P0 asks for (0,0,1), it is blocked (resource not available). If P2
now asks for (2,0,0), it gets the available one (1,0,0) and one that was allocated to P0 (since P0 is
blocked). P0’s Allocation vector goes down to (1,2,1) and its Need vector goes up to (1,0,1).

a. Can deadlock occur? If you answer “yes”, give an example. If you answer “no,” specify which
necessary condition cannot occur.

b. Can indefinite blocking occur? Explain your answer.

Answer:

a. Deadlock cannot occur because preemption exists.

b. Yes. A process may never acquire all the resources it needs if they are continuously preempted
by a series of requests from other processes.

7.7 Suppose that you have coded the deadlock-avoidance safety algorithm and now have been
asked to implement the deadlock-detection algorithm. Can you do so by simply using the safety
algorithm code and redefining Max_i = Waiting_i + Allocation_i, where Waiting_i is a vector
specifying the resources process i is waiting for, and Allocation_i is as defined in Section 7.5?
Explain your answer.

Answer:

Yes. The Max vector represents the maximum request a process may make. When calculating the
safety algorithm we use the Need matrix, which represents Max − Allocation. Another way to
think of this is Max = Need + Allocation. According to the question, the Waiting matrix fulfills a
role similar to the Need matrix; therefore Max = Waiting + Allocation.

7.8 Is it possible to have a deadlock involving only one single process? Explain your answer.

Answer: No. This follows directly from the hold-and-wait condition.



CHAPTER 8: Memory Management

8.1 Name two differences between logical and physical addresses.

Answer: A logical address does not refer to an actual existing address; rather, it refers to an
abstract address in an abstract address space. Contrast this with a physical address that refers to
an actual physical address in memory. A logical address is generated by the CPU and is
translated into a physical address by the memory management unit (MMU). Therefore, physical
addresses are generated by the MMU.

8.2 Consider a system in which a program can be separated into two parts: code and data. The
CPU knows whether it wants an instruction (instruction fetch) or data (data fetch or store).
Therefore, two base–limit register pairs are provided: one for instructions and one for data. The
instruction base–limit register pair is automatically read-only, so programs can be shared among
different users. Discuss the advantages and disadvantages of this scheme.

Answer: The major advantage of this scheme is that it is an effective mechanism for code and
data sharing. For example, only one copy of an editor or a compiler needs to be kept in memory,
and this code can be shared by all processes needing access to the editor or compiler code.
Another advantage is protection of code against erroneous modification. The only disadvantage
is that the code and data must be separated, a constraint that compiler-generated code usually
already satisfies.

8.3 Why are page sizes always in powers of 2?

Answer: Recall that paging is implemented by breaking up an address into a page and offset
number. It is most efficient to break the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset. Because each bit
position represents a power of 2, splitting an address between bits results in a page size that is a
power of 2.
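As an illustration (assuming a hypothetical 4 KB page size, not stated in the text), a power-of-two page size lets the page number and offset be extracted with a shift and a mask rather than a division and a remainder:

```java
public class PageSplit {
    static final int PAGE_BITS = 12;                  // 2^12 = 4096-byte pages (assumed)
    static final int OFFSET_MASK = (1 << PAGE_BITS) - 1;

    // Page number = high-order bits; offset = low-order bits.
    static int pageNumber(int address) { return address >>> PAGE_BITS; }
    static int pageOffset(int address) { return address & OFFSET_MASK; }

    public static void main(String[] args) {
        int addr = 0x12345;
        // 0x12345 splits into page 0x12 and offset 0x345
        System.out.printf("page=%#x offset=%#x%n", pageNumber(addr), pageOffset(addr));
    }
}
```

With a non-power-of-two page size, the same split would require an integer division and modulus on every memory reference.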

8.4 Consider a logical address space of eight pages of 1024 words each, mapped onto a physical
memory of 32 frames.

a. How many bits are there in the logical address?

b. How many bits are there in the physical address?

Answer:

a. Logical address: 13 bits

b. Physical address: 15 bits
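The counts follow mechanically: log2(8) = 3 page bits plus log2(1024) = 10 offset bits gives 13, and log2(32) = 5 frame bits plus 10 offset bits gives 15. A small check (not part of the original answer):

```java
public class AddressBits {
    // log2 of a power of two, via the position of its single set bit.
    static int bits(int n) { return Integer.numberOfTrailingZeros(n); }

    public static void main(String[] args) {
        int logical  = bits(8)  + bits(1024);  // 3 page bits + 10 offset bits = 13
        int physical = bits(32) + bits(1024);  // 5 frame bits + 10 offset bits = 15
        System.out.println(logical + " " + physical);
    }
}
```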



8.5 What is the effect of allowing two entries in a page table to point to the same page frame in
memory? Explain how this effect could be used to decrease the amount of time needed to copy a
large amount of memory from one place to another. What effect would updating some byte on
the one page have on the other page?

Answer: By allowing two entries in a page table to point to the same page frame in memory,
users can share code and data. If the code is reentrant, much memory space can be saved through
the shared use of large programs such as text editors, compilers, and database systems.
“Copying” large amounts of memory could be affected by having different page tables point to
the same memory location. However, sharing of non reentrant code or data means that any user
having access to the code can modify it and these modifications would be reflected in the other
user’s “copy.”

8.6 Describe a mechanism by which one segment could belong to the address space of two
different processes.

Answer: Since segment tables are a collection of base–limit registers, segments can be shared
when entries in the segment table of two different jobs point to the same physical location. The
two segment tables must have identical base pointers, and the shared segment number must be
the same in the two processes.

8.7 Sharing segments among processes without requiring the same segment number is possible in
a dynamically linked segmentation system.

a. Define a system that allows static linking and sharing of segments without requiring that the
segment numbers be the same.

b. Describe a paging scheme that allows pages to be shared without requiring that the page
numbers be the same.

Answer: Both of these problems reduce to a program being able to reference both its own code
and its data without knowing the segment or page number associated with the address.
MULTICS solved this problem by associating four registers with each process. One register had
the address of the current program segment, another had a base address for the stack, another had
a base address for the global data, and so on. The idea is that all references have to be indirect
through a register that maps to the current segment or page number. By changing these registers,
the same code can execute for different processes without the same page or segment numbers.

8.8 In the IBM/370, memory protection is provided through the use of keys. A key is a 4-bit
quantity. Each 2K block of memory has a key (the storage key) associated with it. The CPU also
has a key (the protection key) associated with it. A store operation is allowed only if both keys
are equal, or if either is zero. Which of the following memory-management schemes could be
used successfully with this hardware?

a. Bare machine

b. Single-user system

c. Multiprogramming with a fixed number of processes

d. Multiprogramming with a variable number of processes

e. Paging

f. Segmentation

Answer:

a. Protection is not necessary; set the system key to 0.

b. Set system key to 0 when in supervisor mode.

c. Region sizes must be fixed in increments of 2K bytes; allocate the key with memory blocks.

d. Same as above.

e. Frame sizes must be in increments of 2K bytes; allocate the key with pages.

f. Segment sizes must be in increments of 2K bytes; allocate the key with segments.

1. Explain the concept of Reentrancy.

Answer: It is a useful, memory-saving technique for multiprogrammed timesharing systems.


A reentrant procedure is one in which multiple users can share a single copy of a program
during the same period. Reentrancy has two key aspects: the program code cannot modify
itself, and the local data for each user process must be stored separately. Thus, the permanent
part is the code, and the temporary part is the pointer back to the calling program plus the
local variables used by that program. Each execution instance is called an activation: it
executes the code in the permanent part but has its own copy of local variables and
parameters. The temporary part associated with each activation is the activation record,
which is generally kept on the stack. Note: a reentrant procedure can be interrupted and
called by an interrupting program, and still execute correctly on returning to the procedure.
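To make the contrast concrete, here is an illustrative sketch (the functions are hypothetical, not from the text): the reentrant version keeps all working state in locals on the stack, while the non-reentrant version mutates shared static data and so cannot safely have overlapping activations.

```java
public class Reentrancy {
    private static int sharedTotal = 0;        // shared, modifiable state

    // NOT reentrant: a second concurrent activation could reset or
    // corrupt sharedTotal while the first is still summing.
    static int sumNonReentrant(int[] a) {
        sharedTotal = 0;
        for (int x : a) sharedTotal += x;
        return sharedTotal;
    }

    // Reentrant: all working state lives in local variables, so each
    // activation has its own copy in its activation record.
    static int sumReentrant(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumReentrant(new int[]{1, 2, 3})); // 6
    }
}
```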

2. Explain Belady's Anomaly.

Answer: Also called the FIFO anomaly. Usually, increasing the number of frames allocated to a
process's virtual memory makes execution faster, because fewer page faults occur. Sometimes the
reverse happens: execution time increases even when more frames are allocated to the process.
This is Belady's Anomaly, and it occurs for certain page-reference patterns.

3. What is a binary semaphore? What is its use?

Answer: A binary semaphore is one, which takes only 0 and 1 as values. They are used to
implement mutual exclusion and synchronize concurrent processes.
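As a sketch of that use (the class name and counts are illustrative, not from the text), Java's java.util.concurrent.Semaphore initialized with a single permit behaves as a binary semaphore enforcing mutual exclusion:

```java
import java.util.concurrent.Semaphore;

public class BinarySem {
    private static final Semaphore mutex = new Semaphore(1); // binary: one permit
    private static int counter = 0;

    static void increment(int times) {
        for (int i = 0; i < times; i++) {
            mutex.acquireUninterruptibly(); // wait (P): take the single permit
            counter++;                      // critical section
            mutex.release();                // signal (V): return the permit
        }
    }

    // Run two contending threads; with mutual exclusion no increments are lost.
    static int runDemo() {
        counter = 0;
        Thread a = new Thread(() -> increment(10000));
        Thread b = new Thread(() -> increment(10000));
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```

Without the acquire/release pair, the two threads' read-modify-write sequences could interleave and lose updates.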

4. What is thrashing?

Answer: It is a phenomenon in virtual memory schemes when the processor spends most of
its time swapping pages, rather than executing instructions. This is due to an inordinate
number of page faults.

5. List the Coffman's conditions that lead to a deadlock.

Answer: Mutual Exclusion: Only one process may use a critical resource at a time.

o Hold & Wait: A process may be allocated some resources while waiting for
others.
o No Preemption: No resource can be forcibly removed from a process holding
it.
o Circular Wait: A closed chain of processes exists such that each process holds
at least one resource needed by another process in the chain.

6. What are short-, long- and medium-term scheduling?

Answer: The long-term scheduler determines which programs are admitted to the system for
processing; it controls the degree of multiprogramming. Once admitted, a job becomes a
process. Medium-term scheduling is part of the swapping function. It relates to processes
that are in a blocked or suspended state; they are swapped out of real memory until they are
ready to execute. The swapping-in decision is based on memory-management criteria.

The short-term scheduler, also known as the dispatcher, executes most frequently and makes the
finest-grained decision of which process should execute next. This scheduler is invoked
whenever an event occurs. It may lead to interruption of one process by preemption.

7. What are turnaround time and response time?

Answer: Turnaround time is the interval between the submission of a job and its completion.
Response time is the interval between submission of a request, and the first response to that
request.

8. What are the typical elements of a process image?

Answer:

o User data: The modifiable part of user space. May include program data, the user
stack area, and programs that may be modified.

o User program: The instructions to be executed.

o System stack: Each process has one or more LIFO stacks associated with it, used
to store parameters and calling addresses for procedure and system calls.

o Process Control Block (PCB): Information needed by the OS to control processes.

9. What is the Translation Lookaside Buffer (TLB)?

Answer: In a cached system, the base addresses of the last few referenced pages are
maintained in registers called the TLB, which aids in faster lookup. The TLB contains those page-
table entries that have been most recently used. Normally, each virtual memory reference
causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to
fetch the desired data. With the TLB in between, this is reduced to just one physical memory
access in the case of a TLB hit.
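One common way to quantify this is the effective access time: hit rate × (TLB time + memory time) + miss rate × (TLB time + 2 × memory time). The timing figures below are assumed for illustration; they are not from the text.

```java
public class TlbEat {
    // Effective access time in ns: a TLB miss costs one extra memory
    // access to fetch the page-table entry.
    static double eat(double hitRate, double tlbNs, double memNs) {
        return hitRate * (tlbNs + memNs) + (1 - hitRate) * (tlbNs + 2 * memNs);
    }

    public static void main(String[] args) {
        // Assumed: 20 ns TLB lookup, 100 ns memory access, 80% hit rate -> 140 ns
        System.out.println(eat(0.80, 20, 100));
    }
}
```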

10. What is the resident set and working set of a process?

Answer: Resident set is that portion of the process image that is actually in real-memory at a
particular instant. Working set is that subset of resident set that is actually needed for
execution. (Relate this to the variable-window size method for swapping techniques.)

11. When is a system in safe state?

Answer: The set of dispatchable processes is in a safe state if there exists at least one
temporal order in which all processes can be run to completion without resulting in a
deadlock.
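This definition is what the safety check of the Banker's algorithm tests: repeatedly find a process whose remaining need fits within the currently available resources, let it run to completion, and reclaim its allocation. A minimal sketch; the matrices below are illustrative:

```python
def is_safe(available, allocation, need):
    """True if some order lets every process run to completion."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can finish; it releases what it holds
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

safe = is_safe(
    [3, 3, 2],                                               # available
    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]], # allocation
    [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]], # need
)
# safe == True: e.g. the order P1, P3, P4, P2, P0 completes
```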

12. What is cycle stealing?

Answer: We encounter cycle stealing in the context of Direct Memory Access (DMA).
Either the DMA controller can use the data bus when the CPU does not need it, or it may
force the CPU to temporarily suspend operation. The latter technique is called cycle stealing.
Note that cycle stealing can be done only at specific break points in an instruction cycle.

13. What is meant by arm-stickiness?



Answer: If one or a few processes have a high access rate to data on one track of a storage
disk, then they may monopolize the device by repeated requests to that track. This generally
happens with most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc).
High-density multisurface disks are more likely to be affected by this than low density ones.

14. What are the stipulations of C2 level security?

Answer: C2 level security provides for:

o Discretionary Access Control


o Identification and Authentication

o Auditing

o Resource reuse

15. What is busy waiting?

Answer: The repeated execution of a loop of code while waiting for an event to occur is
called busy-waiting. The CPU is not engaged in any real productive activity during this
period, and the process does not progress toward completion.
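The contrast with blocking synchronization can be shown in a few lines. This sketch uses Python threads; the busy-waiter spins on a shared flag and burns CPU, while `Event.wait()` sleeps until signalled:

```python
import threading, time

flag = {"ready": False}

def busy_wait():
    while not flag["ready"]:   # spins, consuming CPU cycles uselessly
        pass

event = threading.Event()

def blocking_wait():
    event.wait()               # sleeps until set(); no CPU wasted

t1 = threading.Thread(target=busy_wait)
t2 = threading.Thread(target=blocking_wait)
t1.start(); t2.start()
time.sleep(0.1)
flag["ready"] = True           # releases the busy-waiter
event.set()                    # wakes the blocked waiter
t1.join(); t2.join()
```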

16. Explain the popular multiprocessor thread-scheduling strategies.

Answer:

o Load Sharing: Processes are not assigned to a particular processor. A global queue of
threads is maintained. Each processor, when idle, selects a thread from this queue.
Note that load balancing refers to a scheme where work is allocated to processors on
a more permanent basis.
o Gang Scheduling: A set of related threads is scheduled to run on a set of processors at
the same time, on a 1-to-1 basis. Closely related threads / processes may be scheduled
this way to reduce synchronization blocking, and minimize process switching. Group
scheduling predated this strategy.
o Dedicated processor assignment: Provides implicit scheduling defined by assignment
of threads to processors. For the duration of program execution, each program is
allocated a set of processors equal in number to the number of threads in the program.
Processors are chosen from the available pool.
o Dynamic scheduling: The number of threads in a program can be altered during the
course of execution.
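The first of these strategies, load sharing, can be sketched with one global work queue that idle workers pull from. The task bodies and the sentinel shutdown protocol here are illustrative, not part of any particular OS:

```python
import queue, threading

global_queue = queue.Queue()   # the single global queue of pending work
results = []
lock = threading.Lock()

def worker():
    # Each "processor": when idle, take the next task from the global queue.
    while True:
        task = global_queue.get()
        if task is None:            # sentinel: no more work
            break
        with lock:
            results.append(task * task)

for n in range(6):
    global_queue.put(n)
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for _ in workers:
    global_queue.put(None)          # one sentinel per worker
for w in workers:
    w.join()
# sorted(results) == [0, 1, 4, 9, 16, 25]
```

Note the difference the text draws: here work-to-worker binding is decided at `get()` time, whereas load balancing would pin tasks to processors more permanently.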

17. When does the condition 'rendezvous' arise?



Answer: In message passing, it is the condition in which, both, the sender and receiver are
blocked until the message is delivered.

18. What is a trap and trapdoor?

Answer: A trapdoor is a secret, undocumented entry point into a program, used to grant access
without the normal methods of access authentication. A trap is a software interrupt, usually the
result of an error condition.

19. What are local and global page replacements?

Answer: Local replacement means that an incoming page is brought in only to the relevant
process's address space. A global replacement policy allows any page frame from any process to
be replaced. The latter is applicable only to the variable-partitions model.

20. Define latency, transfer and seek time with respect to disk I/O.

Answer: Seek time is the time required to move the disk arm to the required track. Rotational
delay or latency is the time it takes for the beginning of the required sector to reach the head.
Sum of seek time (if any) and latency is the access time. Time taken to actually transfer a span of
data is transfer time.
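These definitions reduce to simple arithmetic. The figures below (8 ms seek, 7200 RPM, 100 MB/s, 4 KB block) are made up purely for illustration:

```python
seek_ms = 8.0                        # time to move the arm to the track
rpm = 7200
latency_ms = (60_000 / rpm) / 2      # average rotational delay: half a revolution
access_ms = seek_ms + latency_ms     # access time = seek time + latency

transfer_rate_b_per_s = 100 * 1_000_000
transfer_ms = (4096 / transfer_rate_b_per_s) * 1000   # time to move one 4 KB block
# latency_ms ~= 4.1667, access_ms ~= 12.1667, transfer_ms ~= 0.041
```

Transfer time is tiny compared with seek and latency, which is why disk-scheduling algorithms concentrate on minimizing arm movement.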

21. Describe the Buddy system of memory allocation.

Answer: Free memory is maintained in linked lists, one per block size, with every block of size
2^k. When a process requests memory, a free block of the next higher order is chosen and broken
into two. The two pieces differ in address only in their kth bit; such pieces are called buddies.
When a used block is freed, the OS checks whether its buddy is also free. If so, the two are
rejoined and placed on the free list of the next higher order.
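A minimal sketch of such an allocator, assuming a 2^k address space starting at 0. The XOR operations reflect the fact that buddies differ only in their kth address bit. Illustrative only, not production code:

```python
class BuddyAllocator:
    def __init__(self, total_order):
        # free_lists[k] holds start addresses of free blocks of size 2^k
        self.free_lists = {k: [] for k in range(total_order + 1)}
        self.free_lists[total_order].append(0)

    def alloc(self, order):
        k = order
        while k <= max(self.free_lists) and not self.free_lists[k]:
            k += 1                      # find the next larger free block
        if k not in self.free_lists or not self.free_lists[k]:
            return None                 # out of memory
        addr = self.free_lists[k].pop()
        while k > order:                # split down, freeing one buddy each step
            k -= 1
            self.free_lists[k].append(addr ^ (1 << k))  # the upper half
        return addr

    def free(self, addr, order):
        k = order
        while True:
            buddy = addr ^ (1 << k)     # buddy differs only in bit k
            if buddy in self.free_lists[k]:
                self.free_lists[k].remove(buddy)
                addr = min(addr, buddy) # coalesced block starts lower
                k += 1                  # try to coalesce at the next order
            else:
                self.free_lists[k].append(addr)
                break

b = BuddyAllocator(4)        # a 16-unit address space
a = b.alloc(2)               # a 4-unit block; splits 16 -> 8+8 -> 4+4
# a == 0; freeing it coalesces all the way back to one 16-unit block
```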

22. What is time-stamping?

Answer: It is a technique proposed by Lamport, used to order events in a distributed system
without the use of physical clocks. This scheme is intended to order events consisting of the
transmission of messages. Each system 'i' in the network maintains a counter Ci. Every time a
system transmits a message, it increments its counter by 1 and attaches the time-stamp Ti to the
message. When a message is received, the receiving system 'j' sets its counter Cj to 1 more than
the maximum of its current value and the incoming time-stamp Ti. At each site, the ordering of
messages is determined by the following rule: for messages x from site i and y from site j, x
precedes y if (a) Ti < Tj, or (b) Ti = Tj and i < j.
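The scheme fits in a few lines. Comparing (counter, site-id) stamps as tuples gives exactly the total order described above, with site ids breaking ties:

```python
class Site:
    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = 0

    def send(self):
        self.counter += 1                       # increment before transmitting
        return (self.counter, self.site_id)     # the attached time-stamp

    def receive(self, stamp):
        # set counter to 1 more than max(current value, incoming stamp)
        self.counter = max(self.counter, stamp[0]) + 1

a, b = Site(1), Site(2)
m1 = a.send()            # (1, 1)
b.receive(m1)            # b's counter jumps to 2
m2 = b.send()            # (3, 2)
# tuple comparison orders the messages: m1 precedes m2
```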

23. How are the wait/signal operations for monitor different from those for semaphores?

Answer: If a process in a monitor signals and no task is waiting on the condition variable, the
signal is lost; this allows easier program design. With semaphores, every operation affects the
value of the semaphore, so the wait and signal operations must be perfectly balanced in the
program.
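The difference is directly observable with Python's threading primitives, where `Condition` plays the monitor's condition variable and a counting `Semaphore` remembers its signals:

```python
import threading

# A semaphore release with no waiter is remembered in the count.
sem = threading.Semaphore(0)
sem.release()                         # nobody waiting, but count becomes 1
got_sem = sem.acquire(timeout=0.1)    # succeeds: the signal was stored

# A condition notify with no waiter is simply lost.
cond = threading.Condition()
with cond:
    cond.notify()                     # no thread waiting: this notify vanishes
with cond:
    got_cond = cond.wait(timeout=0.1) # times out and returns False
```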

24. In the context of memory management, what are placement and replacement
algorithms?

Answer: Placement algorithms determine where in available real memory to load a program.
Common methods are first-fit, next-fit, and best-fit. Replacement algorithms are used when
memory is full and one process (or part of a process) needs to be swapped out to accommodate a
new program; the replacement algorithm determines which partitions are to be swapped out.

25. In loading programs into memory, what is the difference between load-time dynamic
linking and run-time dynamic linking?

Answer: With load-time dynamic linking, the load module is read into memory; any reference to
a target external module causes that module to be loaded, and the references are updated to a
relative address from the base address of the application module. With run-time dynamic linking,
some of the linking is postponed until a module is actually referenced during execution; only
then is the correct module loaded and linked.

26. What are demand- and pre-paging?

Answer: With demand paging, a page is brought into memory only when a location on that page
is actually referenced during execution. With pre-paging, pages other than the one demanded by
a page fault are brought in. The selection of such pages is done based on common access
patterns, especially for secondary memory devices.
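A toy demand-paging run makes the definition concrete: with pages brought in only on first reference (and, for simplicity, no frame limit or eviction), each distinct page costs exactly one page fault. Illustrative sketch:

```python
def demand_page(references):
    """Count page faults when pages are loaded only on first reference."""
    in_memory = set()
    faults = 0
    for page in references:
        if page not in in_memory:
            faults += 1               # fault: bring the page in on demand
            in_memory.add(page)
    return faults

faults = demand_page([1, 2, 1, 3, 2, 4, 1])
# faults == 4: one per distinct page
```

Pre-paging would instead load extra pages alongside the faulting one, trading possibly wasted I/O for fewer future faults.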

27. Paging is a memory-management function, while multiprogramming is a processor-
management function. Are the two interdependent?

Answer: Yes. Multiprogramming keeps several processes resident in memory at once, and
paging is one of the memory-management mechanisms that makes this sharing of real memory
practical.

28. What is page cannibalizing?

Answer: Page swapping or page replacement is called page cannibalizing.

29. What has triggered the need for multitasking in PCs?

Answer: The increased speed and memory capacity of microprocessors, together with support
for virtual memory, and the growth of client-server computing.

30. What are the four layers that Windows NT have in order to achieve independence?

Answer:

o Hardware abstraction layer


o Kernel

o Subsystems

o System Services.

31. What is SMP?

Answer: To achieve maximum efficiency and reliability, a mode of operation known as
symmetric multiprocessing (SMP) is used. In essence, with SMP any process or thread can be
assigned to any processor.

32. What are the key object oriented concepts used by Windows NT?

Answer:

Encapsulation
Object class and instance

33. Is Windows NT a full blown object oriented operating system? Give reasons.

Answer: No. Windows NT is not a full-blown object-oriented operating system because it is not
implemented in an object-oriented language, its data structures reside within one executive
component and are not represented as objects, and it does not support object-oriented
capabilities.

34. What is a drawback of MVT?

Answer: It lacks features such as:

o the ability to support multiple processors

o virtual storage

o source-level debugging

35. What is process spawning?

Answer: When the OS creates a process at the explicit request of another process, the action is
called process spawning.

36. How many jobs can be run concurrently on MVT?

Answer: 15 jobs

37. List out some reasons for process termination.

Answer:

o Normal completion
o Time limit exceeded

o Memory unavailable

o Bounds violation

o Protection error

o Arithmetic error

o Time overrun

o I/O failure

o Invalid instruction

o Privileged instruction

o Data misuse

o Operator or OS intervention

o Parent termination.

38. What are the reasons for process suspension?

Answer:

o swapping
o interactive user request

o timing

o parent process request

39. What is process migration?

Answer: It is the transfer of a sufficient amount of a process's state from one machine to the
target machine for the process to execute there.

40. What is mutant?

Answer: In Windows NT, a mutant provides kernel-mode or user-mode mutual exclusion with
the notion of ownership.

41. What is an idle thread?

Answer: The special thread a dispatcher will execute when no ready thread is found.

42. What is FtDisk?

Answer: It is a fault tolerance disk driver for Windows NT.



43. What are the possible states a thread can have?

Answer:

o Ready
o Standby

o Running

o Waiting

o Transition

o Terminated.

44. What are rings in Windows NT?

Answer: Windows NT uses a protection mechanism called rings, provided by the processor, to
implement separation between user mode and kernel mode.

45. What is Executive in Windows NT?

Answer: In Windows NT, executive refers to the operating system code that runs in kernel
mode.

46. What are the sub-components of I/O manager in Windows NT?

Answer:

o Network redirector/ Server


o Cache manager.

o File systems

o Network driver

o Device driver

47. What are DDKs? Name an operating system that includes this feature.

Answer: DDKs are Device Driver Kits, which are the equivalent of SDKs for writing device
drivers. Windows NT includes DDKs.

48. What level of security does Windows NT meet?

Answer: C2-level security.
