Problem Review
2.6 List and briefly explain five storage management responsibilities of a typical OS.
Process isolation:
• The OS must prevent independent processes from interfering with each other’s memory,
both data and instructions.
Automatic allocation and management:
• Programs should be dynamically allocated across the memory hierarchy as required, and
allocation should be transparent to the programmer.
Support of modular programming:
• Programmers should be able to define program modules, and to create, destroy, and alter the
size of modules dynamically.
Protection and access control:
• Sharing of memory creates the potential for one program to address the memory space of
another.
• This is desirable when sharing is needed by particular applications.
• At other times, it threatens the integrity of programs and even of the OS itself.
• The OS must allow portions of memory to be accessible in various ways by various users.
Long-term storage:
• Many application programs require means for storing information for extended periods of
time, after the computer has been powered down. The file system implements a long-term
store, with information stored in named objects, called files.
• The file is a convenient concept for the programmer and is a useful unit of access control
and protection for the OS.
2.7 Explain the distinction between a real address and a virtual address.
A virtual address refers to a memory location in virtual memory; that location may be on disk and at
some times in main memory. A real address is an address in main memory.
2.9 Explain the difference between a monolithic kernel and a microkernel.
In a monolithic kernel, virtually all OS functionality is provided in one large kernel, including
scheduling, the file system, networking, device drivers, memory management, and more. Typically, a
monolithic kernel is implemented as a single process, with all elements sharing the same address
space. A microkernel, by contrast, assigns only a few essential functions to the kernel, such as
address spaces, interprocess communication (IPC), and basic scheduling; other OS services are
provided by processes, sometimes called servers, that run in user mode.
2.10 What is multithreading?
Multithreading is a technique in which a process, executing an application, is divided into threads
that can run concurrently. Multithreading is useful for applications that perform a number of
essentially independent tasks that do not need to be serialized.
With multiple threads running within the same process, switching back and forth among threads
involves less processor overhead than a major process switch between different processes.
Chapter 3
3.2 What common events lead to the creation of a process?
One such event (from Table 3.1) is the submission of a new batch job: the OS is provided with a
batch job control stream, usually on tape or disk, and when the OS is prepared to take on new work,
it reads the next sequence of job control commands. The other events listed in Table 3.1 are an
interactive logon, creation by the OS to provide a service, and spawning by an existing process.
3.3 For the processing model of Figure 3.6, briefly define each state.
The five states in this new diagram are:
Running: The process that is currently being executed.
Ready: A process that is prepared to execute when given the opportunity.
Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion
of an I/O operation.
New: A process that has just been created but has not yet been admitted to the pool of executable
processes by the OS. Typically, a new process has not yet been loaded into main memory, although
its process control block has been created.
Exit: A process that has been released from the pool of executable processes by the OS, either
because it halted or because it aborted for some reason.
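As a quick illustration, the five-state model maps directly onto a small piece of code. A minimal
sketch in C; the enum names and the transition predicate are illustrative, not from the text:

```c
/* Illustrative sketch: the five states as a C enum, plus a predicate
 * encoding the basic transitions of the five-state model. */
#include <stdbool.h>

enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

static bool legal_transition(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case NEW:     return to == READY;      /* admitted by the OS       */
    case READY:   return to == RUNNING;    /* dispatched               */
    case RUNNING: return to == READY       /* timeout (preempted)      */
                      || to == BLOCKED     /* waits on an event (I/O)  */
                      || to == EXIT;       /* halts or aborts          */
    case BLOCKED: return to == READY;      /* awaited event occurs     */
    case EXIT:    return false;            /* terminal state           */
    }
    return false;
}
```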
3.8 For what types of entities does the OS maintain tables of information for management purposes?
The OS maintains four types of tables: memory tables, I/O tables, file tables, and process tables.
3.13 Give three examples of an interrupt.
Clock interrupt: The OS determines whether the currently running process has executed for the
maximum allowable time slice; if so, the process is switched to the Ready state and another process
is dispatched.
I/O interrupt: The OS determines what I/O action has occurred; if the action constitutes an event on
which one or more processes are waiting, those processes are moved to the Ready state.
Memory fault: The processor encounters a virtual memory address reference for a word that is not in
main memory. After the I/O request is issued to bring in the block of memory, the process with the
memory fault is placed in a blocked state.
3.14 What is the difference between a mode switch and a process switch?
A mode switch may occur without changing the state of the process that is currently in the Running
state. A process switch may occur any time the operating system has gained control from the
currently running process. The OS must then save the context of the current process in its PCB,
update the PCB’s state and accounting information, move the PCB to the appropriate queue, select a
new process, remove the PCB of that process from the ready queue, update its memory management
structures, and restore its context into the processor.
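The sequence above reads almost directly as code. A toy sketch in C of the bookkeeping involved;
every type and name here is an illustrative stand-in for kernel internals, and the actual saving and
restoring of processor context (registers, MMU state) is omitted because real kernels do it in
assembly:

```c
#include <stddef.h>

/* Toy sketch of a PCB and the process-switch sequence described above. */
typedef enum { READY, RUNNING, BLOCKED } state_t;

typedef struct pcb {
    int         pid;
    state_t     state;
    long        cpu_ticks;   /* accounting information */
    struct pcb *next;        /* ready-queue link       */
} pcb_t;

static pcb_t *ready_head = NULL;

static void enqueue_ready(pcb_t *p) { p->next = ready_head; ready_head = p; }

static pcb_t *pick_next(void)        /* trivially pick the queue head */
{
    pcb_t *p = ready_head;
    if (p) ready_head = p->next;
    return p;
}

pcb_t *process_switch(pcb_t *current, long ticks_used)
{
    current->state = READY;            /* update PCB state              */
    current->cpu_ticks += ticks_used;  /* update accounting information */
    enqueue_ready(current);            /* move PCB to appropriate queue */

    pcb_t *next = pick_next();         /* select a new process and      */
                                       /* remove it from the ready queue */
    if (next) next->state = RUNNING;   /* context/MMU restore omitted   */
    return next;
}
```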
Problem
3.1 The following state transition table is a simplified model of process management, with the labels
representing transitions between states of READY, RUN, BLOCKED and NONRESIDENT.
Give an example of an event that can cause each of the above transitions.
Answer
RUN to READY can be caused by a time-quantum expiration.
READY to NONRESIDENT occurs if memory is overcommitted and a process is temporarily
swapped out of memory.
READY to RUN occurs only if a process is allocated the CPU by the dispatcher.
RUN to BLOCKED can occur if a process issues an I/O or other kernel request.
BLOCKED to READY occurs if the awaited event completes (perhaps I/O completion).
BLOCKED to NONRESIDENT: same as READY to NONRESIDENT.
3.2 Assume that at time 5 no system resources are being used except for the processor and memory.
Now consider the following events:
At time 5: P1 executes a command to read from disk unit 3.
At time 15: P5's time slice expires.
At time 18: P7 executes a command to write to disk unit 3.
At time 20: P3 executes a command to read from disk unit 2.
At time 24: P5 executes a command to write to disk unit 3.
At time 28: P5 is swapped out.
At time 33: An interrupt occurs from disk unit 2: P3's read is complete.
At time 36: An interrupt occurs from disk unit 3: P1's read is complete.
At time 38: P8 terminates.
At time 40: An interrupt occurs from disk unit 3: P5's write is complete.
At time 44: P5 is swapped back in.
At time 48: An interrupt occurs from disk unit 3: P7's write is complete.
For each time 22, 37, and 47, identify which state each process is in. If a process is blocked, further
identify the event on which it is blocked.
Answer
At time 22:
P1: blocked, waiting for its read from disk unit 3
P3: blocked, waiting for its read from disk unit 2
P5: ready/running
P7: blocked, waiting for its write to disk unit 3
P8: ready/running
At time 37:
P1: ready/running
P3: ready/running
P5: blocked/suspended, waiting for its write to disk unit 3
P7: blocked, waiting for its write to disk unit 3
P8: ready/running
At time 47:
P1: ready/running
P3: ready/running
P5: ready/running (swapped back in at time 44, and its write completed at time 40)
P7: blocked, waiting for its write to disk unit 3
P8: exit
3.3 Figure 3.9b contains seven states. In principle, one could draw a transition between any two
states, for a total of 42 different transitions.
a. List all of the possible transitions and give an example of what could cause each transition.
b. List all of the impossible transitions and explain why.
Chapter 4
4.1 Table 3.5 lists typical elements found in a process control block for an unthreaded OS. Of
these, which should belong to a thread control block and which should belong to a process
control block for a multithreaded system?
Thread control block: processor state information (each thread has its own register context and
stacks), together with per-thread scheduling and state information (execution state, priority, and any
event the thread is blocked on).
Process control block: process identification, memory management information, resource ownership
and utilization, privileges, interprocess communication, and data structuring, since these are shared
by all of the threads of a process.
4.2 List reasons why a mode switch between threads may be cheaper than a mode switch
between processes.
Thread switching does not require kernel mode privileges because all of the thread management data
structures are within the user address space of a single process. This saves the overhead of two mode
switches (user to kernel; kernel back to user).
4.3 What are the two separate and potentially independent characteristics embodied in the
concept of process?
Resource ownership: A process includes a virtual address space to hold the process image, and the
OS provides protection to prevent unwanted interference between processes with respect to
resources.
Scheduling/execution: A process follows an execution path that may be interleaved with other
processes; it has an execution state (Running, Ready, etc.) and a dispatching priority, and is
scheduled and dispatched by the OS.
Traditional processes are sequential; i.e., they have only one execution path.
4.4 Give four general examples of the use of threads in a single-user multiprocessing system.
Thread Use in a Single-User System
Foreground and background work: This arrangement often increases the perceived speed of the
application by allowing the program to prompt for the next command before the previous command
is complete.
Asynchronous processing: A thread can be created whose sole job is periodic backup and that
schedules itself directly with the OS; there is no need for fancy code in the main program to provide
for time checks or to coordinate input and output.
Speed of execution: A multithreaded process can compute one batch of data while reading the next
batch from a device. On a multiprocessor system, multiple threads from the same process may be
able to execute simultaneously.
Modular program structure: Programs that involve a variety of activities or a variety of sources
and destinations of input and output may be easier to design and implement using threads.
4.5 What resources are typically shared by all of the threads of a process?
All of the threads of a process share the state and resources of that process. They reside in the same
address space and have access to the same data. When one thread alters an item of data in memory,
other threads see the results if and when they access that item. If one thread opens a file with read
privileges, other threads in the same process can also read from that file.
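A minimal POSIX threads example makes the shared address space concrete; the worker function
and the counter are invented for illustration (compile with -pthread):

```c
/* Two threads in one process update the same global counter,
 * illustrating that threads share the process's address space. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* both threads touch the same data, */
        counter++;                   /* so access must be synchronized    */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000: both saw the same variable */
    return 0;
}
```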
4.6 List three advantages of ULTs over KLTs.
1. Thread switching does not require kernel mode privileges because all of the thread
management data structures are within the user address space of a single process. Therefore, the
process does not switch to the kernel mode to do thread management. This saves the overhead of two
mode switches (user to kernel; kernel back to user).
2. Scheduling can be application specific. One application may benefit most from a simple round-
robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm.
The scheduling algorithm can be tailored to the application without disturbing the underlying OS
scheduler.
3. ULTs can run on any OS. No changes are required to the underlying kernel to support ULTs.
The threads library is a set of application-level functions shared by all applications.
4.7 List two disadvantages of ULTs compared to KLTs.
1. In a typical OS many system calls are blocking. As a result, when a ULT executes a system
call, not only is that thread blocked, but also all of the threads within the process are blocked.
2. In a pure ULT strategy, a multithreaded application cannot take advantage of
multiprocessing. A kernel assigns one process to only one processor at a time. Therefore,
only a single thread within a process can execute at a time.
4.8 Define jacketing.
Jacketing is a technique used to overcome the problem of blocking system calls in a user-level
threads implementation. Its purpose is to convert a blocking system call into a non-blocking system
call. Within the jacket routine is code that checks whether the I/O device is busy. If it is, the thread
enters the Blocked state and passes control (through the threads library) to another thread. When this
thread is later given control again, the jacket routine checks the I/O device again.
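As a sketch of what a jacket routine for read() might look like in C: select() with a zero timeout
stands in for the check of whether the device is busy, and ult_yield() is a hypothetical entry point
into a user-level threads library:

```c
/* Sketch of a jacket routine: poll with select() so the call never
 * blocks the whole process. */
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

void ult_yield(void);   /* hypothetical: switch to another user-level thread */

ssize_t jacketed_read(int fd, void *buf, size_t count)
{
    for (;;) {
        fd_set readfds;
        struct timeval poll_now = {0, 0};   /* zero timeout: check, don't wait */

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        if (select(fd + 1, &readfds, NULL, NULL, &poll_now) > 0)
            return read(fd, buf, count);    /* device ready: cannot block now */

        ult_yield();   /* device busy: run another thread, retry later */
    }
}
```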
Chapter 7
7.1 What requirements is memory management intended to satisfy?
o Relocation
o Protection
o Sharing
o Logical organization
o Physical organization
7.2 Why is the capability to relocate processes desirable?
In a multiprogramming system, the available main memory is generally shared among a number of
processes. Typically, it is not possible for the programmer to know in advance which other programs
will be resident in main memory at the time of execution of his or her program. In addition, we
would like to be able to swap active processes in and out of main memory to maximize processor
utilization by providing a large pool of ready processes to execute. Once a program has been
swapped out to disk, it would be quite limiting to declare that when it is next swapped back in, it
must be placed in the same main memory region as before. Instead, we may need to relocate the
process to a different area of memory.
7.3 Why is it not possible to enforce memory protection at compile time?
Because the location of a program in main memory is unpredictable due to relocation, it is
impossible to check absolute addresses at compile time to assure protection.
7.4 What are some reasons to allow two or more processes to all have access to a particular
region of memory?
If a number of processes are executing the same program, it is advantageous to allow each process to
access the same copy of the program rather than have its own separate copy. Processes that are
cooperating on some task may need to share access to the same data structure.
7.5 In a fixed-partitioning scheme, what are the advantages of using unequal-size partitions?
• With unequal-size partitions, one or two quite large partitions can be provided while still keeping a
large number of partitions; the large partitions permit the loading of large programs in their entirety.
• Internal fragmentation is reduced, because a small job can be assigned to a small partition instead
of wasting most of a larger, equal-size partition.
Problems
7.1 In Section 2.3, we listed five objectives of memory management, and in Section 7.1, we
listed five requirements. Argue that each list encompasses all of the concerns addressed in the
other.
7.2 Consider a fixed partitioning scheme with equal-size partitions of 2^16 bytes and a total main
memory size of 2^24 bytes. A process table is maintained that includes a pointer to a partition for
each resident process. How many bits are required for the pointer?
2^24 / 2^16 = 2^8 partitions, so 8 bits are required for the pointer.
7.3 Consider a dynamic partitioning scheme. Show that, on average, the memory contains half
as many holes as segments.
• Let s and h denote the average number of segments and holes, respectively.
• The probability that a given segment is followed by a hole in memory (and not by another
segment) is 0.5, because deletions and creations are equally probable in equilibrium.
• So, with s segments in memory, the average number of holes must be s/2.
• It is intuitively reasonable that the number of holes must be less than the number of segments
because neighboring segments can be combined into a single hole on deletion.
7.4 To implement the various placement algorithms discussed for dynamic partitioning
(Section 7.2), a list of the free blocks of memory must be kept. For each of the three methods
discussed (best-fit, first-fit, next-fit), what is the average length of the search?
• By problem 7.3, we know that the average number of holes is s/2, where s is the number of
resident segments.
• Regardless of fit strategy, in equilibrium, the average search length is s/4.
7.5 Another placement algorithm for dynamic partitioning is referred to as worst-fit. In this
case, the largest free block of memory is used for bringing in a process. Discuss the pros and
cons of this method compared to first-, next-, and best-fit. What is the average length of the
search for worst-fit?
• A criticism of the best fit algorithm is that the space remaining after allocating a block of the
required size is so small that in general it is of no real use.
• The worst fit algorithm maximizes the chance that the free space left after a placement will be
large enough to satisfy another request, thus minimizing the frequency of compaction.
• The disadvantage of this approach is that the largest blocks are allocated first;
therefore, a request for a large area is more likely to fail.
• The average search length is s/4.
Chapter 8
8.1 What is the difference between simple paging and virtual memory paging?
With simple paging, all of the pages of a process must be in main memory for the process to run;
with virtual memory paging, only some of the pages of a process need be in main memory, and pages
are brought in as needed (the differences are summarized in Table 8.1).
8.2 Explain thrashing.
When the operating system brings one piece in, it must throw another out. If it throws out a piece just
before it is used, then it will just have to go get that piece again almost immediately. Too much of
this leads to a condition known as thrashing: the system spends most of its time swapping pieces
rather than executing instructions.
8.3 Why is the principle of locality crucial to the use of virtual memory?
The principle of locality states that program and data references within a process tend to cluster.
Hence only a few pieces of a process are needed over a short period of time, and the OS can guess,
based on recent history, which pieces are least likely to be used in the near future. This makes it
possible to keep only part of a process in memory without excessive page faults, so a virtual memory
scheme can work without thrashing.
8.4 What elements are typically found in a page table entry? Briefly define each element.
Each process has its own page table. Each page table entry contains
• Frame number: the number of the frame in main memory that currently holds the page.
• Present bit (P): indicates whether the page is in main memory or not.
• Modify bit (M): indicates whether the contents of the corresponding page have been altered
since the page was last loaded into main memory.
• Other control bits: indicate whether protection or sharing constraints apply to the page.
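As an illustration, a page table entry can be pictured as a C struct with bit-fields; the field widths
below are arbitrary and real hardware formats differ:

```c
/* Illustrative page table entry layout; widths are made up. */
#include <stdint.h>

typedef struct {
    uint32_t frame_number : 20;  /* frame in main memory holding the page */
    uint32_t present      : 1;   /* P: page is in main memory             */
    uint32_t modified     : 1;   /* M: page altered since it was loaded   */
    uint32_t protection   : 2;   /* read/write/execute permissions        */
    uint32_t shared       : 1;   /* page shared between processes         */
} pte_t;
```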
8.5 What is the purpose of a translation lookaside buffer?
Each virtual memory reference can cause two physical memory accesses:
• one to fetch the page table entry
• one to fetch the data (or the next instruction)
To overcome the effect of doubling the memory access time, most virtual memory schemes
make use of a special high-speed cache for page table entries, called a translation lookaside buffer.
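A sketch of the lookup order in C; the TLB is modeled as a small linear array purely for
illustration (real TLBs are associative hardware):

```c
/* Try the TLB first; fall back to the page table only on a miss. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

typedef struct { uint32_t vpn; uint32_t frame; bool valid; } tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Returns true and sets *frame on a TLB hit; a miss would require the
 * (slower) page table access that the TLB exists to avoid. */
bool tlb_lookup(uint32_t vpn, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *frame = tlb[i].frame;   /* hit: one memory access saved */
            return true;
        }
    }
    return false;                    /* miss: walk the page table    */
}
```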
Problems
8.1 Suppose the page table for the process currently executing on the processor looks like
the following. All numbers are decimal, everything is numbered starting from zero,
and all addresses are memory byte addresses. The page size is 1024 bytes.
Virtual page number   Valid bit   Reference bit   Modify bit   Page frame number
0 1 1 0 4
1 1 1 1 7
2 0 0 0 —
3 1 0 0 2
4 0 0 0 —
5 1 0 1 0
a. Describe exactly how, in general, a virtual address generated by the CPU is translated
into a physical main memory address.
b. What physical address, if any, would each of the following virtual addresses correspond
to? (Do not try to handle any page faults, if any.)
(i) 1052
(ii) 2221
(iii) 5499
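a. The high-order bits of the virtual address form the virtual page number (address div 1024, since
the page size is 1024 bytes) and the low-order 10 bits form the offset (address mod 1024). The
virtual page number indexes the page table; if the valid bit of that entry is set, the physical address
is the page frame number times 1024 plus the offset. If the valid bit is clear, a page fault occurs.
b. (i) 1052 is page 1, offset 28; frame 7 gives 7 * 1024 + 28 = 7196. (ii) 2221 is page 2, which is
invalid, so there is no physical address (page fault). (iii) 5499 is page 5, offset 379; frame 0 gives
379.
The arithmetic can be checked with a short C program; the frame_of array below simply transcribes
the page frame column of the table, with -1 marking invalid entries:

```c
/* Worked check of part (b); page size is 1024 bytes. */
#include <stdio.h>

#define PAGE_SIZE 1024

static const int frame_of[6] = { 4, 7, -1, 2, -1, 0 };  /* from the table */

int main(void)
{
    int vaddrs[] = { 1052, 2221, 5499 };
    for (int i = 0; i < 3; i++) {
        int vpn    = vaddrs[i] / PAGE_SIZE;   /* virtual page number */
        int offset = vaddrs[i] % PAGE_SIZE;   /* byte within page    */
        if (frame_of[vpn] < 0)
            printf("%d -> page fault (page %d not resident)\n", vaddrs[i], vpn);
        else
            printf("%d -> %d\n", vaddrs[i], frame_of[vpn] * PAGE_SIZE + offset);
    }
    return 0;   /* prints: 1052 -> 7196, 2221 -> page fault, 5499 -> 379 */
}
```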
Chapter 9
9.1 Briefly describe the three types of processor scheduling.
Long-Term Scheduling determines which programs are admitted to the system for processing and
thereby controls the degree of multiprogramming.
Medium-Term Scheduling is part of the swapping function. The swapping-in decision is based on
the need to manage the degree of multiprogramming and considers the memory requirements of the
swapped-out processes.
Short-Term Scheduling
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the
fine-grained decision of which process to execute next. It is invoked when an event occurs that may
lead to the suspension of the current process, such as a clock interrupt, an I/O interrupt, an operating
system call, or a signal.
9.2 What is usually the critical performance requirement in an interactive operating system?
Response time: For an interactive process, this is the time from the submission of a request until the
response begins to be received.
9.3 What is the difference between turnaround time and response time?
Turnaround time
This is the interval of time between the submission of a process and its completion. It includes actual
execution time plus time spent waiting for resources, including the processor, and is an appropriate
measure for a batch job.
Response time
For an interactive process, this is the time from the submission of a request until the response begins
to be received.
9.4 For process scheduling, does a low-priority value represent a low priority or a high
priority?
For process scheduling, a low-priority value represents a high priority.
9.5 What is the difference between preemptive and nonpreemptive scheduling?
Nonpreemptive
Once a process is in the Running state, it continues until it terminates, blocks itself
for I/O, or voluntarily gives up the processor.
Preemptive
The currently running process may be interrupted and moved to the Ready state by the OS.
Preemption may occur when a new process arrives, on an interrupt, or periodically.
9.6 Briefly define FCFS scheduling.
Simplest scheduling policy
Also known as first-in-first-out (FIFO) or a strict queuing scheme
When the current process ceases to execute, the process that has been in the Ready queue
the longest is selected
Performs much better for long processes than short ones
Tends to favor processor-bound processes over I/O-bound processes
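As a tiny illustration of why FCFS performs much better for long processes than for short ones,
consider three processes that all arrive at time 0 (the burst times are made up):

```c
/* Under FCFS, each process waits for the total service time of those
 * ahead of it, so the short job P3 inherits the long jobs' delays. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 8, 4, 1 };   /* service times, in arrival order */
    int n = 3, finish = 0;

    for (int i = 0; i < n; i++) {
        finish += burst[i];       /* FCFS: run to completion, in order */
        printf("P%d: turnaround = %d\n", i + 1, finish);
    }
    return 0;   /* P1: 8, P2: 12, P3: 13 -- the 1-unit job waits 12 units */
}
```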
9.7 Briefly define round-robin scheduling.
Uses preemption based on a clock
Also known as time slicing because each process is given a slice of time before being
preempted
Principal design issue is the length of the time quantum, or slice, to be used
Particularly effective in a general-purpose time-sharing system or transaction processing
system
One drawback is its relative treatment of processor-bound and I/O-bound processes
9.8 Briefly define shortest-process-next scheduling.
Nonpreemptive policy in which the process with the shortest expected processing time is
selected next
A short process will jump to the head of the queue
Possibility of starvation for longer processes
One difficulty is the need to know, or at least estimate, the required processing time of each
process
If the programmer’s estimate is substantially under the actual running time, the system may
abort the job
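A common remedy is to predict each burst by exponential averaging of past observed bursts,
S(n+1) = alpha * T(n) + (1 - alpha) * S(n), where T(n) is the nth measured burst and S(n) the
running estimate. A minimal sketch (alpha and the burst values are made up):

```c
/* Exponential averaging for burst-time prediction. */
#include <stdio.h>

double next_estimate(double alpha, double measured, double previous)
{
    return alpha * measured + (1.0 - alpha) * previous;
}

int main(void)
{
    double bursts[] = { 6.0, 4.0, 6.0, 4.0 };  /* made-up observations */
    double s = 10.0;                            /* initial guess        */
    for (int i = 0; i < 4; i++) {
        s = next_estimate(0.8, bursts[i], s);   /* recent bursts dominate */
        printf("after burst %.0f: estimate = %.2f\n", bursts[i], s);
    }
    return 0;
}
```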
Chapter 11
11.1 List and briefly define three techniques for performing I/O.
Programmed I/O : The processor issues an I/O command, on behalf of a process, to an I/O module;
that process then busy waits for the operation to be completed before proceeding.
Interrupt-driven I/O : The processor issues an I/O command on behalf of a process. There are then
two possibilities. If the I/O instruction from the process is nonblocking, then the processor continues
to execute instructions from the process that issued the I/O command. If the I/O instruction
is blocking, then the next instruction that the processor executes is from the OS, which will put the
current process in a blocked state and schedule another process.
Direct memory access (DMA) : A DMA module controls the exchange of data between main
memory and an I/O module. The processor sends a request for the transfer of a block of data to the
DMA module and is interrupted only after the entire block has been transferred.
11.2 What is the difference between logical I/O and device I/O?
Logical I/O: The logical I/O module deals with the device as a logical resource and is not concerned
with the details of actually controlling the device. The logical I/O module is concerned with
managing general I/O functions on behalf of user processes.
Device I/O: The requested operations and data (buffered characters, records, etc.) are converted into
appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering
techniques may be used to improve utilization.
11.3 What is the difference between block-oriented devices and stream-oriented devices?
Give a few examples of each.
Block-oriented device
stores information in blocks that are usually of fixed size
transfers are made one block at a time
possible to reference data by its block number
disks and USB keys are examples
Stream-oriented device
transfers data in and out as a stream of bytes
no block structure
terminals, printers, communications ports, and most other devices that are not
secondary storage are examples
11.4 Why would you expect improved performance using a double buffer rather than a
single buffer for I/O?
A process can transfer data to (or from) one buffer while the operating system empties (or fills) the
other, so computation and data transfer can be overlapped.
11.5 What delay elements are involved in a disk read or write?
Waiting for the device, waiting for the channel (if shared), seek time (moving the head to the proper
track), rotational delay (waiting for the start of the sector to reach the head), and data transfer time.
11.6 Briefly define the disk scheduling policies illustrated in Figure 11.7.
FIFO: process the requests in the order they arrive. SSTF: select the request that requires the least
movement of the disk arm from its current position. SCAN: the arm moves in one direction only,
satisfying each outstanding request on the way, until it reaches the last track in that direction, then
the direction is reversed. C-SCAN: like SCAN, but requests are serviced in one direction only; when
the last track is reached, the arm returns to the opposite end and the scan begins again.
11.7 Briefly define the seven RAID levels.
RAID 0: non-redundant striping. RAID 1: mirroring. RAID 2: redundancy through a Hamming code
with bit-interleaved striping. RAID 3: bit-interleaved parity on a single redundant disk. RAID 4:
block-level striping with a dedicated parity disk. RAID 5: block-level striping with parity distributed
across all disks. RAID 6: block-level striping with two independent parity computations, tolerating
two disk failures.
11.8 What is the typical disk sector size?
512 bytes.
Answer
Preemptive Shortest Remaining Time (SRT):
Explanation: P1 starts but is preempted after 20 ms when P2 arrives, since P2's burst time (20 ms) is
shorter than P1's remaining burst time (30 ms). P2 runs to completion. At 40 ms P3 arrives, but it has
a longer burst time than P1, so P1 will run. At 60 ms P4 arrives. At this point P1 has a remaining
burst time of 10 ms, which is the shortest time, so it continues to run.
Once P1 finishes, P4 starts to run since it has shorter burst time than P3.
Non-preemptive Priority:
Explanation: P1 starts, but as the scheduler is non-preemptive, it continues executing even though it
has lower priority than P2. When P1 finishes, P2 and P3 have arrived. Among these two, P2 has
higher priority, so P2 will be scheduled, and it keeps the processor until it finishes. Now we have P3
and P4 in the ready queue. Among these two, P4 has higher priority, so it will be scheduled. After P4
finishes, P3 is scheduled to run.
Round-Robin (quantum = 30 ms):
Explanation: P1 arrives first, so it will get the 30ms quantum. After that, P2 is in the ready queue, so
P1 will be preempted and P2 is scheduled for 20ms. While P2 is running, P3 arrives. Note that P3
will be queued after P1 in the FIFO ready queue. So when P2 is done, P1 will be scheduled for the
next quantum. It runs for 20ms. In the meantime, P4 arrives and is queued after P3. So after P1 is
done, P3 runs for one 30 ms quantum. Once it is done, P4 runs for a 30ms quantum. Then again P3
runs for 30 ms, and after that P4 runs for 10 ms, and after that P3 runs for 30+10 ms since there is
nobody left to compete with.