Problem Review

Chapter 2

2.1 What are three objectives of an OS design?


• Convenience
Makes the computer more convenient to use
• Efficiency
Allows computer system resources to be used in an efficient manner
• Ability to evolve
Permit effective development, testing, and introduction of new system functions
without interfering with service

2.2 What is the kernel of an OS?


The kernel is the portion of the operating system that resides in main memory and contains the most frequently used functions; it is also called the nucleus.
The kernel controls execution of the processor(s): it manages thread scheduling, process switching, exception and interrupt handling, and multiprocessor synchronization. (In Windows, unlike the rest of the Executive and the user level, the Kernel's own code does not run in threads.)

2.3 What is multiprogramming?


Multiprogramming
 also known as multitasking
 There must be enough memory to hold the OS (resident monitor) and more than one user program
 When one job needs to wait for I/O, the processor can switch to another job, which is likely not waiting for I/O

2.4 What is a process?


Many definitions have been given for the term process, including
 A program in execution
 An instance of a program running on a computer
 The entity that can be assigned to and executed on a processor
 A unit of activity characterized by a single sequential thread of execution and an associated
set of system resources.

2.5 How is the execution context of a process used by the OS?


The execution context, or process state, is the internal data by which the OS is able to supervise and control the process. This internal information is kept separate from the process, because the OS holds information that the process is not permitted to access.
The context includes all of the information that the OS needs to manage the process and that the processor needs to execute the process properly, including:
• the contents of the various processor registers, such as the program counter and data registers
• information of use to the OS, such as the priority of the process and whether the process is waiting for the completion of a particular I/O event

2.6 List and briefly explain five storage management responsibilities of a typical OS.

The operating system has five principal storage management responsibilities:


Process isolation:

• The OS must prevent independent processes from interfering with each other’s memory,
both data and instructions.
Automatic allocation and management:

• Programs should be dynamically allocated across the memory hierarchy as required.


• Allocation should be transparent to the programmer.
• Thus, the programmer is relieved of concerns relating to memory limitations, and the OS
can achieve efficiency by assigning memory to jobs only as needed.
Support of modular programming:

• Programmers should be able to define program modules, and to create, destroy, and alter the
size of modules dynamically.
Protection and access control:

• Sharing of memory creates the potential for one program to address the memory space of
another.
• This is desirable when sharing is needed by particular applications.
• At other times, it threatens the integrity of programs and even of the OS itself.
• The OS must allow portions of memory to be accessible in various ways by various users.
Long-term storage:

• Many application programs require means for storing information for extended periods of
time, after the computer has been powered down. The file system implements a long-term
store, with information stored in named objects, called files.
• The file is a convenient concept for the programmer and is a useful unit of access control
and protection for the OS.

2.7 Explain the distinction between a real address and a virtual address.

Virtual address is the address of a storage location in virtual memory.


Real address is a physical address in main memory.

2.8 Describe the round-robin scheduling technique.


A common strategy is to give each process in the queue some time in turn; this is referred to
as a round-robin technique. In effect, the round-robin technique employs a circular queue.
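As an illustration, here is a minimal sketch of that circular queue in C; the process IDs and slice count are made up, and a real dispatcher would save and restore process contexts rather than print:

#include <stdio.h>

#define NPROCS 3
#define SLICES 6   /* total time slices to simulate */

int main(void) {
    int queue[NPROCS] = {101, 102, 103};  /* hypothetical process IDs */
    int next = 0;                         /* index of the next process to run */

    for (int slice = 0; slice < SLICES; slice++) {
        printf("slice %d: run process %d\n", slice, queue[next]);
        next = (next + 1) % NPROCS;       /* circular queue: wrap around */
    }
    return 0;
}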
2.9 Explain the difference between a monolithic kernel and a microkernel.
A microkernel architecture assigns only a few essential functions to the kernel, including address spaces, interprocess communication (IPC), and basic scheduling. Other OS services are provided by processes, sometimes called servers, that run in user mode and are treated like any other application by the microkernel. This approach decouples kernel and server development.

In a monolithic kernel, most OS functionality is provided in one large kernel, including scheduling, the file system, networking, device drivers, memory management, and more. Typically, a monolithic kernel is implemented as a single process, with all elements sharing the same address space.

2.10 What is multithreading?
Multithreading is a technique in which a process, executing an application, is divided into threads
that can run concurrently. Multithreading is useful for applications that perform a number of
essentially independent tasks that do not need to be serialized.
With multiple threads running within the same process, switching back and forth among threads
involves less processor overhead than a major process switch between different processes.
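For instance, a minimal POSIX threads sketch in C (compile with -pthread): two threads of the same process run essentially independent tasks concurrently while sharing the process's address space.

#include <pthread.h>
#include <stdio.h>

/* Each thread performs an essentially independent task. */
static void *task(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    /* Two threads within one process; switching between them is cheaper
       than a full process switch. */
    pthread_create(&t1, NULL, task, "thread-1");
    pthread_create(&t2, NULL, task, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}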

2.11 List key design issues for SMP operating system.


1. Simultaneous concurrent processes or threads: kernel routines need to be reentrant so that several processors can execute the same kernel code simultaneously.
2. Scheduling: any processor may perform scheduling, which complicates the task of enforcing a scheduling policy.
3. Synchronization: with multiple active processes having potential access to shared address spaces or shared I/O resources, care must be taken to provide effective synchronization.
4. Memory management: the OS must exploit the available hardware parallelism to achieve the best performance.
5. Reliability and fault tolerance: the OS should provide graceful degradation in the face of processor failure.

Chapter 3

3.1 What is an instruction trace?


The behavior of an individual process can be characterized by listing the sequence of instructions that execute for that process. Such a listing is referred to as a trace of the process.
3.2 What common events lead to the creation of a process?

New batch job: The OS is provided with a batch job control stream, usually on tape or disk. When the OS is prepared to take on new work, it will read the next sequence of job control commands.

Interactive logon: A user at a terminal logs on to the system.

Created by OS to provide a service: The OS can create a process to perform a function on behalf of a user program, without the user having to wait (e.g., a process to control printing).

Spawned by existing process: For purposes of modularity or to exploit parallelism, a user program can dictate the creation of a number of processes.

3.3 For the process model of Figure 3.6, briefly define each state.

The five states in this diagram are:
Running: The process that is currently being executed.
Ready: A process that is prepared to execute when given the opportunity.
Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion of an I/O operation.
New: A process that has just been created but has not yet been admitted to the pool of executable
processes by the OS. Typically, a new process has not yet been loaded into main memory, although
its process control block has been created.
Exit: A process that has been released from the pool of executable processes by the OS, either
because it halted or because it aborted for some reason.

3.4 What does it mean to preempt a process?


The process is returning from the kernel to user mode, but the kernel preempts it and performs a context switch to schedule another process.

3.5 What is swapping and what is its purpose?


Swapping is moving part or all of a process from main memory to disk.
When none of the processes in main memory is in the Ready state, the OS swaps one of the blocked
processes out onto disk into a suspend queue. The space that is freed in main memory can then be
used to bring in another process. The purpose of swapping is to provide for efficient use of main
memory for process execution.

3.6 Why does Figure 3.9b have two blocked states?


Consider a system that does not employ virtual memory. Each process to be executed must be loaded fully into main memory, so all of the processes in all of the queues must be resident in main memory. Memory holds multiple processes, and the processor can move to another process when one process is blocked. But the processor is so much faster than I/O that it will be common for all of the processes in memory to be waiting for I/O; even with multiprogramming, a processor could be idle most of the time.
The solution is swapping, which involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the Ready state, the OS swaps one of the blocked processes out onto disk into a suspend queue. The space that is freed in main memory can then be used to bring in another process.
With the use of swapping, one other state must be added to the process behavior model: the Suspend state. This is why Figure 3.9b has two blocked states:
Blocked: The process is in main memory and awaiting an event.
Blocked/Suspend: The process is in secondary memory and awaiting an event.
Blocked: The process is in main memory and awaiting an event.
Blocked/Suspend: The process is in secondary memory and awaiting an event.

3.7 List four characteristics of a suspended process.


Characteristics of a Suspended Process
• The process is not immediately available for execution
• The process may or may not be waiting on an event
• The process may not be removed from this state until the agent explicitly orders the removal
• The process was placed in a suspended state by an agent: either itself, a parent process, or the
OS, for the purpose of preventing its execution

3.8 For what types of entities does the OS maintain tables of information for management purposes?

The OS maintains four kinds of tables: memory tables, I/O tables, file tables, and process tables.

3.9 List three general categories of information in a process control block.

Process identification, processor state information, and process control information.

3.10 Why are two modes (user and kernel) needed?


User Mode
The less-privileged mode is referred to as user mode, because user programs typically execute in this mode. This mode is needed to protect the OS and key operating system tables, such as process control blocks, from interference by user programs.
Kernel Mode
The more-privileged mode is referred to as system mode, control mode, or kernel mode, because the kernel of the operating system executes in it. In kernel mode, the software has complete control of the processor and all its instructions, registers, and memory. This level of control is not necessary, and for safety is not desirable, for user programs.

3.11 What are the steps performed by an OS to create a new process?


1. Assign a unique process identifier to the new process.
2. Allocate space for the process.
3. Initialize the process control block.
4. Set the appropriate linkages.
5. Create or expand other data structures.
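On a UNIX-like system these steps are triggered by fork(); a minimal sketch, in which the kernel performs steps 1-5 internally and returns the new process identifier to the parent:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();   /* kernel assigns a unique PID, allocates space,
                             and initializes the child's process control block */
    if (pid == 0) {
        printf("child: my PID is %d\n", getpid());
    } else if (pid > 0) {
        printf("parent: created child %d\n", (int)pid);
        wait(NULL);       /* linkage: parent waits on its child */
    }
    return 0;
}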

3.12 What is the difference between an interrupt and a trap?


An interrupt is due to some sort of event that is external to and independent of the currently running process, such as the completion of an I/O operation.
A trap relates to an error or exception condition generated within the currently running process, such as an illegal file access attempt.

3.13 Give three examples of an interrupt.


o clock interrupt
o I/O interrupt
o memory fault
Clock interrupt:
The currently running process has been executing for the maximum allowable time slice, as determined by the operating system. When this happens, the process must be moved to the Ready state and another process dispatched.

I/O Interrupt:
The operating system determines what I/O action has occurred.
Memory fault:
The processor encounters a virtual memory address reference for a word that is not in main memory.
After the I/O request is issued to bring in the block of memory, the process with the memory fault is
placed in a blocked state.

3.14 What is the difference between a mode switch and a process switch?

A mode switch may occur without changing the state of the process that is currently in the Running state.
A process switch may occur any time that the operating system has gained control from the currently running process. The OS must save the context of the current process in its PCB, update the PCB state and accounting information, move the PCB to the appropriate queue, select a new process, remove the PCB of that process from the ready queue, update its memory management structures, and restore its context into the processor.
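A schematic sketch of that bookkeeping in C, using a hypothetical, much-simplified PCB; the fields and functions are illustrative, not any real kernel's layout:

#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified process control block. */
struct pcb {
    int pid;
    enum { READY, RUNNING, BLOCKED } state;
    unsigned long regs[4];                /* stand-in for the saved register set */
};

/* Process switch: save the old context into its PCB, update states,
   and restore the new process's context "into the processor". */
void process_switch(struct pcb *from, struct pcb *to, unsigned long *cpu_regs) {
    memcpy(from->regs, cpu_regs, sizeof from->regs);  /* save context in PCB */
    from->state = READY;                              /* update PCB state */
    to->state = RUNNING;                              /* dispatcher's choice */
    memcpy(cpu_regs, to->regs, sizeof to->regs);      /* restore its context */
}

int main(void) {
    unsigned long cpu[4] = {1, 2, 3, 4};
    struct pcb a = {.pid = 1, .state = RUNNING}, b = {.pid = 2, .state = READY};
    process_switch(&a, &b, cpu);
    printf("now running: pid %d\n", b.pid);
    return 0;
}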

Problem
3.1 The following state transition table is a simplified model of process management, with the labels
representing transitions between states of READY, RUN, BLOCKED and NONRESIDENT.

Give an example of an event that can cause each of the above transitions.
Answer
RUN to READY can be caused by a time-quantum expiration
READY to NONRESIDENT occurs if memory is overcommitted, and a process is temporarily
swapped out of memory
READY to RUN occurs only if a process is allocated the CPU by the dispatcher
RUN to BLOCKED can occur if a process issues an I/O or other kernel request.
BLOCKED to READY occurs if the awaited event completes (perhaps I/O completion)
BLOCKED to NONRESIDENT - same as READY to NONRESIDENT.

3.2 Assume that at time 5 no system resources are being used except for the processor and memory. Now consider the following events:
At time 5: P1 executes a command to read from disk unit 3.
At time 15: P5's time slice expires.
At time 18: P7 executes a command to write to disk unit 3.
At time 20: P3 executes a command to read from disk unit 2.
At time 24: P5 executes a command to write to disk unit 3.
At time 28: P5 is swapped out.
At time 33: An interrupt occurs from disk unit 2: P3's read is complete.
At time 36: An interrupt occurs from disk unit 3: P1's read is complete.
At time 38: P8 terminates.
At time 40: An interrupt occurs from disk unit 3: P5's write is complete.
At time 44: P5 is swapped back in.
At time 48: An interrupt occurs from disk unit 3: P7's write is complete.
For each time 22, 37, and 47, identify which state each process is in. If a process is blocked, further identify the event on which it is blocked.

Answer
At time 22:
P1: blocked for I/O
P3: blocked for I/O
P5: ready/running
P7: blocked for I/O
P8: ready/running
At time 37:
P1: ready/running
P3: ready/running
P5: blocked suspend
P7: blocked for I/O
P8: ready/running
At time 47:
P1: ready/running
P3: ready/running
P5: ready suspend
P7: blocked for I/O
P8: exit

3.3 Figure 3.9b contains seven states. In principle, one could draw a transition between any two
states, for a total of 42 different transitions.
a. List all of the possible transitions and give an example of what could cause each transition.
b. List all of the impossible transitions and explain why.

Chapter 4
4.1 Table 3.5 lists typical elements found in a process control block for an unthreaded OS. Of
these, which should belong to a thread control block and which should belong to a process
control block for a multithreaded system?
Thread control block: a thread identifier, processor state information (registers, program counter, stack pointers), and per-thread scheduling and state information.
Process control block: process identification, memory management information, resource ownership, privileges, and interprocess communication information, since these are shared by all of the threads in the process.
4.2 List reasons why a mode switch between threads may be cheaper than a mode switch
between processes.
Thread switching does not require kernel mode privileges because all of the thread management data
structures are within the user address space of a single process. This saves the overhead of two mode
switches (user to kernel; kernel back to user).

4.3 What are the two separate and potentially independent characteristics embodied in the
concept of process?
Resource Ownership
 Process includes a virtual address space to hold the process image
 the OS provides protection to prevent unwanted interference between processes with respect
to resources
Scheduling/Execution
 Follows an execution path that may be interleaved with other processes
 a process has an execution state (Running, Ready, etc.) and a dispatching priority and is
scheduled and dispatched by the OS
 Traditional processes are sequential; i.e. only one execution path

4.4 Give four general examples of the use of threads in a single-user multiprocessing system.
Thread Use in a Single-User System
Foreground and background work: This arrangement often increases the perceived speed of the application by allowing the program to prompt for the next command before the previous command is complete.
Asynchronous processing: A thread can be created whose sole job is periodic backup and that schedules itself directly with the OS; there is no need for fancy code in the main program to provide for time checks or to coordinate input and output.
Speed of execution: A multithreaded process can compute one batch of data while reading the next
batch from a device. On a multiprocessor system, multiple threads from the same process may be
able to execute simultaneously.
Modular program structure: Programs that involve a variety of activities or a variety of sources
and destinations of input and output may be easier to design and implement using threads.
4.5 What resources are typically shared by all of the threads of a process?
All of the threads of a process share the state and resources of that process. They reside in the same
address space and have access to the same data. When one thread alters an item of data in memory,
other threads see the results if and when they access that item. If one thread opens a file with read
privileges, other threads in the same process can also read from that file.
4.6 List three advantages of ULTs over KLTs.
1. Thread switching does not require kernel mode privileges because all of the thread
management data structures are within the user address space of a single process. Therefore, the
process does not switch to the kernel mode to do thread management. This saves the overhead of two
mode switches (user to kernel; kernel back to user).
2. Scheduling can be application specific. One application may benefit most from a simple round-robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm.
The scheduling algorithm can be tailored to the application without disturbing the underlying OS
scheduler.
3. ULTs can run on any OS. No changes are required to the underlying kernel to support ULTs.
The threads library is a set of application-level functions shared by all applications.
4.7 List two disadvantages of ULTs compared to KLTs.
 In a typical OS many system calls are blocking. As a result, when a ULT executes a system
call, not only is that thread blocked, but also all of the threads within the process are blocked.
 In a pure ULT strategy, a multithreaded application cannot take advantage of
multiprocessing. A kernel assigns one process to only one processor at a time. Therefore,
only a single thread within a process can execute at a time.
4.8 Define jacketing.
 One way to overcome the problem of blocking system calls is to use a technique referred to as jacketing.
 The purpose of jacketing is to convert a blocking system call into a non-blocking system call.
 Within this jacket routine is code that checks to determine if the I/O device is busy.
 If it is, the thread enters the Blocked state and passes control (through the threads library) to
another thread.
 When this thread later is given control again, the jacket routine checks the I/O device again.
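A hedged sketch in C of what a jacket routine around read() might look like; thread_block_and_yield() stands in for a hypothetical user-level threads-library primitive and is stubbed here only so the sketch compiles:

#include <errno.h>
#include <unistd.h>

/* Stub for a hypothetical ULT-library call: block this thread and pass
   control (through the threads library) to another thread. */
static void thread_block_and_yield(void) { usleep(1000); }

/* Jacket around read(): turns a blocking call into a non-blocking check.
   The fd is assumed to have been opened with O_NONBLOCK. */
ssize_t jacketed_read(int fd, void *buf, size_t n) {
    for (;;) {
        ssize_t r = read(fd, buf, n);
        if (r >= 0)
            return r;                    /* device ready: done */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            thread_block_and_yield();    /* device busy: run another thread,
                                            re-check when rescheduled */
        else
            return -1;                   /* genuine error */
    }
}

int main(void) {
    char buf[64];
    return jacketed_read(0, buf, sizeof buf) >= 0 ? 0 : 1;  /* demo on stdin */
}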
Chapter 7
7.1 What requirements is memory management intended to satisfy?
o Relocation
o Protection
o Sharing

o Logical organization
o Physical organization
7.2 Why is the capability to relocate processes desirable?
In a multiprogramming system, the available main memory is generally shared among a number of
processes. Typically, it is not possible for the programmer to know in advance which other programs
will be resident in main memory at the time of execution of his or her program. In addition, we
would like to be able to swap active processes in and out of main memory to maximize processor
utilization by providing a large pool of ready processes to execute. Once a program has been
swapped out to disk, it would be quite limiting to declare that when it is next swapped back in, it
must be placed in the same main memory region as before. Instead, we may need to relocate the
process to a different area of memory.
7.3 Why is it not possible to enforce memory protection at compile time?
Because the location of a program in main memory is unpredictable due to relocation, it is impossible to check absolute addresses at compile time to assure protection.

7.4 What are some reasons to allow two or more processes to all have access to a particular
region of memory?
If a number of processes are executing the same program, it is advantageous to allow each process to
access the same copy of the program rather than have its own separate copy. Processes that are
cooperating on some task may need to share access to the same data structure.

7.5 In a fixed-partitioning scheme, what are the advantages of using unequal-size partitions?
• With equal-size partitions, a program may be too big to fit into a partition, and main memory utilization is inefficient because any program, no matter how small, occupies an entire partition (internal fragmentation).
• Using unequal-size partitions lessens both problems: larger programs can be accommodated without overlays, and smaller programs can be placed in smaller partitions, reducing internal fragmentation.

7.6 What is the difference between internal and external fragmentation?


Internal fragmentation refers to wasted space internal to a partition, due to the fact that the block of data loaded is smaller than the partition.
External fragmentation occurs when the memory external to all partitions becomes increasingly fragmented; as time goes on, memory becomes more and more fragmented, and memory utilization declines.
7.7 What are the distinctions among logical, relative, and physical addresses?
A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved.
A relative address is a particular example of logical address, in which the address is expressed as a
location relative to some known point, usually a value in a processor register.
A physical address, or absolute address, is an actual location in main memory.
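For example, with base-register relocation the processor forms the physical address at run time by adding the relative address to a base register, after a bounds check; a tiny sketch with made-up values:

#include <stdio.h>

int main(void) {
    unsigned long base  = 0x40000;  /* where the process image was loaded */
    unsigned long rel   = 0x128;    /* relative (logical) address in the image */
    unsigned long limit = 0x10000;  /* bounds register, for protection */

    if (rel < limit)                                  /* run-time protection check */
        printf("physical = 0x%lx\n", base + rel);     /* prints 0x40128 */
    else
        printf("addressing fault\n");
    return 0;
}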

7.8 What is the difference between a page and a frame?


 A page is one of the fixed-size chunks into which a process is divided.
 A frame (or page frame) is an available chunk of main memory of the same size, into which a page can be loaded.

7.9 What is the difference between a page and a segment?


Page: A fixed-length block of data that resides in secondary memory.
Segment: A variable-length block of data that resides in secondary memory.

Problems

7.1 In Section 2.3, we listed five objectives of memory management, and in Section 7.1, we
listed five requirements. Argue that each list encompasses all of the concerns addressed in the
other.
7.2 Consider a fixed partitioning scheme with equal-size partitions of 2^16 bytes and a total main memory size of 2^24 bytes. A process table is maintained that includes a pointer to a partition for each resident process. How many bits are required for the pointer?
2^24 / 2^16 = 2^8 partitions, so 8 bits are required for the pointer.

7.3 Consider a dynamic partitioning scheme. Show that, on average, the memory contains half
as many holes as segments.
• Let s and h denote the average number of segments and holes, respectively.
• The probability that a given segment is followed by a hole in memory (and not by another
segment) is 0.5, because deletions and creations are equally probable in equilibrium.
• So, with s segments in memory, the average number of holes must be s/2.
• It is intuitively reasonable that the number of holes must be less than the number of segments
because neighboring segments can be combined into a single hole on deletion.

7.4 To implement the various placement algorithms discussed for dynamic partitioning
(Section 7.2), a list of the free blocks of memory must be kept. For each of the three methods
discussed (best-fit, first-fit, next-fit), what is the average length of the search?
• By problem 7.3, we know that the average number of holes is s/2, where s is the number of
resident segments.
• Regardless of fit strategy, in equilibrium, the average search length is s/4.

7.5 Another placement algorithm for dynamic partitioning is referred to as worst-fit. In this case, the largest free block of memory is used for bringing in a process. Discuss the pros and cons of this method compared to first-, next-, and best-fit. What is the average length of the search for worst-fit?
• A criticism of the best fit algorithm is that the space remaining after allocating a block of the
required size is so small that in general it is of no real use.
• The worst fit algorithm maximizes the chance that the free space left after a placement will be
large enough to satisfy another request, thus minimizing the frequency of compaction.
• The disadvantage of this approach is that the largest blocks are allocated first; therefore a request for a large area is more likely to fail.
• The average search length is again s/4.

7.7 A 1-Mbyte block of memory is allocated using the buddy system.


a. Show the results of the following sequence in a figure similar to Figure 7.6:
Request 70; Request 35; Request 80; Return A; Request 60; Return B; Return D; Return C.
b. Show the binary tree representation following Return B.

Chapter 8

8.1 What is the difference between simple paging and virtual memory paging?

With simple paging, all of the pages of a process must be in main memory for the process to run. With virtual memory paging, only some of the pages need be resident; pages are brought in and out as needed (see Table 8.1).

8.2 Explain thrashing.

When the operating system brings one piece in, it must throw another out. If it throws out a piece just before it is used, it will have to go get that piece again almost immediately. Too much of this leads to a condition known as thrashing: the system spends most of its time swapping pieces rather than executing instructions.

8.3 Why is the principle of locality crucial to the use of virtual memory?

The principle of locality is what suggests that a virtual memory scheme may work: references to memory tend to cluster, so the operating system can guess, based on recent history, which pieces are least likely to be used in the near future, keep the actively used pieces resident, and thereby avoid thrashing.

8.4 What elements are typically found in a page table entry? Briefly define each element.

Each process has its own page table. Each page table entry contains:
• Frame number: the frame in main memory that holds the corresponding page.
• Present bit (P): indicates whether the page is in main memory or not.
• Modify bit (M): indicates whether the contents of the corresponding page have been altered since the page was last loaded into main memory.
• Other control bits: used for protection or sharing at the page level.
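A schematic rendering of such an entry in C bit-fields; the exact widths and layout are machine-specific, so this is only illustrative:

#include <stdint.h>
#include <stdio.h>

/* Illustrative page table entry; real layouts are machine-specific. */
struct pte {
    uint32_t present  : 1;   /* P: page is currently in main memory */
    uint32_t modified : 1;   /* M: page altered since it was last loaded */
    uint32_t control  : 2;   /* protection/sharing control bits */
    uint32_t frame    : 20;  /* frame number in main memory */
};

int main(void) {
    struct pte e = {.present = 1, .frame = 7};
    printf("present=%u frame=%u (%zu bytes)\n",
           (unsigned)e.present, (unsigned)e.frame, sizeof e);
    return 0;
}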

8.5 What is the purpose of a translation lookaside buffer?

 Each virtual memory reference can cause two physical memory accesses:
 one to fetch the page table entry
 one to fetch the data (or the next instruction)
 To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache for page table entries, called a translation lookaside buffer (TLB)
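A toy sketch of the lookup in C: the TLB caches recent page-to-frame translations, so a hit avoids the extra page-table access (the entries and sizes here are made up):

#include <stdio.h>

#define TLB_SIZE 8

/* Illustrative TLB entry: a cached page-number -> frame-number mapping. */
struct tlb_entry { int valid, page, frame; };
static struct tlb_entry tlb[TLB_SIZE];

/* Returns the frame number on a TLB hit, or -1 on a miss, in which case
   the page table would have to be consulted (the second memory access). */
int tlb_lookup(int page) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;         /* hit: page-table access avoided */
    return -1;                           /* miss: walk the page table */
}

int main(void) {
    tlb[0] = (struct tlb_entry){1, 5, 0};
    printf("page 5 -> %d\n", tlb_lookup(5));   /* hit: frame 0 */
    printf("page 2 -> %d\n", tlb_lookup(2));   /* miss: -1 */
    return 0;
}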
Problems

8.1 Suppose the page table for the process currently executing on the processor looks like the following. All numbers are decimal, everything is numbered starting from zero, and all addresses are memory byte addresses. The page size is 1024 bytes.

Virtual page number   Valid bit   Reference bit   Modify bit   Page frame number
0                     1           1               0            4
1                     1           1               1            7
2                     0           0               0            —
3                     1           0               0            2
4                     0           0               0            —
5                     1           0               1            0
a. Describe exactly how, in general, a virtual address generated by the CPU is translated
into a physical main memory address.
b. What physical address, if any, would each of the following virtual addresses correspond
to? (Do not try to handle any page faults, if any.)
(i) 1052
(ii) 2221
(iii) 5499
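A worked sketch of part b in C, using the table above (page size 1024): the virtual page number is the address divided by 1024, the offset is the remainder, and the physical address is frame * 1024 + offset whenever the valid bit is set. It prints 7196 for 1052, a page fault for 2221, and 379 for 5499.

#include <stdio.h>

#define PAGE_SIZE 1024

/* Valid bit and frame number per virtual page, from the table above. */
static const int valid[6] = {1, 1, 0, 1, 0, 1};
static const int frame[6] = {4, 7, -1, 2, -1, 0};

int main(void) {
    int vaddrs[] = {1052, 2221, 5499};
    for (int i = 0; i < 3; i++) {
        int vpn    = vaddrs[i] / PAGE_SIZE;   /* virtual page number */
        int offset = vaddrs[i] % PAGE_SIZE;   /* offset within the page */
        if (vpn < 6 && valid[vpn])
            printf("%d -> %d\n", vaddrs[i], frame[vpn] * PAGE_SIZE + offset);
        else
            printf("%d -> page fault\n", vaddrs[i]);
    }
    return 0;
}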

Chapter 9

9.1 Briefly describe the three types of processor scheduling.


Long-Term Scheduling
The long-term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming: the more processes that are created, the smaller is the percentage of time that each process can be executed.
Medium-Term Scheduling
Medium-term scheduling is part of the swapping function. The swapping-in decision is based on the need to manage the degree of multiprogramming and considers the memory requirements of the swapped-out processes.
Short-Term Scheduling
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. It is invoked when an event occurs that may lead to the suspension of the current process, such as a clock interrupt, an I/O interrupt, an operating system call, or a signal.

9.2 What is usually the critical performance requirement in an interactive operating system?
Response time: For an interactive process, this is the time from the submission of a request until the response begins to be received.
9.3 What is the difference between turnaround time and response time?
Turnaround time
This is the interval of time between the submission of a process and its completion. It includes actual execution time plus time spent waiting for resources, including the processor. This is an appropriate measure for a batch job.
Response time
For an interactive process, this is the time from the submission of a request until the response begins to be received.
9.4 For process scheduling, does a low-priority value represent a low priority or a high
priority?
For process scheduling, a low priority value represents a high priority.
9.5 What is the difference between preemptive and nonpreemptive scheduling?
 Nonpreemptive
 once a process is in the running state, it will continue until it terminates, blocks itself for I/O, or voluntarily gives up the processor
 Preemptive
 currently running process may be interrupted and moved to ready state by the OS
 preemption may occur when new process arrives, on an interrupt, or periodically
9.6 Briefly define FCFS scheduling.
 Simplest scheduling policy
 Also known as first-in-first-out (FIFO) or a strict queuing scheme
 When the current process ceases to execute, the longest process in the Ready queue is
selected
 Performs much better for long processes than short ones
 Tends to favor processor-bound processes over I/O-bound processes
9.7 Briefly define round-robin scheduling.
 Uses preemption based on a clock
 Also known as time slicing because each process is given a slice of time before being
preempted
 Principal design issue is the length of the time quantum, or slice, to be used
 Particularly effective in a general-purpose time-sharing system or transaction processing
system
 One drawback is its relative treatment of processor-bound and I/O-bound processes
9.8 Briefly define shortest-process-next scheduling.
 Nonpreemptive policy in which the process with the shortest expected processing time is
selected next
 A short process will jump to the head of the queue
 Possibility of starvation for longer processes
 One difficulty is the need to know, or at least estimate, the required processing time of each
process
 If the programmer’s estimate is substantially under the actual running time, the system may
abort the job

9.9 Briefly define shortest-remaining-time scheduling.


 Preemptive version of SPN
 Scheduler always chooses the process that has the shortest expected remaining processing
time
 Risk of starvation of longer processes
 Should give superior turnaround time performance to SPN because a short job is given
immediate preference to a running longer job

9.10 Briefly define highest-response-ratio-next scheduling.


 Chooses the next process with the greatest response ratio, defined as (time spent waiting + expected service time) / (expected service time)
 Attractive because it accounts for the age of the process
 While shorter jobs are favored, aging without service increases the ratio so that a longer process will eventually get past competing shorter jobs

9.11 Briefly define feedback scheduling.


 Preemptive, with a time quantum
 When a process is preempted, it is demoted to the next lower-priority queue
 Within each queue (except the lowest-priority queue), FCFS is used
 In the lowest-priority queue, processes are scheduled round-robin (RR)
9.1 Consider the following workload:

a. Show the schedule using Shortest Remaining Time, non-preemptive Priority (a smaller priority number implies higher priority), and Round Robin with quantum 30 ms. Use a time-scale diagram, as shown for the FCFS example (1 unit = 10 ms), to show the schedule for each requested scheduling policy.

b. What is the average waiting time of the above scheduling policies?

Chapter 11

11.1 List and briefly define three techniques for performing I/O.
Programmed I/O : The processor issues an I/O command, on behalf of a process, to an I/O module;
that process then busy waits for the operation to be completed before proceeding.
Interrupt-driven I/O : The processor issues an I/O command on behalf of a process. There are then
two possibilities. If the I/O instruction from the process is nonblocking, then the processor continues
to execute instructions from the process that issued the I/O command. If the I/O instruction
is blocking, then the next instruction that the processor executes is from the OS, which will put the
current process in a blocked state and schedule another process.
Direct memory access (DMA) : A DMA module controls the exchange of data between main
memory and an I/O module. The processor sends a request for the transfer of a block of data to the
DMA module and is interrupted only after the entire block has been transferred.

11.2 What is the difference between logical I/O and device I/O?
Logical I/O: The logical I/O module deals with the device as a logical resource and is not concerned with the details of actually controlling the device. It is concerned with managing general I/O functions on behalf of user processes.
Device I/O: The requested operations and data (buffered characters, records, etc.) are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. Buffering techniques may be used to improve utilization.

11.3 What is the difference between block-oriented devices and stream-oriented devices?
Give a few examples of each.
 Block-oriented device
 stores information in blocks that are usually of fixed size
 transfers are made one block at a time
 possible to reference data by its block number
 disks and USB keys are examples
 Stream-oriented device
 transfers data in and out as a stream of bytes
 no block structure
 terminals, printers, communications ports, and most other devices that are not
secondary storage are examples

11.4 Why would you expect improved performance using a double buffer rather than a single buffer for I/O?
A process can transfer data to (or from) one buffer while the operating system empties (or fills) the other, allowing computation and I/O to overlap.
11.5 What delay elements are involved in a disk read or write?
Seek time, rotational delay, and data transfer time, plus any time spent waiting for the device to become available.
11.6 Briefly define the disk scheduling policies illustrated in Figure 11.7.
11.7 Briefly define the seven RAID levels.
11.8 What is the typical disk sector size?
512 bytes.
Answer

a. Shortest Remaining Time:

Explanation: P1 starts but is preempted after 20 ms when P2 arrives with a shorter burst time (20 ms) than P1's remaining burst time (30 ms). P2 runs to completion. At 40 ms P3 arrives, but it has a longer burst time than P1, so P1 runs. At 60 ms P4 arrives; at this point P1 has a remaining burst time of 10 ms, which is the shortest, so it continues to run.
Once P1 finishes, P4 starts to run, since it has a shorter burst time than P3.

Non-preemptive Priority:

Explanation: P1 starts, but as the scheduler is non-preemptive, it continues executing even though it
has lower priority than P2. When P1 finishes, P2 and P3 have arrived. Among these two, P2 has
higher priority, so P2 will be scheduled, and it keeps the processor until it finishes. Now we have P3
and P4 in the ready queue. Among these two, P4 has higher priority, so it will be scheduled. After P4
finishes, P3 is scheduled to run.

Round Robin with quantum of 30 ms:

Explanation: P1 arrives first, so it gets the first 30 ms quantum. After that, P2 is in the ready queue, so P1 is preempted and P2 is scheduled for 20 ms. While P2 is running, P3 arrives; note that P3 is queued after P1 in the FIFO ready queue. So when P2 is done, P1 is scheduled for the next quantum and runs for 20 ms. In the meantime, P4 arrives and is queued after P3. After P1 is done, P3 runs for one 30 ms quantum. Once it is done, P4 runs for a 30 ms quantum. Then again P3 runs for 30 ms, after that P4 runs for 10 ms, and after that P3 runs for 30+10 ms since there is nobody left to compete with.

Shortest Remaining Time: (20+0+70+10)/4 = 25ms.


Explanation: P2 does not wait, but P1 waits 20ms, P3 waits 70ms and P4 waits 10ms.

Non-preemptive Priority: (0+30+10+70)/4 = 27.5ms


Explanation: P1 does not wait, P2 waits 30ms until P1 finishes, P4 waits only 10ms since it arrived at
60ms and it is scheduled at 70ms. P3 waits 70ms.

Round-Robin: (20+10+70+70)/4 = 42.5ms


Explanation: P1 waits only for P2 (for 20ms). P2 waits only 10ms until P1 finishes the quantum (it
arrives at 20ms and the quantum is 30ms). P3 waits 30ms to start, then 40ms for P4 to finish. P4
waits 40ms to start and one quantum slice for P3 to finish.

