Solutions To Written Assignment 3
5.2 Discuss how the following pairs of scheduling criteria conflict in certain settings.
a. CPU utilization and response time
b. Average turnaround time and maximum waiting time
c. I/O device utilization and CPU utilization
Answer:
a. CPU utilization and response time: CPU utilization is increased if the overheads associated with context switching are minimized. The context-switching overheads can be lowered by performing context switches infrequently. This could, however, result in increased response time for processes.
b. Average turnaround time and maximum waiting time: Average turnaround time is minimized by executing the shortest tasks first. Such a scheduling policy could, however, starve long-running tasks and thereby increase their waiting time.
c. I/O device utilization and CPU utilization: CPU utilization is maximized by running long CPU-bound tasks without performing context switches. I/O device utilization is maximized by scheduling I/O-bound jobs as soon as they become ready to run, thereby incurring the overheads of frequent context switches.
5.3 Consider the exponential average formula used to predict the length of the next CPU burst. What are the implications of assigning the following values to the parameters used by the algorithm?
a. α = 0 and τ0 = 100 milliseconds
b. α = 0.99 and τ0 = 10 milliseconds
Answer: The exponential average is τn+1 = α·tn + (1 − α)·τn, where tn is the length of the most recent CPU burst. When α = 0 and τ0 = 100 milliseconds, the formula always makes a prediction of 100 milliseconds for the next CPU burst. When α = 0.99 and τ0 = 10 milliseconds, the most recent behavior of the process is given much higher weight than its past history. Consequently, the scheduling algorithm is almost memoryless and essentially predicts the length of the previous burst as the next quantum of CPU execution.
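To make the two parameter choices concrete, the small C sketch below applies τn+1 = α·tn + (1 − α)·τn to a sequence of hypothetical burst lengths (the burst values are illustrative, not from the problem). With α = 0 the prediction never moves from τ0 = 100 ms; with α = 0.99 each prediction essentially tracks the previous burst.

#include <stdio.h>

/* One step of the exponential average: tau_next = alpha*t + (1 - alpha)*tau */
static double next_prediction(double alpha, double tau, double burst) {
    return alpha * burst + (1.0 - alpha) * tau;
}

int main(void) {
    /* Hypothetical observed CPU-burst lengths, in milliseconds */
    double bursts[] = { 6.0, 4.0, 6.0, 13.0, 13.0, 13.0 };
    int n = sizeof bursts / sizeof bursts[0];

    double tau_a = 100.0;  /* case a: alpha = 0,    tau0 = 100 ms */
    double tau_b = 10.0;   /* case b: alpha = 0.99, tau0 = 10 ms  */

    for (int i = 0; i < n; i++) {
        tau_a = next_prediction(0.0,  tau_a, bursts[i]);  /* stays at 100 ms   */
        tau_b = next_prediction(0.99, tau_b, bursts[i]);  /* tracks last burst */
        printf("burst %5.1f ms -> prediction (a) %6.2f ms, (b) %6.2f ms\n",
               bursts[i], tau_a, tau_b);
    }
    return 0;
}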
5.4 Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:

Process   Burst Time   Priority
P1        10           3
P2         1           1
P3         2           3
P4         1           4
P5         5           2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?
Answer:
a. Gantt charts (execution order): FCFS runs P1, P2, P3, P4, P5; SJF runs P2, P4, P3, P5, P1; nonpreemptive priority runs P2, P5, P1, P3, P4; RR (quantum = 1) interleaves the ready processes one millisecond at a time, with P2 finishing at time 2, P4 at 4, P3 at 7, P5 at 14, and P1 at 19.

b. Turnaround time

        FCFS   RR   SJF   Priority
P1       10    19    19       16
P2       11     2     1        1
P3       13     7     4       18
P4       14     4     2       19
P5       19    14     9        6

c. Waiting time (turnaround time minus burst time)

        FCFS   RR   SJF   Priority
P1        0     9     9        6
P2       10     1     0        0
P3       11     5     2       16
P4       13     3     1       18
P5       14     9     4        1

d. Shortest Job First.
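As a check on the round-robin column, the C sketch below simulates RR with a 1 ms quantum on the five bursts from the table and prints each process's turnaround and waiting time. Since all processes arrive at time 0, a simple cyclic scan over the remaining processes reproduces the RR queue order; this is a minimal sketch, not a general scheduler.

#include <stdio.h>

#define N 5

int main(void) {
    int burst[N] = { 10, 1, 2, 1, 5 };   /* P1..P5 burst times (ms) */
    int remaining[N], finish[N];
    int time = 0, done = 0;

    for (int i = 0; i < N; i++)
        remaining[i] = burst[i];

    /* Round robin, quantum = 1 ms: cycle over the processes that still need CPU */
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] > 0) {
                remaining[i]--;            /* run process i for one quantum */
                time++;
                if (remaining[i] == 0) {   /* process completes */
                    finish[i] = time;
                    done++;
                }
            }
        }
    }

    /* Arrival time is 0, so turnaround = finish and waiting = finish - burst */
    for (int i = 0; i < N; i++)
        printf("P%d: turnaround = %2d ms, waiting = %2d ms\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}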
5.13 The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: The higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function:
Priority = (recent CPU usage / 2) + base

where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that recent CPU usage for process P1 is 40, for process P2 is 18, and for process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?
Answer: The priorities assigned to the processes are 80, 69, and 65, respectively (40/2 + 60, 18/2 + 60, and 10/2 + 60). Since a higher number means a lower priority, the scheduler lowers the relative priority of a CPU-bound process.
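A minimal C sketch of the recalculation (the usage values are the ones given in the problem; the function name is just for illustration):

#include <stdio.h>

#define BASE 60

/* Traditional UNIX recalculation: priority = recent CPU usage / 2 + base */
static int recalc_priority(int recent_cpu_usage) {
    return recent_cpu_usage / 2 + BASE;
}

int main(void) {
    int usage[] = { 40, 18, 10 };   /* P1, P2, P3 */
    for (int i = 0; i < 3; i++)
        printf("P%d: new priority = %d\n", i + 1, recalc_priority(usage[i]));
    /* Prints 80, 69, 65: the heaviest CPU user gets the largest number,
       i.e., the lowest relative priority. */
    return 0;
}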
6.1 The first known correct software solution to the critical-section problem for two processes was developed by Dekker. The two processes, P0 and P1, share the following variables:
boolean flag[2]; /* initially false */
int turn;

The structure of process Pi (i == 0 or 1) is shown in Figure 6.25; the other process is Pj (j == 1 or 0). Prove that the algorithm satisfies all three requirements for the critical-section problem.
Answer: The algorithm satisfies the three requirements of the critical-section problem.
(1) Mutual exclusion is ensured through the use of the flag and turn variables. If both processes set their flags to true, only one will succeed, namely the process whose turn it is. The waiting process can enter its critical section only when the other process updates the value of turn.
(2) Progress is provided, again through the flag and turn variables. This algorithm does not enforce strict alternation. Rather, if a process wishes to enter its critical section, it sets its flag variable to true and enters its critical section; it sets turn to the value of the other process only upon exiting its critical section. If this process wishes to enter its critical section again before the other process does, it repeats this procedure, again setting turn to the other process upon exiting.
(3) Bounded waiting is preserved through the use of the turn variable. Assume two processes wish to enter their respective critical sections. They both set their flag values to true; however, only the process whose turn it is can proceed, while the other process waits. If bounded waiting were not preserved, it would be possible for the waiting process to wait indefinitely while the first process repeatedly entered and exited its critical section. However, Dekker's algorithm has a process set the value of turn to the other process upon exiting, thereby ensuring that the other process will enter its critical section next.
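For reference, the structure described above follows the usual presentation of Dekker's algorithm; the C sketch below is the standard formulation for process i (with j the other process), not a verbatim copy of Figure 6.25. In real C, the shared variables would additionally need atomic or volatile qualification; the sketch keeps the textbook's plain-variable style.

#include <stdbool.h>

/* Shared variables (both flags initially false) */
bool flag[2];
int turn;

/* Structure of process i in Dekker's algorithm; j is the other process. */
void dekker_process(int i) {
    int j = 1 - i;
    while (true) {
        flag[i] = true;              /* announce intent to enter */
        while (flag[j]) {            /* the other process also wants in */
            if (turn == j) {         /* not our turn: back off */
                flag[i] = false;
                while (turn == j)
                    ;                /* busy-wait until it is our turn */
                flag[i] = true;      /* re-announce intent */
            }
        }

        /* critical section */

        turn = j;                    /* hand the turn to the other process */
        flag[i] = false;

        /* remainder section */
    }
}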
6.5 Explain why implementing synchronization primitives by disabling interrupts is not appropriate in a single-processor system if the synchronization primitives are to be used in user-level programs.
Answer: If a user-level program is given the ability to disable interrupts, then it can disable the timer interrupt and prevent context switching from taking place, thereby allowing it to use the processor without letting other processes execute.

6.8 Servers can be designed to limit the number of open connections. For example, a server may wish to have only N socket connections at any point in time. As soon as N connections are made, the server will not accept another incoming connection until an existing connection is released. Explain how semaphores can be used by a server to limit the number of concurrent connections.
Answer: A semaphore is initialized to the number of allowable open socket connections. When a connection is accepted, the acquire() method is called; when a connection is released, the release() method is called. If the system reaches the number of allowable socket connections, subsequent calls to acquire() will block until an existing connection is terminated and the release() method is invoked.

6.27 Assume that a finite number of resources of a single resource type must be managed. Processes may ask for a number of these resources and once finished will return them. As an example, many commercial software packages provide a given number of licenses, indicating the number of applications that may run concurrently. When the application is started, the license count is decremented. When the application is terminated, the license count is incremented. If all licenses are in use, requests to start the application are denied. Such requests will only be granted when an existing license holder terminates the application and a license is returned.
The following program segment is used to manage a finite number of instances of an available resource. The maximum number of resources and the number of available resources are declared as follows:

#define MAX_RESOURCES 5
int available_resources = MAX_RESOURCES;

When a process wishes to obtain a number of resources, it invokes the decrease_count() function:

/* decrease available_resources by count resources */
/* return 0 if sufficient resources available, */
/* otherwise return -1 */
int decrease_count(int count) {
    if (available_resources < count)
        return -1;
    else {
        available_resources -= count;
        return 0;
    }
}

When a process wants to return a number of resources, it calls the increase_count() function:

/* increase available_resources by count */
int increase_count(int count) {
    available_resources += count;
    return 0;
}
The preceding program segment produces a race condition. Do the following:
a. Identify the data involved in the race condition.
b. Identify the location (or locations) in the code where the race condition occurs.
c. Using a semaphore, fix the race condition.
Answer:
a. The data involved in the race condition is the variable available_resources.
b. The code that decrements available_resources and the code that increments available_resources are the statements that could be involved in race conditions.
c. Use a semaphore to represent the available_resources variable and replace the increment and decrement operations by semaphore increment and semaphore decrement operations.
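As a concrete illustration of part c: the answer above suggests representing the count itself as a semaphore; an equivalent and slightly simpler variant, sketched below under the assumption of a POSIX environment (sem_init/sem_wait/sem_post), keeps the integer counter and protects the check-and-update with a binary semaphore used as a mutual-exclusion lock. This also covers the connection-limiting pattern of 6.8, where the semaphore would instead be initialized to N and acquired once per connection.

#include <semaphore.h>

#define MAX_RESOURCES 5

int available_resources = MAX_RESOURCES;
sem_t mutex;                       /* binary semaphore protecting the counter  */
                                   /* call sem_init(&mutex, 0, 1) at start-up  */

/* decrease available_resources by count; return 0 on success, -1 otherwise */
int decrease_count(int count) {
    int status = -1;
    sem_wait(&mutex);              /* enter critical section */
    if (available_resources >= count) {
        available_resources -= count;
        status = 0;
    }
    sem_post(&mutex);              /* leave critical section */
    return status;
}

/* increase available_resources by count */
int increase_count(int count) {
    sem_wait(&mutex);
    available_resources += count;
    sem_post(&mutex);
    return 0;
}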