MCS 041 2011
Question 1: (a) Describe the Banker's algorithm.

Hint: The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm avoids deadlock by denying or postponing the request if it determines that granting the request could put the system in an unsafe state (one from which deadlock could occur). When a new process enters the system, it must declare the maximum number of instances of each resource type it may need; this maximum may not exceed the total number of resources in the system. Also, when a process has obtained all its requested resources, it must return them within a finite amount of time.

function bankers_algorithm(set of processes P, currently available resources A) {
    while (P not empty) {
        boolean found = false
        for each (process p in P) {
            Cp = current_resource_allocation_for_process(p)
            Mp = maximum_resource_requirement_for_process(p)
            if (Mp - Cp <= A) {
                // p can obtain all it needs.
                // Assume it does so, terminates, and releases what it already has.
                A = A + Cp
                remove_element_from_set(p, P)
                found = true
            }
        }
        if (not found) {
            return UNSAFE
        }
    }
    return SAFE
}
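The pseudocode above can be sketched as a runnable safety check in C. This is an illustration, not part of the original answer; the process count, resource-type count, and matrix layout are assumptions of this example.

```c
#include <assert.h>
#include <stdio.h>

#define NPROC 3   /* number of processes (example value) */
#define NRES  2   /* number of resource types (example value) */

/* Returns 1 if the state is safe, 0 otherwise. */
int is_safe(int avail[NRES], int alloc[NPROC][NRES], int max[NPROC][NRES])
{
    int work[NRES], done[NPROC] = {0};
    for (int r = 0; r < NRES; r++) work[r] = avail[r];

    for (int finished = 0; finished < NPROC; ) {
        int found = 0;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)
                if (max[p][r] - alloc[p][r] > work[r]) { ok = 0; break; }
            if (ok) {
                /* Assume p runs to completion and releases its allocation. */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                done[p] = 1; found = 1; finished++;
            }
        }
        if (!found) return 0;   /* no process can finish: UNSAFE */
    }
    return 1;                   /* all processes could finish: SAFE */
}
```

A request is granted only if the state that would result still passes this safety check; otherwise the request is postponed.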
(b) Why must the bitmap image for the file allocation be kept on mass storage rather than in main memory?

Hint: Generally, the free-space list is implemented as a bit map or bit vector. Each block is represented by 1 bit: if the block is free, the bit is 1; otherwise it is 0. For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27 are free and the rest of the blocks are allocated. The free-space bit map would be:

0011110011111100011000000111

Bit vectors are inefficient unless the entire vector is kept in main memory (and written to disk occasionally for recovery needs). Keeping it in main memory is possible for smaller disks but not for large ones. A 40 GB disk with 1 KB blocks requires 5 MB to store its bit map (40 GB / 1 KB = 40 x 2^20 blocks, one bit per block). If we keep this bit vector in main memory, 5 MB less is available for processes. So it is better to keep the bit map on mass storage and bring only the portions currently needed into main memory.
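The size arithmetic in the hint can be checked with a small helper. This is a hedged sketch; the function name is my own, not from the source.

```c
#include <assert.h>
#include <stdint.h>

/* Size in bytes of a free-space bit map that uses one bit per disk block. */
uint64_t bitmap_bytes(uint64_t disk_bytes, uint64_t block_bytes)
{
    uint64_t blocks = disk_bytes / block_bytes;   /* one bit per block */
    return (blocks + 7) / 8;                      /* round up to whole bytes */
}
```

For a 40 GB disk with 1 KB blocks this yields 5 MB, matching the figure above.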
Question 2: Explain the Bell-LaPadula model for security and protection. Why is security a critical issue in a distributed OS environment?

Hint: The Bell-LaPadula model (abbreviated BLP) is a state machine model used for enforcing access control in government and military applications. It was developed by David Elliott Bell and Leonard J. LaPadula. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g. "Top Secret") down to the least sensitive (e.g. "Unclassified" or "Public"). The Bell-LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba integrity model, which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a computer network system; the transition from one state to another is defined by transition functions. Two rules characterize the model: the simple security property (no read up: a subject may not read an object with a higher security label) and the *-property (no write down: a subject may not write to an object with a lower security label).

Security is critical in a distributed OS environment because resources and data are accessed over a network: messages can be intercepted or modified in transit, users and processes must be authenticated across machine boundaries, and the compromise of a single node can endanger the whole system.

Question 3: Provide a solution to the readers-writers problem using semaphores. Explain with an example.

Hint: The readers-writers problem revolves around a number of processes using a shared global data structure.

Readers' properties:
- They never modify the shared data structure.
- Any number of readers may use the shared data structure concurrently.

Writers' properties:
- Writers may read and write the shared data structure.
- Writers must be granted exclusive access to the common data before using it.
Solution based on semaphores (the first reader locks the database against writers; the last reader releases it):

typedef int semaphore;
semaphore mutex = 1;      /* protects access to rc */
semaphore db = 1;         /* exclusive access to the database */
int rc = 0;               /* number of processes currently reading */

void reader(void)
{
    while (TRUE) {
        down(&mutex);
        rc++;
        if (rc == 1)
            down(&db);    /* first reader locks out writers */
        up(&mutex);
        read_data_base();
        down(&mutex);
        rc--;
        if (rc == 0)
            up(&db);      /* last reader lets writers back in */
        up(&mutex);
        use_data_read();
    }
}

void writer(void)
{
    while (TRUE) {
        think_up_data();
        down(&db);        /* get exclusive access */
        write_data_base();
        up(&db);          /* release exclusive access */
    }
}
Question 4: Write and explain an algorithm used for ordering of events in distributed environments. Implement the algorithm with an example and explain.

Hint: Vector clocks are an algorithm for generating a partial ordering of events in a distributed system and detecting causality violations. Just as in Lamport timestamps, interprocess messages contain the state of the sending process's logical clock. A vector clock of a system of N processes is an array/vector of N logical clocks, one clock per process; a local "smallest possible values" copy of the global clock array is kept in each process, with the following rules for clock updates:
1. Initially all clocks are zero.
2. Each time a process experiences an internal event, it increments its own logical clock in the vector by one.
3. Each time a process prepares to send a message, it increments its own logical clock in the vector by one and then sends its entire vector along with the message being sent.
4. Each time a process receives a message, it increments its own logical clock in the vector by one and updates each element in its vector by taking the maximum of the value in its own vector clock and the value in the vector in the received message (for every element).

[Figure: example of a system of vector clocks]

The vector clocks algorithm was independently developed by Colin Fidge and Friedemann Mattern in 1988.

Question 5: What do you understand by disk scheduling? Calculate the total head movement using the FCFS, SSTF and SCAN disk scheduling algorithms for the given block sequence: 91, 130, 150, 30, 100, 120, 50. Initially the head is at cylinder number 0. Draw the diagram for each.

Hint: Disk scheduling means scheduling the disk accesses of the head according to some scheduling algorithm.

FCFS Scheduling
SSTF Scheduling
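The totals asked for in Question 5 can be computed with a short C sketch (an illustration, not from the source; the request list and starting cylinder are those given in the question):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NREQ 7
static const int requests[NREQ] = {91, 130, 150, 30, 100, 120, 50};

/* FCFS: serve requests in arrival order. */
int fcfs_movement(int head)
{
    int total = 0;
    for (int i = 0; i < NREQ; i++) {
        total += abs(requests[i] - head);
        head = requests[i];
    }
    return total;
}

/* SSTF: always serve the pending request closest to the current head. */
int sstf_movement(int head)
{
    int done[NREQ] = {0}, total = 0;
    for (int served = 0; served < NREQ; served++) {
        int best = -1;
        for (int i = 0; i < NREQ; i++)
            if (!done[i] && (best < 0 ||
                abs(requests[i] - head) < abs(requests[best] - head)))
                best = i;
        total += abs(requests[best] - head);
        head = requests[best];
        done[best] = 1;
    }
    return total;
}

/* SCAN starting at cylinder 0 and moving toward higher cylinders:
 * with the head at 0, every request lies in one direction,
 * so the head simply sweeps upward through the sorted requests. */
int scan_movement(int head)
{
    int sorted[NREQ], total = 0;
    memcpy(sorted, requests, sizeof sorted);
    for (int i = 1; i < NREQ; i++) {          /* insertion sort, ascending */
        int v = sorted[i], j = i - 1;
        while (j >= 0 && sorted[j] > v) { sorted[j + 1] = sorted[j]; j--; }
        sorted[j + 1] = v;
    }
    for (int i = 0; i < NREQ; i++) {
        total += abs(sorted[i] - head);
        head = sorted[i];
    }
    return total;
}
```

For this sequence, FCFS gives 91 + 39 + 20 + 120 + 70 + 20 + 70 = 430 cylinders of movement, while SSTF and SCAN both give 150: with the head starting at cylinder 0, SSTF happens to serve the requests in ascending order, which coincides with SCAN's upward sweep.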
Question 6: Explain memory management schemes in Windows 2000. List at least three system calls related to memory management in Windows 2000 and explain them.

Hint: The Win32 API provides several ways for an application to use memory: virtual memory, memory-mapped files, heaps, and thread-local storage.

An application calls VirtualAlloc to reserve or commit virtual memory, and VirtualFree to decommit or release the memory. These functions enable the application to specify the virtual address at which the memory is allocated. They operate on multiples of the memory page size, and the starting address of an allocated region must be above 0x10000 (64 KB).

Another way for an application to use memory is by memory-mapping a file into its address space. Memory mapping is also a convenient way for two processes to share memory: both processes map the same file into their virtual memory. Memory mapping is a multistage process. If a process wants to map some address space just to share a memory region with another process, no file is needed; the process can call CreateFileMapping with a file handle of 0xffffffff and a particular size.

The third way for an application to use memory is a heap. A heap in the Win32 environment is just a region of reserved address space. When a Win32 process is initialized, it is created with a 1 MB default heap. Since many Win32 functions use the default heap, access to the heap is synchronized to protect the heap's space-allocation data structures from being damaged by concurrent updates by multiple threads.

The fourth way for an application to use memory is the thread-local storage mechanism. Functions that rely on global or static data typically fail to work properly in a multithreaded environment. For instance, the run-time function strtok uses a static variable to keep track of its current position while parsing a string.

Question 7: Explain any two mutual exclusion algorithms in distributed systems.
Hint: Dekker's algorithm: This is a concurrent programming algorithm for mutual exclusion, devised by the Dutch mathematician T. J. Dekker, that allows two processes to share a single-use resource without conflict, using only shared memory for communication. It was one of the first mutual exclusion algorithms to be invented. If two processes attempt to enter the critical section at the same time, the algorithm chooses the process whose turn it is. If the other process is already in the critical section, the first waits for it to finish. This is done by the use of two flags, F0 and F1, which indicate an intention to enter the critical section, and a turn variable which indicates who has priority between the two processes.

Lamport's distributed mutual exclusion algorithm is a contention-based algorithm for mutual exclusion in a distributed system. Every process maintains a queue of pending requests for entering the critical section; the queues are ordered by virtual time stamps derived from Lamport timestamps.

Algorithm:

Requesting process:
1. Enters its request in its own queue (ordered by time stamps).
2. Sends a request to every node.
3. Waits for replies from all other nodes.
4. If its own request is at the head of the queue and all replies have been received, enters the critical section.
5. Upon exiting the critical section, sends a release message to every process.

Other processes:
1. After receiving a request, send a reply and enter the request in the request queue (ordered by time stamps).
2. After receiving a release message, remove the corresponding request from the request queue.
3. If the process's own request is now at the head of the queue and all replies have been received, it enters the critical section.

Question 8: Explain the working set model. Explain its concept as well as its implementation.

Hint: The working set model uses the current memory requirements to determine the page frames to allocate to the process. The idea is to use the recent needs of a process to predict its future needs. The working set model is an approximation of program locality.

Example: the sequence of memory references is as follows (t1 marks the point after the first ten references, t2 the point after the last):

2 6 1 5 7 7 7 7 5 1 (t1) 6 2 3 4 1 2 3 4 4 4 3 4 4 4 4 3 (t2)
If the working set window size is 10, then the working set at t1 is {1, 2, 5, 6, 7} and at t2 is {3, 4}.

Choose Δ, the working set parameter. At any time, all pages referenced by a process in the last Δ references are considered to be its working set. A process will never be executed unless its working set is resident in main memory. Pages outside the working set may be discarded at any instant.

(b) Compare direct file organization with indexed sequential file organization.

Hint: Direct file organization: A direct file or hashed file exploits the capability found on disks to access directly any block of a known address. As with sequential and indexed files, a key field is required in each record; however, there is no concept of sequential ordering here. The direct file makes use of hashing on the key value. Direct files are often used where very rapid access is required, where fixed-length records are used, and where records are always accessed one at a time. Examples: directories, pricing lists, and name lists.

Indexed sequential file: A popular approach to overcoming the disadvantages of the sequential file is the indexed sequential file organization. The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file. In the simplest indexed sequential structure, a single level of indexing is used. To find a specific record, the index is searched to find the highest key value that is equal to or precedes the desired key value. The search then continues in the main file at the location indicated by the pointer.
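Returning to the working set example in Question 8, the set at any reference position can be computed with a minimal C sketch (an illustration; MAXPAGE and the function name are assumptions of this example, and the reference string is the one from the hint):

```c
#include <assert.h>

#define MAXPAGE 16   /* assumed upper bound on page numbers for this example */

/* Working set = distinct pages among the last `window` references ending at
 * position `t` (1-based). Marks member pages in in_set[] (indexed by page
 * number) and returns the working set size. */
int working_set(const int *refs, int t, int window, int in_set[MAXPAGE])
{
    int size = 0;
    for (int p = 0; p < MAXPAGE; p++) in_set[p] = 0;
    int start = (t - window > 0) ? t - window : 0;
    for (int i = start; i < t; i++) {
        if (!in_set[refs[i]]) { in_set[refs[i]] = 1; size++; }
    }
    return size;
}
```

With the reference string from the hint and a window of 10, the set at t1 (after the 10th reference) is {1, 2, 5, 6, 7} and at t2 (after the 26th) is {3, 4}, matching the values above.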