WHAT IS AN OPERATING SYSTEM

Operating systems perform two basically unrelated functions: extending the machine and managing resources. Depending on who is doing the talking, you hear mostly about one function or the other.

The Operating System as an Extended Machine
The architecture (instruction set, memory organization, I/O and bus structure) of most computers at the machine-language level is primitive to program, especially for input/output.

The Operating System as a Resource Manager
The concept of the operating system as primarily providing its users with a convenient interface is a top-down view. An alternative, bottom-up, view holds that the operating system is there to manage all the pieces of a complex system. The operating system is to provide for an orderly and controlled allocation of the processors, memories and I/O devices among the various programs competing for them. When a computer (or network) has multiple users, the need for managing and protecting the memory, I/O devices and other resources is even greater, since the users might otherwise interfere with one another.

TYPES OF OPERATING SYSTEM
- Mainframe operating system
- Server operating system
- Multiprocessor operating system
- Real-time operating system
- Embedded operating system
- Smart card operating system

OPERATING SYSTEM CONCEPTS
All operating systems have certain basic concepts, such as processes, memory and files, that are central to understanding them.

CPU SCHEDULING
CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.

First Come First Served (FCFS) Scheduling
First Come First Served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes. Shortest Job First (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation; aging is a technique to prevent starvation.

Round Robin (RR) Scheduling
Round Robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and put at the tail of the ready queue. The major problem is the selection of the time quantum: if the quantum is too large, RR scheduling degenerates to FCFS scheduling, and if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.

The FCFS algorithm is non-preemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or non-preemptive. Multilevel queue algorithms allow different algorithms to be used for various classes of processes. The most common arrangement is a foreground interactive queue, which uses RR scheduling, and a background batch queue, which uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.
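The quantum trade-off described above can be made concrete with a small simulation. The sketch below is my own minimal illustration (the burst values and quantum range are arbitrary, not from the text): it runs round robin over a set of CPU bursts that all arrive at time 0 and reports the average waiting time and the number of dispatches for each quantum.

#include <stdio.h>

#define N 4

/* Simulate round robin over N CPU-bound processes that all arrive at
 * time 0, for a given quantum q. */
void round_robin(const int burst[N], int q)
{
    int rem[N], wait[N] = {0}, done = 0, switches = 0;
    for (int i = 0; i < N; i++) rem[i] = burst[i];

    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;
            /* every other pending process waits while i runs */
            for (int j = 0; j < N; j++)
                if (j != i && rem[j] > 0) wait[j] += slice;
            rem[i] -= slice;
            switches++;
            if (rem[i] == 0) done++;
        }
    }
    double avg = 0;
    for (int i = 0; i < N; i++) avg += wait[i];
    printf("q=%d: average wait = %.2f, dispatches = %d\n",
           q, avg / N, switches);
}

int main(void)
{
    int burst[N] = {24, 3, 3, 7};   /* arbitrary example bursts */
    for (int q = 1; q <= 8; q *= 2)
        round_robin(burst, q);
    return 0;
}

Running it with a range of quanta shows both failure modes: a quantum larger than every burst reproduces the FCFS schedule, while a very small quantum multiplies the number of dispatches, each of which costs context-switch time in a real system.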
INPUT/OUTPUT
All computers have physical devices for acquiring input and producing output. After all, what good would a computer be if the users could not tell it what to do and could not get the results after it did the work requested? Many kinds of input and output devices exist, including keyboards, monitors, printers and so on. It is up to the operating system to manage these devices.

One of the main functions of an operating system is to control all the computer's I/O (Input/Output) devices. It must issue commands to the devices, catch interrupts and handle errors. It should also provide an interface between the devices and the rest of the system that is simple and easy to use. To the extent possible, the interface should be the same for all devices. The I/O code represents a significant fraction of the total operating system. First we will look at some of the principles of I/O hardware, and then we will look at I/O software in general. I/O software can be structured in layers, with each layer having a well-defined task to perform.

I/O DEVICES
I/O devices can be roughly divided into two categories: block devices and character devices. A block device is one that stores information in fixed-size blocks, each one with its own address. Common block sizes range from 512 bytes to 32,768 bytes. The essential property of a block device is that it is possible to read or write each block independently of all the other ones.

Table: Some typical device, network and bus data rates

Device | Data rate
Keyboard | 10 bytes/sec
Mouse | 100 bytes/sec
56K modem | 7 KB/sec
Telephone channel | 8 KB/sec
Dual ISDN lines | 16 KB/sec
Laser printer | 100 KB/sec
Scanner | 400 KB/sec
Classic Ethernet | 1.25 MB/sec
USB (Universal Serial Bus) | 1.5 MB/sec
Digital camcorder | 4 MB/sec
IDE disk | 5 MB/sec
40x CD-ROM | 6 MB/sec
Fast Ethernet | 12.5 MB/sec
ISA bus | 16.7 MB/sec
EIDE (ATA-2) disk | 16.7 MB/sec
FireWire (IEEE 1394) | 50 MB/sec
XGA Monitor | 60 MB/sec
SONET OC-12 network | 78 MB/sec
SCSI Ultra 2 disk | 80 MB/sec
Gigabit Ethernet | 125 MB/sec
Ultrium tape | 320 MB/sec
PCI bus | 528 MB/sec
Sun Gigaplane XB backplane | 20 GB/sec

PROCESS AND THREAD
In a multiprogramming system, the CPU switches from program to program, running each for tens or hundreds of milliseconds. While, strictly speaking, at any instant of time the CPU is running only one program, in the course of 1 second it may work on several programs, thus giving the users the illusion of parallelism. Sometimes people speak of pseudo-parallelism in this context, to contrast it with the true hardware parallelism of multiprocessor systems (which have two or more CPUs sharing the same physical memory). Keeping track of multiple, parallel activities is hard for people to do. Therefore, operating system designers over the years have evolved a conceptual model (sequential processes) that makes parallelism easier to deal with.

THREADS
A process has an address space containing program text and data, as well as other resources. These resources may include open files, child processes, pending alarms, signal handlers, accounting information and more. By putting them together in the form of a process, they can be managed more easily. The other concept a process has is a thread of execution, usually shortened to just thread. The thread has a program counter that keeps track of which instruction to execute next. It has registers, which hold its current working variables. It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from. Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.

Fig. 1: (a) Three processes each with one thread (b) One process with three threads

Thread Usage
1. Instead of thinking about interrupts, timers and context switches, we can think about parallel processes. Only now with threads we add a new element: the ability for the parallel entities to share an address space and all of its data among themselves. This ability is essential for certain applications, which is why having multiple processes (with their separate address spaces) will not work.
2. A second argument for having threads is that since they do not have any resources attached to them, they are easier to create and destroy than processes.
3. Performance argument: threads yield no performance gain when all of them are CPU bound, but when there is substantial computing and also substantial I/O, having threads allows these activities to overlap.
4. Finally, threads are useful on systems with multiple CPUs, where real parallelism is possible.

Implementing Threads in User Space
Each process needs its own private thread table to keep track of the threads in that process. This table is analogous to the kernel's process table, except that it keeps track only of the per-thread properties, such as each thread's program counter, stack pointer, registers, state, etc. The thread table is managed by the run-time system.

Implementing Threads in the Kernel
No run-time system is needed in each process, as shown in fig. 2; also, there is no thread table in each process. Instead, the kernel has a thread table that keeps track of all the threads in the system. The kernel's thread table holds each thread's registers, state and other information. All calls that might block a thread are implemented as system calls, at considerably greater cost than a call to a run-time system procedure. When a thread blocks, the kernel, at its option, can run either another thread from the same process (if one is ready) or a thread from a different process. Kernel threads do not require any new, non-blocking system calls. In addition, if one thread in a process causes a page fault, the kernel can easily check whether the process has any other runnable threads and, if so, run one of them while waiting for the required page to be brought in from the disk.

Fig. 2: (a) A user-level threads package (b) A threads package managed by the kernel
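As a concrete illustration of threads sharing one address space, the following sketch (my own minimal example, not from the original text) creates several POSIX threads that all increment a shared counter; the mutex is what keeps them from interfering with one another.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static long counter = 0;                 /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* protect the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);      /* wait for all threads */
    printf("counter = %ld\n", counter);  /* 400000 when correctly locked */
    return 0;
}

On Linux this library is backed by kernel-level threads (compile with -pthread); a user-level threads package would offer the same interface while keeping the kernel unaware of the individual threads.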
INTER-PROCESS COMMUNICATION
Processes frequently need to communicate with other processes. For example, in a shell pipeline, the output of the first process must be passed to the second process, and so on down the line. Thus, there is a need for communication between processes. There are three issues here. The first was alluded to above: how one process can pass information to another. The second has to do with making sure two or more processes do not get into each other's way when engaging in critical activities (suppose two processes each try to grab the last 1 MB of memory). The third concerns proper sequencing when dependencies are present: if process A produces data and process B prints them, B has to wait until A has produced some data before starting to print.

The topics covered under inter-process communication are:
- Race Condition
- Critical Region
- Mutual Exclusion with Busy Waiting
- Sleep and Wakeup
- Semaphores
- Mutexes
- Monitors
- Message Passing

CLASSICAL IPC PROBLEMS
The Dining Philosophers Problem
In 1965, Dijkstra posed and solved a synchronization problem he called the dining philosophers problem. Since that time, everyone inventing yet another synchronization primitive has felt obligated to demonstrate how wonderful the new primitive is by showing how elegantly it solves the dining philosophers problem. The problem can be stated quite simply as follows. Five philosophers are seated around a circular table. Each philosopher has a plate of spaghetti. The spaghetti is so slippery that a philosopher needs two forks to eat it. Between each pair of plates is one fork.

The Readers and Writers Problem
Consider, for example, an airline reservation system, with many competing processes wishing to read and write it. It is acceptable to have multiple processes reading the database at the same time, but if one process is updating (writing) the database, no other processes may have access to the database, not even readers.

MEMORY MANAGER
The memory hierarchy consists of a small amount of very fast, expensive, volatile cache memory; tens of megabytes of medium-speed, medium-price, volatile main memory (RAM); and tens or hundreds of gigabytes of slow, cheap, nonvolatile disk storage. It is the job of the operating system to coordinate how these memories are used. The part of the operating system that manages the memory hierarchy is called the memory manager. Its job is to keep track of which parts of memory are in use and which parts are not in use, to allocate memory to processes when they need it and deallocate it when they are done, and to manage swapping between main memory and disk when main memory is too small to hold all the processes.

DEADLOCKS
When two or more processes are interacting, they can sometimes get themselves into a stalemate situation they cannot get out of; such a situation is called a deadlock. A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0.

(i) A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
(ii) To prevent deadlock, we ensure that at least one of the necessary conditions never holds.
(iii) Principally there are three methods for dealing with deadlock:
(iv) Use some protocol to prevent or avoid deadlock, ensuring that the system will never enter a deadlock state.
(v) Allow the system to enter a deadlock state, detect it and then recover.
(vi) Ignore the problem altogether and pretend that deadlock never occurs in the system. This solution is the one used by most operating systems, including UNIX.
(vii) Another method for avoiding deadlock that is less stringent than the prevention algorithms is to have a priori information on how each process will be utilizing the resources. The banker's algorithm, for example, needs to know the maximum number of instances of each resource class that may be requested by each process; using this information, we can define a deadlock-avoidance algorithm.
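To make the banker's idea concrete, here is a minimal sketch of the safety check at the heart of the algorithm (my own illustration; the matrices and their sizes are made up for the example): a state is safe if every process can finish in some order using only what is currently available plus what earlier finishers release.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes (hypothetical example) */
#define R 2   /* resource classes */

/* Returns true if the state described by avail/max/alloc is safe. */
bool is_safe(int avail[R], int max[P][R], int alloc[P][R])
{
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int round = 0; round < P; round++) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;          /* need[p] = max[p] - alloc[p] */
            for (int r = 0; r < R; r++)
                if (max[p][r] - alloc[p][r] > work[r]) can_run = false;
            if (can_run) {
                for (int r = 0; r < R; r++)
                    work[r] += alloc[p][r];  /* p finishes, releases all */
                finished[p] = true;
                progress = true;
            }
        }
        if (!progress) break;             /* no process can proceed */
    }
    for (int p = 0; p < P; p++)
        if (!finished[p]) return false;
    return true;
}

int main(void)
{
    int avail[R]    = {3, 2};
    int max[P][R]   = {{4, 2}, {3, 2}, {9, 2}};
    int alloc[P][R] = {{0, 1}, {2, 0}, {4, 0}};
    printf("state is %s\n", is_safe(avail, max, alloc) ? "safe" : "unsafe");
    return 0;
}

A request is granted only if pretending to grant it still leaves the system in a safe state; otherwise the requesting process is made to wait.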
(viii) If a system does not employ a protocol to ensure that deadlock will never occur, then a detection and recovery scheme must be employed. A deadlock-detection algorithm must be invoked to determine whether a deadlock has occurred. If a deadlock is detected, the system must recover, either by terminating some of the deadlocked processes or by preempting resources from some of the deadlocked processes. In a system that selects victims primarily on the basis of cost factors, starvation may occur; as a result, the selected process never completes its designated task.

MEMORY MANAGEMENT
Every computer has some main memory that it uses to hold executing programs. In a very simple operating system, only one program at a time is in memory. To run a second program, the first one has to be removed and the second one placed in memory. More sophisticated operating systems allow multiple programs to be in memory at the same time. To keep them from interfering with one another (and with the operating system), some kind of protection mechanism is needed. While this mechanism has to be in the hardware, it is controlled by the operating system.

Basic Memory Management
Memory management systems can be divided into two classes: those that move processes back and forth between main memory and disk during execution (swapping and paging) and those that do not.

Monoprogramming without Swapping or Paging
The simplest possible memory management scheme is to run just one program at a time, sharing the memory between that program and the operating system.

Multiprogramming with Fixed Partitions
Except in simple embedded systems, monoprogramming is hardly used any more. Most modern systems allow multiple processes to run at the same time. Having multiple processes running at once means that when one process is blocked waiting for I/O to finish, another one can use the CPU. Thus multiprogramming increases the CPU utilization. Network servers always have the ability to run multiple processes (for different clients) at the same time, but most client (desktop) machines also have this ability nowadays.

Fig. 3: Three simple ways of organizing memory with an operating system and one user process. Other possibilities also exist

Modeling Multiprogramming
When multiprogramming is used, the CPU utilization can be improved. If the average process computes only 20 percent of the time it is sitting in memory, then with five processes in memory at once the CPU should be busy all the time. This model is unrealistically optimistic, however, since it assumes that all five processes will never be waiting for I/O at the same time. Suppose instead that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O (in which case the CPU will be idle) is p^n. The CPU utilization is then given by the formula

CPU utilization = 1 - p^n

A plot of CPU utilization as a function of n, which is called the degree of multiprogramming, shows how quickly the CPU fills up. For example, if processes spend p = 0.8 of their time waiting for I/O, five processes in memory give a utilization of 1 - 0.8^5, or about 67 percent.
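The formula is easy to tabulate. The short sketch below (my own illustration, not from the text) prints the predicted utilization for several I/O-wait fractions as the degree of multiprogramming grows.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* CPU utilization = 1 - p^n for I/O-wait fraction p and
     * n processes in memory (the degree of multiprogramming). */
    double waits[] = {0.2, 0.5, 0.8};
    for (int w = 0; w < 3; w++) {
        printf("p = %.1f:", waits[w]);
        for (int n = 1; n <= 10; n++)
            printf(" %5.1f%%", 100.0 * (1.0 - pow(waits[w], n)));
        printf("\n");
    }
    return 0;
}

The p = 0.8 row shows why I/O-heavy workloads benefit most from multiprogramming: utilization climbs slowly, so many processes must be resident before the CPU stays busy.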
The solution usually adopted for programs larger than memory was to split the program into pieces, called overlays. The first overlay would start running; when it was done, it would call another overlay. Some overlay systems were highly complex, allowing multiple overlays in memory at once. Although the actual work of swapping overlays in and out was done by the system, the work of splitting the program into pieces had to be done by the programmer. Splitting up large programs into small, modular pieces was time consuming and boring.

The method that was devised instead has come to be known as virtual memory. The basic idea behind virtual memory is that the combined size of the program, data and stack may exceed the amount of physical memory available for it. The operating system keeps those parts of the program currently in use in main memory and the rest on the disk. For example, a 16-MB program can run on a 4-MB machine by carefully choosing which 4 MB to keep in memory at each instant, with pieces of the program being swapped between disk and memory as needed. Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process, the same way as in any other multiprogramming system.

The topics covered under this are:
1. Paging
2. Page tables
(i) Multi-level page tables
(ii) Structure of a page table entry
3. TLBs (Translation Lookaside Buffers)
4. Inverted page tables

PAGE REPLACEMENT ALGORITHMS
The Least Recently Used (LRU) Algorithm
A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for ages will probably remain unused for a long time. This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. This strategy is called LRU (Least Recently Used) paging.

Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference.

Fig. 5: The clock page replacement algorithm. When a page fault occurs, the page the hand is pointing to is inspected; the action taken depends on its R bit: R = 0, evict the page; R = 1, clear R and advance the hand.

For a machine with n page frames, the LRU hardware can maintain a matrix of n x n bits, initially all zero. Whenever page frame k is referenced, the hardware first sets all the bits of row k to 1, then sets all the bits of column k to 0. At any instant, the row whose binary value is lowest is the least recently used, the row whose value is next lowest is next least recently used, and so forth. The workings of this algorithm for four page frames and page references in the order 0 1 2 3 2 1 0 3 2 3 are given in fig. 6: after page 0 is referenced we have one situation, after page 1 is referenced we have the next, and so forth.
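The matrix scheme is small enough to simulate directly. The sketch below is my own illustration of the scheme just described (the choice of using one unsigned integer per matrix row, with column k mapped to a specific bit position, is mine); it replays the reference string from the text and reports the LRU frame after each reference.

#include <stdio.h>

#define FRAMES 4

static unsigned row[FRAMES];   /* row[k] holds the n bits of matrix row k */

/* Reference page frame k: set row k to all ones, clear column k. */
static void reference(int k)
{
    row[k] = (1u << FRAMES) - 1;
    for (int i = 0; i < FRAMES; i++)
        row[i] &= ~(1u << (FRAMES - 1 - k));
}

/* The LRU frame is the one whose row has the smallest binary value. */
static int lru_frame(void)
{
    int victim = 0;
    for (int i = 1; i < FRAMES; i++)
        if (row[i] < row[victim]) victim = i;
    return victim;
}

int main(void)
{
    int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};  /* string from the text */
    for (int i = 0; i < 10; i++) {
        reference(refs[i]);
        printf("after ref %d: LRU frame = %d\n", refs[i], lru_frame());
    }
    return 0;
}

After the first four references the row for frame 0 is all zeros, correctly identifying page 0 as the least recently used, just as the text's fig. 6 walk-through shows.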
DISK SCHEDULING
1. The basic hardware elements involved in I/O are buses, device controllers and the devices themselves. The work of moving data between devices and main memory is performed by the CPU as programmed I/O, or is offloaded to a DMA controller. The kernel module that controls a device is a device driver. The system-call interface provided to applications is designed to handle several basic categories of hardware, including block devices, character devices, memory-mapped files, network sockets and programmed interval timers. The system calls usually block the process that issues them, but non-blocking and asynchronous calls are used by the kernel itself and by applications that must not sleep while waiting for an I/O operation to complete.
2. Disk drives are the major secondary-storage I/O device on most computers. Requests for disk I/O are generated by the file system and by the virtual memory system. Each request specifies the address on the disk to be referenced, in the form of a logical block number.
3. Disk scheduling algorithms can improve the effective bandwidth, the average response time and the variance in response time. Algorithms such as SSTF, SCAN, C-SCAN, LOOK and C-LOOK are designed to make such improvements through strategies for disk-queue ordering; a small comparison of two of these strategies follows this list.
4. Performance can be harmed by external fragmentation. Some systems have utilities that scan the file system to identify fragmented files; they then move blocks around to decrease the fragmentation. Defragmenting a badly fragmented file system can significantly improve performance, but the system may have reduced performance while the defragmentation is in progress.
5. The operating system manages the disk blocks. First, a disk must be low-level formatted to create the sectors on the raw hardware; new disks usually come preformatted. Then the disk is partitioned, file systems are created, and boot blocks are allocated to store the system's bootstrap program. Finally, when a block is corrupted, the system must have a way to lock out that block or to replace it logically with a spare.
6. An efficient swap space is a key to good performance. Systems usually bypass the file system and use raw disk access for paging I/O. Some systems dedicate a raw disk partition to swap space, others use a file within the file system instead, and still other systems allow the user or system administrator to make the decision by providing both options.
7. The write-ahead log scheme requires the availability of stable storage. To implement such storage, we need to replicate the needed information on multiple non-volatile storage devices (usually disks) with independent failure modes. We also need to update the information in a controlled manner to ensure that we can recover the stable data after any failure during data transfer or recovery.
8. Because of the amount of storage required on large systems, disks are frequently made redundant via RAID algorithms. These algorithms allow more than one disk to be used for a given operation, and allow continued operation and even automatic recovery in the face of a disk failure. RAID algorithms are organized into different levels, where each level provides some combination of reliability and high transfer rates.
9. Disks may be attached to a computer system in one of two ways: (i) using the local I/O ports on the host computer, or (ii) using a network connection such as a storage area network.
10. Tertiary storage is built from disk and tape drives that use removable media. Many different technologies are available, including magnetic tape, removable magnetic and magneto-optic disks, and optical disks. For removable disks, the operating system generally provides the full services of a file-system interface, including space management and request-queue scheduling.
11. For many operating systems, the name of a file on a removable cartridge is a combination of a drive name and a file name within that drive. This convention is simpler but potentially more confusing than using a name that identifies a specific cartridge. For tapes, the operating system generally just provides a raw interface. Many operating systems have no built-in support for jukeboxes. Jukebox support can be provided by a device driver or by a privileged application designed for backups or for HSM.
12. Three important aspects of performance are bandwidth, latency and reliability. A wide variety of bandwidths is available for both disks and tapes, but the random-access latency for a tape is generally much slower than that for a disk. Switching cartridges in a jukebox is also relatively slow. Because a jukebox has a low ratio of drives to cartridges, reading a large fraction of the data in a jukebox can take a long time. Optical media, which protect the sensitive layer by a transparent coating, are generally more robust than magnetic media, which expose the magnetic material to a greater possibility of physical damage.
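Returning to the queue-ordering algorithms in point 3, here is a small sketch (my own illustration; the request queue and the head position at cylinder 53 are the arbitrary kind used in textbook exercises) comparing FCFS with SSTF on the same queue.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 8

static int seek_fcfs(int head, const int q[N])
{
    int total = 0;
    for (int i = 0; i < N; i++) {
        total += abs(q[i] - head);   /* service in arrival order */
        head = q[i];
    }
    return total;
}

static int seek_sstf(int head, const int q[N])
{
    bool done[N] = {false};
    int total = 0;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)   /* pick the closest pending request */
            if (!done[i] && (best < 0 || abs(q[i] - head) < abs(q[best] - head)))
                best = i;
        total += abs(q[best] - head);
        head = q[best];
        done[best] = true;
    }
    return total;
}

int main(void)
{
    int queue[N] = {98, 183, 37, 122, 14, 124, 65, 67};
    printf("FCFS total head movement: %d cylinders\n", seek_fcfs(53, queue));
    printf("SSTF total head movement: %d cylinders\n", seek_sstf(53, queue));
    return 0;
}

On this queue FCFS moves the head 640 cylinders while SSTF needs only 236, though SSTF can starve requests far from the current head position, which is what SCAN-family algorithms address.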
FILE SYSTEMS
While a process is running, it can store a limited amount of information within its own address space. However, the storage capacity is restricted to the size of the virtual address space. For some applications this size is adequate, but for others, such as airline reservations, banking or corporate record keeping, it is far too small. The second problem with keeping information within a process address space is that when the process terminates, the information is lost. The third problem is that it is frequently necessary for multiple processes to access (parts of) the information at the same time.

Essential requirements for long-term information storage:
1. It must be possible to store a very large amount of information.
2. The information must survive the termination of the process using it.
3. Multiple processes must be able to access the information concurrently.

The usual solution to all these problems is to store information on disks and other external media in units called files.

1. A file is an abstract data type defined and implemented by the operating system. It is a sequence of logical records. A logical record may be a byte, a line (fixed or variable length) or a more complex data item. The operating system may specifically support various record types or may leave that support to the application programs.
2. Each device in a file system keeps a volume table of contents or device directory listing the location of the files on the device. In addition, it is useful to create directories to allow files to be organized. A single-level directory in a multi-user system causes naming problems, since each file must have a unique name. A two-level directory solves this problem by creating a separate directory for each user. Each user has her own directory, containing her own files. The directory lists the files by name and includes such information as the file's location on the disk, length, type, owner, time of creation, time of last use and so on.
3. The natural generalization of a two-level directory is a tree-structured directory. A tree-structured directory allows a user to create subdirectories to reorganize his files. Acyclic-graph directory structures allow subdirectories and files to be shared, but complicate searching and deletion. A general graph structure allows complete flexibility in the sharing of files and directories, but sometimes requires garbage collection to recover unused disk space.
4. Disks are segmented into one or more partitions, each containing a file system or left 'raw'. File systems may be mounted into the system's naming structures to make them available. The naming scheme varies by operating system. Once mounted, the files within the partition are available for use. File systems may be unmounted to disable access or for maintenance.
5. File sharing depends on the semantics provided by the system. Files may have multiple readers, multiple writers or limits on the sharing. Distributed file systems allow client hosts to mount partitions or directories from servers, as long as they can access each other across a network. Remote file systems present challenges in reliability, performance and security. Distributed information systems maintain user, host and access information so that clients and servers can share state information to manage use and access.
6. Since files are the main information-storage mechanism in most computer systems, file protection is needed. Access to files can be controlled separately for each type of access: read, write, execute, append, delete, list directory and so on. File protection can be provided by passwords, by access lists or by special ad hoc techniques.
7. The file system resides permanently on secondary storage, which is designed to hold a large amount of data permanently. The most common secondary-storage medium is the disk. Physical disks may be segmented into partitions to control media use and to allow multiple, possibly varying, file systems per spindle. These file systems are mounted onto a logical file-system architecture to make them available for use. File systems are often implemented in a layered or modular structure: the lower levels deal with the physical properties of storage devices, while the upper levels deal with symbolic file names and map file concepts onto physical device properties.
8. Every file-system type can have a different structure and different algorithms. A VFS layer allows the upper layers to deal with each file-system type uniformly. Even remote file systems can be integrated into the system's directory structure and acted on by standard system calls via the VFS interface. Files can be allocated space on the disk in three ways: through contiguous, linked or indexed allocation. Contiguous allocation can suffer from external fragmentation. Direct access is very inefficient with linked allocation. Indexed allocation may require substantial overhead for its index block.
9. These algorithms can be optimized in many ways. Contiguous space may be enlarged through extents to increase flexibility and to decrease external fragmentation. Indexed allocation can be done in clusters of multiple blocks to increase throughput and to reduce the number of index entries needed. Indexing in large clusters is similar to contiguous allocation with extents.
10. Free-space allocation methods also influence the efficiency of use of disk space, the performance of the file system and the reliability of secondary storage. The methods used include bit vectors and linked lists such as the FAT, which places the linked list in one contiguous area.
11. The directory-management routines must consider efficiency, performance and reliability. A hash table is the most frequently used method; it is fast and efficient.
12. Network file systems, such as NFS, use client-server methodology to allow users to access files and directories from remote machines as if they were on local file systems. System calls on the client are translated into network protocols and retranslated into file-system operations on the server. Networking and multiple-client access create challenges in the areas of data consistency and performance.
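As a small illustration of the bit-vector method mentioned in point 10 (my own sketch, not from the original text; the disk size is hypothetical), the code below tracks free blocks in a bitmap and allocates the first free block by scanning for a set bit.

#include <stdio.h>
#include <string.h>

#define NBLOCKS 64   /* hypothetical tiny disk */

static unsigned char bitmap[NBLOCKS / 8];   /* 1 bit per block, 1 = free */

static void set_free(int b)  { bitmap[b / 8] |=  (1u << (b % 8)); }
static void set_used(int b)  { bitmap[b / 8] &= ~(1u << (b % 8)); }

/* Return the first free block, or -1 if the disk is full. */
static int alloc_block(void)
{
    for (int b = 0; b < NBLOCKS; b++)
        if (bitmap[b / 8] & (1u << (b % 8))) {
            set_used(b);
            return b;
        }
    return -1;
}

int main(void)
{
    memset(bitmap, 0xFF, sizeof bitmap);            /* all blocks start free */
    set_used(0);                                    /* block 0: boot block */
    printf("allocated block %d\n", alloc_block());  /* prints 1 */
    printf("allocated block %d\n", alloc_block());  /* prints 2 */
    set_free(1);
    printf("allocated block %d\n", alloc_block());  /* reuses block 1 */
    return 0;
}

The appeal of the bit vector is exactly what the scan shows: finding the first free block, or a run of consecutive free blocks, is a simple linear pass over compact data.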
File Structure
Files can be structured in any of several ways. Three common possibilities are depicted in fig. 7.

Fig. 7: Three kinds of files. (a) Byte sequence (b) Record sequence (c) Tree

The file in fig. 7(a) is an unstructured sequence of bytes. In effect, the operating system does not know or care what is in the file; all it sees are bytes. Any meaning must be imposed by user-level programs. Both UNIX and Windows use this approach. The file in fig. 7(b) is a structured sequence of fixed-length units known as records, and in fig. 7(c) the file is arranged in the form of a tree.

File Type
Many operating systems support several types of files. UNIX and Windows, for example, have regular files and directories. UNIX also has character and block special files. Regular files are the ones that contain user information; all the files of fig. 8 are regular files. Directories are system files for maintaining the structure of the file system. Character special files are related to input/output and are used to model serial I/O devices such as terminals, printers and networks. Block special files are used to model disks. Regular files are generally either ASCII files or binary files.

INTRODUCTION AND HISTORY OF OPERATING SYSTEMS
Previous Years' Questions

Q.1 Consider the following set of processes, with the arrival times and the CPU burst times given in milliseconds.

Process | Arrival Time | Burst Time
P1 | 0 | 5
P2 | 1 | 3
P3 | 2 | 3
P4 | 4 | 1

What is the average turnaround time for these processes with the preemptive shortest-remaining-time-first algorithm? [R.T.U. 2016]

Ans. Sequence of the process execution:

P1 | P2 | P4 | P3 | P1
0   1    4   5    8    12

For the average we need the turnaround time (completion time minus arrival time) of each process:

Process | Arrival Time | Burst Time | Turnaround Time
P1 | 0 | 5 | 12
P2 | 1 | 3 | 3
P3 | 2 | 3 | 6
P4 | 4 | 1 | 1

Therefore, the average turnaround time is (12 + 3 + 6 + 1)/4 = 5.5 ms.

Q.2 Write the difference between multiprogramming and multiprocessing.

Ans. Difference between multiprogramming, multitasking and multiprocessing:
- Multiprogramming: A single CPU divides its time among more than one job, as in time-sharing systems (mainframe and super-mini computers).
- Multitasking: Any system that runs more than one application program at one time, with emphasis on resource management.
- Multiprocessing: Multiple CPUs perform more than one job at one time.

Q.3 Define long-term scheduling.

Ans. Long-Term Scheduling: Determines which programs are admitted to the system for execution and when, and which ones should be exited.

Q.4 Write the key features of a monolithic kernel.

Ans. The key features of monolithic kernels are:
- A monolithic kernel interacts directly with the hardware.
- A monolithic kernel can be optimized for a particular hardware architecture.
- A monolithic kernel is not very portable.

Q.5 What do you mean by a process?

Ans. Process: A process is an operation which takes the given instructions and performs the manipulation as per the code. It is a program in execution.

Q.6 What is the need of BIOS? Explain the bootstrap loader also.
[R.T.U. 2018, 2015]

Ans. Need of BIOS: Computers are run by operating systems (OS). The OS is hosted in RAM, which is volatile, i.e., it loses its content when the power is turned off. The BIOS helps the PC to turn on and start. It is a very small program, hosted in Read-Only Memory (ROM), which is non-volatile, i.e., it does not vanish when the power is turned off. It is automatically loaded from the ROM by special circuitry, so that the computer can start its boot-up process. Since the amount of ROM memory is small, the BIOS is a small program, which can do a limited number of things, primarily the following three:
1. It performs a self-test.
2. It checks that the hardware peripherals (disk, video, keyboard, etc.) work correctly, and initializes them.
3. It determines a list of places where the bootstrap loader, a more advanced stage of the initialization, might reside (hard drive, a CD-ROM disk, a USB peripheral device, the network, etc.), and tries to pass control to this new stage. If it succeeds, the start-up process continues; otherwise it halts with error messages.

A bootstrap loader is a computer program that loads an operating system or some other system software for the computer after completion of the power-on self-tests; i.e., it is the loader for the operating system itself. A boot loader is loaded into main memory from persistent memory, such as a hard disk drive. The bootstrap loader then loads and executes the processes that finalize the boot.

Bootstrap Loader: The operating system has to be loaded into memory when a computer's power is switched on. This typically involves loading several programs into memory. Since the computer's memory does not contain any programs or data at this time, not even an absolute loader, the task of loading the operating system is performed by a special-purpose loader called the bootstrap loader. The bootstrap loader is a tiny program that can fit into a single record on a floppy or hard disk. Recall that an absolute loader loads a program and passes control to it for execution; the bootstrap loader exploits this scheme in its operation. The computer is configured such that when its power is switched on, its hardware loads a special record from a floppy or hard disk that contains the bootstrap loader and transfers control to it for execution. When the bootstrap loader obtains control, it loads a more capable loader into memory and passes control to it. This loader loads the initial set of components of the operating system, which load more components, and so on until the complete operating system has been loaded into memory. This scheme is called bootstrap loading because of the legend of the man who raised himself to heaven by pulling on his own bootstraps.

Q.7 What do you mean by processor scheduling? Explain the various levels of scheduling. [R.T.U. 2018]
OR
Describe the difference between short-term, medium-term and long-term scheduling. [R.T.U. 2016]

Ans. Process Scheduling: This refers to the process by which the processor determines which job (task) should be run on the computer at which time. Without scheduling, the processor would give attention to jobs based on when they arrived in the queue, which is usually not optimal.
As part of scheduling, the processor gives a priority level to different processes running on the machine. When two processes request service at the same time, the processor performs the job for the one with the higher priority.

Levels of Scheduling: In operating systems, the processor-scheduling subsystem operates on three levels, differentiated by the time scale at which they perform their operations. In this sense we have:
- Long-Term Scheduling: Determines which programs are admitted to the system for execution and when, and which ones should be exited.
- Medium-Term Scheduling: Determines when processes are to be suspended and resumed.
- Short-Term Scheduling (or Dispatching): Determines which of the ready processes can have CPU resources, and for how long.

Taking into account the states of a process, and the time scale at which state transitions occur, we can immediately recognize that:
- Medium-term scheduling affects processes in the ready-suspended and blocked-suspended states.
- Long-term scheduling affects processes in the new and exited states.

Long-Term Scheduling: Long-term scheduling controls the degree of multiprogramming in multitasking systems, following certain policies to decide whether the system can honor a new job submission or, if more than one job is submitted, which of them should be selected. The need for some form of compromise between the degree of multiprogramming and throughput seems evident, especially when one considers interactive systems. The higher the number of processes, the smaller the time each of them may control the CPU for, if a fair share of responsiveness is to be given to all processes. A very high number of processes causes waste of CPU time on system housekeeping chores. However, the number of active processes should be high enough to keep the CPU busy servicing the payload (i.e., the user processes) as much as possible, by ensuring that, on average, there is always a sufficient number of processes not waiting for I/O.

Medium-Term Scheduling: Medium-term scheduling is essentially concerned with memory management; hence it is very often designed as a part of the memory-management subsystem of an OS. Its efficient interaction with the short-term scheduler is essential for system performance, especially in virtual-memory systems. This is the reason why, in paged systems, the pager process is usually run at a very high (dispatching) priority level.

Short-Term Scheduling: Short-term scheduling concerns the allocation of CPU time to processes in order to meet some predefined system performance objectives. The definition of these objectives (the scheduling policy) is an overall system design issue, and determines the "character" of the operating system from the user's point of view, giving rise to the traditional distinctions among "multi-purpose, time-shared", "batch production", "real-time" systems, and so on.

Q.8 What do you understand by semaphores? Can they be used to solve the reader-writer problem? Explain. [R.T.U. 2018, 2015]

Ans. Semaphore: A semaphore, in its most basic form, is a protected integer variable that can facilitate and restrict access to shared resources in a multi-processing environment. The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked).
A semaphore can only be accessed using the following operations: wait() and signal(). wait() is called when a process wants access to a resource. If the semaphore is greater than 0, the process can take the resource. If the semaphore is 0, that is, the resource is not available, the process must wait until it becomes available. signal() is called when a process is done using a resource.

Yes, semaphores can be used to solve the reader-writer problem, with the following implementation.

Conditions:
1. No reader will be kept waiting unless a writer has the object.
2. Writing is performed as soon as possible, i.e., writers have precedence over readers.

The reader processes share the semaphores mutex and wrt and the integer readcount. The semaphore wrt is also shared with the writer processes. mutex and wrt are each initialized to 1, and readcount is initialized to 0.

Writer process:
    wait(wrt);
    /* writing is performed */
    signal(wrt);

Reader process:
    wait(mutex);
    readcount := readcount + 1;
    if readcount = 1 then wait(wrt);
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount := readcount - 1;
    if readcount = 0 then signal(wrt);
    signal(mutex);

Q.9 What are the different algorithmic solutions of the critical-section problem? Explain. [R.T.U. 2018, 2015]

Ans. Solutions to the Critical-Section Problem
A solution to the critical-section problem must meet three conditions:
1. Mutual exclusion: If process Pi is executing in its critical section, no other process is executing in its critical section.
2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision of which will enter its critical section next, and this decision cannot be postponed indefinitely.
(a) If no process is in the critical section, the decision of who enters can be made quickly.
(b) Only one process can enter the critical section; in practice, the others are put on a queue.
3. Bounded waiting: There must exist a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
(a) The wait is the time from when a process makes a request to enter its critical section until that request is granted.
(b) In practice, once a process enters its critical section, it does not get another turn until a waiting process gets a turn (managed as a queue).
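The answer above lists the requirements a solution must meet; a classic algorithm that satisfies all three for two processes is Peterson's solution. The following is a minimal sketch of it (my own illustration, not from the original answer); on real hardware it would additionally need memory barriers or atomic operations, which are omitted here for clarity.

#include <stdbool.h>

/* Peterson's solution for two processes, numbered 0 and 1. */
static volatile bool interested[2] = {false, false};
static volatile int turn = 0;

void enter_region(int self)           /* self is 0 or 1 */
{
    int other = 1 - self;
    interested[self] = true;          /* announce interest */
    turn = self;                      /* politely take the tie-break slot */
    while (interested[other] && turn == self)
        ;                             /* busy-wait until it is safe */
}

void leave_region(int self)
{
    interested[self] = false;         /* allow the other process in */
}

Mutual exclusion holds because both processes could pass the while loop simultaneously only if turn held two values at once; progress and bounded waiting follow because a process spins only while the other is actually interested, and then for at most one turn.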
Q.10 What do you mean by scheduling? Why is scheduling required? Differentiate preemptive and non-preemptive scheduling. [R.T.U. 2017]

Ans. Process Scheduling: Refer to Q.7.

Importance of CPU Scheduling: The CPU scheduler is an important component of the operating system. Processes must be properly scheduled, or else the system will make inefficient use of its resources. Different operating systems have different scheduling requirements. For example, a supercomputer aims to finish as many jobs as it can in the minimum amount of time, but an interactive multi-user system, such as a Windows terminal server, aims to rapidly switch the CPU between users in order to give each the "illusion" of having a dedicated CPU.

A scheduler may aim at one or more of many goals, for example: maximizing throughput (the total amount of work completed per time unit); minimizing wait time (the time from work becoming enabled until the first point it begins execution on resources); minimizing latency or response time (the time from work becoming enabled until it is finished, in the case of batch activity, or until the system responds and hands the first output to the user, in the case of interactive activity); or maximizing fairness (equal CPU time to each process, or, more generally, appropriate times according to the priority and workload of each process).

Difference between preemptive and non-preemptive scheduling:

S.No. | Preemptive scheduling | Non-preemptive scheduling
1. | Allows a process to be interrupted in the middle of its execution, taking the CPU away and allocating it to another process. | Ensures that a process relinquishes control of the CPU only when it finishes its current CPU burst.
2. | Complex to implement. | Simple to implement.
3. | Costly. | Cost is less.
4. | High overhead. | Low overhead.
5. | A process switches from the running state to the ready state, and from the waiting state to the ready state. | A process switches from the running state to the waiting state, or the process terminates.

Q.11 What do you mean by a process and the life cycle of a process? Explain context switching between two processes. [R.T.U. 2017]

Ans. Process: A key concept in all operating systems is the process. A process is basically a program in execution. Associated with each process is its address space: a list of memory locations, from some minimum (usually 0) to some maximum, which the process can read and write. The address space contains the executable program, the program's data and its stack. Also associated with each process is some set of registers, including the program counter, stack pointer and other hardware registers, and all the other information needed to run the program.

Process Life Cycle
When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are not standardized. In general, a process can have one of the following five states at a time:
(i) Start: This is the initial state when a process is first started/created.
(ii) Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
(iii) Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
(iv) Waiting: A process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
(v) Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

Context Switching in Processes
Process switching is context switching from one process to a different process. It involves switching out all of the process abstractions and resources in favor of those belonging to a new process. Most notably and expensively, this means switching the memory address space.
This includes memory addresses, mappings, page tables and kernel resources: a relatively expensive operation.

Context Switching in Threads
Thread switching is context switching from one thread to another in the same process. Thread switching is much cheaper, as it involves switching out only the abstraction unique to threads: the processor state. Switching processor state (such as the program counter and register contents) is generally very efficient. For the most part, the cost of thread-to-thread switching is about the same as the cost of entering and exiting the kernel. Consequently, thread switching is significantly faster than process switching.

Q.12 What do you mean by a thread? Explain kernel-level and user-level threads. [R.T.U. 2017]

Ans. Thread: A thread is a single sequential flow of control within a program.

All programmers are familiar with writing sequential programs. You have probably written a program that displays or sorts a list of names, or computes a list of prime numbers. These are sequential programs: each has a beginning, an end, a sequence, and at any given time during the runtime of the program there is a single point of execution. A thread is similar to a sequential program: a single thread also has a beginning, an end, a sequence, and at any given time during the runtime of the program there is a single point of execution. However, a thread itself is not a program; it cannot run on its own, but runs within a program.

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set and a stack. It shares with other threads belonging to the same process its code section, data section and other operating-system resources, such as open files and signals. A traditional process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time. Threads are visible only from within the process, where they share all process resources like address space, open files, and so on. The following state is unique to each thread:
- Thread ID
- Register state (including PC and stack pointer)
- Stack
- Signal mask
- Thread-private storage

Kernel-Level Threads: Kernel threads are supported directly by the operating system; virtually all contemporary operating systems, including Windows XP, Linux, Mac OS X, Solaris and Tru64 UNIX, support kernel threads. In this method the kernel knows about and manages the threads: no run-time system is needed in each process and, instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating-system kernel provides system calls to create and manage threads.

Fig. 1: Kernel-level threads

Advantages:
- Since the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
- Kernel-level threads are especially good for applications that frequently block.

Disadvantages:
- Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
- Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result, there is significant overhead and increased kernel complexity.
User-Level Threads: User threads are supported above the kernel and are managed without kernel support. User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.

Advantages:
- The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads.
- User-level threads do not require modification to the operating system.
- Simple representation: Each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
- Simple management: Creating a thread, switching between threads and synchronizing between threads can all be done without intervention of the kernel.
- Fast and efficient: Thread switching is not much more expensive than a procedure call.

Disadvantages:
- There is a lack of coordination between threads and the operating-system kernel. Therefore, a process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads.
- User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Q.13 What are the five major activities of an operating system with regard to file management? [R.T.U. 2016]

Ans. The five major activities of an operating system with regard to file management are:
1. Creating and Deleting Files: File creation and deletion are fundamental to computer operations. In the former, data cannot be stored in an efficient manner unless arranged in some form of file structure. In the latter, permanent storage would quickly fill up if files were not deleted and the space occupied by them reallocated to new files.
2. Creating and Deleting Directories: As a corollary to the need to store data in files, files themselves need to be arranged in directories or folders in order to allow their efficient storage and retrieval. Much like file deletion, unnecessary directories or folders need to be removed in order to keep the system uncluttered.
3. File Manipulation Instructions: Since operating systems allow application software to perform file manipulation using symbolic instructions, the operating system itself needs to have a machine-level instruction set in order to interact with the hardware directly. The application's symbolic instructions need to be translated into machine-level instructions, either by an interpreter or by compiling the application code. The operating system contains provisions to manage this lower-level file manipulation.
4. Mapping Files to Permanent Storage: Operating systems map files to a location on permanent storage. The location of files is recorded in some form of disk directory, which varies according to the file system, or systems, that the operating system uses. The operating system will include a mechanism to locate the separate file segments where it has divided a file.
5. Backing Up Files: Files often represent money as well, so their loss can have a severe impact. A computer's permanent storage devices generally contain a number of mechanical devices, which can fail, and the storage media itself may degrade. A function of operating systems is to obviate the risk of data loss by backing files up on additional secure and stable media in a redundant system.

Q.14 What are the two models of interprocess communication? What are their strengths and weaknesses? [R.T.U. 2016]

Ans. Two Models of Interprocess Communication: The two interprocess communication models are given below.

(a) Message-Passing Model: In this model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly, through a common mailbox. Message passing is useful for smaller amounts of data, because no conflicts need be avoided, and it is easier to implement for inter-computer communication.
Weaknesses:
(i) Only suitable for the exchange of small amounts of data.
(ii) Communication using message passing is slower than shared memory because of the time involved in connection setup.

(b) Shared-Memory Model: In this model, processes use a region of memory that is shared between them, exchanging information by reading and writing data in the shared region. Shared-memory communication is faster than the message-passing model when the processes are on the same machine.
Weaknesses:
(i) Different processes need to ensure that they are not writing to the same location simultaneously.
(ii) Processes that communicate using shared memory need to address problems of memory protection and synchronization.
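As a minimal concrete example of the message-passing model (my own sketch, not part of the original answer), the following program sends a string from a parent to a child process through a POSIX pipe.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) return 1;   /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {              /* child: the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }

    /* parent: the sender */
    close(fd[0]);
    const char *msg = "hello via message passing";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                     /* reap the child */
    return 0;
}

A shared-memory version would instead map one region into both address spaces (for example with mmap or the shmget/shmat calls) and handle synchronization itself, which is exactly the trade-off the answer describes.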
Q.15 What are the differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other? [R.T.U. 2016]

Ans. Kernel-level and user-level threads differ in who manages them (see Q.12). User-level threads are managed by a run-time library in user space and are invisible to the kernel, while kernel-level threads are created, scheduled and managed by the operating system itself. User-level threads are cheaper to create and switch, but if one blocks on a system call, the whole process blocks; kernel-level threads can block individually and can run in parallel on multiple processors. A user thread is therefore more appropriate for low-priority tasks that rarely block, whereas a kernel thread is better for high-priority tasks that should get high priority in system execution.

The Operating System as a Resource Manager
The concept of the operating system as primarily providing its users with a convenient interface is a top-down view. An alternative, bottom-up, view holds that the operating system is there to manage all the pieces of a complex system. A modern computer consists of processors, memories, timers, disks, mice, network interfaces, printers and a wide variety of other devices. In the alternative view, the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories and I/O devices among the various programs competing for them.

When a computer (or network) has multiple users, the need for managing and protecting the memory, I/O devices and other resources is even greater, since the users might otherwise interfere with one another. In addition, users often need to share not only hardware, but information (files, databases, etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of who is using which resource, to grant resource requests, to account for usage and to mediate conflicting requests from different programs and users.

Resource management includes multiplexing (sharing) resources in two ways: in time and in space. When a resource is time multiplexed, different programs or users take turns using it: first one of them gets to use the resource, then another, and so on. For example, with only one CPU and multiple programs that want to run on it, the operating system first allocates the CPU to one program; then, after it has run long enough, another one gets to use the CPU, then another, and then eventually the first one again. Determining how the resource is time multiplexed (who goes next and for how long) is the task of the operating system. Another example of time multiplexing is sharing the printer: when multiple print jobs are queued up for printing on a single printer, a decision has to be made about which one is to be printed next.

The other kind of multiplexing is space multiplexing. Instead of the customers taking turns, each gets part of the resource. For example, main memory is normally divided up among several running programs, so each one can be resident at the same time (for example, in order to take turns using the CPU). Assuming there is enough memory to hold multiple programs, it is more efficient to hold several programs in memory at once than to give one of them the entire memory, especially if each only needs a small fraction of the total. Of course, this raises issues of fairness, protection and so on, and it is up to the operating system to solve them. Another resource that is space multiplexed is the (hard) disk. In many systems a single disk can hold files from many users at the same time. Allocating disk space and keeping track of who is using which disk blocks is a typical operating-system resource-management task.

The Operating System as a Virtual Machine
Using virtual-memory techniques, an operating system can create the illusion that a process has its own processor with its own (virtual) memory. Of course, normally the process has additional features, such as system calls and a file system, that are not provided by the bare hardware. The virtual-machine approach, on the other hand, does not provide any additional functionality, but rather provides an interface that is identical to the underlying bare hardware. Each process is provided with a (virtual) copy of the underlying computer (fig. 2).

Fig. 2: System models (a) Non-virtual machine (b) Virtual machine

The physical computer shares resources to create the virtual machines. CPU scheduling can share out the CPU to create the appearance that users have their own processors. Spooling and a file system can provide virtual card readers and virtual line printers. A normal user time-sharing terminal provides the function of the virtual-machine operator's console.

A major difficulty with the virtual-machine approach involves disk systems. Suppose that the physical machine has three disk drives but wants to support seven virtual machines. It cannot allocate a disk drive to each virtual machine. The virtual-machine software itself will need substantial disk space to provide virtual memory. The solution is to provide virtual disks, which are identical in all respects except size, termed minidisks in IBM's VM operating system. The system implements each minidisk by allocating as many tracks on the physical disks as the minidisk needs. Obviously, the sum of the sizes of all minidisks must be smaller than the size of the physical disk space available.

Users thus are given their own virtual machines. They can then run any of the operating systems or software packages that are available on the underlying machine. For the IBM VM system, a user normally runs CMS, a single-user interactive operating system. The virtual-machine software is concerned with multiplexing multiple virtual machines onto a physical machine, but it does not need to consider any user-support software.
The Operating System as a Resource Manager

The concept of the operating system as primarily providing its users with a convenient interface is a top-down view. An alternative, bottom-up, view holds that the operating system is there to manage all the pieces of a complex system. A modern computer consists of processors, memories, timers, disks, mice, network interfaces, printers and a wide variety of other devices. In this view, the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories and I/O devices among the various programs competing for them.

When a computer (or network) has multiple users, the need for managing and protecting the memory, I/O devices and other resources is even greater, since the users might otherwise interfere with one another. In addition, users often need to share not only hardware, but information (files, databases, etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of who is using which resource, to grant resource requests, to account for usage and to mediate conflicting requests from different programs and users.

Resource management includes multiplexing (sharing) resources in two ways : in time and in space. When a resource is time multiplexed, different programs or users take turns using it. First one of them gets to use the resource, then another, and so on. For example, with only one CPU and multiple programs that want to run on it, the operating system first allocates the CPU to one program; then, after it has run long enough, another one gets to use the CPU, then another, and then eventually the first one again. Determining how the resource is time multiplexed (who goes next and for how long) is the task of the operating system. Another example of time multiplexing is sharing the printer. When multiple print jobs are queued up for printing on a single printer, a decision has to be made about which one is to be printed next.

The other kind of multiplexing is space multiplexing. Instead of the customers taking turns, each gets part of the resource. For example, main memory is normally divided up among several running programs, so each one can be resident at the same time (for example, in order to take turns using the CPU), assuming there is enough memory to hold multiple programs. Holding several programs in memory at once is more efficient than giving one of them the entire memory, especially if a program only needs a small fraction of the total. Of course, this raises issues of fairness, protection and so on, and it is up to the operating system to solve them. Another resource that is space multiplexed is the (hard) disk. In many systems a single disk can hold files from many users at the same time. Allocating disk space and keeping track of who is using which disk blocks is a typical operating system resource-management task.

Operating System as Virtual Machines

Using CPU scheduling and virtual-memory techniques, an operating system can create the illusion that a process has its own processor with its own (virtual) memory. Of course, normally the process has additional features, such as system calls and a file system, that are not provided by the bare hardware. The virtual-machine approach, on the other hand, does not provide any additional functionality, but rather provides an interface that is identical to the underlying bare hardware. Each process is provided with a (virtual) copy of the underlying computer (fig. 2).

Fig. 2 : System models (a) Non-virtual machine (b) Virtual machine

The physical computer shares resources to create the virtual machines. CPU scheduling can share out the CPU to create the appearance that users have their own processors. Spooling and a file system can provide virtual card readers and virtual line printers. A normal user time-sharing terminal provides the function of the virtual-machine operator's console.

A major difficulty with the virtual-machine approach involves disk systems. Suppose that the physical machine has three disk drives but wants to support seven virtual machines. It cannot allocate a disk drive to each virtual machine, and the virtual-machine software itself will need substantial disk space to provide virtual memory. The solution is to provide virtual disks, which are identical in all respects except size; these are termed minidisks in IBM's VM operating system. The system implements each minidisk by allocating as many tracks on the physical disks as the minidisk needs. Obviously, the sum of the sizes of all minidisks must be smaller than the size of the physical disk space available.

Users thus are given their own virtual machines. They can then run any of the operating systems or software packages that are available on the underlying machine. For the IBM VM system, a user normally runs CMS, a single-user interactive operating system. The virtual-machine software is concerned with multiplexing multiple virtual machines onto a physical machine, but it does not need to consider any user-support software.
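The minidisk size constraint stated above is easy to check mechanically. The following small C sketch is purely illustrative (the sizes and the fits() helper are invented, and it checks only the necessary total-capacity condition, ignoring track-level placement):

    /* Illustrative check of the minidisk constraint: the sum of all
     * requested minidisk sizes must fit in the physical disk space. */
    #include <stdio.h>

    int fits(const long phys[], int ndisks, const long mini[], int nmini)
    {
        long capacity = 0, demand = 0;
        for (int i = 0; i < ndisks; i++) capacity += phys[i];
        for (int i = 0; i < nmini;  i++) demand   += mini[i];
        return demand <= capacity;   /* necessary condition from the text */
    }

    int main(void)
    {
        long phys[] = {900, 900, 900};                     /* three drives */
        long mini[] = {300, 250, 400, 350, 300, 500, 200}; /* seven VMs */
        printf("minidisks fit: %s\n", fits(phys, 3, mini, 7) ? "yes" : "no");
        return 0;
    }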
Microkernel Architecture of Operating System

This architecture strives to take out of the kernel as much functionality as possible, so as to limit the code executed in privileged mode and to allow easy modifications and extensions. It does so by moving many operating system services from the kernel into the "user space". Thus, the kernel is made as small as possible, and it is therefore called a microkernel. One advantage of this is that the kernel always stays in main memory and thus consumes less memory of the system.

Fig. 4 : Microkernel architecture

Exokernel Architecture of Operating System

An exokernel is a further extension of the microkernel approach, where the "kernel" is almost devoid of functionality; it merely passes requests for resources to "user space" libraries. This would mean that (for instance) requests for file access by one process would be passed by the kernel to the library that is directly responsible for managing file systems. Initial reports are that this in particular results in significant performance improvements, as it does not force data to even pass through kernel data structures.

Q. The Dining Philosophers Problem is one of the classical problems of synchronization. Explain. [R.T.U. 2014, 2013]

Ans. The Dining Philosophers Problem : The Dining Philosophers Problem is summarized as five philosophers sitting around a circular table doing one of two things : eating spaghetti or thinking. While eating, they are not thinking, and while thinking, they are not eating.

Each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks. A philosopher can only use the forks on his immediate left and right, as shown in fig. 1.

Fig. 1 : Five philosophers sitting around a circular table

Whenever a philosopher is hungry, he starts eating spaghetti with forks in both hands, and he eats without releasing his forks. The philosophers never speak to each other, which creates the possibility of deadlock, since every philosopher may hold a left fork and wait indefinitely for a right fork (or vice versa).

To avoid deadlock we could implement the rule that if a philosopher has been waiting for the right fork for more than five minutes, he puts down the left fork, waits five more minutes, and then makes a new attempt. This scheme eliminates the possibility of deadlock but can cause another problem. Consider what happens if all five philosophers appear in the dining room at exactly the same time and each picks up his left fork at the same time. Then all philosophers will wait for five minutes (for the right fork to become free), then all put their forks down and wait another five minutes before they all pick them up again. This results in the continuous waiting of all philosophers and causes a problem called starvation. The lack of available forks is an analogy to the locking of shared resources.

Solution using Semaphores : The simplest solution to this problem is to protect the fork manipulation with semaphores and implement take_forks() and put_forks() functions. The program uses an array of semaphores, one per philosopher, so hungry philosophers can block if the needed forks are busy. It must be noted that each process executes philosopher() as its main function, but take_forks(), eat() and put_forks() are separate procedures for each process.

    #define N 5                 /* number of philosophers */

    void philosopher(int i)     /* i : philosopher number, from 0 to 4 */
    {
        while (TRUE) {
            think();            /* philosopher is thinking */
            take_forks(i);      /* acquire both forks or block */
            eat();              /* eating spaghetti */
            put_forks(i);       /* put both forks back on the table */
        }
    }

Fig. 2 : Semaphore implementation of a philosopher
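For completeness, the skeleton above can be fleshed out into a compilable program. The sketch below is illustrative, not part of the original answer : it assumes POSIX threads and unnamed semaphores (as on Linux), runs each philosopher for a few rounds instead of forever, and uses the same state-table logic that the listing in the next section spells out in pseudocode.

    /* Illustrative, compilable semaphore solution (gcc -pthread). */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>
    #include <unistd.h>

    #define N        5
    #define LEFT     ((i + N - 1) % N)
    #define RIGHT    ((i + 1) % N)
    #define THINKING 0
    #define HUNGRY   1
    #define EATING   2

    static int   state[N];
    static sem_t mutex;          /* protects the state array */
    static sem_t s[N];           /* one semaphore per philosopher */

    static void test(int i)      /* may philosopher i start eating? */
    {
        if (state[i] == HUNGRY &&
            state[LEFT] != EATING && state[RIGHT] != EATING) {
            state[i] = EATING;
            sem_post(&s[i]);     /* unblock philosopher i */
        }
    }

    static void take_forks(int i)
    {
        sem_wait(&mutex);
        state[i] = HUNGRY;
        test(i);                 /* try to take both forks */
        sem_post(&mutex);
        sem_wait(&s[i]);         /* block if the forks were not granted */
    }

    static void put_forks(int i)
    {
        sem_wait(&mutex);
        state[i] = THINKING;
        test(LEFT);              /* the neighbours may eat now */
        test(RIGHT);
        sem_post(&mutex);
    }

    static void *philosopher(void *arg)
    {
        int i = *(int *)arg;
        for (int round = 0; round < 3; round++) {  /* 3 rounds, not forever */
            usleep(1000);                          /* think */
            take_forks(i);
            printf("philosopher %d eats\n", i);    /* eat */
            put_forks(i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N]; int id[N];
        sem_init(&mutex, 0, 1);
        for (int i = 0; i < N; i++) sem_init(&s[i], 0, 0);
        for (int i = 0; i < N; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, philosopher, &id[i]);
        }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }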
Solution using Mutexes : Another way of presenting the solution uses a mutex guarding a shared state table. Before starting to acquire forks, a philosopher does a down on the mutex; after replacing the forks, he does an up on the mutex. Note that simply protecting the whole take-eat-put sequence with a single mutex would be adequate in theory but has a performance bug : only one philosopher could be eating at any instant, whereas with five forks available two philosophers should be able to eat at the same time. The state-table scheme below removes this restriction.

(i) Picking up forks :

    #define N 5
    #define LEFT  ((i + N - 1) % N)
    #define RIGHT ((i + 1) % N)
    #define THINKING 0
    #define HUNGRY   1
    #define EATING   2

    int state[N];
    semaphore mutex = 1;
    semaphore sem[N];            /* one per philosopher, initially 0 */

    take_forks(int i)
    {
        down(mutex);             /* enter critical region */
        state[i] = HUNGRY;
        test(i);                 /* try to acquire both forks */
        up(mutex);               /* leave critical region */
        down(sem[i]);            /* block if forks were not acquired */
    }

    test(int i)
    {
        if (state[i] == HUNGRY &&
            state[LEFT] != EATING && state[RIGHT] != EATING) {
            state[i] = EATING;
            up(sem[i]);
        }
    }

(ii) Putting down forks :

    put_forks(int i)
    {
        down(mutex);             /* enter critical region */
        state[i] = THINKING;
        test(LEFT);              /* let the left neighbour eat if possible */
        test(RIGHT);             /* let the right neighbour eat if possible */
        up(mutex);               /* leave critical region */
    }

Deadlock is such a problem; it can be removed by imposing several constraints :
1. A maximum of four philosophers are allowed to sit simultaneously at the dining table.
2. Allow a philosopher to pick up the forks only if both forks on either side of him are available.
3. Let odd philosophers pick up their left fork before their right fork, whereas even philosophers pick up their right fork before their left fork.

Q.21 (a) Consider the following set of processes with arrival time and CPU burst time given in ms.

    Process   Arrival time   Burst time
    P1        0              8
    P2        1              4
    P3        2              9
    P4        3              5

What is the average waiting time for these processes with preemptive SJF scheduling? [R.T.U. 2018, 2014, 2013]

(b) Consider the following four processes, with the length of the CPU burst time given in milliseconds.

    Process   Burst time (ms)   Arrival time (ms)
    P0        15                0.0
    P1        20                1.0
    P2        3                 2.0
    P3        7                 2.0

Consider the Shortest Remaining Time First (SRTF) and Round Robin (RR) (Quantum = 5 ms) scheduling algorithms. Illustrate the scheduling using Gantt charts. Which algorithm will give the minimum average waiting time? [R.T.U. 2017]

Ans.(a) Gantt chart for the system using preemptive SJF scheduling :

    | P1 | P2 | P4 | P1 | P3 |
    0    1    5    10   17   26

Process P1 is started at time 0, as it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 ms) is larger than the time required by process P2 (4 ms), so process P1 is preempted and process P2 is scheduled.

    Process   Arrival time   Burst time   Completion time
    P1        0              8            17
    P2        1              4            5
    P3        2              9            26
    P4        3              5            10

Waiting time of the processes = (10 - 1) + (1 - 1) + (17 - 2) + (5 - 3) = 9 + 0 + 15 + 2 = 26
The average waiting time for the system is 26/4 = 6.5 ms.

Ans.(b) A. Using Shortest Remaining Time First (SRTF) : Steps of Gantt chart preparation :
(i) At time = 0.0 : available process = P0; scheduled process = P0.
(ii) At time = 1.0 : available processes = P0, P1. Remaining burst time P0 = 14, P1 = 20; P0 has less time remaining, so P0 continues.
(iii) At time = 2.0 : available processes = P0, P1, P2, P3. Remaining burst time P0 = 13, P1 = 20, P2 = 3, P3 = 7. Scheduled process = P2.
(iv) At time = 5.0, P2 completes. Available processes = P0, P1, P3. Remaining burst time P0 = 13, P1 = 20, P3 = 7. Scheduled process = P3.
(v) At time = 12.0, P3 completes. Available processes = P0, P1. Remaining burst time P0 = 13, P1 = 20. Scheduled process = P0.
(vi) At time = 25.0, P0 completes. Available process = P1. Scheduled process = P1.
(vii) At time = 45.0, P1 completes. Processing stops.

Gantt chart :

    | P0 | P2 | P3 | P0 | P1 |
    0    2    5    12   25   45
                SRTF Scheduling

Average waiting time = (WT_P0 + WT_P1 + WT_P2 + WT_P3)/4 = (10 + 24 + 0 + 3)/4 = 37/4 = 9.25 ms
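Hand computations like these are easy to cross-check mechanically. The following C sketch is illustrative only (not part of the original solution) : it simulates SRTF one millisecond at a time for the part (b) processes and reproduces the 9.25 ms average; swapping in the part (a) data reproduces 6.5 ms, since preemptive SJF is the same policy.

    /* Illustrative SRTF simulator: each step runs the arrived process
     * with the shortest remaining burst. Waiting time = completion
     * time - arrival time - burst time. */
    #include <stdio.h>

    int main(void)
    {
        int n = 4;
        int arrival[] = {0, 1, 2, 2};      /* P0..P3 from part (b) */
        int burst[]   = {15, 20, 3, 7};
        int rem[4], done = 0, t = 0;
        double total_wait = 0;

        for (int i = 0; i < n; i++) rem[i] = burst[i];

        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)    /* shortest remaining, arrived */
                if (arrival[i] <= t && rem[i] > 0 &&
                    (pick < 0 || rem[i] < rem[pick]))
                    pick = i;
            if (pick >= 0 && --rem[pick] == 0) {
                done++;                    /* pick completes at t + 1 */
                total_wait += (t + 1) - arrival[pick] - burst[pick];
            }
            t++;
        }
        printf("average waiting time = %.2f ms\n", total_wait / n);
        return 0;
    }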
B. Using Round Robin Scheduling : Quantum = 5 ms. Each process gets at most 5 ms per round.
(i) At time = 0.0 : available process = P0. It is scheduled for 5 ms.
(ii) At time = 5.0 : available processes = P0, P1, P2, P3. Since P0 was scheduled earlier, scheduled process = P1 for 5 ms.
Similarly, all processes are scheduled in round robin manner until time = 45 ms.

Gantt chart :

    | P0 | P1 | P2 | P3 | P0 | P1 | P3 | P0 | P1 | P1 |
    0    5    10   13   18   23   28   30   35   40   45

WT_P0 = (18 - 5) + (30 - 23) = 13 + 7 = 20 ms
WT_P1 = (5 - 1) + (23 - 10) + (35 - 28) = 4 + 13 + 7 = 24 ms
WT_P2 = (10 - 2) = 8 ms
WT_P3 = (13 - 2) + (28 - 18) = 11 + 10 = 21 ms
Average waiting time = (20 + 24 + 8 + 21)/4 = 73/4 = 18.25 ms

SRTF gives the minimum average waiting time, at 9.25 ms.

Q. Write short notes on the following :
(i) Fair share scheduling
(ii) Race condition
(iii) Critical section
(iv) Semaphore and mutex [R.T.U. 2017]

Ans.(i) Fair Share Scheduling : Fair-share scheduling is a scheduling strategy for computer operating systems in which the CPU usage is equally distributed among system users or groups, as opposed to equal distribution among processes.

For example, if four users (A, B, C, D) are concurrently executing one process each, the scheduler will logically divide the available CPU cycles such that each user gets 25% of the whole (100% / 4 = 25%). If user B starts a second process, each user will still receive 25% of the total cycles, but each of user B's processes will now use 12.5%. On the other hand, if a new user starts a process on the system, the scheduler will reapportion the available CPU cycles such that each user gets 20% of the whole (100% / 5 = 20%).

Another layer of abstraction allows us to partition users into groups, and apply the fair share algorithm to the groups as well. In this case, the available CPU cycles are divided first among the groups, then among the users within the groups, and then among the processes for that user. For example, if there are three groups (1, 2, 3) containing three, two, and four users respectively, the available CPU cycles will be distributed as follows :
100% / 3 groups = 33.3% per group
Group 1 : (33.3% / 3 users) = 11.1% per user
Group 2 : (33.3% / 2 users) = 16.7% per user
Group 3 : (33.3% / 4 users) = 8.3% per user
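The hierarchical division in the group example is just repeated equal division, as the short illustrative C sketch below shows (the group sizes mirror the example above) :

    /* Illustrative sketch: hierarchical fair-share division of CPU
     * cycles, first among groups, then among each group's users. */
    #include <stdio.h>

    int main(void)
    {
        int users_in_group[] = {3, 2, 4};        /* groups 1..3 */
        int ngroups = 3;
        double group_share = 100.0 / ngroups;    /* 33.3% per group */

        for (int g = 0; g < ngroups; g++) {
            double user_share = group_share / users_in_group[g];
            printf("Group %d: %.1f%% per user\n", g + 1, user_share);
        }
        return 0;
    }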
(ii) Race Condition : In some operating systems, processes that are working together may share some common storage that each one can read and write. The shared storage may be in main memory (possibly in a kernel data structure) or it may be a shared file; the location of the shared memory does not change the nature of the communication or the problems that arise. To see how interprocess communication works in practice, let us consider a simple but common example : a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks to see if there are any files to be printed and, if there are, it prints them and then removes their names from the directory.

Fig. : Two processes want to access shared memory at the same time

Imagine that our spooler directory has a very large number of slots, numbered 0, 1, 2, ..., each one capable of holding a file name. Also imagine that there are two shared variables : out, which points to the next file to be printed, and in, which points to the next free slot in the directory. These two variables might well be kept in a two-word file available to all processes. At a certain instant, slots 0 to 3 are empty (the files have already been printed) and slots 4 to 6 are full (with the names of files queued for printing). More or less simultaneously, processes A and B decide they want to queue a file for printing. This situation is shown in the figure.

Situations like this, where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when, are called race conditions. Debugging programs containing race conditions is no fun at all. The results of most test runs are fine, but once in a rare while something weird and unexplained happens.

(iii) Critical Section :

Fig. : A process waits to enter its critical section and signals on leaving it

The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the critical section. To avoid race conditions and flawed results, one must identify the code of the critical section in each thread. The characteristic property of the code that forms a critical section is : code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
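A race condition on a read-update-write sequence is easy to provoke. The following C sketch is invented for this note (POSIX threads assumed) : two threads increment a shared counter, the locked region is exactly the critical section, and removing the lock/unlock calls typically makes the final count fall short.

    /* Illustrative race-condition demo: two threads each increment a
     * shared counter 1,000,000 times. The read-update-write on counter
     * is the critical section; comment out the lock/unlock lines and
     * the final value will usually fall short of 2,000,000. */
    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;                    /* read-update-write */
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }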
