LECTURE NOTES ON OPERATING SYSTEM
Compiled by Mr. Abhaya Kumar Panda
Lecturer, Department of Computer Science & Engineering, KIIT Polytechnic, Bhubaneswar

CONTENTS
1. Introduction
2. Process Management
3. Memory Management
4. Device Management
5. Deadlocks
6. File Management
7. System Programming

UNIT-1
INTRODUCTION

INTRODUCTION:
- An operating system is system software that acts as an intermediary between the user of a computer and the computer hardware. It is considered the brain of the computer.
- It controls the internal activities of the computer hardware and provides the user interface. This interface enables a user to utilize the hardware resources efficiently.
- It is the first program that gets loaded into computer memory, through the process called "booting".

COMPONENTS OF COMPUTER SYSTEM:
In general, we can divide a computer system into the following four components:
- Hardware
- Operating system
- Application programs
- Users

[Figure: layered view of a computer system - users on top; system and application programs (compiler, assembler, text editor, database system); operating system; computer hardware at the bottom.]

- As the figure shows, the user interacts with the application programs. The application programs do not access the hardware resources directly.
- HARDWARE resources include I/O devices, primary memory, secondary memory (hard disk, floppy disk, etc.) and the microprocessor. The operating system is required to access and use these resources.
- The application programs are written in such a way that they can easily communicate with these resources through the operating system.
- The operating system is the first program loaded into the computer's main memory when the computer is switched on. Some popular operating systems are Windows 9x (95, 98), Linux, Unix, Windows XP and Vista.

OBJECTIVES OF OPERATING SYSTEM:
An operating system has three main objectives:
- Convenience: An operating system makes a computer system convenient and easy to use for the user.
- Efficiency: An operating system allows the computer to use the hardware in an efficient way, by handling the details of the hardware operations.
- Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing and introduction of new system functions without interfering with existing services.

NEEDS AND SERVICES OF OPERATING SYSTEM / FUNCTIONS OF OPERATING SYSTEM:
The operating system performs a number of functions for the computer system, as follows:

1) It acts as a Command Interpreter:
- Generally the CPU cannot understand the commands given by the user. It is the function of the operating system to translate these human-understandable commands into machine-understandable instructions that the CPU can execute.
- After the CPU executes the instructions, the operating system translates the output back into a human-understandable form.
- To execute the user jobs, the operating system interacts with the computer hardware.

2) It acts as the Resource Manager:
- An operating system acts as a resource manager in two ways:
  - Time multiplexing
  - Space multiplexing
- In time multiplexing, different resources (hardware or software) are shared among different users, each for an optimal or fixed time slot.
- For example, the operating system allocates a resource such as the CPU to program A for a fixed time slot.
- When the time slot of program A is over, the CPU is allocated to another program B. If program A needs more CPU attention, the CPU is allocated to program A again after the time slice allocated to program B is over.
- In space multiplexing, different resources are shared at the same time among different programs, e.g. sharing of the hard disk and main memory by different users at the same time.

3) Memory Management:
- Keeps track of the memory: which part of memory is in use and by whom, and which part is not in use.
- Decides which processes are to be loaded when memory space becomes available.
- Handles allocation and de-allocation of memory.

4) Process Management:
- A process (task) is an instance of a program in execution. A program is a passive entity, whereas a process is an active entity.
- To accomplish its task, a process needs certain resources such as CPU time, memory, files and I/O devices.
- These resources are allocated to the process either at the time of its creation or while it is executing.
- The operating system is responsible for the following functions related to process management:
  i.   Process creation (loading the program from secondary storage into main memory)
  ii.  Process scheduling
  iii. Providing mechanisms for process synchronization
  iv.  Providing mechanisms for deadlock handling
  v.   Process termination

5) Peripheral or I/O Device Management:
- Keeps track of the resources (devices, channels, control units) attached to the system.
- Communication between these devices and the CPU is supervised by the operating system.
- The operating system has device drivers to facilitate I/O functions involving devices like the keyboard, mouse, monitor, disk, FDD, CD-ROM, printer, etc.
- Handles allocation and de-allocation of resources to initiate I/O operations.
- Other management activities are:
  i.   Spooling
  ii.  Caching
  iii. Buffering
  iv.  Device driver interface

6) File Management:
- A file is a collection of related information or records defined by the user.
- The operating system is responsible for the following activities of file management:
  i.   Creation and deletion of files
  ii.  Creation and deletion of directories
  iii. Manipulation of files and directories
  iv.  Mapping files onto secondary storage

7) Secondary Storage Management:
- Secondary storage is a larger memory used to store huge amounts of data; its capacity is much larger than primary memory, e.g. floppy disk, hard disk, etc.
- The operating system is responsible for handling all these devices through secondary storage management.
- The various activities are:
  i.   Free space management
  ii.  Storage allocation (allocation of storage space when new files have to be written)
  iii. Disk scheduling (scheduling the requests for disk access)

8) Protection/Security Management:
- If a computer system has multiple processes, the various processes must be protected from one another's activities.
- Protection refers to the mechanisms for controlling the access of programs, processes or users to the resources defined by the computer system.

9) Error Detection and Recovery:
- Errors may occur during execution, such as division by zero in a process, memory access violation, deadlock, I/O device error or a connection failure.
- The operating system should detect such errors and handle them.
CLASSIFICATION / TYPES OF OPERATING SYSTEM:
All operating systems consist of similar components and perform almost similar functions, but the methods and procedures for performing these functions differ. Operating systems are classified into different categories according to their features. The following sections discuss this classification.

Single user operating system:
- In a single user operating system, a single user can access the computer at a particular time.
- This system provides all the resources to that user at all times.
- The single user operating system is divided into the following types:
  - Single user, single tasking operating system
  - Single user, multitasking operating system

Single user, single tasking operating system:
- In a single user, single tasking operating system, a single user executes one program at a particular time.
- Example: MS-DOS.

Single user, multitasking operating system:
- In a single user, multitasking operating system, a single user can execute multiple programs at the same time.
- Example: a user can run different programs such as making calculations in an Excel sheet, printing a Word document and downloading a file from the internet at the same time.

[Figure: single user system - the user on top of the application programs, operating system and hardware layers.]

Advantage:
- The CPU has to handle only one application program at a time, so process management is easy in this environment.
Disadvantage:
- As the operating system handles only one application at a time, most of the CPU time is wasted.

Multi user operating system:
- In a multi-user operating system, multiple users can access different resources of a computer at a time. Access is provided with the help of a network.
- The network generally consists of various personal computers that send and receive information to a multi-user mainframe computer system. Hence, the mainframe computer acts as the server and the other personal computers act as clients of that server.
- Examples: UNIX, Windows 2000.
Advantage:
- Sharing of data and information among different users.
Disadvantage:
- Use of expensive hardware for the mainframe computer.

Batch operating system:
- In a batch processing operating system, interaction between the user and the processor is limited, or there is no interaction at all, during the execution of work.
- Data and programs that need to be processed are bundled and collected as a "batch". The jobs are submitted to the computer through punched cards, and jobs with similar needs are executed together.
Advantage:
- It is simple to implement.
Disadvantage:
- Lack of interaction between the user and the program.

Multiprogramming operating system:
- In a multiprogramming operating system, several users can execute multiple jobs using a single CPU at the same time. The operating system keeps several programs or jobs in main memory; when a job is submitted to the system, it is kept on a magnetic disk in a job pool.
- Some of the jobs are then transferred to main memory, according to the size of the main memory.
- The CPU executes only one job at a time, selected by the operating system.
- When the running job requires an I/O operation, the CPU switches to the next job in main memory, i.e. the CPU does not have to wait for the completion of the I/O operation of that job.
- When the I/O operation of that job is completed, the CPU switches back to it after the execution of the current job.

[Figure: memory layout of a multiprogramming system - the operating system followed by job 1, job 2, job 3 and job 4 in a 512 MB memory.]
- Examples: UNIX, Windows 95, etc.
Advantage:
- CPU utilization is high, i.e. the CPU is busy most of the time.
Disadvantage:
- The user cannot directly interact with the system.

Time sharing operating system:
- This is the logical extension of the multiprogramming system.
- The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory).
- Here the CPU can execute more than one job by switching among them.
- The switching is so fast that the user can directly interact with the system during the execution of the program.
- This system stores multiple jobs in main memory and the CPU executes the jobs in sequence.
- Generally, CPU time is divided into a number of small intervals known as time slice periods.
- Every process executes for a time slice period; then the CPU switches over to the next process.
- The switching is very fast, so it appears that several processes are executed simultaneously.

[Figure: time sharing system - six users connected to the CPU; user 5 holds the active link while the others wait or are ready.]

- In the figure, user 5 is active while users 1, 2, 3 and 4 are in the waiting state and user 6 is in the ready state.
- As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e. user 6. In this state users 2, 3, 4 and 5 are in the waiting state and user 1 is in the ready state.
- The process continues in the same way, and so on.
Advantage:
- CPU utilization is high, i.e. the CPU is busy most of the time.
Disadvantage:
- The operating system is more complex due to memory management, disk management, etc.

Multitasking operating system:
- A multitasking operating system allows more than one program to be running at the same time.
- For example, one user can open a Word document and simultaneously access the internet.
- While the processor handles only one application at a particular time, it is capable of switching between the applications so effectively that they appear to execute simultaneously.
- This type of operating system is seen everywhere today and is the most common type; the Windows operating system is an example.

Multiprocessing operating system:
- When a system contains more than one processor in close communication, sharing the computer bus, the clock and sometimes memory and peripheral devices, it is known as a multiprocessing operating system.
- It is divided into two types:
  - Symmetric multiprocessing system
  - Asymmetric multiprocessing system

Symmetric multiprocessing (SMP):
- Each processor runs a shared copy of the operating system.
- Different processors can communicate with each other and are able to execute this copy at the same time.
- The processors are controlled by a single operating system and have equal rights to access all the I/O devices connected to the system.

Asymmetric multiprocessing (ASMP):
- It is based on the principle of a master-slave relationship.
- In this system one processor runs the operating system and the other processors run the user processes.
- The processor which runs the operating system is known as the master processor; the processors which run the user processes are known as slave processors.
- It is used in large systems.
- Each processor is assigned a specific task; the master processor schedules and allocates work to the slave processors.
- It is more common in extremely large systems.

[Figure: asymmetric multiprocessing - one CPU running the operating system (master) and the other CPUs running user processes.]

Advantages of multiprocessing systems:
- Improved reliability: as the system consists of multiple processors, failure of one processor does not disturb the computer system; another processor continues the task.
- Improved throughput: throughput is defined as the total number of jobs executed by the CPU per unit time. As this system uses multiple processors, the workload is divided among the different processors.
- Economical: in this system the different processors share the clock, bus, peripherals and memory. For this reason the system is more economical than multiple single-processor systems.

Real time operating system:
- In a real-time operating system, a job has to be completed within a strict time constraint, otherwise the job loses its meaning.
- These systems complete a particular job in a fixed time slot in order to respond to an event quickly.
- A real-time constraint is introduced for correct operation: the result must be produced within a non-negotiable time period.
- Real-time systems are usually used to control complex systems that require a lot of processing, like machinery and industrial systems.
- It is of two types:
  - Hard real time operating system
  - Soft real time operating system

Hard real-time system:
- This system completes the critical tasks within a definite time constraint.
- If a critical task is not completed within the time constraint, then the system fails.
- The system has to complete all processes within the definite deadline; a single miss leads to critical failure.
- Examples: pacemaker, flight control system (any missed deadline leads to a crash).

Soft real-time system:
- These systems are not badly affected by a lapse of the time interval, and a missed deadline does not cause any critical failure.
- Example: live video streaming.

Distributed operating systems:
- In a distributed operating system, users access remote resources in the same way as local resources are accessed.
- The computation is distributed among several physical processors.
- It is a loosely coupled system: each processor has its own local memory, and processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
- These systems provide features such as data and process migration.
- A distributed operating system is based on two models:
  - Client-server model
  - Peer-to-peer model

Client-server model:
- In this model, the client sends a request for a resource to the server, and the server in turn provides the requested resource as a response back to the client.

Peer-to-peer model:
- In a peer-to-peer model, all the computers behave as servers as well as clients. These peers communicate with each other to exchange their resources.

Advantages:
- It facilitates the sharing of hardware and software resources between different processors.
- It increases reliability, as failure of one node does not affect the entire network.
- It increases the computational speed of the computer by sharing the workload among different nodes.
- It enables different users to communicate with each other, for example using email.

STRUCTURE OF OPERATING SYSTEM:
The structure of an operating system comprises four layers:
- Hardware
- Kernel
- System call interface (shell)
- Application programs

Kernel:
- It is the vital part of the operating system. It interacts directly with the hardware of the computer.
- Programs interact with the kernel through system calls.
- System call: a system call provides an interface to the operating system's services. A system call tells the kernel to carry out various tasks for the program, such as opening a file, writing to a file, obtaining information about a file, executing a program, terminating a process, etc.
- The main functions of the kernel are:
  - To manage computer memory
  - To maintain the file system
  - Allocation of resources
  - Controlling access to the computer
  - Handling interrupts

System call interface (shell):
- The shell is a command line interpreter which interprets the commands given by the user.
- It is software which acts as a mediator between the kernel and the user.
- The shell reads the commands typed at the command line, interprets them and sends a request to execute the program. That is why the shell is called a command line interpreter.

Hardware:
- Computer hardware refers to the physical parts or components of a computer, such as the monitor, mouse, keyboard, computer data storage, hard disk drive (HDD) and system unit (graphics card, sound card, memory, motherboard and chips), all of which are physical objects that can be touched.

Utility and application programs:
- Utility programs help manage, maintain and control computer resources. These programs are available to help you with the day-to-day chores associated with personal computing and to keep your system running at peak performance.
- Application software is all the computer software that causes a computer to perform useful tasks beyond the running of the computer itself. Examples of application programs include word processors, database programs, web browsers, development tools, drawing, paint and image editing programs, and communication programs.

Evolution of operating systems:
1. Serial operating system
2. Batch operating system
3. Multiprogramming operating system
4. Time-sharing operating system
5. Real-time operating system
6. Multiprocessing operating system
7. Distributed operating system

UNIT-2
PROCESS MANAGEMENT

PROCESS:
- A process is a program in execution; a process is a currently executable task.
- Process execution must progress in a sequential manner.

Process vs Program:
  i.   A process is the set of executable instructions in machine code; a program is a set of instructions written in a programming language.
  ii.  A process is dynamic in nature; a program is static in nature.
  iii. A process is an active entity; a program is a passive entity.
  iv.  A process resides in main memory; a program resides in secondary storage.
  v.   A process is expressed in assembly or machine language; a program is expressed in a programming language.
  vi.  A process exists for a limited span of time; a program's span of time is unlimited.

Process in memory:
- A process resides in memory in the following sections:
  1) Stack
  2) Heap
  3) Data
  4) Text
- The stack section contains local variables.
- The data section contains global variables.
- The text section contains the code or instructions.
- The heap section contains memory which is dynamically allocated during run time.
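To make the four sections concrete, here is a minimal C sketch showing where a global variable, a local variable, dynamically allocated memory and the code itself live. It is illustrative only; the actual addresses and section placement depend on the compiler and operating system.

```c
#include <stdio.h>
#include <stdlib.h>

int global_count = 10;                        /* initialized global -> data section */

void show_layout(void) {
    int local = 5;                            /* local variable -> stack section    */
    int *dynamic = malloc(sizeof(int));       /* allocated at run time -> heap      */
    *dynamic = 20;

    printf("text  (code)  : %p\n", (void *)show_layout);
    printf("data  (global): %p\n", (void *)&global_count);
    printf("heap  (malloc): %p\n", (void *)dynamic);
    printf("stack (local) : %p\n", (void *)&local);

    free(dynamic);
}

int main(void) {
    show_layout();
    return 0;
}
```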
PROCESS STATE:
When a process executes, it changes its state. The current activity of a process is known as its process state. A process may be in one of the following states:

- New state:
  - When a request is made by the user, the process is created.
  - The newly created process moves into the new state.
  - The process resides in secondary memory in a queue named the job queue or job pool.

[Figure: process state diagram - new -> (admitted) -> ready -> (scheduler dispatch) -> running -> (exit) -> terminated; running -> (interrupt) -> ready; running -> (I/O or event wait) -> waiting; waiting -> (I/O or event completion) -> ready.]

- Ready state:
  - A process is said to be ready if it needs the CPU to execute.
  - Out of the total newly created processes, selected processes are copied to main memory.
  - In main memory they reside in a queue named the ready queue.

- Running state:
  - A process is said to be running if it has moved from the ready queue and has started execution using the CPU.

- Waiting state:
  - A process may move into the waiting state due to the following reasons:
    - If a process needs an event to occur or an input/output device, and the operating system does not provide the I/O device or event immediately, then the process moves into the waiting state.
    - If a higher priority process arrives at the CPU during the execution of an ongoing process, then the processor switches to the new process and the current process enters the waiting state.

- Terminated state:
  - After completion of execution, the process moves into the terminated state by exiting the system. Termination converts the process back into a program.
  - Sometimes the operating system terminates a process for the following reasons:
    - Exceeding the time limit
    - Input/output failure
    - Unavailability of memory
    - Protection error

PROCESS CONTROL BLOCK (PCB) / TASK CONTROL BLOCK (TCB):
- To represent a process, the operating system needs to group all the information of a process inside a data structure. This data structure is known as the process control block (PCB).
- In other words, the operating system represents each process by a PCB. The operating system considers a process as a fundamental unit for resource allocation.
- The information stored inside the PCB includes:
  i.    Pointer: stores the starting address of the process.
  ii.   Process state: represents the current state of the process, whether it is new, ready, running, waiting or terminated.
  iii.  Process ID/number: each process has a unique ID or serial number, known as its process ID or process number.
  iv.   Program counter: stores the address of the next instruction to be executed.
  v.    Registers: contains the registers currently used by the processor.
  vi.   Scheduling information: stores information about the scheduling algorithm used by the operating system for scheduling the process.
  vii.  Memory management information: contains the values of the base register, segment table and page table.
  viii. Accounting information: contains the total CPU time and the time slices used.
  ix.   File management information: stores information about the files used by the process.
  x.    I/O status information: stores information about the I/O devices allocated to the process, the list of open files, and so on.
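A simplified C sketch of a PCB holding the fields listed above is shown below. The field names and sizes are illustrative; a real kernel structure (such as Linux's task_struct) contains far more information.

```c
#include <stdio.h>

#define MAX_OPEN_FILES 16

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Simplified Process Control Block with the kinds of fields described above. */
struct pcb {
    struct pcb   *next;                      /* pointer, e.g. to the next PCB in a queue */
    proc_state    state;                     /* process state                            */
    int           pid;                       /* process ID / number                      */
    unsigned long program_counter;           /* address of the next instruction          */
    unsigned long registers[8];              /* saved CPU registers                      */
    int           priority;                  /* scheduling information                   */
    unsigned long base, limit;               /* memory-management information            */
    unsigned long cpu_time_used;             /* accounting information                   */
    int           open_files[MAX_OPEN_FILES];/* file-management information              */
    int           io_device;                 /* I/O status information                   */
};

int main(void) {
    struct pcb p = { 0 };
    p.pid = 42;
    p.state = READY;
    printf("PCB for process %d, state %d\n", p.pid, p.state);
    return 0;
}
```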
PROCESS SCHEDULING:
- The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
- The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
- This purpose is achieved by keeping the CPU busy at all times.
- So, when two or more processes compete for the CPU at the same time, a choice has to be made.
- This procedure of determining the next process to be executed on the CPU is called process scheduling.
- The module of the operating system that makes this decision is called the scheduler.
- Process scheduling consists of three sub-functions:
  I.   Scheduling queues
  II.  Scheduler
  III. Context switching

I. Scheduling queues:
The operating system maintains several queues for efficient management of processes. These are as follows:
1. Job queue: when processes enter the system, they are put into a job queue. This queue consists of all processes in the system, kept on a mass storage device such as a hard disk.
2. Ready queue: from the job queue, the processes which are ready for execution are shifted to main memory. In main memory the processes are kept in the ready queue. In other words, the ready queue contains all those processes that are waiting for the CPU.
3. Device queue: a device queue is the list of processes waiting for a particular I/O device. Each device has its own device queue. When a process requires some I/O operation, it is taken out of the ready queue and kept in the device queue.
4. Suspended queue: it stores the list of suspended processes.

Queuing diagram:
- The process could issue an I/O request and then be placed in an I/O queue.
- The process could create a new subprocess and wait for its termination.
- The process could be removed forcibly from the CPU as a result of an interrupt, and put back into the ready queue.

[Figure: queuing diagram showing the ready queue, the I/O queues and the paths a process takes between them and the CPU (I/O request, time slice expiry, child termination, interrupt wait).]

II. Scheduler:
- The module of the operating system that makes the process scheduling decision is known as the scheduler.
- Its main task is to select the jobs to be submitted into the system and to decide which process to run.
- Schedulers are of three types:
  1. Long Term Scheduler
  2. Short Term Scheduler
  3. Medium Term Scheduler

Long Term Scheduler (LTS):
- It is also called the job scheduler; it works with the job queue.
- The job scheduler selects processes from the job queue and loads them into main memory for execution.
- It executes much less frequently, as there may be a long time gap between the creation of new processes in the system.
- The primary objective of the job scheduler is to control the degree of multiprogramming.
- If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
- When a process changes state from new to ready, the long term scheduler is involved.
- The LTS selects a balanced mix of CPU-bound and I/O-bound processes.

Short Term Scheduler (STS):
- It is also called the CPU scheduler or process scheduler.
- It selects a process from the ready queue and allocates the CPU to it.
- Its main objective is to increase system performance.
- This scheduler is invoked frequently compared to the long term scheduler.
- It handles the change from the ready state to the running state of the process.
- It must be fast, because a process executes only for a short time period before waiting for an I/O request.

Medium Term Scheduler (MTS):
- It is also known as the swapper.
- Sometimes processes are removed from memory and from the CPU to reduce the degree of multiprogramming. After some time the processes can be reintroduced into memory and execution can be continued where it was left off.
- This scheme is known as swapping.
- The medium term scheduler selects a process among the partially executed or unexecuted swapped-out processes and swaps it back into main memory.

[Figure: medium term scheduling - partially executed, swapped-out processes are swapped out of and back into the ready queue; the ready queue and I/O waiting queues feed the CPU.]

III. Context switching:
- Transferring control of the CPU from one process to another requires saving the context of the currently running process and loading the context of another ready process. This mechanism of saving and restoring the context is known as a context switch.
- The portion of the PCB including the process state, memory management information and CPU scheduling information together constitutes the context (or state) of the process.
- The switching time depends upon the memory speed and the number of registers used.

CPU SCHEDULING:

Basic concept:
- The objective of multiprogramming is to improve the productivity of the computer. This is done by maximizing CPU utilization, which means having some process running at all times; this is achieved by switching the CPU among processes.
- In a uniprocessor system only one process may run at a time; other processes must wait until the CPU is free and can be rescheduled.
- Scheduling is a fundamental operating system function. Almost all computer resources are scheduled before use. The CPU is one of the primary computer resources, so its scheduling is central to operating system design.

CPU-I/O burst cycle:
- The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
- Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. The final CPU burst ends with a system request to terminate execution.

[Figure: alternating sequence of CPU bursts and I/O bursts, ending with a final CPU burst.]

CPU scheduler (short term scheduler):
- Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (or CPU scheduler).
- Scheduling can be of two types:
  - Non-preemptive scheduling
  - Preemptive scheduling

Non-preemptive scheduling:
- Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state, i.e. when the process has completed or requires an I/O operation.

Preemptive scheduling:
- In this case the CPU can be released forcefully. Under this scheduling the process has to leave the CPU on the basis of criteria such as the running-to-ready and waiting-to-ready transitions (i.e. when an interrupt occurs or the time slice period is completed).

DISPATCHER:
- The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
- The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
- This function involves the following:
  - Switching context
  - Switching to user mode
  - Jumping to the proper location in the user program to restart that program

Scheduling criteria:
There are several CPU scheduling algorithms, and we have to select one which is suitable for our system. There are some criteria on the basis of which a CPU scheduling algorithm is judged when it selects the next process to execute:
- CPU utilization: we want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
- Throughput: for a system, throughput can be defined as "the number of jobs completed per unit time".
- Turnaround time: the interval of time from submission of a process to the time of its completion. It is the total time spent by a process within the system.
  Turnaround time = time spent in the ready queue + time spent in execution + time spent in I/O operations
  or
  Turnaround time = Completion time - Arrival time
- Waiting time: it is the sum of the periods spent waiting in the ready queue. (This means CPU scheduling affects only the amount of time that a process spends waiting in the ready queue, not the amount of time during which the process executes or does I/O.)
  Waiting time = Turnaround time - Burst time
  It is the amount of time during which the process is in the ready queue.
- Response time: it is the amount of time a process takes to start responding (first response after submission).
  Response time = Time at which the process first gets the CPU - Arrival time
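The following small C illustration applies the formulas above to one process with made-up timings (arrival at 0 ms, first dispatched at 5 ms, a burst of 12 ms, completion at 27 ms):

```c
#include <stdio.h>

/* One process with illustrative timings (all values in ms). */
int main(void) {
    int arrival    = 0;    /* time the process enters the ready queue */
    int first_run  = 5;    /* time it first gets the CPU              */
    int completion = 27;   /* time it finishes                        */
    int burst      = 12;   /* total CPU time it actually used         */

    int turnaround = completion - arrival;   /* 27 */
    int waiting    = turnaround - burst;     /* 15 */
    int response   = first_run - arrival;    /*  5 */

    printf("turnaround=%d waiting=%d response=%d\n", turnaround, waiting, response);
    return 0;
}
```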
4 The choice arises when a new process arrives at the ready queue while a previous process is executing The new process may have a shortest next CPU burst than the currently executingprocess + A preemptive SIF algorithm will pre-empt the currently executing process where as a non premptive SIF algorithm will allow the currently running process to finish its CPU burst. Operating System 19 Abbaye Kamar Dende KIIT POLYTECHNIC “ Preemptive SIF scheduling is sometimes called “shortest remaining time first scheduling”. + Larger jobs will never be executed if smallest jobs arrives, Process Arrival Time CPU Burst PI 00 15, P2 05 10 PS. 07 05 Pa 10 08 ion preeny Gantt Chart PI PS Pa P2 0 15 20 28 38 Waiting Time: PI-0 P3=15- 78 P4=20- 10=10 P2=28-5=23 AWT=0+8+10+23-41/4-10.25 Turn Around Time: PI-15 P2 P3=20- 7-13 P4=28-10=18 ATAT=(15+33+1318)/4=19.75 Priority schedul In case of priority scheduling the process having highest priority value will be executed first. Problem: process PI P2 P3 06 Pa 08 SOLUTION(NON-PREEMPTIVE GANTT CHAI AT 00 oF BT 15 10 05 08 Priority PL P3 P2 P4 0 Ty 20 30 38 wa, Pl-0 P2=20- 4=16 P3=15-6=9 Operating System 20 Abbaye Kamar Dende KIIT POLYTECHNIC P4=30-1 A.W.T=0+1619422=47/4=11.75 TAT PI=15- 0-15 P2-30-4-26 P3=20-6-14 P4=38-8=30 AT.A.T=15+26+14430-85/4=21,25 Internal Priority: In Priority Scheduling a priority value is assigned to each of the process in the ready queue. The priority value can be assigned either internally or externally. The factor for assigning internal priority is: Burst time ‘ Memory Requirement > VO devices * No. Offiles jernal Priority: The factor for assigning the external priority value are: “Important of process + Amount of fund given Political pressure The priority scheduling may be preemptive or non-preemptive.the major problem with the priority scheduling is indefinite blocking or starvation. The solution to the problem is aging. Aging is a technique which gradually increases the priority value of the process that waits in the ready queue for a long time, Problem: Process BT Priority Pr 10 3 P2 1 I PS 2 4 Pa 1 2 Gantt Chart: Pa PI P3 Pl=2 P2=0 P3=12 P4=1 2+0412+1)/43.75 Pi=12 P2=1 P3=14 P4=2 ALT.A.T=(12+141442)/4=7.25 Operating System 2 Abbaye Kamar Dende KIIT POLYTECHNIC Round Robin Scheduling: This is designed for Time sharing system. It is similar to FCFS scheduling. But the CPU pre-empts among the ready process in every time slice period, which are in the ready queue. In case of FCFS scheduling the ready queue is a FIFO queue. But in RR scheduling the readyqueue is a circular queue. Round Robin Scheduling is a purely preemptive scheduling algorithm, Because after every time slice period CPU will switch over to the next process in the ready queue Process AT. BT. PI 00 20 P2 10 10 P3 15 15 Pa 15 10 CPU Time=Sms PI [Pl |[P2 |P3 |P4 [PI a a 25 40 S50 55 P3=(55-15)=40 P4=(45-15)=30 AT.A.T=(50425+40+30)/4=36.25 Mul vel Queue schedu > This algorithm partitions the ready queue into several separate queues. > Processes are permanently assigned to one queue based on some criteria such as memory size, process priority. Each queue has its own scheduling algorithm. Foreground queue may be in RR and background queue may be in FCFS. Again there is a scheduling algorithm to select a queue among many queues. If priority queue is applied .then no process in the lowest priority queue can be executed till there is a process in the higher priority queue If Round Robin scheduling is applied, then each queue gets CPU for a certain amount of time. 
Again that time will be divided among the processes in that queue. vvVY v Operating System 2 Abbaye Kamar Dende KIIT POLYTECHNIC Highest ———L srstem Processes nd [/-—_—+ Interactive Processes Ly — ——> system Processes Lowest Interprocess Communication(IP¢ Overview Processes are classified into 2 categories. They are:i)Independent process ii) Cooperating process Independent process: It is defined as a process that does not share any data and does not communicate with other process. In other words we can say that modification made to an independent process does not affect the functioning of other processes. Co-operating process:- It is defined as a process, which gets affected by any other proce: These processes are used for resource sharing and to speed up a computation procedure. Interprocess Communication(IPC) Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. Processes are classified into 2 categories. They are: Independent process: An independent process is not affected by the execution of other processes. Cooperating process: a co-operating process can be affected by other executing processes. Operating System 23 Abbaye Kamar Dende KIIT POLYTECHNIC Advantages of process cooperation Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to these types of resources. Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUS or I/O channels). Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads. Convenience: Even an individual user may have many tasks on which to work at one time, For instance, a user may be editing, printing, and compiling in parallel. Ways to Implement IPC 1.Shared Memory: Multiple processes can access a common shared [process p1 Process? memory. Multiple processes communicate by shared memory, onan {] 4 where one process makes changes at a time and then others view a ‘| 2 | ProvessP2 pat the change. Shared memory does not use kernel. Process P2 Md 2. Message Passing: Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. It is very usefil in case where the tasks or processes reside on different computers and are connected by a network. Messages can be of fixed or variable size. Kernel Kernel i Shared Memory System Message Passing System Operating System 24 Abbaye Kamar Dende KIIT POLYTECHNIC UNIT-3, MEMORY MANAGEMENT One of the major functions of operating system is memory management. It controls the + Allocation and de-allocation of physical memory. + Which part of the memory is currently used by which process. + Decide which processes are to be loaded into memory. + Free space management. + Dynamic allocation/de-allocation of memory to executing processes ete. Logical Address & Physical Address:- Logical Address: Address generated by a CPU is called as logical address. Physical Address:-Address generated by memory management unit is called as physical address. 
The logical address is known as “virtual address”, The set of all logical unit or address generated by programs referred as “logical address space” The set of physical address corresponding to logical address is referred as “physical address space”, Suppose the program size= 100 KB But it is loaded in the main memory from 240 to 340 KB. os o 240 , p__<—__oia Pl 340 99 * So 0 to 99 KB is the logical address space but 240 to 340 KB is the physical address space. + Physical address space = logical address space + content of relocation register. + The mapping between logical and physical addresses are done at run-time by the memorymanagement unit (MMU). SWAPPING: + Swapping is the method to improve main memory utilization. + When a process is executed it must be in the main memory. + A process can be swapped out temporarily to secondary memory or hard disk or backing memory and then again brought back to secondary memory for execution. This technique is known as “Swapping”. + The basic operation of swapping is © Swap-out (roll-out) © Swap-in (roll-in) Operating System 25 Abbaye Kumar Dende KIIT POLYTECHNIC Swap-out:-The mechanism to transfer the process from main memory to secondary memory. Swap-in:- The mechanism that shifts the process from secondary memory to rimary memory. “Eyeterne space = The main memory must accommodate both operating system and various user processes. + Generally, the main memory is divided into 2 partitions. © Operating system. © Application program/user processes. = The operating system place in either low memory or high memory. = Commonly the operating system is loaded in low memory. = Generally, there are two methods used for partitioning the memory allocation, © Contiguous memory allocation ‘© Non-Contiguous memory allocationContiguous Memory Allocation:- = Itis again divided into two parts. © Single partition allocation © Multiple partition allocation. Single Partiti ions. + Inthis memory allocation method, the operating system reside in the low memory. Operating] ° ‘yom 100% User processes 450% sek Operating System 26 Abbaye Kamar Dende KIIT POLYTECHNIC OK 200K os sa s2 ]u3|a 4 eae 900K 1024 And the remaining part/space will be treated as a single partition, + This single partition is available for user space/application program. Only one job can be loaded in this user space is the main memory consisting of only one processat a time, because the user space treated as a single partition, i. Itis very simple. ii, It does not require expertise to understand. Disadvantag i. Memory is not utilized property. ii. Poor utilization of processor (waiting for 1/0), Multiple Partition Allocation:- This method can be implemented in 3 ways. These are: co Fixed equal multiple partition, o Fixed variable multiple partition. 0 Dynamic multiple partition, Fixed equal multiple partition:- i. In this memory management scheme the operating system occupies the low memory and rest of mainmemory is available for user space. il, The user space is divided into fixed partitions. The partition size depends upon the operating system. ili, A partition of main memory is wasted within a partition is said to be “Internal Fragmentation” and the wastage of an entire partition is said to be "External Fragmentatio iv. There is one problem with this method is memory utilization is not effic causesinternal and external fragmentation. Operating System ey Abbaye Kumar Dende KIIT POLYTECHNIC Advantage ‘+ This scheme supports multiprogramming. ‘+ Efficient utilization of CPU & I/O devices. 
+ Simple and easy to implement. Disadvantages: ‘© This scheme suffers from internal as well as external fragmentation, Since, the size of partitions are fixed, the degree of multiprogramming is also fixed. Fixed variable partition:- (unequal size partition) © In this scheme the user space of main memory is divided into number of partitions, but thepartitions sizes are different length. © The operating system keep a table which indicates, which partition of memory are available and which areoccurred. This table is known as “Partition Description Table” (PDT). o When a process arrives and needs allocation or memory, we search for partition which is bigenough to allocate this process. If find one allocation, then allocate the partition to that process. Advantage:- i, Supports multiprogramming ii, Smaller memory loss (expected). iii, Simple & easy to implement. Disadvantage:- ,, + __ Suffers from internal as well as external fragmentation, Figure 4: Fixed Partitioning Figure 5: Variable Partitioning ome ‘Operating ystem 1} Operating See | syetem ab ant some ; wave] fan nants ab J Dynamic Multiple Partition Memory Management:= (Variable partition) © To overcome/eliminate some of the problems with fixed partition, another method is developedknown as “Dynamic Partitioning”. In this technique, the amount of memory allocated is processrequires. actly the amount of memory a Operating System 28 Abbaye Kamar Dende KIIT POLYTECHNIC © In this method the partition are made dynamically. © Initially when there is no process in the memory, the whole memory is available for allocationand it is treated as a single large partition of available memory (a hole). © Whenever a process request for memory, the hole large enough to accommodate that process isallocated. © The rest of the memory is available to other process. As soon as the process terminates, the memory occupied by it is de-allocated and can be usedby other process. Dynami partvoring Operating system Block sue = 2MB P1=2MB Block size = 7 MB Block size = 1 MB a= aM Block size = 5MB Empty space of RAM Partition size = process size So, no internal Fraamentaton Advantage 1. Partition changed dynamically, So no internal fragmentation. 2. Efficient memory and CPU utilization Disadvantage:- 1. Suffers from external fragmentation, Partition Selection Algorithms: Whenever a process arrives and there are various holes large enough to accommodate it, the operating system mayuse one of the following algorithm to select a partition for the process. ist fit:- In this algorithm, the operating system scans the free storage list and allocates the first partition thatis large enough for that process. ‘Advantage:- 1. This algorithm is fast because very little search is, involved Disadvantage: 1. Memory loss may be high. Operating System 29 Abbaye Kumar Dende KIIT POLYTECHNIC © Best fit: In this algorithm the operating system scans the free storage list and allocate the smallest partitionthat is big enough for the process, Advantage:- 1. Memory loss will be smaller than the first fit, Disadvantage:- Search time will be larger as compared to first fit. © Worst-fit:- In this algorithm the operating system scans the entire free storage list and allocate the largestpartition to the process. Disadvantage:-Maximum interval fragmentation. Compaction: Compaction is a technique of collecting all the free spaces together in one block, so that other process can use this block or partition. There are large no. 
of small chunks of free memory that may be scattered all over the physical memory and individual each of chunks may not big enough to accommodate even a small program. So, compaction is a technique by which the small chunk of free spaces are made contiguous to each other into a single fiee partition, that may be big enough to accommodate some other processes. Ex-Collect all the fragmentation together in one block and now the figure is:~ ‘System Memory o System Memory Operating System Operating System Process 1 Process 1 Memory hole Process 3 Proess 9 Prue Memory nole Unused space Processes 3 and eee 5 are re-located Unused space Operating System 30 Abbaye Kumar Dende KIIT POLYTECHNIC Non contiguous memory partition:- ‘As one program terminates, the memory partition occupied by it becomes available to be usedby another program. Let the size of the freed memory be S, the next program to be run on the memory may need aspace which is larger or smaller than S. If it is larger then it cannot be loaded, if it is smaller, than a part of the partition remainsunutilized. This unutilized memory is known as fragment..This concept is known as fragmentation, Fragmentation is of 2 types:- ¥ External fragmentation Y Internal fragmentation. External fragmentation When the fragment is too small for a running program to be load, then there a fragment or portionremains unutilized. Internal fragmentation When the fragment remains unutilized inside a larger memory partition already allocated to a program. Both lead to poor memory utilization. To overcome this problem the memory is allocated in such a way that parts of a single process may be placed in non-contiguous areas of physical memory. This type of allocation is known as Non-contiguous allocation. The two popular schemes in Non-contiguous allocation are paging & segmentation, Paging Paging is an efficient memory management scheme because it is Non-contiguous memory allocation method, The partition method supports the contiguous memory allocation ie the entire process loaded in partition but in paging the process is divided into small parts, these are loaded into elsewhere in main memory. The basic idea of paging is physical memory/ main memory is divided into fixed size blocks calledas frames. Logical memory (or user job) is divided into fixed size block called pages Page size and fiame size should be equal. Backing store is also divided into fixed size block that are of same size as memory frames. When a process is to be executed its pages are loaded into the main memory in any availablememory frame. Every logical address generated by CPU is divided into two parts:- Operating System 31 Abbaye Kamar Dende KIIT POLYTECHNIC 1, Page number (P) 2.Page offset (d) logical physical . address address. CPU pla tla P| -|- physical memory page table Structure of paging scheme Page number is used as an index into the page table. Page table is a data structure maintained by operating system. It is used for mapping purpose The page table specifies- * Which frames are allocated + Which frames are available = How many total frames are there and so on. The page table consists of 2 fields- 1)page number 2)frame number page table contains the base address of each page in physical memory. The base address is combined with the page ofiset to define the physical memory address, The page size or frame size is depending upon operating system. 
But it is generally a power of 2,such as4MB,8MB,16MB etc The page map table specifies which page is loaded in which frame ,but displacement or offset iscommon. The paging has no external fragmentation, but there may be internal fragmentation .In paging itis called as page break. Advantage It supports time sharing system It doesnot effect from fragmentation It support virtual memory. Operating System 32 Abbaye Kamar Dende KIIT POLYTECHNIC Disadvantage The scheme may suffer “page break”. If the number of pages are high, itis difficult to maintain page table. Segmentation In case of paging the user’s view of memory is different from physical memory. Jser don’t think that memory is a linear array of byte, some containing instruction and somecontaing data, But he view the memory as a collection of variable sized segments and there is no ordering ofsegments. Segment is a memory management technique that supports user’s view of memory. Subroutine: program logical address A segment can be defined as a logical grouping of instruction ,such as subroutine, array or a dataarea. “Every program is a collection of these segments” Here the logical address is a collection of segment Each segment has a name and length. Segmentation is a technique for managing these segments. Each segment are numbered and referred by segment number. The logical address is consisting of two tuples Ex-the length of a segment main is 100K, here ‘main’ is the name of the segment and the offset value is 100K.the operating system searches the entire main memory for free space to load a segment. This mapping is done by segment table. The segment table is a table, each entry of it has a segment “Bas Logical address consist of two parts. 1. Segment Number(s) 2. Offset into that segment(d) The segment number is used as an index into the segment table. The offset is compared with segment limit.offset should be less than or equal to limit, else there is an error. If the offset is valid then “d” will be added with base value to get the actual physical address. ” and a segment “limit”. Operating System 33 Abbaye Kumar Dende Operating, Syston Error logical address space 34 +400| ar00) 700] e700 physical memory Physical Adress Space lsegmont KIIT POLYTECHNIC [Base Aad Base Add, + Limit hays Raman Panda KIIT POLYTECHNIC PAGE FAULT When the processor need to execute a particular page, that page is not available in mainmemory then an interrupt occurs to the operating system called as page fault. When the page fault is happened, the page replacement will be needed. The word page replacement means to select a victim page in the main memory. Replace the page with the required page from backing store or secondary memory. STEPS FOR HANDLING PAGE FAULT To access a page the operating system first check the page table to know whether the reference is valid or not. If invalid, an interrupt occur to operating system called page fault Then operating system search for free frame in the memory. Then the desire page is loaded from disk to allocate free frame. ‘When the disk read is complete the page table entry is modified by setting the valid bit. The the execution of the process starts where it was left. Difference, Between Paging and segmentation Paging Segmentation The main memory partitioned into frames or blocks The main memory partitioned info segments, The logical address space divided intopages | The logical address space is divide into by compiler or memory management unit segments specified by the programmer. 
It may suffer from page break orintemnal fragmentation This scheme suffer from external fragmentation ‘The operating system maintain page map table for mapping between frames and pages. ‘Segment map table is used for ‘mapping. Tedoesn't support user view of memory Tt support user view of memory. The processor uses page no. and offsetto calculate absolute address The processor uses the segment no. and displacement to calculate the absolute address. Operating System 35 Abbaye Kumar Dende KIIT POLYTECHNIC VIRTUAL MEMORY Virtual memory is a technique which allows the execution of a process, even the logical address space is a greater than the physical memory. Ex:let the program size or logical address space is 15 MB., but the available memory is 12MB. So, the 12MB is loaded in main memory and remaining 3MB is loaded in the secondary memory. When the 3MB is needed for execution then swap out the 3MB from main memory tosecondary ‘memory and swap in 3MB from secondary memory to main memory. Advantages Large programs can be written, as virtual space available is huge compared to physical memory. Less I/O required, leads to faster and easy swapping of processes. More physical memory available, as programs are stored on virtual memory, so they occupy very less space on actual physical memory. DEMAND PAGING Demand paging is the application of virtual memory. tis the combination of paging and swapping. The criteria of this scheme is “a page is not loaded into the main memory from secondary memory,until it is needed”. So,a page is loaded into the main memory by demand, so this scheme is called as “Demand Paging”. For ex: Assume that the logical address space is 72 KB. The page and frame size is 8KB. So thelogical address space is divided into 9 pages, i.e numbered from Oto 8. The available main memory is 40 KB.ie.5 frames are available. The remaining 4 pages areloaded in the secondary storage. Whenever those pages are required ,the operating System swap-in those pages into main memory. Operating System 36 Abbaye Kamar Dende KIIT POLYTECHNIC P.NO [ F.No | V/I uBOP UBOP. a a }—+5 11 =z]o]>]>]o]o]=|> w ~Jo w ofele S|S 3|5 S|S t— 8 4 sM ‘UBOP tock uBoP Memory EM i MM In the above figure the mapping between pages and frames done by page map table. In demand paging the PMT consisting of 3 fields i.e. page no., frame no. and valid/invalid bit. If a page resides in the main memory the vil bit set to valid. Otherwise the page resides in the secondary storage and the bit set to Invalid. ‘The page numbers 1,3,4,6 are loaded in the secondary memory. So those bits are set to invalid. Remaining pages resides in the main memory, so those bits are set to valid. ie available free frames in main memory is 5,so 5 pages are loaded, remaining frames are used by other process(UBOP). Operating System 37 Abbaye Kumar Dende KIT POLYTECHNIC UNIT-4 DEVICE MANAGEMENT Magnetic Disk Structure: In modern computers, most of the secondary storage is in the form of magnetic disks. Hence, knowing the structure of'a magnetic disk is necessary to understand how the data in the disk is accessed by the computer. arm assembly evlinder platter Platter:- Each disc has a flat circular shape named as platter. The diameter of platter is from range 1.8"'to 2.25" inches. The information Are stored magnetically on platter. ‘Tracks:- Surface of the platter are logically divided into cireular tracks.Secto -ks are subdivided into no. of sections termed as sector, Each track may contain hundreds of sectors. 
UNIT-4  DEVICE MANAGEMENT

Magnetic Disk Structure
In modern computers, most of the secondary storage is in the form of magnetic disks. Hence, knowing the structure of a magnetic disk is necessary to understand how the data on the disk is accessed by the computer.
(Figure: a moving-head disk, showing platters on a spindle, tracks, cylinders, and read/write heads on an arm assembly.)
Platter: each disk has a flat circular shape called a platter. Platter diameters typically range from about 1.8 to 5.25 inches. Information is stored magnetically on the platters.
Tracks: the surface of each platter is logically divided into circular tracks.
Sectors: tracks are subdivided into a number of sections termed sectors. Each track may contain hundreds of sectors.
Cylinder: the set of tracks that are at one arm position makes up a cylinder. There may be thousands of concentric cylinders in a disk drive.
Read/write head: a head is present just above each surface of every platter.
Disk arm: the heads are attached to a disk arm that moves all the heads as a unit.
Note: the storage capacity of a typical disk drive is measured in gigabytes (GB).

Seek time: the time required for the read/write head to reach the desired track is the seek time. It has two components: (i) the initial start-up time, and (ii) the time taken to traverse the tracks (cylinders). An estimate is
    Ts = m x n + s
where Ts is the estimated seek time, n is the number of tracks traversed, m is a constant that depends on the disk drive, and s is the start-up time.

Rotational delay: the time required for the desired sector to rotate under the read/write head is called the rotational delay (latency). On average it is half of one full revolution; on modern hard disks this is a few milliseconds.

Disk scheduling algorithms
Whenever a process requests an I/O operation on the disk, it issues a system call to the operating system. The pending requests are processed in some sequence, and this sequence is decided by a disk scheduling algorithm. A disk scheduling algorithm schedules all the requests properly and in some order, and is applied in a multiprogramming system. The common disk scheduling algorithms are:
i) FCFS (First Come, First Served)
ii) SSTF (Shortest Seek Time First)
iii) SCAN
iv) C-SCAN (Circular SCAN)
v) LOOK
vi) C-LOOK

FIRST COME, FIRST SERVED (FCFS)
FCFS is not an efficient algorithm, but it is fair in scheduling disk accesses. For example, consider the following queue of requests for disk I/O to blocks on cylinders:
    98, 183, 37, 122, 14, 124, 65, 67
If the starting head position is 53, the head services the requests in arrival order (53 -> 98 -> 183 -> 37 -> 122 -> 14 -> 124 -> 65 -> 67), giving a total head movement of 640 cylinders.
The big jump from 183 to 37 could be avoided if 14, 37 and 122, 124 were somehow served together. This illustrates the problem with the FCFS algorithm: large head movement.

SSTF SCHEDULING
The main idea of the Shortest Seek Time First algorithm is to service all the requests close to the current position of the head before moving far away to service other requests.
Example: consider the same queue of disk block requests, 98, 183, 37, 122, 14, 124, 65, 67, with the head at 53. The head services the requests in the order 65, 67, 37, 14, 98, 122, 124, 183. There is a substantial improvement compared to the FCFS algorithm; the total head movement is:
    (65-53) + (67-65) + (67-37) + (37-14) + (98-14) + (122-98) + (124-122) + (183-124)
    = 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59
    = 236 cylinders
However, suppose 14 and 183 are in the queue and a request near 14 arrives: it will be served first, and if another request close to 14 then arrives, it will also be served first. This can lead to starvation of the request at 183 in the queue. SSTF is an improvement over FCFS, but it is not an optimal algorithm.
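The two totals above (640 cylinders for FCFS and 236 cylinders for SSTF) can be reproduced with a short sketch; the Python functions below compute total head movement for the request queue used in these examples.

```python
# Sketch comparing FCFS and SSTF total head movement on the request queue
# used above, with the head starting at cylinder 53.

requests = [98, 183, 37, 122, 14, 124, 65, 67]
start = 53

def fcfs(reqs, head):
    """Serve requests in arrival order and sum the absolute head movements."""
    total = 0
    for r in reqs:
        total += abs(r - head)
        head = r
    return total

def sstf(reqs, head):
    """Always serve the pending request closest to the current head position."""
    pending, total = list(reqs), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(fcfs(requests, start))  # 640
print(sstf(requests, start))  # 236
```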
SCAN SCHEDULING
In this algorithm the disk arm works like an elevator: it starts at one end, services requests all the way to the other end, and then reverses direction and services requests on the way back. To use the SCAN algorithm we need two pieces of information:
1. The direction of the scan
2. The starting head position
Consider the same example, with the head starting at 53 and moving in the direction of cylinder 0. The head services 37 and 14 on the way down, reverses, and then services 65, 67, 98, 122, 124 and 183 on the way up. The total head movement is:
    (53-37) + (37-14) + (65-14) + (67-65) + (98-67) + (122-98) + (124-122) + (183-124)
    = 16 + 23 + 51 + 2 + 31 + 24 + 2 + 59
    = 208 cylinders
(If the head must travel all the way to cylinder 0 before reversing, as a strict SCAN requires, the total is 53 + 183 = 236 cylinders.)
SCAN moves in one direction and services all the requests it meets, but while returning in the reverse direction it passes over cylinders whose requests have just been serviced. If most of the new requests are at the opposite end, it would be better to go to that end directly; the next algorithm does exactly that.

C-SCAN ALGORITHM
In this algorithm the head moves from one end of the disk to the other, servicing requests along the way; however, it does not service requests on the return trip. Instead it goes back to the beginning directly, treating the cylinders as a circular list.
For the same example, with the head at 53 moving toward the larger cylinder numbers, the head services 65, 67, 98, 122, 124 and 183 on its way up, then jumps back and services 14 and 37:
    (65-53) + (67-65) + (98-67) + (122-98) + (124-122) + (183-124) + (37-14)
    = 12 + 2 + 31 + 24 + 2 + 59 + 23
    = 153 cylinders
(Strictly, C-SCAN sweeps to the last cylinder, 199, and restarts from cylinder 0; the count above measures only the movement while servicing requests and ignores the return jump.)

LOOK Disk Scheduling Algorithm
- The LOOK algorithm is an improved version of the SCAN algorithm.
- The head starts from the first request at one end of the disk and moves towards the last request at the other end, servicing all the requests in between.
- After reaching the last request at the other end, the head reverses its direction.
- It then returns to the first request at the starting end, servicing all the requests in between.
- The same process repeats.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The LOOK scheduling algorithm is used. The head is initially at cylinder 53, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199.
Total head movement incurred while servicing these requests
    = (65-53) + (67-65) + (98-67) + (122-98) + (124-122) + (183-124) + (183-41) + (41-14)
    = 12 + 2 + 31 + 24 + 2 + 59 + 142 + 27
    = 299 cylinders

C-LOOK Disk Scheduling Algorithm
- The Circular LOOK (C-LOOK) algorithm is an improved version of the LOOK algorithm.
- The head starts from the first request at one end of the disk and moves towards the last request at the other end, servicing all the requests in between.
- After reaching the last request at the other end, the head reverses its direction.
- It then returns to the first request at the starting end without servicing any request in between.
- The same process repeats.
Consider the same disk queue, with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The C-LOOK scheduling algorithm is used. The head is initially at cylinder 53, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199.
Total head movement incurred while servicing these requests
    = (65-53) + (67-65) + (98-67) + (122-98) + (124-122) + (183-124) + (183-14) + (41-14)
    = 12 + 2 + 31 + 24 + 2 + 59 + 169 + 27
    = 326 cylinders
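Assuming, as in the examples above, that the head at cylinder 53 first sweeps toward the larger cylinder numbers, the LOOK and C-LOOK totals can be checked with the following simplified sketch (a model of the head movement, not a full scheduler).

```python
# Sketch of LOOK and C-LOOK total head movement. Assumes the head first moves
# toward larger cylinder numbers, as in the worked examples above.

def look(requests, head):
    """LOOK: sweep up to the largest request, then reverse down to the smallest."""
    up = [r for r in requests if r >= head]
    down = [r for r in requests if r < head]
    if not up:                                    # nothing above the head: just sweep down
        return head - min(down)
    total = max(up) - head
    if down:
        total += max(up) - min(down)
    return total

def c_look(requests, head):
    """C-LOOK: sweep up, jump back to the smallest request, then sweep up again."""
    up = [r for r in requests if r >= head]
    down = [r for r in requests if r < head]
    if not up:
        return head - min(down)
    total = max(up) - head
    if down:
        total += (max(up) - min(down)) + (max(down) - min(down))
    return total

queue = [98, 183, 41, 122, 14, 124, 65, 67]
print(look(queue, 53))    # (183-53) + (183-14)            -> 299
print(c_look(queue, 53))  # (183-53) + (183-14) + (41-14)  -> 326
```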
Device Management
The main functions of the device manager are:
1. Monitor the status of all devices, including storage drives, printers and other peripherals.
2. Enforce pre-set policies on which process gets which device, and for how long.
3. Deal with the allocation of devices to processes.
4. Deal with the de-allocation of devices from processes, both on a temporary basis (e.g. when the process is interrupted) and on a permanent basis (e.g. when the process is completed).

Device management techniques
There are three techniques for device management:
i) Dedicated
ii) Shared
iii) Virtual
Dedicated: these are devices that are assigned to one process at a time; the process releases the device only once it has completed. The problem with this is that only one user can use the device at a time, which is inefficient if the device is not in use 100% of the time that it is locked by that user. Examples: printers and card readers.
Shared: these are devices that can be shared between several processes. A hard disk, for example, is shared by interleaving the requests of different processes. All conflicts for the device are resolved by predetermined policies that decide which request is handled first.
Virtual: these devices are a combination of dedicated and shared devices. A printer is a dedicated device, but using spooling (queues) it can be shared. A print job is not sent straight to the printer; instead it goes to the disk (the spool) until it is fully prepared with all the necessary printer sequences and formatting, and only then does it go to the printer. This ensures that printers (and other I/O devices) are used efficiently.

I/O traffic controller
The I/O traffic controller keeps track of every device, control unit and channel, and maintains all of their status information. The traffic controller answers three questions:
i) Is there a path available to serve the I/O request?
ii) If more than one path is available, which one should be used?
iii) If no path is currently available, when will one be free?
To answer these questions, the I/O traffic controller uses the following data structures (control blocks):
UCB (Unit Control Block):
- device unit identification
- status of the device
- list of control units connected to the device
- list of processes waiting for this device
CUCB (Control Unit Control Block):
- control unit identification
- list of devices connected to the control unit
- list of channels connected to the control unit
- list of processes waiting for the control unit
CCB (Channel Control Block):
- channel identification
- status of the channel
- list of control units connected to the channel
- list of processes waiting for the channel

I/O scheduler
If there are more pending I/O requests than available paths, it is necessary to choose which I/O request is satisfied first. The component that applies this scheduling policy is known as the I/O scheduler.

I/O device handlers
- The device handler processes I/O interrupts, handles error conditions, and provides the detailed scheduling algorithms, which are extremely device dependent.
- Each type of I/O device has its own device handler algorithm, such as FCFS, SSTF or SCAN.

Spooling
- SPOOL is an acronym for Simultaneous Peripheral Operations On-Line.
- It is a kind of buffering mechanism: a process in which data is temporarily held, to be used and executed later by a device, program or the system.
- Data is sent to and stored in memory or other storage until the program or device requests it for execution.
(Figure: spooling, with input and output devices connected to the CPU and main memory through the spool on disk.)
- Spooling uses a buffer (the spool) to manage files to be printed. Files which are spooled are queued and copied to the printer one at a time.
- To manage these I/O requests, the operating system has a component called the spooler. The spooler manages I/O requests to a printer; it operates in the background and creates a printing schedule.
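As an illustration of the spooling idea, here is a minimal Python sketch in which print jobs are placed in a queue (the spool) and a background spooler thread sends them to a slow "printer" one at a time. The file names and the delay are illustrative assumptions.

```python
# Minimal sketch of spooling: jobs are queued and a background spooler thread
# feeds them to the (slow) printer one at a time, so submitting processes
# never wait for the printer itself.
import queue
import threading
import time

spool = queue.Queue()                      # the spool: holds jobs until the printer is free

def spooler():
    """Background worker: take one job at a time from the spool and 'print' it."""
    while True:
        job = spool.get()
        if job is None:                    # sentinel: shut the spooler down
            break
        time.sleep(0.1)                    # stands in for the slow printer
        print(f"printed: {job}")
        spool.task_done()

worker = threading.Thread(target=spooler, daemon=True)
worker.start()

# Processes submit jobs and continue immediately instead of waiting on the printer.
for doc in ["report.pdf", "notes.txt", "invoice.pdf"]:
    spool.put(doc)

spool.join()                               # wait until everything has been printed
spool.put(None)
```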
Race condition
A race condition is a situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.

UNIT-5  DEADLOCKS

System Model
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each of which consists of a number of identical instances. A process may utilize a resource in the following sequence:
1) Request: the process requests a resource through a system call. If the resource is not available, the process must wait. Example system calls: open(), malloc(), new(), request().
2) Use: after getting the resource, the process can make use of it, e.g. it prints to the printer or reads from the file.
3) Release: after the completion of the task the resource is no longer required by the process, so it should be released. Example system calls: close(), free(), delete(), release().
Resources include the CPU, memory, I/O devices, etc. When a process requests a resource and the resource is free, it is allocated to that process; but if the resource is busy with another process, the requesting process has to wait until the resource is free.

Deadlock
Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
For example, process 1 is holding resource 1 and waiting for resource 2, which is acquired by process 2, while process 2 is waiting for resource 1. Neither process can proceed.
(Figure: two processes, each holding one resource and waiting for the resource held by the other.)

REASONS / NECESSARY CONDITIONS FOR DEADLOCK
A deadlock situation can arise if the following four conditions hold simultaneously in the system:
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
1) MUTUAL EXCLUSION: at least one resource must be held in a non-sharable mode, meaning only one process can use that resource at a time. E.g. a printer is non-sharable, whereas a hard disk is a sharable resource. If the resource is not free, the requesting process has to wait until the resource is released by the other process.
2) HOLD AND WAIT: there must be a process which is already holding (using) one resource and requesting (waiting for) another resource which is currently held by another waiting process.
3) NO PRE-EMPTION: resources cannot be pre-empted. A resource cannot be taken away from a process; it is released only when the process has completed its task with it, e.g. a printer is released only when the printing work is finished.
4) CIRCULAR WAIT: there must be a circular chain of waiting processes. Suppose there are n processes {P0, P1, ..., Pn-1}: P0 is waiting for the resource held by P1, P1 is waiting for the resource held by P2, P2 is waiting for the resource held by P3, and so on, until Pn-1, which is waiting for the resource held by P0.
If all four of the above conditions are satisfied in a system, then deadlock may occur; but if any one of the conditions is not satisfied, then deadlock will never occur.
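The hold-and-wait and circular-wait conditions can be demonstrated with two locks acquired in opposite orders. The sketch below (written in Python for illustration; the resource names are hypothetical) will normally leave each thread holding one lock and waiting forever for the other.

```python
# Demonstration of hold-and-wait plus circular wait: each thread holds one lock
# and waits for the lock held by the other, so neither can ever proceed.
import threading
import time

resource_1 = threading.Lock()
resource_2 = threading.Lock()

def process_1():
    with resource_1:            # hold resource 1 ...
        time.sleep(0.1)         # give process 2 time to grab resource 2
        with resource_2:        # ... then wait for resource 2 (held by process 2)
            print("process 1 finished")

def process_2():
    with resource_2:            # hold resource 2 ...
        time.sleep(0.1)
        with resource_1:        # ... then wait for resource 1 (held by process 1)
            print("process 2 finished")

# daemon=True only so the demo script can exit; the two threads themselves are
# deadlocked and neither print statement inside them is ever reached.
t1 = threading.Thread(target=process_1, daemon=True)
t2 = threading.Thread(target=process_2, daemon=True)
t1.start(); t2.start()
time.sleep(1)
print("main: both worker threads are still blocked -> deadlock")
```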
Resource Allocation Graph (RAG)
- A resource allocation graph is a diagrammatic representation used to determine the existence of deadlock in the system.
- It is a directed graph.
- The RAG consists of a number of nodes and edges. It contains:
  i) Process nodes, drawn as circles.
  ii) Resource nodes, drawn as squares.
- A bullet (dot) symbol inside a resource node represents an instance of that resource; instances of a resource are identical resources of the same type.
- There are two kinds of edges:
  i) Request edge
  ii) Allocation / assignment edge
REQUEST EDGE: whenever a process requests a resource, a request edge is drawn from the process to the resource.
ASSIGNMENT EDGE: whenever a resource is allocated to a process, the request edge is converted into an assignment edge, drawn from the instance of the resource to the process.
NOTE:
- If the RAG contains no cycle, then there is no deadlock in the system.
- If the RAG contains a cycle, then there may be a deadlock in the system.
- If every resource has exactly one instance, then a cycle indicates a deadlock.
- If resources have multiple instances, then a cycle indicates only that "there may be a deadlock".
Process wait-for graph (PWFG): a process wait-for graph can be obtained by removing (collapsing) the resource nodes in the RAG.

The resource allocation graph shown in the figure has the following situation:
- The sets P, R and E are:
  P = {P1, P2, P3}
  R = {R1, R2, R3, R4}
  E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}
- The resource instances are:
  Resource R1 has one instance.
  Resource R2 has two instances.
  Resource R3 has one instance.
  Resource R4 has three instances.
- The process states are:
  Process P1 is holding an instance of R2 and waiting for an instance of R1.
  Process P2 is holding an instance of R1 and of R2 and waiting for an instance of R3.
  Process P3 is holding an instance of R3.
The following example shows a resource allocation graph with a deadlock; it contains the cycles:
- P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
- P2 -> R3 -> P3 -> R2 -> P2

Methods for Handling Deadlocks
Deadlocks can be handled by any of the following methods:
- Deadlock prevention
- Deadlock avoidance
- Deadlock detection and recovery

DEADLOCK PREVENTION
Deadlocks can be prevented by making sure that at least one of the four necessary conditions (mutual exclusion, hold and wait, no pre-emption, circular wait) cannot hold. If one of the conditions is prevented from occurring, deadlock will not occur.
Eliminate mutual exclusion: it is not generally possible to remove mutual exclusion, because some resources, such as tape drives and printers, are inherently non-sharable.
Eliminate hold and wait:
1. Allocate all required resources to the process before the start of its execution. This eliminates the hold-and-wait condition, but it leads to low device utilization: for example, if a process requires a printer only at a later time and the printer is allocated before the start of its execution, the printer remains blocked until the process has completed.
2. Alternatively, allow a process to make a new request for resources only after releasing the current set of resources it holds. This solution may lead to starvation.
Eliminate no pre-emption: pre-empt resources from a process when the resources are required by other, higher-priority processes.
Eliminate circular wait: assign a numerical ordering to every resource and require each process to request resources only in increasing order of this numbering. For example, if process P1 has been allocated R5, a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
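A minimal sketch of the circular-wait elimination just described: resources are numbered and every process acquires them in ascending order, so a cycle of waiting can never form. The resource numbers and helper names below are hypothetical.

```python
# Sketch of breaking circular wait by imposing a global resource ordering:
# every process must acquire resources in increasing order of their number,
# so no cycle in the wait-for relation can arise.
import threading

# Hypothetical numbering: R1 < R2 < R3 (the numbers define the ordering).
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*resource_numbers):
    """Acquire the requested resources in ascending numeric order and return them."""
    ordered = sorted(resource_numbers)
    for n in ordered:
        resources[n].acquire()
    return ordered

def release(ordered):
    for n in reversed(ordered):
        resources[n].release()

# Even if a process *asks* for (3, 1), it actually locks 1 before 3, so no two
# processes can ever end up holding each other's next resource.
held = acquire_in_order(3, 1)
print("holding resources", held)
release(held)
```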
DEADLOCK AVOIDANCE
- To avoid deadlock, the operating system must be given additional information about the processes in advance, such as which process will request which resource, at what time, and for how long.
- With this information the operating system can find a sequence of execution; if all the processes execute in that sequence, the system will never enter a deadlocked state.
SAFE STATE: a state is safe if the system can allocate the available resources to each process, in some order, and still avoid deadlock.
(Figure: state spaces, where the deadlock states are a subset of the unsafe states, which are in turn a subset of all states.)
SAFE SEQUENCE: a sequence of processes <P1, P2, ..., Pn> is a safe sequence if the available resources can be allocated to the processes in that order while avoiding deadlock. If no safe sequence exists, the system is said to be in an unsafe state. An unsafe state may lead to a deadlock.
Example: suppose a system contains 12 tape drives.
  Process   Max.   Allocation   Need
  P0        10     5            5
  P1        4      2            2
  P2        9      2            7
Free = 12 - 9 = 3.
So the safe sequence is <P1, P0, P2>. If the system always remains in a safe state, deadlock will never occur. Therefore, when a process requests a resource that is currently available, the system must decide whether the resource can be allocated immediately or whether the process must wait; the request is granted only if the allocation leaves the system in a safe state.
This simple method is used when the system contains only one type of resource with multiple instances.
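The safe-sequence check for this single-resource example can be reproduced in a few lines; the sketch below uses exactly the tape-drive figures from the table above.

```python
# Worked check of the 12-tape-drive example: with a single resource type, the
# state is safe if we can repeatedly find a process whose remaining need fits
# in the free drives, let it finish, and reclaim its allocation.

processes = {                       # name: (max demand, currently allocated)
    "P0": (10, 5),
    "P1": (4, 2),
    "P2": (9, 2),
}
total = 12
free = total - sum(alloc for _, alloc in processes.values())   # 12 - 9 = 3

def safe_sequence(procs, free):
    remaining = dict(procs)
    order = []
    while remaining:
        runnable = [p for p, (mx, alloc) in remaining.items() if mx - alloc <= free]
        if not runnable:
            return None                      # unsafe: no process's need fits the free drives
        p = runnable[0]
        free += remaining.pop(p)[1]          # p finishes and releases its allocation
        order.append(p)
    return order

print(safe_sequence(processes, free))        # ['P1', 'P0', 'P2'] -> the state is safe
```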
Resource allocation graph algorithm (multiple resource types, each with a single instance)
We can use this algorithm for deadlock avoidance if the system contains different types of resources but each resource has only a single instance. In this graph, besides the assignment edge and the request edge, a third edge known as a "claim edge" is added. A claim edge from process Pi to resource Rj indicates that Pi may request Rj at some time in the future. A claim edge is similar to a request edge, but it is drawn as a dashed line.
If a process requests resource Rj, the request is granted only if converting the request edge Pi -> Rj into the assignment edge Rj -> Pi does not form a cycle in the graph. If two processes hold claim edges to the same resource, granting the resource to one of them could complete a cycle; in that case the request is refused, because allocating the resource could lead to a deadlock.

Banker's algorithm (multiple resource types with multiple instances)
- The resource allocation graph algorithm is not applicable when there are multiple instances of each resource type.
- The Banker's algorithm is used for systems having multiple resource types, each with multiple instances.
- The Banker's algorithm is less efficient than the RAG algorithm.
- The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers.
- When a new process enters the system:
  - It must declare the maximum number of instances of each resource type that it may need.
  - This number may not exceed the total number of resources in the system.
- When a process requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state.
  - If YES, the resources are allocated to that process.
  - If NO, the process has to wait until some other process releases enough resources.
The Banker's algorithm consists of two parts:
- the safety algorithm
- the resource-request algorithm
The safety algorithm is used to determine whether the system is in a safe state or not. The resource-request algorithm is used to determine whether or not a request generated by a process for a resource would lead the system to an unsafe state. The algorithm uses several data structures (vectors and matrices). Let the system have n processes and m resource types.
1) AVAILABLE: a vector of length m indicating the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.
2) MAX: an n x m matrix defining the maximum demand of each process. If Max[i, j] = k, then process Pi may request at most k instances of resource type Rj.
3) ALLOCATION: an n x m matrix defining the number of resources of each type currently allocated to each process. If Allocation[i, j] = k, then process Pi is currently allocated k instances of resource type Rj.
4) NEED: an n x m matrix indicating the remaining resource need of each process. If Need[i, j] = k, then process Pi may need k more instances of resource type Rj to complete its task.
   Need[i, j] = Max[i, j] - Allocation[i, j]

Safety algorithm
The algorithm for finding out whether or not the system is in a safe state can be described as follows.
STEP 1: Let Work be a vector of length m and Finish a vector of length n.
        Set Work = Available and Finish[i] = false for i = 1, 2, ..., n.
STEP 2: Find an i such that Finish[i] = false and Need_i <= Work.
        If no such i exists, go to step 4.
STEP 3: Work = Work + Allocation_i; Finish[i] = true. Go to step 2.
STEP 4: If Finish[i] = true for all i, then the system is in a safe state.
The complexity of the algorithm is O(m x n^2), i.e. the algorithm may require on the order of m x n^2 operations to decide whether a state is safe.

Resource-request algorithm
Let Request_i be the request vector (of length m) for process Pi.
1. If Request_i <= Need_i, go to step 2. Otherwise raise an error condition, since the process has exceeded its maximum claim.
2. If Request_i <= Available, go to step 3. Otherwise Pi must wait, since the resources are not yet available.
3. Pretend to allocate the requested resources:
   Available = Available - Request_i
   Allocation_i = Allocation_i + Request_i
   Need_i = Need_i - Request_i
   Then run the safety algorithm on the resulting state: if it is safe, the allocation is made; otherwise Pi must wait and the old state is restored.
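Here is a compact Python sketch of the safety algorithm described above. The matrices are illustrative, textbook-style values, not data from these notes; the function returns a safe sequence of process indices, or None if the state is unsafe.

```python
# Sketch of the Banker's safety algorithm for n processes and m resource types,
# using small illustrative matrices (not taken from these notes).

available = [3, 3, 2]                       # free instances of each resource type
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(max_demand, allocation)]

def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    work = list(available)                  # Work = Available
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None

print(is_safe(available, allocation, need))   # [1, 3, 4, 0, 2] -> the state is safe
```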
DEADLOCK DETECTION
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may occur. In that environment the system should provide:
- an algorithm that checks whether a deadlock has occurred in the system (deadlock detection), and
- an algorithm to recover the system from the deadlocked state (deadlock recovery).

Single instance of each resource type
If all the resources have only a single instance, then we can detect a deadlocked state by using a wait-for graph (WFG).
- The WFG is similar to the RAG; the only difference is that its vertices are processes only.
- There is an edge from Pi to Pj in the WFG if there is an edge from Pi to some resource R and an edge from R to Pj in the RAG (i.e. Pi is waiting for a resource held by Pj).
(Figure: a resource allocation graph and the corresponding wait-for graph for processes P1, P2 and P3.)
In the figure there are two cycles: P1 -> P2 -> P1 and P2 -> P3 -> P2. So the system contains a deadlock.

Several instances of each resource type
The wait-for-graph method is not applicable when resources have several instances, so another method is needed. The algorithm used is known as the deadlock detection algorithm. It is similar to the Banker's algorithm and uses several data structures:
- Available: a vector of length m indicating the number of available resources of each type.
- Allocation: an n x m matrix defining the number of resources of each type currently allocated to each process.
- Request: an n x m matrix indicating the current request of each process. If Request[i, j] = k, then process Pi is requesting k more instances of resource type Rj.

Detection algorithm
STEP 1: Let Work be a vector of length m and Finish a vector of length n.
        Set Work = Available.
        For i = 1, 2, ..., n: if Allocation_i != 0 then Finish[i] = false, otherwise Finish[i] = true.
STEP 2: Find an index i such that Finish[i] = false and Request_i <= Work.
        If no such i exists, go to step 4.
STEP 3: Work = Work + Allocation_i; Finish[i] = true. Go to step 2.
STEP 4: If Finish[i] = false for some i, then the system is in a deadlocked state; moreover, each such process Pi is deadlocked.

RECOVERY FROM DEADLOCK
When the detection algorithm detects that a deadlock exists in the system, there are two methods for breaking the deadlock:
- One solution is simply to abort processes, one by one, to break the circular wait.
- The second solution is to pre-empt some resources from one or more of the deadlocked processes.

PROCESS TERMINATION
This method is used to recover from deadlock. One of two approaches is used:
- abort all deadlocked processes, or
- abort one process at a time until the deadlock cycle is eliminated.
i. Abort all deadlocked processes: release all the processes in the deadlocked state and start the allocation again from the starting point. This is an expensive method.
ii. Abort one process at a time until the deadlock cycle is broken: first abort one of the processes in the deadlocked state and allocate its resources to some other process in the deadlocked state. Then check whether the deadlock is broken or not. If YES, the deadlock is eliminated; if NO, abort another process from the deadlocked state and check again. Continue this until the system recovers from the deadlock. This is also an expensive method, but better than the first one. It carries an overhead, because the deadlock detection algorithm must be invoked again after each process is aborted. (Example: "End Task" in Windows.)
There are several factors which determine which process should be aborted:
I. The priority of the process.
II. How long the process has computed and how much longer it needs to complete.
III. How many resources the process has already used.
IV. How many more resources it needs to complete.

RESOURCE PRE-EMPTION
There are three issues to address when eliminating deadlocks using resource pre-emption:
- selecting a victim,
- rollback,
- starvation.
Selecting a victim: select a victim resource (and the process holding it) from the deadlocked state and pre-empt it. The victim should be chosen so that the cost is minimized.
Rollback: when a resource is pre-empted from a process, the process cannot simply continue, so it must be rolled back to some safe state and restarted from that state rather than from the beginning. This method requires the system to keep more information about the state of all running processes.
Starvation: how do we ensure that starvation will not occur? Resources should not be pre-empted from the same process again and again; otherwise that process would never complete. That is, a process can be picked as a victim only a finite number of times; otherwise starvation is created.
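To complement the single-instance detection method above, here is a small sketch that checks a wait-for graph for a cycle using depth-first search. The graph below encodes the two-cycle example mentioned earlier (P1 -> P2 -> P1 and P2 -> P3 -> P2).

```python
# Sketch of deadlock detection for single-instance resources: build the wait-for
# graph (an edge Pi -> Pj means Pi waits for a resource held by Pj) and look for
# a cycle with a depth-first search.

wait_for = {
    "P1": ["P2"],
    "P2": ["P1", "P3"],
    "P3": ["P2"],
}

def has_cycle(graph):
    """Return True if the wait-for graph contains a cycle (i.e. a deadlock)."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:            # back edge: we are inside a cycle
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(p) for p in graph if p not in done)

print(has_cycle(wait_for))   # True -> the processes are deadlocked
```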
