Unit 1
An operating system (OS) is a program that, after being initially loaded into the computer by a boot program, manages all of the other application programs in a computer. The application programs make use of the operating system by making requests for services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface, such as a command-line interface (CLI) or a graphical UI (GUI). The operating system manages a computer's software and hardware resources, including: input devices such as a keyboard and mouse; output devices such as display monitors and printers; network devices such as modems, routers and network connections; and storage devices such as internal and external drives.
System Calls: System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++, although certain low-level tasks may have to be written in assembly language. Consider, for example, a program that copies data from one file to another. Once the two file names are obtained, the program must open the input file and create the output file. Each of these operations requires a system call, and there are possible error conditions for each operation. When the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. After the entire file is copied, the program may close both files (another system call), write a message to the console or window (more system calls), and finally terminate normally (the final system call). Frequently, systems execute thousands of system calls per second.
Types of System Calls: System calls can be grouped roughly into six major categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.
Operating System Structure: A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions. These components are interconnected and melded into a kernel.
Layered Approach: With proper hardware support, the OS can be broken into pieces that are smaller and more appropriate than those allowed by the original MS-DOS and UNIX systems. The OS can then retain much greater control over the computer and over the applications that make use of that computer. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems. Under a top-down approach, the overall functionality and features are determined and are separated into components. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit, provided that the external interface of the routine stays unchanged.
Components of OS: A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users. The hardware - the Central Processing Unit (CPU), the memory, and the Input/Output devices - provides the basic computing resources for the system. The application programs - such as word processors, spreadsheets, compilers, and Web browsers - define the ways in which these resources are used to solve users' computing problems. The operating system controls the hardware and coordinates its use among the various application programs for the various users. We can also view a computer system as consisting of hardware, software, and data. The operating system provides the means for proper use of these resources in the operation of the computer system. Operating systems have two viewpoints: that of the user and that of the system.
Functions of OS: An Operating System acts as a communication bridge (interface) between the user and computer hardware. The purpose of an operating system is to provide a platform on which a user can execute programs in a convenient and efficient manner. The functions are specified below:
(1) Processor Management: A program does nothing unless its instructions are executed by a CPU. A program in execution is a process. A process needs certain resources - including CPU time, memory, files, and I/O devices - to accomplish its task. These resources are either given to the process when it is created or allocated to it while it is running. The operating system is responsible for the following activities in connection with process management: *Scheduling processes and threads on the CPUs *Creating and deleting both user and system processes *Suspending and resuming processes *Providing mechanisms for process synchronization *Providing mechanisms for process communication
(2) Memory Management: For a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. The operating system is responsible for the following activities in connection with memory management: *Keeping track of which parts of memory are currently being used and by whom *Deciding which processes (or parts thereof) and data to move into and out of memory *Allocating and de-allocating memory space as needed
(3) Storage Management: The operating system provides a uniform, logical view of information storage. The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. The operating system maps files onto physical media and accesses these files via the storage devices.
3.1 File-System Management: It is one of the most visible components of an operating system. Computers can store information on several different types of physical media; the file abstraction makes them easier to use.
3.2 Mass-Storage Management: The main memory is too small to accommodate all data and programs, and because the data that it holds are lost when power is lost, the computer system must provide secondary storage to back up main memory.
3.3 Caching: Cache is a high-speed memory of limited size located between the RAM and the CPU registers. Internal programmable registers, such as index registers, provide a high-speed cache for main memory.
(4) Protection and Security: Protection is any mechanism for controlling the access of processes or users to the resources defined by a computer system. This mechanism must provide means to specify the controls to be imposed and means to enforce the controls. Protection can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by another subsystem that is malfunctioning.
(5) Device Management: An Operating System manages device communication via their respective drivers. It performs the following activities for device management: *Keeps track of all devices connected to the system *Designates a program responsible for every device, known as the Input/Output controller *Decides which process gets access to a certain device and for how long *Allocates devices in an effective and efficient way *De-allocates devices when they are no longer required
(6) Control over system performance: Monitors overall system health to help improve performance. Records the response time between service requests and system response. Improves performance by providing important information needed to troubleshoot problems.
(7) Coordination between other software and users: Operating systems also coordinate and assign interpreters, compilers, assemblers and other software to the various users of the computer systems.
Operating System Operations: Modern operating systems are interrupt driven. If there are no processes to execute and no I/O devices to service, an operating system will sit quietly, waiting for something to happen. Events are almost always signalled by the occurrence of an interrupt or a trap (a software-generated interrupt caused either by an error or by a specific request from a user program that an operating system service be performed). For each type of interrupt, separate segments of code in the operating system determine what action should be taken.
Dual-Mode Operation: For proper execution, the operating system must be able to distinguish between the execution of operating-system code and user-defined code. The approach taken by most computer systems is to provide hardware support that allows us to differentiate among various modes of execution. We need two separate modes of operation: user mode and kernel mode (supervisor mode, system mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.
Evolution of OS: The evolution of OS is directly dependent on the development of computer systems and how users use them. The following is a timeline of computing systems through the past 75 years.
Early Evolution: 1945: ENIAC, Moore School of Engineering, University of Pennsylvania. 1949: EDSAC and EDVAC. 1949: BINAC - a successor to the ENIAC. 1951: UNIVAC by Remington. 1952: IBM 701. 1956: The interrupt. 1954-1957: FORTRAN was developed.
OSs - Late 1950s: By the late 1950s OSs were well improved and started supporting the following usages: single stream batch processing; common, standardized input/output routines for device access; program transition capabilities to reduce the overhead of starting a new job; error recovery to clean up after a job terminated abnormally; and job control languages that allowed users to specify the job definition and resource requirements.
OSs - In 1960s: 1961: The dawn of minicomputers. 1962: Compatible Time-Sharing System (CTSS) from MIT. 1963: Burroughs Master Control Program (MCP) for the B5000 system. 1964: IBM System/360. 1960s: Disks became mainstream. 1966: Minicomputers got cheaper, more powerful, and really useful. 1967-1968: The mouse was invented. 1964 and onward: Multics. 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.
OS Features by 1970s: Multi-user and multitasking were introduced. Dynamic address translation hardware and virtual machines came into the picture. Modular architectures came into existence. Personal, interactive systems came into existence.
After 1970: 1971: Intel announces the microprocessor. 1972: IBM comes out with VM: the Virtual Machine OS. 1973: UNIX 4th Edition is published. 1973: Ethernet. 1974: The Personal Computer Age begins. 1974: Gates and Allen wrote BASIC for the Altair. 1976: Apple II. August 12, 1981: IBM introduces the IBM PC. 1983: Microsoft begins work on MS-Windows. 1984: Apple Macintosh comes out. 1990: Microsoft Windows 3.0 comes out. 1991: GNU/Linux. 1992: The first Windows virus comes out. 1993: Windows NT. 2007: iOS. 2008: Android.
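The file-copy sequence of system calls discussed in this unit (open the input, create the output, then read, write, and close) can be sketched with Python's thin wrappers over the raw calls. This is only an illustration; the function name copy_file and the buffer size are my own choices, not from the notes.

```python
import os

def copy_file(src, dst, bufsize=4096):
    """Copy src to dst using raw system-call wrappers (open/read/write/close)."""
    in_fd = os.open(src, os.O_RDONLY)   # may fail: no such file, or access denied
    out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        while True:
            chunk = os.read(in_fd, bufsize)   # read system call
            if not chunk:
                break                          # end of input file
            os.write(out_fd, chunk)            # write system call
    finally:
        os.close(in_fd)                        # close system calls
        os.close(out_fd)
```

Even this tiny loop issues several system calls per iteration, which is why a busy system executes thousands of them per second.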
Unit 2
Process State: As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
It is important to realize that only one process can be running on any processor at any instant.
Process Control Block: Each process is represented in the operating system by a Process Control Block (PCB) - also called a task control block. It contains many pieces of information associated with a specific process, including these: o Process state: The state may be new, ready, running, waiting, halted, and so on. o Program counter: The counter indicates the address of the next instruction to be executed for this process. o CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward. o CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters. o Memory-management information: This information may include such information as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system. o Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on. o I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
Synchronization: Communication between processes takes place through calls to send() and receive() primitives. There are different design options for implementing each primitive. Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous. *Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox. *Non-blocking send: The sending process sends the message and resumes operation. *Blocking receive: The receiver blocks until a message is available. *Non-blocking receive: The receiver retrieves either a valid message or a null.
Context Switch: Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds are a few milliseconds. Context-switch times are highly dependent on hardware support.
Process Scheduling: The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
Scheduling Queues: As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
Schedulers: The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them. The primary distinction between these two schedulers lies in frequency of execution. The short-term scheduler must select a new process for the CPU frequently. A process may execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler executes at least once every 100 milliseconds. Because of the short time between executions, the short-term scheduler must be fast.
Operations on Processes: The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination.
*Process Creation: A process may create several new processes, via a create-process system call, during the course of execution. The creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes.
*Process Termination: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call. Termination can occur in other circumstances as well. A process can cause the termination of another process via an appropriate system call (for example, TerminateProcess() in Win32). Usually, such a system call can be invoked only by the parent of the process that is to be terminated.
Buffering: Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Basically, such queues can be implemented in three ways: *Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. The sender must block until the recipient receives the message. *Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. The link's capacity is finite, however. If the link is full, the sender must block until space is available in the queue. *Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks. The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as systems with automatic buffering.
Inter Process Communication: Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process. Reasons for providing an environment that allows process cooperation include:
*Information sharing: Since several users may be interested in the same piece of information, we must provide an environment to allow concurrent access to such information.
*Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
*Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
*Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an Inter Process Communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: (1) shared memory and (2) message passing. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
Process Scheduling Criteria: Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favour one class of processes over another. Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:
CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short ones it may be ten processes per second.
Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
Response time: In an interactive system, turnaround time may not be the best criterion. Another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
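The turnaround-time and waiting-time definitions can be checked with a tiny calculation. This is only a sketch: the function names and the (arrival, burst) workload are invented for illustration, and the jobs are assumed to run back-to-back in submission order with no I/O.

```python
def turnaround_time(arrival, completion):
    """Interval from submission of a process to its completion."""
    return completion - arrival

def waiting_time(arrival, completion, burst):
    """Turnaround time minus the time actually spent executing (the CPU burst)."""
    return turnaround_time(arrival, completion) - burst

def run_in_order(jobs):
    """Waiting time of each (arrival, burst) job when jobs run in submission order."""
    clock = 0
    waits = []
    for arrival, burst in jobs:
        clock = max(clock, arrival) + burst   # completion time of this job
        waits.append(waiting_time(arrival, clock, burst))
    return waits
```

For example, run_in_order([(0, 5), (1, 3), (2, 2)]) gives waiting times [0, 4, 6]: the first job never waits, while later jobs wait for everything queued ahead of them.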
Process Scheduling Algorithms: CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.
First-Come, First-Served Scheduling: By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
Shortest-Job-First Scheduling: The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.
Priority Scheduling: A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa. Priorities are generally indicated by some fixed range of numbers; here assume that low numbers represent high priority.
Round-Robin Scheduling: The Round-Robin (RR) scheduling algorithm is designed especially for timesharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
Multilevel Queue Scheduling: A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground (interactive) and background (batch) processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
Multilevel Feedback Queue Scheduling: This algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Unit 3
Deadlock is a situation where a set of processes are permanently blocked because each process is holding some resources and waiting for another resource held by others - thus none of the processes can proceed. Eg: Process P1 is holding resource R1 and waiting for R2, which is acquired by P2, and P2 is waiting for R1.
Deadlock Characterisation
Necessary Conditions: A deadlock situation may occur if the following four conditions hold simultaneously in a system: a) Mutual Exclusion b) Hold and Wait c) No Preemption d) Circular Wait.
Mutual Exclusion: At least one resource must be held in non-sharable mode; that is, only one process can use the resource. If another process requests that resource, the requesting process must wait until the resource has been released.
Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
No Preemption: Resources allocated to a process can't be forcibly taken from it unless the process releases the resource after completing its task.
Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for the resource that is held by P2, ..., P(n-1) is waiting for the resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Deadlock Prevention: Deadlock prevention is a set of methods for ensuring that at least one of these necessary conditions cannot hold.
Mutual Exclusion: The mutual exclusion condition must hold for non-sharable resources. For example, a printer cannot be simultaneously shared by several processes. Sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource. If several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource. In general, however, deadlock cannot be prevented by denying the mutual exclusion condition: some resources are intrinsically non-sharable.
Hold and Wait: The hold and wait condition can be eliminated by requiring or forcing a process to release all resources held by it whenever it requests a resource that is not available. In other words, deadlocks are prevented because waiting processes are not holding any resources. One protocol that can be used requires each process to request and be allocated all its resources before it begins execution. Another protocol allows a process to request resources only when the process has none. A process may request some resources and use them; before it requests any additional resources, however, it must release all the resources that it is currently allocated. These protocols have two main disadvantages. First, resource utilization may be low, since many of the resources may be allocated but unused for a long period. Second, starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources that it needs is always allocated to some other process.
No Preemption: The third necessary condition is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, we can use the following protocol. If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources currently being held are preempted. In other words, these resources are implicitly released. The preempted resources are added to the list of resources for which the process is waiting. The process will restart only when it can regain its old resources, as well as the new ones that it is requesting.
Circular Wait: One way to ensure that the circular wait condition never holds is to: ● impose a total ordering of all resource types and ● require that each process requests resources in an increasing order of enumeration. A process can initially request any number of instances of a resource type - say, Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri), where F(Rj) is the numeric order of resource Rj.
Synchronization: A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages.
The Critical Section Problem: Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section: no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
Semaphores: A semaphore can be used as a synchronization tool by application programmers. A semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking OS. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). The wait() operation was originally termed P ("to test"); signal() was originally called V ("to increment"). The definition of wait() is as follows:
wait(S) { while (S <= 0) ; // no-op S--; }
The definition of signal() is as follows:
signal(S) { S++; }
All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
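The wait()/signal() definitions above busy-wait while S <= 0. A blocking sketch of the same counting-semaphore semantics, using a condition variable instead of spinning, might look like the following (the class and method names are my own, not a standard API; Python's threading module already provides a real threading.Semaphore):

```python
import threading

class Semaphore:
    """Minimal counting semaphore mirroring the wait()/signal() definitions."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()   # makes the updates indivisible

    def wait(self):                          # the P operation
        with self._cond:
            while self._value <= 0:          # block instead of busy-waiting
                self._cond.wait()
            self._value -= 1

    def signal(self):                        # the V operation
        with self._cond:
            self._value += 1
            self._cond.notify()              # wake one waiting process/thread
```

Initialized to 1, this acts as a mutex: bracketing a critical section with s.wait() and s.signal() guarantees that at most one thread executes it at a time.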
Deadlock Avoidance: ● The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need ● The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition ● The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
Safe State: ● When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state ● The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
Resource-Allocation Graph Scheme: ● A claim edge Pi→Rj indicates that process Pi may request resource Rj; it is represented by a dashed line ● A claim edge converts to a request edge when the process requests the resource ● A request edge converts to an assignment edge when the resource is allocated to the process ● When a resource is released by a process, the assignment edge reconverts to a claim edge ● Resources must be claimed a priori in the system.
Banker's Algorithm: ● When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. ● This requirement may not exceed the total number of resources in the system. ● When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. ● If it will, the resources are allocated; otherwise, the process must wait until some other process releases enough resources.
Data structures needed to implement the banker's algorithm:
1. Available: A vector of length m indicates the number of available resources of each type. • If Available[j] = k, then k instances of resource type Rj are available.
2. Max: An n x m matrix defines the maximum demand of each process. • If Max[i][j] = k, then process Pi may request at most k instances of resource type Rj.
3. Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. • If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource type Rj.
4. Need: An n x m matrix indicates the remaining resource need of each process. • If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to complete its task. • Need[i][j] = Max[i][j] - Allocation[i][j].
Safety Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work = Available; Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both: (a) Finish[i] == false (b) Need_i <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Deadlock Detection: ● The system does not use either a deadlock-prevention or a deadlock-avoidance algorithm, and so may enter a deadlock state ● The system may then provide: • An algorithm that examines the state of the system to determine whether a deadlock has occurred • An algorithm to recover from the deadlock.
Single Instance of Each Resource Type: ● The deadlock-detection algorithm uses a variant of the resource-allocation graph called a wait-for graph. ● An edge Pi→Pj implies that Pi is waiting for Pj to release a resource ● An edge Pi→Pj in a wait-for graph is obtained by merging the two edges Pi→Rq and Rq→Pj in the corresponding resource-allocation graph ● A deadlock exists in the system if and only if the wait-for graph contains a cycle. ● Periodically invoke an algorithm that searches for a cycle in the graph. ● An algorithm to detect a cycle in a graph requires on the order of n^2 operations, where n is the number of vertices in the graph.
Detection Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize: (a) Work = Available (b) For i = 1, 2, ..., n, if Allocation_i != 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both: (a) Finish[i] == false (b) Request_i <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.
Recovery from Deadlock: Process Termination ● Abort all deadlocked processes. ● Abort one process at a time until the deadlock cycle is eliminated. ● In which order should we choose to abort? • Priority of the process. • How long the process has computed, and how much longer until completion. • Resources the process has used. • Resources the process needs to complete. • How many processes will need to be terminated. • Whether the process is interactive or batch.
Recovery from Deadlock: Resource Preemption ● Selecting a victim – determine the order of preemption to minimize cost. Cost factors may include such parameters as the number of resources a deadlocked process is holding and the amount of time the process has thus far consumed during its execution. ● Rollback – we must roll back the process to some safe state and restart it from that state. • Since, in general, it is difficult to determine what a safe state is, the simplest solution is a total rollback: abort the process and then restart it. • Although it is more effective to roll back the process only as far as necessary to break the deadlock, this method requires the system to keep more information about the state of all running processes in order to return to some safe state and restart the process from that state.
Monitors: Various types of errors can be generated easily when programmers use semaphores incorrectly to solve the critical-section problem. Similar problems may arise in other synchronization models. To deal with such errors, researchers have developed high-level language constructs. One of the fundamental high-level synchronization constructs is the monitor type.
Unit 4
Swapping: In multiprogramming, a memory-management scheme called swapping can be used to increase CPU utilisation. The process of bringing a process into memory and, after running for a while, temporarily copying it to disk is known as swapping. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a time slice expires, the memory manager will start to swap out the process that just finished and to swap another process into the memory space that has been freed. Normally a process that is swapped out will be swapped back into the same memory space that it occupied previously. If binding is done at load time, then the process cannot be moved to different memory locations. If execution-time binding is being used, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time.
Contiguous Memory Allocation: The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We can place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory Allocation: One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions. In this scheme, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process. The operating system can order the input queue according to a scheduling algorithm. This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. There are many solutions to this problem. The first-fit, best-fit and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes.
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. It can stop searching as soon as it finds a free hole that is large enough.
Best fit: Allocate the smallest hole that is big enough. It must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
Worst fit: Allocate the largest hole. Again, it must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
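The first-fit, best-fit, and worst-fit strategies can be compared with a small simulation. This is an illustrative sketch; the hole sizes and the request size are invented for the example:

```python
def allocate(holes, request, strategy):
    """Pick a hole index for `request` under first/best/worst fit.

    `holes` is a list of free-hole sizes; returns the chosen index,
    or None if no hole is big enough.
    """
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest index that fits
    if strategy == "best":
        return min(candidates)[1]                        # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]                        # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]       # invented free-hole sizes
print(allocate(holes, 212, "first"))    # 1 (the 500 hole is the first that fits)
print(allocate(holes, 212, "best"))     # 3 (the 300 hole leaves the smallest leftover)
print(allocate(holes, 212, "worst"))    # 4 (the 600 hole leaves the largest leftover)
```

Note how best fit leaves an 88-unit leftover hole while worst fit leaves 388, matching the trade-off described above.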
Fragmentation: Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous; storage is fragmented into a large number of small holes. This fragmentation problem can be severe. In the worst case, there could be a block of free (or wasted) memory between every two processes. If all these small pieces of memory were in one big free block instead, we might be able to run several more processes. Memory fragmentation can be internal as well as external. Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose that the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes. The overhead to keep track of this hole will be substantially larger than the hole itself. The general approach to avoiding this problem is to break the physical memory into fixed-sized blocks and allocate memory in units based on block size. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation – memory that is internal to a partition but unused.
Paging: Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous. Paging avoids external fragmentation and the need for compaction. It also solves the considerable problem of fitting memory chunks of varying sizes onto the backing store; most memory-management schemes used before the introduction of paging suffered from this problem. The problem arises because, when some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store. The backing store has the same fragmentation problems, but access is much slower, so compaction is impossible. Because of its advantages over earlier methods, paging in its various forms is used in most operating systems. Traditionally, support for paging has been handled by hardware. However, recent designs have implemented paging by closely integrating the hardware and operating system, especially on 64-bit microprocessors.
Segmentation: Segmentation is a memory-management scheme that supports the user view of memory. A logical address space is a collection of segments. Each segment has a name and a length. The addresses specify both the segment name and the offset within the segment. The user therefore specifies each address by two quantities: a segment name and an offset. (Contrast this scheme with the paging scheme, in which the user specifies only a single address, which is partitioned by the hardware into a page number and an offset, all invisible to the programmer.) An important aspect of memory management that became unavoidable with paging is the separation of the user's view of memory from the actual physical memory. The user's view of memory is not the same as the actual physical memory. The user's view is mapped onto physical memory. This mapping allows differentiation between logical memory and physical memory. Users prefer to view memory as a collection of variable-sized segments, with no necessary ordering among segments.
Virtual Memory Management: Virtual memory is a technique that allows the execution of processes that are not completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory. This technique frees programmers from the concerns of memory-storage limitations. Virtual memory also allows processes to share files easily and to implement shared memory. In addition, it provides an efficient mechanism for process creation. Virtual memory involves the separation of logical memory as perceived by users from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available; the programmer can concentrate instead on the problem to be programmed.
Demand Paging: Consider how an executable program might be loaded from disk into memory. One option is to load the entire program in physical memory at program execution time. However, a problem with this approach is that we may not initially need the entire program in memory. A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk). When we want to execute a process, we swap it into memory. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
Page Replacement: Consider that system memory is not used only for holding program pages. Buffers for I/O also consume a considerable amount of memory. This use can increase the strain on memory-placement algorithms. Deciding how much memory to allocate to I/O and how much to program pages is a significant challenge. Some systems allocate a fixed percentage of memory for I/O buffers, whereas others allow both user processes and the I/O subsystem to compete for all system memory. Over-allocation of memory manifests itself as follows. While a user process is executing, a page fault occurs. The operating system determines where the desired page resides on the disk but then finds that there are no free frames on the free-frame list; all memory is in use. The operating system could instead swap out a process, freeing all its frames and reducing the level of multiprogramming. Page-replacement types: *Basic Page Replacement *FIFO Page Replacement *Optimal Page Replacement *LRU Page Replacement *LRU-Approximation Page Replacement *Counting-Based Page Replacement
Unit 5
Access Methods: Files store information. When it is used, this information must be accessed and read into computer memory. The information in the file can be accessed in several ways. Some systems provide only one access method for files. Other systems, such as those of IBM, support many access methods, and choosing the right one for a particular application is a major design problem.
Sequential Access: The simplest access method is sequential access. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Direct Access: A file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. The direct-access method is based on a disk model of a file, since disks allow random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records.
Other Access Methods: These methods generally involve the construction of an index for the file. The index, like an index in the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then use the pointer to access the file directly and to find the desired record.
File System Structure: Disks provide the bulk of secondary storage on which a file system is maintained. They have two characteristics that make them a convenient medium for storing multiple files: *A disk can be rewritten in place; it is possible to read a block from the disk, modify the block, and write it back into the same place. *A disk can access directly any block of information it contains. Thus, it is simple to access any file either sequentially or randomly, and switching from one file to another requires only moving the read-write heads and waiting for the disk to rotate.
File systems provide efficient and convenient access to the disk by allowing data to be stored, located, and retrieved easily. A file system poses two quite different design problems. The first problem is defining how the file system should look to the user. This task involves defining a file and its attributes, the operations allowed on a file, and the directory structure for organizing files. The second problem is creating algorithms and data structures to map the logical file system onto the physical secondary-storage devices. The file system itself is generally composed of many different levels: *The basic file system needs only to issue generic commands to the appropriate device driver to read and write physical blocks on the disk. Each physical block is identified by its numeric disk address. A block in the buffer is allocated before the transfer of a disk block can occur. When the buffer is full, the buffer manager must find more buffer memory or free up buffer space to allow a requested I/O to complete. *The file-organization module knows about files and their logical blocks, as well as physical blocks. By knowing the type of file allocation used and the location of the file, the file-organization module can translate logical block addresses to physical block addresses for the basic file system to transfer. The file-organization module also includes the free-space manager, which tracks unallocated blocks and provides these blocks to the file-organization module when requested. *The logical file system manages metadata information. Metadata includes all of the file-system structure except the actual data (or contents of the files). The logical file system manages the directory structure to provide the file-organization module with the information the latter needs, given a symbolic file name. It maintains file structure via file-control blocks.
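The translation performed by the file-organization module can be illustrated with a toy mapping. The per-file allocation table, block size, and block numbers below are invented for the example; a real file system derives them from on-disk metadata such as file-control blocks:

```python
# Hypothetical per-file allocation table: logical block -> physical block.
allocation = {"notes.txt": [802, 311, 94]}   # invented physical block numbers
BLOCK_SIZE = 512                              # bytes per block (assumed)

def logical_to_physical(name, offset):
    """Translate a byte offset within a file to (physical_block, byte_in_block)."""
    logical_block, within = divmod(offset, BLOCK_SIZE)
    return allocation[name][logical_block], within

print(logical_to_physical("notes.txt", 0))     # (802, 0)
print(logical_to_physical("notes.txt", 600))   # (311, 88): byte 600 is in logical block 1
```

The point of the indirection is that logical blocks 0, 1, 2 are consecutive to the user even though the physical blocks 802, 311, 94 are scattered on disk.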
Disk Scheduling:One of the responsibilities of the operating system is to use the hardware
efficiently. For the disk drives, meeting this responsibility entails having fast access time and
large disk bandwidth. The access time has two major components. The seek time is the time for
the disk arm to move the heads to the cylinder containing the desired sector. The rotational
latency is the additional time for the disk to rotate the desired sector to the disk head. The disk
bandwidth is the total number of bytes transferred, divided by the total time between the first
request for service and the completion of the last transfer. Whenever a process needs I/O to or
from the disk, it issues a system call to the operating system. The request specifies several
pieces of information:*Whether this operation is input or output *What the disk address for the
transfer is *What the memory address for the transfer is*What the number of sectors to be
transferred is
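To sketch why the order of servicing requests matters for seek time, the following compares a queue serviced in arrival order with shortest-seek-first order. FCFS and SSTF are standard disk-scheduling algorithms not detailed in the text above, and the cylinder numbers are the classic illustrative values:

```python
def total_movement(start, order):
    """Total head movement (in cylinders) for servicing requests in `order`."""
    moved, pos = 0, start
    for cyl in order:
        moved += abs(cyl - pos)   # seek distance for this request
        pos = cyl
    return moved

def sstf(start, requests):
    """Shortest-seek-time-first: always service the closest pending request."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # request queue, head at cylinder 53
print(total_movement(53, queue))              # FCFS order: 640 cylinders
print(total_movement(53, sstf(53, queue)))    # SSTF order: 236 cylinders
```

Reordering the same requests cuts total head movement by more than half, which is why disk scheduling is worth the effort.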
File Operations: A file is an abstract data type. To define a file properly, we need to consider the operations that can be performed on files. The operating system can provide system calls to perform the following six basic file operations.
Creating a file. Two steps are necessary to create a file. First, space in the file system must be found for the file. Second, an entry for the new file must be made in the directory.
Writing a file. To write a file, we make a system call specifying both the name of the file and the
information to be written to the file. Given the name of the file, the system searches the directory
to find the file's location.
Reading a file. To read from a file, we use a system call that specifies the name of the file and
where (in memory) the next block of the file should be put.
Repositioning within a file. The directory is searched for the appropriate entry, and the
current-file- position pointer is repositioned to a given value. Repositioning within a file need not
involve any actual I/0. This file operation is also known as a file seek.
Deleting a file. To delete a file, we search the directory for the named file. Having found the
associated directory entry, we release all file space, so that it can be reused by other files, and
erase the directory entry.
Truncating a file. The user may want to erase the contents of a file but keep its attributes.
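These six operations map directly onto ordinary file-system calls. A minimal sketch in Python (the filename is an arbitrary example):

```python
import os

path = "demo.txt"                      # hypothetical example filename

f = open(path, "w+")                   # creating: find space and make a directory entry
f.write("hello, file systems")         # writing: write at the current-file-position pointer
f.seek(0)                              # repositioning (file seek): no data I/O needed
data = f.read(5)                       # reading: read from the current position into memory
f.truncate(0)                          # truncating: erase contents but keep attributes
f.close()
os.remove(path)                        # deleting: release the space, erase the entry

print(data)  # hello
```

Note that seek() only updates the position pointer; only read() and write() move file contents.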
Counting:Another approach takes advantage of the fact that, generally, several contiguous
blocks may be allocated or freed simultaneously, particularly when space is allocated with the
contiguous-allocation algorithm or through clustering. Thus, rather than keeping a list of n free
disk addresses, we can keep the address of the first free block and the number (n) of free
contiguous blocks that follow the first block. Each entry in the free-space list then consists of a
disk address and a count. Although each entry requires more space than would a simple disk
address, the overall list is shorter, as long as the count is generally greater than 1. Note that this
method of tracking free space is similar to the extent method of allocating blocks. These entries
can be stored in a B-tree, rather than a linked list for efficient lookup, insertion, and deletion.
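The address-plus-count idea can be sketched by compressing a list of free block numbers into (first-block, count) entries; the block numbers here are invented for the example:

```python
def to_runs(free_blocks):
    """Compress free-block numbers into (first_block, count) entries."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # block extends the current run
        else:
            runs.append((b, 1))                          # start a new run
    return runs

print(to_runs([2, 3, 4, 5, 8, 9, 10, 25]))  # [(2, 4), (8, 3), (25, 1)]
```

Eight individual addresses collapse to three entries; the saving grows whenever contiguous blocks are freed together, exactly as the text argues.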
Space Maps: Sun's ZFS (Zettabyte File System) was designed to encompass huge numbers of files, directories, and even file systems. The resulting data structures could have been large and inefficient if they had not been designed and implemented properly. At these scales, metadata I/O can have a large performance impact.
File Attributes
Name, Identifier, Type, Location, Size, Protection, and Time, date, and user identification.
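Several of these attributes are visible through ordinary system calls. A small sketch using Python's os.stat (the filename is an arbitrary example):

```python
import os, time

path = "attrs_demo.txt"                  # hypothetical example filename
with open(path, "w") as f:
    f.write("some contents")

st = os.stat(path)                       # the OS keeps these attributes per file
size = st.st_size                        # Size, in bytes
ident = st.st_ino                        # Identifier (inode number on Unix-like systems)
stamp = time.ctime(st.st_mtime)          # Time and date of last modification
mode = oct(st.st_mode & 0o777)           # Protection (permission bits)

print(size)  # 13
os.remove(path)
```

Name and Location are not in the stat result itself: the name lives in the directory entry, and the location (block addresses) stays inside the file system's own metadata.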