Unit 2 OS
3. OS as a resource allocator
➢ OS keeps track of the status of each resource and decides who gets a resource, for
how long and when.
➢ OS makes sure that different programs and users running at the same time do not
interfere with each other.
➢ It is also responsible for security, ensuring that unauthorized users do not access
the system.
➢ The primary objective of OS is to increase productivity of a processing resource
such as computer hardware or user.
➢ The OS is the first program run on a computer when the computer boots up.
The OS acts as a manager of these resources and allocates them to specific programs
and users as necessary for their tasks.
The different types of system view for operating system can be explained as follows:
• The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc.
that are required by processes for execution. It is the duty of the operating system
to allocate these resources judiciously to the processes so that the computer
system can run as smoothly as possible.
• The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there
are no errors. It makes sure that the I/O devices work in a proper manner without
creating problems.
• Operating systems can also be viewed as a way to make using hardware easier.
• Computers were required to easily solve user problems. However it is not easy to
work directly with the computer hardware. So, operating systems were developed
to easily communicate with the hardware.
• An operating system can also be considered as a program running at all times in
the background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is
generally followed.
Names of OS
DOS, Windows 3.x, Windows 95/98, Windows NT/2000, Unix, Linux, etc.
5. Classification of OS
OS is classified into two types:
1. Character user interface (CUI) / Single user OS
The user interacts with the computer through typed commands.
➢ The mouse cannot be used here.
➢ It is not user friendly.
➢ Only a user who knows the commands can interact with the computer. For
example: DOS, Unix, Linux.
2. Graphical user interface (GUI)
The user interacts with the computer through windows, icons, and menus, and can
use the mouse; it is user friendly. For example: Windows.
GOALS OF OS
➢ Simplify the execution of user programs and make solving user problems
easier.
➢ Use computer hardware efficiently.
➢ Make application software portable and versatile.
➢ Provide isolation, security and protection among user programs.
➢ Improve overall system reliability.
The salient points about the above figure displaying Computer System
Organisation are:
➢ Computer consists of processor, memory, and I/O components, with one or
more modules of each type. These modules are connected through
interconnection network.
➢ I/O devices and the CPU can execute concurrently.
➢ Each device controller is in charge of a particular device type.
➢ Each device controller has a local buffer.
➢ CPU moves data from/to main memory to/from local buffers.
➢ I/O is from the device to local buffer of controller.
➢ Device controller informs CPU that it has finished its operation by causing an
interrupt.
8. Interrupt Handling
An interrupt is a necessary part of Computer System Organisation as it is
triggered by hardware and software parts when they need immediate attention.
An interrupt can be generated by a device or a program to inform the operating
system to halt its current activities and focus on something else.
Types of interrupts
Hardware and software interrupts are two types of interrupts. Hardware interrupts
are triggered by hardware peripherals, while software interrupts are triggered by
software function calls.
Hardware interrupts are further of two types. Maskable interrupts can be ignored
or disabled by the CPU, while this is not possible for non-maskable interrupts.
9. Types of Operating Systems
There are several types of Operating Systems which are mentioned below.
➢ Batch Operating System
➢ Multi-Programming System
➢ Multi-Processing System
➢ Multi-Tasking Operating System
➢ Time-Sharing Operating System
➢ Personal Computers
➢ Parallel Operating System
➢ Distributed Operating System
➢ Network Operating System
➢ Real-Time Operating System
1. Batch Operating System
Advantages
➢ Processors of the batch systems know how long the job would be when it is in the
queue.
➢ Multiple users can share the batch systems.
➢ The idle time for the batch system is very less.
➢ It is easy to manage large work repeatedly in batch systems.
Disadvantages
➢ The computer operators must be familiar with batch systems.
➢ Batch systems are hard to debug.
➢ It is sometimes costly.
➢ The other jobs will have to wait for an unknown time if any job fails.
➢ It is very difficult to guess or know the time required for any job to complete.
Examples
Payroll Systems, Bank Statements, etc.
2. Multi-Programming Operating System
A multiprogramming operating system can be simply illustrated as having more
than one program present in main memory, any one of which can be kept in
execution. This is basically used for better utilization of resources.
Advantages of Multi-Programming Operating System
➢ Multiprogramming increases the throughput of the system.
➢ It helps in reducing the response time.
3. Multi-Processing Operating System
Advantages
➢ It increases the throughput of the system.
➢ As it has several processors, if one processor fails, we can proceed with
another processor.
4. Multi-Tasking Operating System
Cooperative Multi-Tasking
The operating system never initiates context switching from the running process
to another process. A context switch occurs only when the processes voluntarily yield
control periodically or when idle or logically blocked to allow multiple applications to
execute simultaneously. Also, in this multitasking, all the processes cooperate for the
scheduling scheme to work.
Advantages
➢ Multiple Programs can be executed simultaneously in Multi-Tasking Operating
System.
➢ It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
The system can overheat when heavy programs are run repeatedly.
5. Time-Sharing Operating System
Advantages
a. Each task gets an equal opportunity.
b. Fewer chances of duplication of software.
c. CPU idle time can be reduced.
d. Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, reducing
the cost of hardware and increasing efficiency.
e. Improved Productivity: Time-sharing allows users to work concurrently,
thereby reducing the waiting time for their turn to use the computer. This
increased productivity translates to more work getting done in less time.
f. Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in real
time, providing a better user experience than batch processing.
Disadvantages
a. Reliability problem.
b. One must take care of the security and integrity of user programs and data.
c. Data communication problem.
d. High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and
other overheads that come with supporting multiple users.
e. Complexity: Time-sharing systems are complex and require advanced
software to manage multiple users simultaneously. This complexity
increases the chance of bugs and errors.
f. Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of
user access, authentication, and authorization to ensure the security of
data and software.
Examples
a. IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that
was first introduced in 1972. It is still in use today, providing a virtual
machine environment that allows multiple users to run their own instances
of operating systems and applications.
b. TSO (Time Sharing Option): TSO is a time-sharing operating system
that was first introduced in the 1960s by IBM for the IBM System/360
mainframe computer. It allowed multiple users to access the same
computer simultaneously, running their own applications.
c. Windows Terminal Services: Windows Terminal Services is a
time-sharing operating system that allows multiple users to access a
Windows server remotely. Users can run their own applications and
access shared resources, such as printers and network storage, in real
time.
Parallel Operating System
Disadvantages
a. Limited Scalability: Parallel systems have limited scalability as the
number of processors or cores in a single computer is finite.
b. Complexity: Parallel systems are more complex to program and debug
compared to single-processor systems.
c. Synchronization Overhead: Synchronization between processors in a
parallel system can add overhead and impact performance.
Distributed Operating System
Advantages
➢ Failure of one will not affect the other network communication, as all systems are
independent of each other.
➢ Electronic mail increases the data exchange speed.
➢ Since resources are being shared, computation is highly fast and durable.
➢ Load on host computer reduces.
➢ These systems are easily scalable as many systems can be easily added to the
network.
➢ Delay in data processing reduces.
Disadvantages
➢ Failure of the main network will stop the entire communication.
➢ The languages used to establish distributed systems are not well-defined yet.
➢ These types of systems are not readily available, as they are very expensive. Not
only that, the underlying software is highly complex and not well understood yet.
Example: LOCUS
Network Operating System
Advantages
a. Highly stable centralized servers.
b. Security concerns are handled through servers.
c. New technologies and hardware up-gradation are easily integrated into the
system.
d. Server access is possible remotely from different locations and types of
systems.
Disadvantages
a. Servers are costly.
b. Users have to depend on a central location for most operations.
c. Maintenance and updates are required regularly.
Examples
Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux,
Mac OS X, Novell NetWare, BSD, etc.
Real-Time Operating System
Advantages
➢ Maximum Consumption: Maximum utilization of devices and systems, thus
more output from all the resources.
➢ Task Shifting: The time assigned for shifting tasks in these systems is very less.
For example, in older systems, it takes about 10 microseconds in shifting from
one task to another, and in the latest systems, it takes 3 microseconds.
➢ Focus on Application: Focus on running applications and less importance on
applications that are in the queue.
➢ Real-time operating system in the embedded system: Since the size of programs
is small, RTOS can also be used in embedded systems like in transport and others.
➢ Error Free: These types of systems are error-free.
➢ Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages
➢ Limited Tasks: Very few tasks run at the same time, and concentration is kept
on a few applications to avoid errors.
➢ Heavy use of system resources: Sometimes the system resources are not so
good, and they are expensive as well.
➢ Complex Algorithms: The algorithms are very complex and difficult for a
designer to write.
➢ Device drivers and interrupt signals: It needs specific device drivers and
interrupt signals to respond to interrupts as early as possible.
➢ Thread Priority: It is not good to set thread priority as these systems rarely
switch tasks.
Examples
Scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.
Mainframe Systems
➢ First commercial systems: Enormous, expensive and slow.
➢ I/O: Punch cards and line printers.
➢ A single operator/programmer/user runs and debugs interactively.
➢ Standard library with no resource coordination
➢ Monitor that is always resident
➢ Inefficient use of hardware: poor throughput and poor utilization
➢ They initially executed one program at a time and were known as batch systems.
Throughput: The amount of useful work done per hour.
Utilization: Keeping all devices busy.
Another set of OS functions exists for ensuring the efficient operation of the
system itself via resource sharing
➢ Resource allocation - When multiple users or multiple jobs are running
concurrently, resources must be allocated to each of them. Many types of
resources, such as CPU cycles, main memory, and file storage, may have special
allocation code, while others, such as I/O devices, may have general request and
release code.
➢ Accounting - To keep track of which users use how much and what kinds of
computer resources
➢ Protection and security - The owners of information stored in a multiuser or
networked computer system may want to control use of that information, and
concurrent processes should not interfere with each other. Protection involves
ensuring that all access to system resources is controlled. Security of the system
from outsiders requires user authentication and extends to defending external I/O
devices from invalid access attempts. If a system is to be protected and secure,
precautions must be instituted throughout it. A chain is only as strong as its
weakest link.
System Calls
➢ A system call is a way for a user program to interface with the operating system.
The program requests services, and the OS satisfies each request by executing the
corresponding system call on the program's behalf.
➢ A system call can be written in assembly language or a high-level language like
C, C++ or Pascal.
➢ System calls are predefined functions that the operating system may directly
invoke if a high-level language is used.
➢ A system call is a method for a computer program to request a service from the
kernel of the operating system on which it is running.
➢ A system call is a method of interacting with the operating system via programs.
➢ A system call is a request from computer software to an operating system's kernel.
➢ A simple system call may take a few nanoseconds to provide the result, like
retrieving the system date and time. A more complicated system call, such as
connecting to a network device, may take a few seconds. Most operating systems
launch a distinct kernel thread for each system call to avoid bottlenecks. Modern
operating systems are multi-threaded, which means they can handle various
system calls at the same time.
➢ The Application Program Interface (API) connects the operating system's
functions to user programs. It acts as a link between the operating system and a
process, allowing user-level programs to request operating system services. The
kernel system can only be accessed using system calls. System calls are required
for any programs that use resources.
➢ When computer software needs to access the operating system's kernel, it makes a
system call. The system call uses an API to expose the operating system's services to
user programs. It is the only method to access the kernel system. All programs or
processes that require resources for execution must use system calls, as they serve
as an interface between the operating system and user programs.
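For instance, the only way a C program reaches the kernel is through such calls, whether it goes through a library API like printf() or uses the system call directly. A minimal sketch, assuming a POSIX system:

#include <unistd.h>   /* POSIX wrapper for the write() system call */

int main(void) {
    const char msg[] = "hello via a system call\n";
    /* write(fd, buffer, count): file descriptor 1 is standard output */
    write(1, msg, sizeof(msg) - 1);
    return 0;
}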
Process Control
Process control is the system call category used to direct processes. Some process
control examples include create process, load, execute, abort/end, and terminate
process.
File Management
File management is a system call category used to handle files. Some file
management examples include creating files, deleting files, open, close, read, and write.
Device Management
Device management is a system call category used to deal with devices. Some
examples of device management include request device, read, write, get device
attributes, and release device.
Information Maintenance
Information maintenance is a system call that is used to maintain information.
There are some examples of information maintenance, including getting system data, set
time or date, get time or date, set system data, etc.
Communication
Communication is a system call that is used for communication. There are some
examples of communication, including create, delete communication connections, send,
receive messages, etc.
open()
The open() system call allows you to access a file on a file system. It allocates
resources to the file and provides a handle that the process may refer to. Many processes
can open a file at once or by a single process only. It's all based on the file system and
structure. read()
It is used to obtain data from a file on the file system. It accepts three arguments in
general:
➢ A file descriptor.
➢ A buffer to store the read data.
➢ The number of bytes to read from the file.
The file to be read is identified by its file descriptor and must be opened using open()
before reading.
wait()
In some systems, a process may have to wait for another process to complete its
execution before proceeding. When a parent process makes a child process, the parent
process execution is suspended until the child process is finished. The wait() system call
is used to suspend the parent process. Once the child process has completed its execution,
control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one
way for a program to generate data. It takes three arguments in general:
➢ A file descriptor.
➢ A pointer to the buffer in which data is saved.
➢ The number of bytes to be written from the buffer.
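A short sketch combining the open(), read(), write() and close() calls described above to copy a file, assuming a POSIX system (the file names are made up for illustration):

#include <fcntl.h>    /* open() and the O_* flags */
#include <unistd.h>   /* read(), write(), close() */

int main(void) {
    char buf[4096];
    ssize_t n;
    int in  = open("input.txt", O_RDONLY);           /* hypothetical input file */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1;                                    /* an open() failed */
    /* read() returns the number of bytes obtained; 0 means end of file */
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);                          /* write n bytes from buf */
    close(in);                                       /* release the file resources */
    close(out);
    return 0;
}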
fork()
Processes generate clones of themselves using the fork() system call. It is one of
the most common ways to create processes in operating systems. After fork(), the parent
and child both continue execution; typically the parent calls wait() to suspend itself until
the child process completes, after which control returns to the parent process.
close()
It is used to end file system access. When this system call is invoked, it signifies
that the program no longer requires the file; the buffers are flushed, the file metadata is
updated, and the file's resources are de-allocated.
exec()
When an executable file replaces an earlier executable file in an already executing
process, this system call is invoked. A new process is not built; the old process identity
remains, but the new program replaces its data, stack, heap, etc.
exit()
The exit() is a system call that is used to end program execution. This call
indicates that the thread execution is complete, which is especially useful in multithreaded
environments. The operating system reclaims resources spent by the process following the
use of the exit() system function.
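A minimal sketch tying fork(), exec(), wait() and exit() together on a POSIX system (running ls in the child is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>  /* wait() */
#include <unistd.h>    /* fork(), execlp() */

int main(void) {
    pid_t pid = fork();                   /* clone the calling process */
    if (pid == 0) {
        /* child: replace its memory image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        exit(1);                          /* reached only if exec fails */
    } else if (pid > 0) {
        wait(NULL);                       /* parent suspends until the child finishes */
        printf("child finished\n");
    }
    return 0;
}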
19. Processes
The term “process” was first used by the designers of the Multics system in the
1960s. A process is a program in execution, and process execution must progress in a
sequential fashion.
A process exists for a limited span of time.
Two or more processes may execute the same program, each using its own data
and resources.
The process memory is divided into four sections for efficient operation:
➢ The text section is composed of the compiled program code, which is read from
fixed storage when the program is launched.
➢ The data section is made up of global and static variables, allocated and
initialized before main() executes.
➢ Heap is used for flexible, or dynamic memory allocation and is managed by calls
to new, delete, malloc, free, etc.
➢ The stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.
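A small C illustration of where variables of each kind live (a sketch; the exact layout is decided by the compiler and the OS):

#include <stdlib.h>

int global_counter = 0;            /* data section: global/static variables */

int main(void) {                   /* the compiled code itself lives in the text section */
    int local = 5;                 /* stack: local variable, reclaimed on return */
    int *p = malloc(sizeof *p);    /* heap: dynamic allocation via malloc */
    *p = local + global_counter;
    free(p);                       /* heap memory is managed by malloc/free */
    return 0;
}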
20. Process State
When a process executes, it changes state. Process state is defined as the current
activity of the process. There are five states, and each process is in one of the following
states:
new: The process is being created.
running: Instructions are being executed.
waiting: The process is waiting for some event to occur.
ready: The process is waiting to be assigned to a processor.
terminated: The process has finished execution.
Dispatching
The assignment of the CPU to the first process on the ready list is called
dispatching and is performed by a system entity called the Dispatcher.
Pointer
Pointer points to another PCB. The pointer is used for maintaining the scheduling
list.
Process state
Process state may be new, ready, running, waiting and so on.
Program counter
It indicates the address of the next instruction to be executed.
CPU register
It includes general purpose register, stack pointer, and accumulators etc.
Memory management information
Includes the values of the base and limit registers, page tables, and other virtual
memory information.
Accounting information
The amount of CPU and real time used, time limits, account numbers, job or
process numbers, etc.
I/O status information
List of I/O devices allocated to this process, a list of open files, and so on.
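A rough C sketch of how these PCB fields might be grouped; the field names and sizes here are illustrative, not taken from any particular kernel:

struct pcb {
    struct pcb   *next;            /* pointer: links PCBs on a scheduling list */
    int           state;           /* process state: new, ready, running, ...  */
    unsigned long pc;              /* program counter: next instruction        */
    unsigned long regs[16];        /* CPU registers saved at a context switch  */
    unsigned long base, limit;     /* memory-management information            */
    long          cpu_time_used;   /* accounting information                   */
    int           open_files[16];  /* I/O status: descriptors of open files    */
};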
1. Job queue - As processes enter the system, they are put into a job queue. This
queue consists of all processes in the system.
2. Ready queue - The set of all processes residing in main memory, ready and
waiting to execute, is kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains two pointers
(head, tail).
3. Device queues – set of processes waiting for an I/O device. Each device has its
own queue.
A new process is initially put in the ready queue. It waits in the ready queue until it is
selected for execution. Once a process is allocated the CPU, the following events may occur:
➢ The process could issue an I/O request and then be placed in an I/O queue.
➢ The process could create a new process.
➢ The process could be removed forcibly from the CPU as a result of an interrupt and
put back in the ready queue.
➢ When process terminates, it is removed from all queues.
➢ PCB and its other resources are de-allocated.
Schedulers
A process migrates between the various scheduling queues throughout its lifetime.
The OS must select processes for scheduling from these queues. The selection is
carried out by a scheduler.
Long-term scheduler (or job scheduler)
➢ Selects processes from the job pool on disk and loads them into memory for execution.
➢ It may take long time.
➢ The long-term scheduler executes less frequently.
➢ The long-term scheduler controls degree of multiprogramming.
Short-term scheduler (or CPU scheduler)
➢ Selects which process should be ready to execute and allocates the CPU.
➢ The STS must select a new process for the CPU frequently.
➢ STS is executed at least once every 100 milliseconds.
➢ The STS must be fast.
➢ If it takes 10 milliseconds to decide to execute a process for 100 ms, then 9 % of
CPU is used (or wasted) simply for scheduling work.
Medium-term scheduler
Some operating systems introduce a medium-term scheduler that uses swapping.
It can be advantageous to remove processes from memory and thereby reduce the
degree of multiprogramming. At some later time, the process can be reintroduced into
main memory and its execution continued where it left off. This scheme is called
“swapping”.
Swapping improves the process mix (I/O and CPU) and frees main memory when
it is unavailable.
❖ The long-term scheduler should make a careful selection. Because of the longer
interval between executions, the LTS can afford to take more time to select a
process for execution.
❖ The processes are either I/O bound or CPU bound.
❖ An I/O bound process spends more time doing I/O than it spends doing
computation.
❖ A CPU bound process spends most of the time doing computation.
❖ The LT scheduler should select a good process mix of I/O-bound and CPU-bound
processes.
❖ If all the processes are I/O bound, the ready queue will be empty.
❖ If all the processes are CPU bound, the I/O queue will be empty, the devices will
go unused and the system will be unbalanced.
❖ Best performance by best combination of CPU-bound and I/O-bound process.
Context Switch
➢ A context switch is the task of switching the CPU to another process by saving the
state of the old process and loading the saved state of the new process.
➢ When a context switch occurs, the kernel saves the context of the old process in its
PCB and loads the saved context of the new process scheduled to run.
➢ Context-switch time is overhead; the system does no useful work while switching.
➢ Context-switch time is highly dependent on hardware support.
➢ Typical range from 1 to 1000 microseconds.
Operations on Processes
The processes in the system can execute concurrently. The OS must provide a
mechanism for creation and termination.
(i)Process creation
➢ A process may create several new processes.
➢ Processes are created and deleted dynamically.
➢ Process which creates another process is called a parent process; the created
process is called a child process.
➢ Child process may create another sub process.
➢ Syntax for creating new process is: CREATE (process ID, attributes).
➢ A process needs certain resources (CPU time, memory, files, I/O devices) to
accomplish its task.
➢ When a process creates a sub-process, that sub-process may be able to obtain its
resources directly from the OS, or it may be constrained to a subset of the
resources of the parent process.
➢ When a process creates a new process, there are two possibilities in terms of
execution and resource sharing.
Resource sharing possibilities
❖ Parent and children share all resources.
❖ Children share a subset of the parent's resources.
❖ Parent and child share no resources.
Execution possibilities
❖ Parent and children execute concurrently.
❖ Parent waits until children terminate.
There are also two possibilities in terms of the address space of the new process:
❖ The child process is a duplicate of the parent process.
❖ The child process has a new program loaded into it.
Example
In UNIX:
• Each process is identified by its process identifier. The fork system call creates a
new process.
• exec system call used after a fork to replace the process’ memory space with a
new program.
• The new process is a copy of the original process.
• The exec system call is used after a fork by one of the two processes to replace
the process memory space with a new program.
DEC VMS:
• Creates a new process, loads a specified program into that process, and starts it
running.
WINDOWS NT supports both models:
• Parent address space can be duplicated or
• parent can specify the name of a program for the OS to load into the address space
of the new process.
(ii)Process Termination
❖ Process executes last statement and asks the operating system to delete it (exit).
❖ Output data from child to parent (via wait).
❖ Process’ resources are de-allocated by operating system.
❖ Parent may terminate the execution of children processes (abort).
❖ Child has exceeded allocated resources.
❖ Task assigned to child is no longer required.
❖ Parent is exiting; the operating system does not allow a child to continue if its
parent terminates.
❖ Cascading termination. (All children terminated).
COOPERATING PROCESSES
▪ The concurrent processes executing in the OS may be either independent processes
or cooperating processes.
▪ Independent process cannot affect or be affected by the execution of another
process.
▪ Cooperating process can affect or be affected by the execution of another process.
Advantages of process cooperation
1. Information sharing: Several users may be interested in the same piece of
information.
2. Computation speed-up: If we want a particular task to run faster, we must break
it into subtasks and run them in parallel.
3. Modularity: Constructing the system in modular fashion, dividing the system
functions into separate process.
4. Convenience: User will have many tasks to work in parallel (Editing, compiling,
printing).
Indirect Communication
• The messages are sent and received from mailboxes (also referred to as ports).
• A mailbox is an object into which processes can place messages and from which
processes can remove messages.
• Two processes can communicate only if they have a shared mailbox.
• The primitives are defined as:
send(A, message) - send a message to mailbox A.
receive(A, message) - receive a message from mailbox A.
• A mailbox may be owned either by a process or by the OS.
• If the mailbox is owned by a process, then we distinguish between the owner (who
can only receive messages through this mailbox) and the user (who can only send
messages to the mailbox).
• A mailbox owned by the OS is independent, and the OS must provide a mechanism to:
o create a mailbox
o send and receive messages through the mailbox
o destroy a mailbox
Synchronization
Buffering
• A link has some capacity that determines the number of messages that can reside
in it temporarily.
• A queue of messages is attached to the link; it is implemented in one of three ways.
Zero capacity: The link cannot have any messages waiting in it; the sender must
wait for the receiver.
Bounded capacity: The queue has finite length n; the sender must wait if the link
is full.
Unbounded capacity: The queue has infinite length; the sender never waits.
CPU scheduling
CPU scheduling is the process of deciding which process will own the CPU while
another process is suspended. The main function of CPU scheduling is to ensure that
whenever the CPU remains idle, the OS selects one of the processes available in the
ready queue.
In multiprogramming, if the long-term scheduler selects multiple I/O-bound
processes, then most of the time the CPU remains idle. The function of an effective
scheduler is to improve resource utilization.
If most processes change their status from running to waiting, then there may
always be a chance of failure in the system. So, in order to minimize this, the OS needs
to schedule tasks in order to make full use of the CPU and avoid the possibility of
deadlock.
Terminologies
➢ Arrival Time: Time at which the process arrives in the ready queue.
➢ Completion Time: Time at which process completes its execution.
➢ Burst Time: Time required by a process for CPU execution.
➢ Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
➢ Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
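A small C sketch applying these two formulas to a FCFS schedule; the process data is made up for illustration:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};
    int burst[]   = {5, 3, 8};
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {             /* FCFS: run in arrival order */
        if (time < arrival[i]) time = arrival[i];
        time += burst[i];                     /* completion time of process i */
        int tat = time - arrival[i];          /* Turn Around Time = CT - AT   */
        int wt  = tat - burst[i];             /* Waiting Time = TAT - BT      */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i, time, tat, wt);
    }
    return 0;
}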
UNIT 2
CPU SCHEDULING AND DEADLOCKS
2. Waiting Time:
The scheduling algorithm does not affect the time required to complete a
process once it has started executing; it affects only the waiting time of the process,
i.e. the time it spends waiting in the ready queue.
Response Time:
In an interactive system, turnaround time is not the best criterion. A process may
produce some output early and continue computing new results while previous
results are being released to the user. Thus another measure is the time from the
submission of a request until the first response is produced. This measure is called
response time.
3. Types of CPU Scheduling Algorithms
There are mainly two types of scheduling methods:
Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running state to
ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates, or when a process
switches from the running state to the waiting state.
2. Shortest Job First (SJF) Scheduling:
Characteristics:
➢ Shortest Job first has the advantage of having a minimum average waiting time
among all operating system scheduling algorithms.
➢ It is associated with each task as a unit of time to complete.
➢ It may cause starvation if shorter processes keep coming. This problem can be
solved using the concept of ageing.
Advantages:
➢ As SJF reduces the average waiting time thus, it is better than the first come first
serve scheduling algorithm.
➢ SJF is generally used for long-term scheduling.
Disadvantages:
➢ One of the demerit SJF has is starvation.
➢ Many times it becomes complicated to predict the length of the upcoming CPU
request.
3. Longest Job First (LJF) Scheduling:
This is just opposite of shortest job first (SJF), as the name suggests this algorithm
is based upon the fact that the process with the largest burst time is processed first.
Longest Job First is non-preemptive in nature.
Characteristics:
➢ Among all the processes waiting in a waiting queue, CPU is always assigned to
the process having largest burst time.
➢ If two processes have the same burst time then the tie is broken using FCFS i.e.
the process that arrived first is processed first.
➢ LJF CPU Scheduling can be of both preemptive and non-preemptive types.
Advantages:
➢ No other task can be scheduled until the longest job or process executes completely.
➢ All the jobs or processes finish at the same time approximately.
Disadvantages:
➢ Generally, the LJF algorithm gives a very high average waiting time and average
turn-around time for a given set of processes.
➢ This may lead to convoy effect.
4. Priority Scheduling:
Preemptive Priority CPU Scheduling is a preemptive method of CPU scheduling
that works based on the priority of a process. In this algorithm, the scheduler treats each
process's priority as its importance, meaning that the most important process must be
done first. In the case of a conflict, that is, when more than one process has equal
priority, the algorithm falls back on FCFS.
Characteristics:
➢ Schedules tasks based on priority.
➢ When higher-priority work arrives while a task with lower priority is executing,
the higher-priority work takes the place of the lower-priority one, and the latter
is suspended until the execution is complete.
➢ The lower the number assigned, the higher the priority level of a process.
Advantages:
➢ The average waiting time is less than FCFS
➢ Less complex
Disadvantages:
➢ One of the most common demerits of the preemptive priority CPU scheduling
algorithm is the starvation problem: a process has to wait for a long amount of
time to get scheduled onto the CPU.
5. Round Robin Scheduling:
Round Robin is a CPU scheduling algorithm where each process is cyclically
assigned a fixed time slot. It is the preemptive version of First come First Serve CPU
Scheduling algorithm. The Round Robin CPU algorithm generally focuses on the
time-sharing technique.
Characteristics:
➢ It’s simple, easy to use, and starvation-free as all processes get the balanced CPU
allocation.
➢ One of the most widely used methods in CPU scheduling.
➢ It is considered preemptive as the processes are given to the CPU for a very
limited time.
Advantages:
➢ Round robin seems to be fair as every process gets an equal share of the CPU.
➢ The newly created process is added to the end of the ready queue.
6. Shortest Remaining Time First (SRTF) Scheduling:
SRTF is the preemptive version of Shortest Job First, which we discussed earlier,
where the processor is allocated to the job closest to completion. In SRTF the process
with the smallest amount of time remaining until completion is selected to execute.
Characteristics:
➢ The SRTF algorithm makes the processing of jobs faster than the SJF algorithm,
provided its overhead is not counted.
➢ The context switch is done a lot more times in SRTF than in SJF and consumes
the CPU’s valuable time for processing. This adds up to its processing time and
diminishes its advantage of fast processing.
Advantages:
➢ In SRTF the short processes are handled very fast.
➢ The system also requires very little overhead since it only makes a decision when
a process completes or a new process is added.
Disadvantages:
➢ Like the shortest job first, it also has the potential for process starvation.
➢ Long processes may be held off indefinitely if short processes are continually
added.
7. Longest Remaining Time First:
The longest remaining time first is a preemptive version of the longest job first
scheduling algorithm. This scheduling algorithm is used by the operating system to
program incoming processes for use in a systematic way. This algorithm schedules those
processes first which have the longest processing time remaining for completion.
Characteristics:
➢ Among all the processes waiting in a waiting queue, the CPU is always assigned
to the process having the largest remaining burst time.
➢ If two processes have the same burst time then the tie is broken using FCFS, i.e.
the process that arrived first is processed first.
➢ LJF CPU Scheduling can be of both preemptive and non-preemptive types.
Advantages:
➢ No other process can execute until the longest task executes completely.
➢ All the jobs or processes finish at the same time approximately.
Disadvantages:
This algorithm gives a very high average waiting time and average turnaround
time for a given set of processes. This may lead to a convoy effect.
8. Highest Response Ratio Next:
9. Multilevel Queue Scheduling:
Processes in a multilevel queue are divided into categories such as:
➢ System Processes: The CPU itself has its process to run, generally termed as System
Process.
➢ Interactive Processes: An interactive process is a type of process in which there
should be user interaction.
➢ Batch Processes: Batch processing is generally a technique in the Operating system
that collects the programs and data together in the form of a batch before the
processing starts.
Advantages:
➢ The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages:
➢ Starvation problem
➢ It is inflexible in nature
8. Example 1 (FCFS)
Process ID   Burst Time   Completion Time   Turn Around Time   Waiting Time
P1           79           79                79                 0
P2           2            81                81                 79
P3           3            84                84                 81
P4           1            85                85                 84
P5           25           110               110                85
Gantt Chart:
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P0           1              3            5                 4                                  1
P1           2              6            20                18                                 12
P2           0              2            2                 2                                  0
P3           3              7            27                24                                 17
P4           2              4            9                 7                                  4
P5           5              5            14                10                                 5

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P0           1              3            5                 4                                  1
P1           2              6            17                15                                 9
P2           0              2            2                 2                                  0
P3           3              7            24                21                                 14
P4           2              4            11                9                                  5
P5           6              2            8                 2                                  0
Solution:
Gantt Chart:
Process Id   Arrival Time   Burst Time   Priority   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P1           0              5            5          5                 5                                  0
P2           1              6            4          27                26                                 20
P3           2              2            0          7                 5                                  3
P4           3              1            2          15                12                                 11
P5           4              7            1          14                10                                 3
P6           4              6            3          21                17                                 11
Time Quantum = 1 ms
Process   Arrival Time   Burst Time
P0        1              3
P1        0              5
P2        3              2
P3        4              3
P4        2              1
Solution:
Gantt Chart:
Process   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P0        1              3            5                 4                  1
P1        0              5            14                14                 9
P2        3              2            7                 4                  2
P3        4              3            10                6                  3
P4        2              1            3                 1                  0
Avg Turn Around Time = (4+14+4+6+1) / 5 = 5.8 ms
Avg Waiting Time = (1+9+2+3+0) / 5 = 3 ms
8. DEADLOCK
A process in an operating system uses resources in the following way:
(i) Requests a resource
(ii) Use the resource
(iii) Releases the resource
A deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track
and there is only one track: neither train can move once they are in front of each other.
A similar situation occurs in operating systems when there are two or more processes
that hold some resources and wait for resources held by other(s). For example, in the
below diagram, Process 1 is holding Resource 1 and waiting for Resource 2, which is
acquired by Process 2, and Process 2 is waiting for Resource 1.
Examples of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs
another one.
2. Semaphores A and B, initialized to 1. P0 and P1 are in deadlock as follows:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
P0 executes wait(A) and is preempted. P1 executes wait(B). Now P0 and P1 enter
deadlock.
3. Assume the space available for allocation is 200K bytes, and the following sequence
of events occurs:
P0: Request 80KB; Request 60KB;
P1: Request 70KB; Request 80KB;
Rule-01: In a Resource Allocation Graph where all the resources are single instance,
➢ If a cycle is being formed, then the system is in a deadlock state.
➢ If no cycle is being formed, then the system is not in a deadlock state.
Rule-02: In a Resource Allocation Graph where all the resources are NOT single instance,
➢ If a cycle is being formed, then the system may be in a deadlock state.
➢ Banker's Algorithm is applied to confirm whether the system is in a deadlock state
or not.
➢ If no cycle is being formed, then the system is not in a deadlock state.
➢ The presence of a cycle is a necessary but not a sufficient condition for the
occurrence of deadlock.
Allow pre-emption
Preempt resources from the process when resources are required by other high-priority
processes.
AVOIDANCE
Avoidance is kind of futuristic. By using the strategy of “Avoidance”, we have to
make an assumption. We need to ensure that all information about resources that the
process will need is known to us before the execution of the process.
Resource Allocation Graph
The resource allocation graph (RAG) is used to visualize the system's current
state as a graph. The graph includes all processes, the resources that are assigned to
them, as well as the resources that each process requests. Sometimes, if there are fewer
processes, we can quickly spot a deadlock in the system by looking at the graph rather
than the tables we use in the Banker's algorithm.
Banker's Algorithm
Banker's Algorithm is a resource allocation and deadlock avoidance algorithm
that tests every request made by a process for resources. It checks for the safe state: if
granting a request leaves the system in a safe state, the request is allowed; if no safe
state would result, the request is not allowed.
3) Deadlock ignorance:
If a deadlock is very rare, then let it happen and reboot the system. This is the
approach that both Windows and UNIX take. We use the ostrich algorithm for deadlock
ignorance.
With deadlock ignorance, performance is better than with the above two methods, but
correctness of data is not guaranteed.
SAFE STATE
A safe state can be defined as a state in which there is no deadlock. It is achievable
if:
➢ If a process needs an unavailable resource, it may wait until the same has been
released by a process to which it has already been allocated. If such a sequence
does not exist, it is an unsafe state.
➢ All the requested resources are allocated to the process.
4. BANKER'S ALGORITHM
The Banker's Algorithm is used to avoid deadlock and to allocate resources safely
to each process in the computer system. The 'S-State' (safe state) examines all possible
tests or activities before deciding whether an allocation should be allowed for each
process. It also helps the operating system to share the resources successfully between
all the processes.
The algorithm is called the banker's algorithm because it checks whether a loan
can safely be sanctioned to a person, in the way a bank safely simulates the allocation
of its resources.
Suppose the number of account holders in a particular bank is 'n', and the total
money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts
the loan amount from the total cash and approves the loan only if the remaining cash is
still enough to satisfy the needs of its other customers. This precaution is taken so that,
if another person applies for a loan or withdraws some amount from the bank, the bank
can still manage and operate everything without any restriction in the functionality of
the banking system.
Similarly, it works in an operating system. When a new process is created in a
computer system, the process must provide all types of information to the operating
system like upcoming processes, requests for their resources, counting them, and delays.
Based on these criteria, the operating system decides which process sequence
should be executed or waited so that no deadlock occurs in a system. Therefore, it is also
known as deadlock avoidance algorithm or deadlock detection in the operating
system.
When working with the banker's algorithm, it needs to know three things:
1. How much each process can request for each resource in the system. It is denoted by
the [MAX] request.
2. How much each process is currently holding each resource in a system. It is denoted
by the [ALLOCATED] resource.
3. It represents the number of each resource currently available in the system. It is
denoted by the [AVAILABLE] resource.
The following important data structures are used in the banker's algorithm.
Suppose n is the number of processes, and m is the number of resource types used in
the computer system.
1. Available: A vector of length 'm' that defines the number of available resources
of each type. Available[j] = K means that 'K' instances of resource type R[j] are
available in the system.
2. Max: An n x m matrix that defines the maximum demand of each process.
Max[i][j] = K means that process P[i] may request at most K instances of
resource type R[j].
3. Allocation: An n x m matrix that indicates the number of resources of each type
currently allocated to each process. Allocation[i][j] = K means that process P[i]
is currently allocated K instances of resource type R[j].
4. Need: An n x m matrix that represents the number of remaining resources
needed by each process. Need[i][j] = K means that process P[i] may require K
more instances of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j].
5. Finish: A vector of length n. It holds a Boolean value (true/false) indicating
whether the process has been allocated its requested resources and has released
all of them after finishing its task.
The Banker's Algorithm is the combination of the safety algorithm and the resource
request algorithm to control the processes and avoid deadlock.
Safety Algorithm
It is used to check whether or not the system is in a safe state and to find a safe
sequence in the banker's algorithm:
Step 1:
There are two vectors, Work of length m and Finish of length n.
Initialize: Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.
Step 2:
Find an index i such that both:
Need[i] <= Work
Finish[i] == false
If no such i exists, go to Step 4.
Step 3:
Work = Work + Allocation[i] // reclaim the resources of the process that can finish
Finish[i] = true
Go to Step 2 to check the status of resource availability for the next process.
Step 4:
If Finish[i] == true for all i, it means that the system is safe.
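A direct C translation of these four steps (a sketch; the values of N, M and the matrices are assumed to be filled in elsewhere):

#include <stdbool.h>

#define N 5  /* number of processes */
#define M 3  /* number of resource types */

int available[M];
int allocation[N][M];
int need[N][M];

bool is_safe(void) {
    int  work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)                /* Step 1: Work = Available */
        work[j] = available[j];

    for (;;) {                                 /* Step 2: find i with Need[i] <= Work and Finish[i] == false */
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                        /* Step 3: reclaim the process's allocation */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
            }
        }
        if (!found) break;                     /* no such i exists: go to Step 4 */
    }
    for (int i = 0; i < N; i++)                /* Step 4: safe iff every process can finish */
        if (!finish[i]) return false;
    return true;
}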
6. Resource Request Algorithm
Let us create a resource request array Request[i] for each process P[i].
Step 1:
When the number of requested resources of each type is less than or equal to the
Need resources, go to Step 2; if the condition fails, it means that the process P[i]
exceeds its maximum claim for the resource. As the expression suggests:
If Request[i] <= Need[i], then go to Step 2; else raise an error.
Step 2:
When the number of requested resources of each type is less than or equal to the
available resources, go to Step 3. As the expression suggests:
If Request[i] <= Available, then go to Step 3;
else process P[i] must wait for the resources.
Step 3:
Pretend the requested resources are allocated to the process by changing the state:
Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
If the resulting resource-allocation state is safe, the resources are allocated to
process P[i]. If the new state is unsafe, process P[i] must wait for Request[i], and the
old resource-allocation state is restored.
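A matching C sketch of the resource-request steps, reusing the arrays and is_safe() from the safety-algorithm sketch above:

bool request_resources(int i, const int request[M]) {
    for (int j = 0; j < M; j++)                      /* Step 1: Request <= Need?      */
        if (request[j] > need[i][j]) return false;   /* error: exceeds maximum claim  */
    for (int j = 0; j < M; j++)                      /* Step 2: Request <= Available? */
        if (request[j] > available[j]) return false; /* P[i] must wait                */
    for (int j = 0; j < M; j++) {                    /* Step 3: pretend to allocate   */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe()) return true;                      /* new state is safe: grant      */
    for (int j = 0; j < M; j++) {                    /* unsafe: roll back, P[i] waits */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;
}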
Example:
Consider a system that contains five processes P1, P2, P3, P4, P5 and the three
resource types A, B and C. Resource type A has 10 instances, B has 5 instances, and C
has 7 instances. The snapshot of the system is:

Process   Allocation (A B C)   Max (A B C)
P1        0 1 0                7 5 3
P2        2 0 0                3 2 2
P3        3 0 2                9 0 2
P4        2 1 1                2 2 2
P5        0 0 2                4 3 3

Available = (3, 3, 2)

Answer the following questions using the banker's algorithm:
1. What is the content of the need matrix?
2. Determine whether the system is safe or not.
3. If process P2 makes the resource request (1, 0, 2), can the system accept this
request immediately?
4. What will happen if process P5 makes the resource request (3, 3, 0)?
5. What will happen if process P1 makes the resource request (0, 2, 0)?
Ans. 1:
The content of the need matrix is given by Need[i] = Max[i] - Allocation[i]:
Need for P1: (7, 5, 3) - (0, 1, 0) = 7, 4, 3
Need for P2: (3, 2, 2) - (2, 0, 0) = 1, 2, 2
Need for P3: (9, 0, 2) - (3, 0, 2) = 6, 0, 0
Need for P4: (2, 2, 2) - (2, 1, 1) = 0, 1, 1
Need for P5: (4, 3, 3) - (0, 0, 2) = 4, 3, 1

Process   Need (A B C)
P1        7 4 3
P2        1 2 2
P3        6 0 0
P4        0 1 1
P5        4 3 1
Ans.2: Apply the Banker's Algorithm:
Available Resources of A, B and C are 3, 3, and 2.
Now we check whether the need of each process can be met with the available resources.
Step 1:
For Process P1:
Need <= Available
7, 4, 3 <= 3, 3, 2 condition is false.
So, we examine another process, P2.
Step 2:
For Process P2:
Need <= Available
1, 2, 2 <= 3, 3, 2 condition true
New available = available + Allocation
(3, 3, 2) + (2, 0, 0) => 5, 3, 2
Similarly, we examine another process P3.
Step 3:
For Process P3:
P3 Need <= Available
6, 0, 0 < = 5, 3, 2 condition is false.
Similarly, we examine another process, P4.
Step 4:
For Process P4:
P4 Need <= Available
0, 1, 1 <= 5, 3, 2 condition is true
New Available resource = Available + Allocation
5, 3, 2 + 2, 1, 1 => 7, 4, 3
Similarly, we examine another process P5.
Step 5:
For Process P5:
P5 Need <= Available
4, 3, 1 <= 7, 4, 3 condition is true
New available resource = Available + Allocation
7, 4, 3 + 0, 0, 2 => 7, 4, 5
Now, we again examine the needs of processes P1 and P3.
Step 6:
For Process P1:
P1 Need <= Available
7, 4, 3 <= 7, 4, 5 condition is true
New Available Resource = Available + Allocation
7, 4, 5 + 0, 1, 0 => 7, 5, 5
Next, we examine process P3.
Step 7:
For Process P3:
P3 Need <= Available
6, 0, 0 <= 7, 5, 5 condition is true
New Available Resource = Available + Allocation
7, 5, 5 + 3, 0, 2 => 10, 5, 7
Hence, by executing the banker's algorithm we find that the system is in a safe state,
with the safe sequence P2, P4, P5, P1, P3.
Ans. 3:
For granting the Request (1, 0, 2), first we have to check that Request <= Available,
that is, (1, 0, 2) <= (3, 3, 2). Since the condition is true, process P2 may get the request
immediately.
The new Allocation for P2 is (3, 0, 2) and the new Available is (2, 3, 0).
The content of the need matrix is now: Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = 7, 4, 3
Need for P2: (3, 2, 2) - (3, 0, 2) = 0, 2, 0
Need for P3: (9, 0, 2) - (3, 0, 2) = 6, 0, 0
Need for P4: (2, 2, 2) - (2, 1, 1) = 0, 1, 1
Need for P5: (4, 3, 3) - (0, 0, 2) = 4, 3, 1

Process   Need (A B C)
P1        7 4 3
P2        0 2 0
P3        6 0 0
P4        0 1 1
P5        4 3 1

Apply the Banker's Algorithm:
Available resources of A, B and C are 2, 3, and 0.
Now we check whether the need of each process can be met with the available resources.
Step 1:
For Process P1:
Need <= Available
7, 4, 3 <= 2, 3, 0 condition is false.
So, we examine another process, P2.
Step 2:
For Process P2:
Need <= Available
0, 2, 0 <= 2, 3, 0 condition is true
New available = available + Allocation
(2, 3, 0) + (3, 0, 2) => 5, 3, 2
Similarly, we examine another process P3.
Step 3:
For Process P3:
P3 Need <= Available
6, 0, 0 < = 5, 3, 2 condition is false.
Similarly, we examine another process, P4.
Step 4:
For Process P4:
P4 Need <= Available
0, 1, 1 <= 5, 3, 2 condition is true
New Available resource = Available + Allocation
5, 3, 2 + 2, 1, 1 => 7, 4, 3
Similarly, we examine another process P5.
Step 5:
For Process P5:
P5 Need <= Available
4, 3, 1 <= 7, 4, 3 condition is true
New available resource = Available + Allocation
7, 4, 3 + 0, 0, 2 => 7, 4, 5
Now, we again examine for processes P1 and P3.
Step 6:
For Process P1:
P1 Need <= Available
7, 4, 3 <= 7, 4, 5 condition is true
New Available Resource = Available + Allocation
7, 4, 5 + 0, 1, 0 => 7, 5, 5
Next, we examine process P3.
Step 7:
For Process P3:
P3 Need <= Available
6, 0, 0 <= 7, 5, 5 condition is true
New Available Resource = Available + Allocation
7, 5, 5 + 3, 0, 2 => 10, 5, 7
Hence, the request by P2 is granted immediately, and the safe sequence is P2, P4, P5, P1 and P3.
Ans. 4:
For granting the Request (3, 3, 0) by P5, first we have to check that Request <=
Available, that is, (3, 3, 0) <= (2, 3, 0). Since the condition is false, the request for
(3, 3, 0) by process P5 cannot be granted.
Ans. 5:
For granting the Request (0, 2, 0) by P1, first we have to check that Request <=
Available, that is, (0, 2, 0) <= (2, 3, 0). Since the condition is true, the request for
(0, 2, 0) by process P1 may be granted provisionally.
The new Allocation for P1 is (0, 3, 0).
The content of the need matrix is now: Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 3, 0) = 7, 2, 3

Process   Need (A B C)
P1        7 2 3
8. DEADLOCK DETECTION
If a system does not employ either a deadlock prevention or deadlock avoidance algorithm
then a deadlock situation may occur. In this case-
➢ Apply an algorithm to examine the system's state to determine whether deadlock has
occurred.
➢ Apply an algorithm to recover from the deadlock.
1. In this, Work = [0, 0, 0] & Finish = [false, false, false, false, false]
10. i=4 is selected as both Finish[4] = false and [0, 0, 2]<=[7, 2, 4].
11. Work =[7, 2, 4]+[0, 0, 2] =>[7, 2, 6] & Finish = [true, true, true, true, true].
12. Since Finish is a vector of all true it means there is no deadlock in this example.
There are several algorithms for detecting deadlocks in an operating system, including:
1. Wait-For Graph:
A graphical representation of the system's processes and resources. A directed edge is
created from a process to a resource if the process is waiting for that resource. A cycle in the
graph indicates a deadlock.
2. Banker’s Algorithm:
A resource allocation algorithm that ensures that the system is always in a safe state,
where deadlocks cannot occur.
5. Timestamping:
Each process is assigned a timestamp, and the system checks to see if any process is waiting
for a resource that is held by a process with a lower timestamp.
These algorithms are used in different operating systems and systems with different
resource allocation and synchronization requirements. The choice of algorithm depends on
the specific requirements of the system and the trade-offs between performance, complexity
and accuracy.
9. RECOVERY FROM DEADLOCK
The OS will use various recovery techniques to restore the system if it encounters any
deadlocks. When a Deadlock Detection Algorithm determines that a deadlock has occurred in
the system, the system must recover from that deadlock.
UNIT 3
Process synchronization problem arises in the case of Cooperative processes also because
resources are shared in Cooperative processes.
2. Race Condition
A race condition occurs when there are many processes, every process shares data
with the others and accesses it concurrently, and the output of the execution depends on
the particular sequence in which they share and access the data. (OR)
When more than one process executes the same code or accesses the same
memory or shared variable, there is a possibility that the output or the value of the shared
variable is wrong; all of the processes race to claim that their output is correct. This
condition is known as a race condition.
Several processes access and manipulate the same data concurrently, and the
outcome depends on the particular order in which the accesses take place.
Example:
Let's say there are two processes P1 and P2 which share a common variable
(shared = 10); both processes are present in the ready queue and waiting for their turn to
execute.

Process 1:                 Process 2:
int X = shared;            int Y = shared;
X++;                       Y--;
sleep(1);                  sleep(1);
shared = X;                shared = Y;
Suppose process P1 executes first; it initializes X = 10 and increments it by 1
(i.e. X = 11). Then, when the CPU reaches the line sleep(1), it switches from the current
process P1 to process P2 in the ready queue. Process P1 goes into the waiting state for
1 second.
Now the CPU executes process P2, which initializes Y = 10 and decrements Y by
1 (i.e. Y = 9). Then, when the CPU reaches sleep(1), the current process P2 goes into
the waiting state, and the CPU remains idle for some time as there is no process in the
ready queue.
After process P1 completes its 1 second of sleep and re-enters the ready queue,
the CPU executes its remaining line of code, and shared = 11.
After process P2 completes its 1 second of sleep and re-enters the ready queue,
the CPU executes its remaining line of code, and shared = 9.
Note:
We expect the final value of the common variable (shared) after the execution of
P1 and P2 to be 10 (as P1 increments the variable by 1 and P2 decrements it by 1, so it
should end up as shared = 10). But we get an undesired value due to the lack of proper
synchronization.
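The race can be reproduced with two POSIX threads in C (compile with -pthread); the sleep() widens the window exactly as in the example above:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;

void *increment(void *arg) {
    int x = shared;    /* read                                  */
    x++;               /* modify                                */
    sleep(1);          /* lose the CPU, as in the example above */
    shared = x;        /* write back a value that is now stale  */
    return NULL;
}

void *decrement(void *arg) {
    int y = shared;
    y--;
    sleep(1);
    shared = y;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 10, but prints 9 or 11 depending on which write lands last */
    printf("shared = %d\n", shared);
    return 0;
}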
4. PETERSON’S SOLUTION
Peterson's Solution is a classical software-based solution to the critical section problem. In
Peterson's solution, we have two shared variables:
➢ boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical
section
➢ int turn: The process whose turn is to enter the critical section.
In the solution, i represents the Producer and j represents the Consumer. Initially, the
flags are false. When a process wants to execute its critical section, it sets its flag to true and
sets turn to the index of the other process. This means that the process wants to execute, but it
will allow the other process to run first. The process busy-waits until the other process has
finished its own critical section. After this, the current process enters its critical section and
adds or removes a random number from the shared buffer. After completing the critical
section, it sets its own flag to false, indicating that it does not wish to execute any more.
5. SEMAPHORES
A semaphore is a synchronization tool given as a solution to the critical section problem; its two operations must execute atomically, which usually requires hardware support. The semaphore is just a normal integer. The semaphore cannot be negative: the least value for a semaphore is zero (0), while the maximum value can be anything. Semaphores have two operations, which together decide the value of the semaphore.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
At any instant, the current value of empty represents the number of empty slots in the buffer
and full represents the number of occupied slots in the buffer.
Producer:
do {
    wait(empty);    // wait until there is at least one empty slot
    wait(mutex);    // acquire the lock on the buffer
    /* add the next item to the buffer */
    signal(mutex);  // release the lock
    signal(full);   // one more occupied slot
} while(TRUE);
➢ Looking at the above code for a producer, we can see that a producer first waits until there is at least one empty slot.
➢ Then it decrements the empty semaphore, because there will now be one less empty slot, since the producer is going to insert data into one of those slots.
➢ Then it acquires a lock on the buffer, so that the consumer cannot access the buffer until the producer completes its operation.
➢ After performing the insert operation, the lock is released and the value of full is incremented, because the producer has just filled a slot in the buffer.
Consumer:
do {
    wait(full);     // wait until there is at least one occupied slot
    wait(mutex);    // acquire the lock on the buffer
    /* remove an item from the buffer */
    signal(mutex);  // release the lock
    signal(empty);  // one more empty slot
} while(TRUE);
➢ The consumer waits until there is at least one full slot in the buffer.
➢ Then it decrements the full semaphore because the number of occupied slots will be
decreased by one, after the consumer completes its operation.
➢ After that, the consumer acquires lock on the buffer.
➢ Following that, the consumer completes the removal operation so that the data from one of
the full slots is removed.
➢ Then, the consumer releases the lock.
➢ Finally, the empty semaphore is incremented by 1, because the consumer has just removed
data from an occupied slot, thus making it empty.
Dining-Philosophers Problem
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two neighbouring philosophers, but not by both at once. The problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
The problem was designed to illustrate the challenges of avoiding deadlock; a deadlocked state of a system is one in which no progress is possible. Consider a proposal where each philosopher is instructed to behave as follows:
➢ Think until the left fork is available; when it is, pick it up.
➢ Think until the right fork is available; when it is, pick it up.
➢ Eat when both forks are held.
➢ Then put the right fork down first, put the left fork down next, and repeat from the beginning.
The drawbacks of the above solution of the dining philosophers problem:
➢ No two neighbouring philosophers can eat at the same point in time.
➢ It can lead to a deadlock condition. This situation happens if all the philosophers pick up their left chopstick at the same time, which leads to deadlock, and none of the philosophers can eat.
6. READERS-WRITERS PROBLEM
Several readers may access the shared data at the same time, but a writer requires exclusive access:

Case 1   Writing - Writing   Not allowed
Case 2   Writing - Reading   Not allowed
Case 3   Reading - Writing   Not allowed
Case 4   Reading - Reading   Allowed
Reader process
➢ The reader requests entry to the critical section.
➢ If allowed:
❖ it increments the count of readers inside the critical section. If this reader is the first one entering, it locks the wrt semaphore to restrict the entry of writers while any reader is inside.
❖ It then signals mutex, as any other reader is allowed to enter while others are already reading.
❖ After performing the reading, it exits the critical section. When exiting, it checks whether any reader is still inside; if not, it signals the semaphore wrt, as a writer can now enter the critical section.
➢ If not allowed, it keeps on waiting.
do {
    wait(mutex);               // protect readcount
    readcount++;               // one more reader inside
    if (readcount == 1)
        wait(wrt);             // the first reader locks out writers
    signal(mutex);

    /* ...perform READING... */

    wait(mutex);
    readcount--;               // one reader leaving
    if (readcount == 0)
        signal(wrt);           // the last reader lets writers in
    signal(mutex);
} while(true);
Writer process
1. The writer requests entry to the critical section.
2. If allowed, i.e. wait(wrt) succeeds, it enters and performs the write. If not allowed, it keeps on waiting.
3. It exits the critical section.
do {
    wait(wrt);                 // request exclusive access

    /* ...perform WRITING... */

    signal(wrt);               // release exclusive access
} while(true);
Thus, the semaphore wrt is waited on by both readers and writers, in a manner that gives preference to readers when writers are also present: no reader waits simply because a writer has requested entry to the critical section.
7. MONITOR
A monitor is a synchronization construct that gives threads mutual exclusion and the ability to wait() for a given condition to become true. It is an abstract data type: it has shared variables and a collection of procedures that operate on those variables. A process may not access the shared data variables directly; it must use the monitor's procedures, which ensure that the accesses of several processes are properly coordinated.
At any particular time, only one process may be active inside a monitor. Other processes that require access to the shared variables must queue, and are granted access only after the previous process releases the shared variables.
Syntax:
monitor monitor_name
{
    // shared variable declarations
    data variables;

    procedure P1() { ... }
    procedure P2() { ... }
    .
    .
    .
    procedure Pn() { ... }

    initialization code() { ... }
}
Advantages
➢ Mutual exclusion is automatic in monitors.
➢ Monitors are less difficult to implement than semaphores.
➢ Monitors may overcome the timing errors that occur when semaphores are used.
➢ Monitors are a collection of procedures and condition variables that are combined in a
special type of module.
Disadvantages
➢ Monitors must be implemented within the programming language.
➢ The compiler must generate code for them.
➢ This gives the compiler the additional burden of knowing which operating system features are available for controlling access to critical sections in concurrent processes.
Comparison between the Semaphore and Monitor

Features   Semaphore                                  Monitor
Action     The semaphore's value shows the number     The monitor type includes shared
           of shared resources available in the       variables as well as a set of
           system.                                    procedures that operate on them.
MAIN MEMORY
Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions.
Main memory is a repository of rapidly available information shared by the CPU and I/O
devices.
Main memory is the place where programs and data are kept while the processor is actively using them.
Main memory is closely coupled to the processor, so moving instructions and data into and out of the processor is extremely fast.
Main memory is also known as RAM (Random Access Memory). This memory is volatile.
RAM loses its data when a power interruption occurs.
Memory Management
In a multiprogramming computer, the Operating System resides in a part of memory, and
the rest is used by multiple processes.
The task of subdividing the memory among different processes is called Memory
Management.
Memory management is a method in the operating system to manage operations between
main memory and disk during process execution.
The main aim of memory management is to achieve efficient utilization of memory.
SWAPPING
When a process is to be executed, it must reside in main memory. Swapping is the act of temporarily moving a process out of main memory into secondary memory (main memory being fast compared to secondary memory). Swapping allows more processes to be run than can fit into memory at one time.
The main cost of swapping is transfer time, and the total transfer time is directly proportional to the amount of memory swapped.
Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and its execution continues.
Advantages
➢ If main memory is low, some processes may have to wait a long time; with swapping, a process does not have to wait as long for execution on the CPU.
➢ It improves the utilization of main memory.
➢ Using only a single main memory, multiple processes can be run by the CPU using a swap partition.
➢ The concept of virtual memory starts from here, and swapping puts main memory to better use.
➢ The concept is useful in priority-based scheduling, to optimize the swapping process.
Disadvantages
➢ If main memory is low and the user is executing too many processes, and the system power suddenly goes off, the data of the processes that took part in swapping may be erased.
➢ The number of page faults may increase.
➢ Processing performance may be lowered.
Example:
Suppose the user process's size is 2048 KB, and swapping is done to a standard hard disk with a data transfer rate of 1 MBps. Calculate how long it will take to transfer the process from main memory to secondary memory.
User process size = 2048 KB
Data transfer rate = 1 MBps = 1024 KBps
Time = process size / transfer rate
     = 2048 / 1024 = 2 seconds = 2000 milliseconds
Taking both swap-in and swap-out into account, the process will take 4000 ms, or 4 seconds.
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process.
Thus, the degree of multiprogramming is determined by the number of partitions.
Multiple partition allocation
A process is selected from the input queue and loaded into the free partition. When the
process terminates, the partition becomes available for other processes.
While allocating memory, the dynamic storage allocation problem arises: how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:
First Fit
In First Fit, the first free hole that is large enough is allocated to the process.
Here, in this diagram, a 40 KB memory block is the first available free hole that can
store process A (size of 25 KB), because the first two blocks did not have sufficient memory
space.
Best Fit
In Best Fit, allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.
Here, in this example, we first traverse the complete list and find that the last hole, of 25 KB, is the best suitable hole for process A (size 25 KB).
In this method, memory utilization is maximum compared to the other memory allocation techniques.
Worst Fit
In Worst Fit, allocate the largest available hole to the process. This method produces the largest leftover hole.
Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is the major issue with worst fit.
2. FRAGMENTATION
Fragmentation occurs when processes are loaded into and removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes, because they are not combined or do not fulfil the memory requirement of the process.
To achieve a good degree of multiprogramming, we must reduce this waste of memory, i.e. the fragmentation problem. Operating systems exhibit two types of fragmentation:
Internal fragmentation
Internal fragmentation occurs when memory blocks are allocated to the process more
than their requested size. Due to this some unused space is left over and creating an
internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation, with blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB comes and demands a block of memory. It gets a memory block of 3 MB, but 1 MB of that block is wasted, and it cannot be allocated to any other process. This is called internal fragmentation.
External fragmentation
In External Fragmentation, we have a free memory block, but we can not assign it to a
process because blocks are not contiguous.
Example: Suppose (continuing the above example) three processes p1, p2, and p3 come with sizes 2 MB, 4 MB, and 7 MB respectively. They get memory blocks of 3 MB, 6 MB, and 7 MB respectively. After the allocations, 1 MB is left over in p1's block and 2 MB in p2's block. Suppose a new process p4 comes and demands a 3 MB block of memory; that much memory is free in total, but we cannot assign it because the free memory space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit systems for memory allocation are affected by external
fragmentation.
To overcome the external fragmentation problem Compaction is used. In the compaction
technique, all free memory space combines and makes one large block. So, this space can be
used by other processes effectively.
Another possible solution to the external fragmentation is to allow the logical address
space of the processes to be non-contiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
3. PAGING
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a process
to be non-contiguous.
The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.
➢ The Physical Address Space is conceptually divided into several fixed-size blocks, called
frames.
➢ The Logical Address Space is also split into fixed-size blocks, called pages. Page Size =
Frame Size
The address generated by the CPU is divided into:
Page Number(p)
Number of bits required to represent the pages in Logical Address Space or Page
number
Page Offset(d)
Number of bits required to represent a particular word in a page or page size of
Logical Address Space or word number of a page or page offset.
Frame Number(f)
Number of bits required to represent the frames in Physical Address Space, or frame number.
Frame Offset(d)
Number of bits required to represent a particular word in a frame or frame size of
Physical Address Space or word number of a frame or frame offset.
When a logical address is generated by the CPU, its page number is presented to the
TLB. If the page number is found, its frame number is immediately available and is used to
access memory.
If the page number is not in the TLB (known as a TLB miss), a memory reference to
the page table must be made. When the frame number is obtained, we can use it to access
memory. In addition, we add the page number and frame number to the TLB, so that they will
be found quickly on the next reference. If the TLB is already full of entries, the operating
system must select one for replacement. Replacement policies range from least recently used
(LRU) to random. Furthermore, some TLBs allow entries to be wired down, meaning that
they cannot be removed from the TLB. Typically, TLB entries for kernel code are often wired
down.
The percentage of times that a particular page number is found in the TLB is called
the hit ratio. An 80-percent hit ratio means that we find the desired page number in the TLB
80 percent of the time. If it takes 20 nanoseconds to search the TLB, and 100 nanoseconds to
access memory, then a mapped memory access takes 120 nanoseconds when the page number
is in the TLB. If we fail to find the page number in the TLB (20 nanoseconds), then we must
first access memory for the page table and frame number (100 nanoseconds), and then access
the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds.
To find the effective memory-access time, we must weigh each case by its probability:
(Where P is Hit ratio)
EAT(effective access time) = P x hit memory time + (1-P) x miss memory time.
= 0.80 x 120 + 0.20 x 220
= 140 nanoseconds.
In this example, we suffer a 40-percent slowdown in memory access time (from 100 to
140 ns).
For a 98-percent hit ratio, we have
EAT(effective access time)= P x hit memory time + (1-P) x miss memory time.
= 0.98 x 120 + 0.02 x 220
= 122 nanoseconds.
This increased hit rate produces only a 22-percent slowdown in access time.
Example:
What will be the EAT if hit ratio is 70%, time for TLB is 30ns and access to main memory is
90ns?
P = 70% = 70/100 = 0.7
Hit memory time = 30ns + 90ns = 120ns
Miss memory time = 30ns + 90ns + 90ns = 210ns
Therefore,
EAT = P x Hit + (1-P) x Miss
    = 0.7 x 120 + 0.3 x 210
    = 84 + 63
    = 147 ns
5. SEGMENTATION
A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not provide.
Here the user’s view is mapped to physical memory. There is no simple relationship between
logical addresses and physical addresses in segmentation.
A table called the Segment Table stores the information about all such segments. It maps a two-dimensional logical address into a one-dimensional physical address. Each of its entries has:
Base Address:
It contains the starting physical address where the segment resides in memory.
Segment Limit:
Also known as segment offset. It specifies the length of the segment.
The address generated by the CPU is divided into:
Segment number (s):
Number of bits required to represent the segment.
Segment offset (d):
Number of bits required to represent the size of the segment.
The Segment number is mapped to the segment table. The limit of the
respective segment is compared with the offset. If the offset is less than the limit then the
address is valid otherwise it throws an error as the address is invalid. In the case of valid
addresses, the base address of the segment is added to the offset to get the physical
address of the actual word in the main memory.
Example of Segmentation
Let us assume we have five segments namely: Segment-0, Segment-1, Segment-
2, Segment-3, and Segment-4. Initially, before the execution of the process, all the
segments of the process are stored in the physical memory space. We have a segment
table as well. The segment table contains the beginning entry address of each segment
(denoted by base). The segment table also contains the length of each of the segments
(denoted by limit).
As shown in the image below, the base address of Segment-0 is 1400 and its
length is 1000, the base address of Segment-1 is 6300 and its length is 400, the base
address of Segment-2 is 4300 and its length is 400, and so on.
The pictorial representation of the above segmentation with its segment table is shown
below.
6. SEGMENTATION WITH PAGING
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments which are further
divided into fixed size pages.
➢ Pages are smaller than segments.
➢ Each Segment has a page table which means every program has multiple page tables.
➢ The logical address is represented as Segment Number, Page Number, and Page Offset.
Each page table contains the information about every page of its segment, while the segment table contains the information about every segment. Each segment table entry points to a page table, and every page table entry is mapped to one of the pages within the segment.
Translation of logical address to physical address
The CPU generates a logical address which is divided into two parts: Segment
Number and Segment Offset. The Segment Offset must be less than the segment limit.
Offset is further divided into Page number and Page Offset. To map the exact page
number in the page table, the page number is added into the page table base.
The actual frame number with the page offset is mapped to the main memory to get the
desired word in the page of the certain segment of the process.
Advantages of Segmented Paging
➢ It reduces memory usage.
➢ Page table size is limited by the segment size.
➢ Segment table has only one entry corresponding to one actual segment.
➢ External Fragmentation is not there.
➢ It simplifies memory allocation.
7. DEMAND PAGING
Demand paging can be described as a memory management technique that is used
in operating systems to improve memory usage and system performance. Demand
paging is a technique used in virtual memory systems where pages enter main memory
only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program into memory at
the start.
A page fault occurs when the program needs to access a page that is not currently in memory. The operating system then loads the required page from the disk into memory and updates the page tables accordingly. This process is transparent to the running program, which continues to run as if the page had always been in memory.
8. VIRTUAL MEMORY
Virtual Memory is a storage allocation scheme in which secondary memory can
be addressed as though it were part of the main memory. The addresses a program may
use to reference memory are distinguished from the addresses the memory system uses to
identify physical storage sites and program-generated addresses are translated
automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer
system and the amount of secondary memory available not by the actual number of main
storage locations.
It is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical addresses in
computer memory.
All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of the main memory such that it occupies different places in the main memory
at different times during the course of execution.
A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution. The combination of dynamic
run-time address translation and the use of a page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution. This means that the required
pages need to be loaded into memory whenever required. Virtual memory is implemented
using Demand Paging or Demand Segmentation.
➢ Efficient use of physical memory: Demand paging allows for more efficient use of memory, because only the necessary pages are loaded into memory at any given time.
➢ Support for larger programs: Programs can be larger than the physical memory available on the system, because only the necessary pages will be loaded into memory.
➢ Faster program start: Because only part of a program is initially loaded into memory, programs can start faster than if the entire program were loaded at once.
➢ Reduced memory usage: Demand paging can help reduce the amount of memory a program needs, which can improve system performance by reducing the amount of disk I/O required.
A page fault happens when you access a page that has been marked as invalid. The paging hardware notices that the invalid bit is set while translating the address through the page table, which causes a trap to the operating system. The trap results primarily from the OS not yet having loaded the needed page into memory.
Page Miss
If the needed page is not present in main memory (RAM), this is known as a "Page Miss" or "Page Fault".
Example (FIFO): consider the reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults. When 3 comes, it is already in memory, so no page fault. Then 5 comes; it is not in memory, so it replaces the oldest page, 1 -> page fault. When 6 comes, it is also not in memory, so it replaces the oldest page, 3 -> page fault. Finally, when 3 comes, it is not in memory, so it replaces 0 -> page fault.
Example (LRU): consider the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults. 0 is already there -> no page fault. When 3 comes, it takes the place of 7, because 7 is least recently used -> page fault. 0 is already in memory -> no page fault. 4 takes the place of 1 -> page fault. For the remainder of the reference string there are no page faults, because those pages are already in memory.
Belady’s Anomaly
Generally, on increasing the number of frames allocated to a process's virtual memory, its execution becomes faster, as fewer page faults occur. Sometimes the reverse happens, i.e. more page faults occur when more frames are allocated to a process. This most unexpected result is termed Belady's Anomaly.
Belady’s Anomaly is the name given to the phenomenon where increasing the
number of page frames results in an increase in the number of page faults for a given
memory access pattern.
Question 1:
Consider the following page reference string:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
Calculate the number of page faults related to LRU, FIFO and optimal page replacement
algorithms. Assume 5 page frames and all frames are initially empty.
In LRU:
In FIFO:
1. FILE
A file can be defined as a data structure which stores a sequence of records. Files are stored in a file system, which may exist on a disk or in main memory. Files can be simple (plain text) or complex (specially formatted).
A collection of files is known as a directory, and the collection of directories at the different levels is known as the file system.
2. Identifier
Along with the name, each file has its own extension, which identifies the type of the file. For example, a text file has the extension .txt, and a video file can have the extension .mp4.
3. Type
In a File System, the Files are classified in different types such as video files, audio files,
text files, executable files, etc.
4. Location
In the File System, there are several locations on which, the files can be stored. Each file
carries its location as its attribute.
5. Size
The size of the file is one of its most important attributes. By the size of the file, we mean the number of bytes occupied by the file in memory.
6. Protection
The admin of the computer may want different protections for different files. Therefore, each file carries its own set of permissions for the different groups of users.
7. Time and Date
Every file carries a time stamp which contains the time and date on which the file is last
modified.
1. Create
This operation is used to create a file in the file system. It is the most widely used
operation performed on the file system. To create a new file of a particular type the associated
application program calls the file system. This file system allocates space to the file. As the
file system knows the format of directory structure, so entry of this new file is made into the
appropriate directory.
2. Open
This operation is the common operation performed on the file. Once the file is
created, it must be opened before performing the file processing operations. When the user
wants to open a file, it provides a file name to open the particular file in the file system. It
tells the operating system to invoke the open system call and passes the file name to the file
system.
3. Write
This operation is used to write information into a file. A write system call is issued that specifies the name of the file and the length of the data to be written to the file. The file length is increased by the specified value, and the file pointer is repositioned after the last byte written.
4. Read
This operation reads the contents from a file. A Read pointer is maintained by the OS,
pointing to the position up to which the data has been read.
5. Re-position or Seek
The seek system call re-positions the file pointers from the current position to a
specific place in the file i.e. forward or backward depending upon the user's requirement.
This operation is generally performed with those file management systems that support direct
access files.
6. Delete
Deleting the file not only deletes all the data stored inside the file; it also frees the disk space occupied by it. In order to delete the specified file, the directory is searched. When the directory entry is located, all the associated file space and the directory entry are released.
7. Truncate
Truncating deletes the contents of a file without deleting its attributes. The file is not completely deleted, although the information stored inside the file is removed.
8. Close
When the processing of the file is complete, it should be closed, so that all the changes made become permanent and all the resources occupied are released. On closing, the OS deallocates all the internal descriptors that were created when the file was opened.
9. Append
This operation adds data to the end of the file.
10. Rename
This operation is used to rename the existing file.
File Type       Usual extension     Function
Archive         arc, zip, tar       Related files grouped into one compressed file
Print or View   gif, pdf, jpg       Format for printing or viewing an ASCII or binary file
1. Sequential Access
In the sequential access method, the operating system reads the file word by word. A pointer is maintained which initially points to the file's base address. If the user wishes to read the first word of the file, the pointer provides it and advances to the next word. This procedure continues until the end of the file is reached. It is the most basic method of file access.
The data in the file is evaluated in the order in which it appears in the file, which is why it is easy and simple to access a file's data using the sequential access mechanism. For example, editors and compilers frequently use this method to check the validity of code.
Advantages
➢ The sequential access mechanism is very easy to implement.
➢ It uses lexicographic order to enable quick access to the next entry.
Disadvantages
➢ Sequential access will become slow if the next file record to be retrieved is not present
next to the currently pointed record.
➢ Adding a new record may need relocating a significant number of records of the file.
Direct Access
Advantages
➢ The files can be retrieved right away with direct access mechanism, reducing the average
access time of a file.
➢ There is no need to traverse all of the blocks that come before the required block to
access the record.
Disadvantages
➢ The direct access mechanism is typically difficult to implement due to its complexity.
➢ Organizations can face security issues as a result of direct access as the users may
access/modify the sensitive information. As a result, additional security processes must
be put in place.
Indexed Sequential Access
Advantages
➢ If the index table is appropriately arranged, it accesses the records very quickly.
➢ Records can be added at any position in the file quickly.
DIRECTORY STRUCTURE
A directory can be defined as a listing of the related files on the disk. The directory may store some or all of the file attributes.
To get the benefit of different file systems on the different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which, all the files of the partition
can be listed. A directory entry is maintained for each file in the directory which stores all the
information related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of files. Every directory supports a number of common operations on files:
➢ File creation
➢ Search for a file
➢ File deletion
➢ Renaming a file
➢ Traversing files
➢ Listing of files
SINGLE-LEVEL DIRECTORY
The simplest method is to have one big list of all the files on the disk. The entire system contains only one directory, which is supposed to list all the files present in the file system. The directory contains one entry for each file present in the file system.
Advantages
Disadvantages
➢ Naming problem: Users cannot have the same name for two files.
➢ Grouping problem: Users cannot group files according to their needs.
TWO-LEVEL DIRECTORY
In a two-level directory system, we can create a separate directory for each user. There is one master directory which contains a separate directory dedicated to each user. For each user, there is a different directory present at the second level, containing that user's group of files. The system does not let a user enter another user's directory without permission.
Path name: Due to the two levels, there is a path name for every file, used to locate that file.
Advantages
➢ We can have the same file name for different users.
➢ Searching is efficient in this method.
GENERAL-GRAPH DIRECTORY
This is an extension to the acyclic-graph directory. In the general-graph directory,
there can be a cycle inside a directory.
In the above image, we can see that a cycle is formed in the user 2 directory. Although it provides greater flexibility, this structure is complex to implement.
Advantages
➢ It allows greater flexibility than the acyclic-graph directory.
Disadvantages
➢ It costs more than alternative solutions.
➢ Garbage collection is an essential step here.
Types of Access
Files that can be accessed directly by other users need protection, while files that are not accessible to other users do not require any kind of protection.
The protection mechanism provides controlled access by limiting the types of access that can be made to a file. Whether access is granted to a user depends on several factors, one of which is the type of access required.
Several different types of operations can be controlled:
➢ Read – Reading from a file.
➢ Write – Writing or rewriting the file.
➢ Execute – Loading the file and after loading the execution process starts.
➢ Append – Writing new information at the end of an already existing file.
➢ Delete – Deleting the file which is of no use and using its space for the another data.
➢ List – List the name and attributes of the file.
Operations such as renaming, editing, and copying the existing file can also be controlled. There are many protection mechanisms; each has different advantages and disadvantages, and each must be appropriate for its intended application.
Access Control
There are different methods by which different users can access a file. The general approach to protection is to associate identity-dependent access with all files and directories: a list, called the access-control list (ACL), specifies the names of the users and the types of access associated with each user.
The main problem with access lists is their length. If we want to allow everyone to read a file, we must list all the users with read access. This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if we do not know the list of users in the system in advance.
Previously, directory entries were of fixed size; now they become variable-sized, which complicates space management.
These problems can be resolved by using a condensed version of the access list. To condense the length of the access-control list, many systems recognize three classifications of users in connection with each file:
➢ Owner – Owner is the user who has created the file.
➢ Group – A group is a set of members who has similar needs and they are
sharing the same file.
➢ Universe – In the system, all other users fall under the category called universe.
The most common recent approach is to combine access-control lists with the normal owner, group, and universe access-control scheme. For example, Solaris uses the three categories of access by default, but allows access-control lists to be added to specific files and directories when more fine-grained access control is desired.
Contiguous Allocation
A single contiguous set of blocks is allocated to a file at the time of file creation. Thus, this is a pre-allocation strategy, using variable-size portions. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. This method is best from the point of view of an individual sequential file.
Multiple blocks can be read in at a time to improve I/O performance for sequential processing. It is also easy to retrieve a single block. For example, if a file starts at block b and the ith block of the file is wanted, its location on secondary storage is simply b + i - 1.
Disadvantage
➢ External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length. A compaction algorithm will be necessary to free up
additional space on the disk.
➢ Also, with pre-allocation, it is necessary to declare the size of the file at the time
of creation.
Linked (Chained) Allocation
In this strategy, each file is a linked list of disk blocks: each block holds a pointer to the next block of the file.
Disadvantage
➢ Internal fragmentation exists in the last disk block of the file.
➢ There is an overhead of maintaining the pointer in every disk block.
➢ If the pointer of any disk block is lost, the file will be truncated.
➢ It supports only the sequential access of files.
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file: the index has one entry for each block allocated to the file.
The allocation may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size blocks improves locality.
This allocation technique supports both sequential and direct access to the file and
thus is the most popular form of file allocation.
3. Grouping
The grouping technique is also called the "modification of a linked list technique". In this method, the first free block of memory contains the addresses of n free blocks. The last of these n free blocks contains the addresses of the next n free blocks, and so on. This technique separates the empty and occupied blocks of memory.
4. Counting
In memory space, several files are created and deleted at the same time, for which memory blocks are allocated and de-allocated: creating a file occupies free blocks, and deleting a file frees blocks.
With counting, each entry in the free-space list consists of two parameters: the address of the first free disk block (a pointer) and a number n, the count of contiguous free blocks that follow it.
1. create
The create() system call creates a new file and returns a file descriptor for it.
Return Value
➢ returns the first unused file descriptor (generally 3 when first used in a process, because file descriptors 0, 1, and 2 are reserved)
➢ returns -1 on error
2. open
The open() function in C is used to open a file for reading, writing, or both. It is also capable of creating the file if it does not exist. It is declared inside the <fcntl.h> header file, which also defines the flags that are passed as arguments.
Syntax of open() in C
int open(const char* Path, int flags);
Parameters
➢ Path: Path to the file which we want to open.
o Use the absolute path beginning with “/” when you are not working in
the same directory as the C source file.
o Use relative path which is only the file name with extension, when you
are working in the same directory as the C source file.
➢ flags: It is used to specify how you want to open the file. We can use the
following flags.
Flags        Description
O_RDONLY     Open the file in read-only mode
O_WRONLY     Open the file in write-only mode
O_RDWR       Open the file in read-write mode
O_CREAT      Create the file if it does not exist
O_APPEND     Open the file in append mode (writes go to the end of the file)
O_TRUNC      Truncate the file to length 0 if it already exists
3. close
The close() function in C tells the operating system that you are done with a file descriptor, and closes the file pointed to by that descriptor. It is declared inside the <unistd.h> header file.
Syntax of close() in C
int close(int fd);
4. read
From the file indicated by the file descriptor fd, the read() function reads cnt bytes of input into the memory area indicated by buf. The read() function is also declared inside the <unistd.h> header file.
Syntax of read() in C
ssize_t read(int fd, void* buf, size_t cnt);
Parameters
➢ fd: file descriptor of the file from which data is to be read.
➢ buf: buffer into which the data is read.
➢ cnt: length of the buffer (maximum number of bytes to read).
Return Value
➢ return Number of bytes read on success
➢ return 0 on reaching the end of file
➢ return -1 on error
➢ return -1 on signal interrupt
5. write
write() writes cnt bytes from buf to the file or socket associated with fd. cnt should not be greater than INT_MAX (defined in the <limits.h> header file). If cnt is zero, write() simply returns 0 without attempting any other action.
The write() function is also declared inside the <unistd.h> header file.
Syntax of write() in C
ssize_t write(int fd, const void* buf, size_t cnt);
Parameters
➢ fd: file descriptor
➢ buf: buffer from which the data is written.
➢ cnt: length of the buffer (number of bytes to write).
Return Value
➢ returns the number of bytes written on success.
➢ return 0 on reaching the End of File.
➢ return -1 on error.
➢ return -1 on signal interrupts.
6. ioctl
➢ ioctl() is referred to as Input and Output Control.
➢ ioctl is a system call for device-specific input/output operations and other
operations which cannot be expressed by regular system calls.
7. fork
➢ A new process is created by the fork() system call.
➢ A new process may be created with fork() without a new program being run; the new sub-process simply continues to execute exactly the same program that the first (parent) process was running.
➢ It is one of the most widely used system calls under process management.
8. exit
➢ The exit() system call is used by a program to terminate its execution.
➢ The operating system reclaims resources that were used by the process after the
exit() system call.
9. exec
➢ A new program will start executing after a call to exec()
➢ Running a new program does not require that a new process be created first: any
process may call exec() at any time. The currently running program is
immediately terminated, and the new program starts executing in the context of
the existing process.
10. wait
The wait() system call suspends execution of the current process until one of its
children terminates. The call wait(&status) is equivalent to: waitpid(-1, &status, 0);
11. waitpid
The waitpid() system call suspends execution of the current process until a child
specified by pid argument has changed state. By default, waitpid() waits only for
terminated children, but this behaviour is modifiable via the options argument, as
described below.
The value of pid can be:
Tag    Description
< -1   wait for any child process whose process group ID is equal to the absolute value of pid.
-1     wait for any child process.
0      wait for any child process whose process group ID is equal to that of the calling process.
> 0    wait for the child whose process ID is equal to the value of pid.
Seek Time
Seek time is the time taken in locating the disk arm to a specified track where the
read/write request will be satisfied.
Rotational Latency
It is the time taken by the desired sector to rotate itself to the position from where
it can access the R/W heads.
Transfer Time
It is the time taken to transfer the data.
FCFS Scheduling
In FCFS (First Come First Serve), requests are serviced in the order in which they arrive in the disk queue.
Disadvantages
➢ The scheme does not optimize the seek time.
➢ The requests may come from different processes, so there is a possibility of inappropriate movement of the head.
Example
Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25. The head pointer starts at 50, moving in the left direction. Find the number of head movements in cylinders using FCFS scheduling.
Solution
SSTF Scheduling
In SSTF (Shortest Seek Time First), the request closest to the current head position is serviced next.
Disadvantages
➢ It may cause starvation of some requests.
➢ Switching direction frequently slows the working of the algorithm.
➢ It is not the most optimal algorithm.
Example
Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4, 89, 52, 61, 87, 25. The head pointer starts at 50. Find the number of head movements in cylinders using SSTF scheduling.
SCAN Algorithm
It is also called as Elevator Algorithm. In this algorithm, the disk arm moves into
a particular direction till the end, satisfying all the requests coming in its path and then it
turns back and moves in the reverse direction satisfying requests coming in its path.
It works in the way an elevator works, elevator moves in a direction completely
till the last floor of that direction and then turns back.
Example
Consider the following disk request sequence for a disk with 200 tracks: 98, 137, 122, 183, 14, 133, 65, 78. The head pointer starts at 54, moving in the left direction. Find the number of head movements in cylinders using SCAN scheduling.
C-SCAN algorithm
In C-SCAN algorithm, the arm of the disk moves in a particular direction
servicing requests until it reaches the last cylinder, then it jumps to the last cylinder of the
opposite direction without servicing any request then it turns back and start moving in
that direction servicing the remaining requests.
Example
Consider the following disk request sequence for a disk with 200 tracks: 98, 137, 122, 183, 14, 133, 65, 78. The head pointer starts at 54, moving in the left direction. Find the number of head movements in cylinders using C-SCAN scheduling.
LOOK Scheduling
It is similar to the SCAN scheduling algorithm to some extent, except that in this scheduling algorithm the disk arm stops moving inwards (or outwards) when no more requests exist in that direction. This algorithm tries to overcome the overhead of the SCAN algorithm, which forces the disk arm to move in one direction till the end, regardless of whether any request exists in that direction or not.
Example
Consider the following disk request sequence for a disk with 200 tracks: 98, 137, 122, 183, 14, 133, 65, 78. The head pointer starts at 54, moving in the left direction. Find the number of head movements in cylinders using LOOK scheduling.
Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209
C LOOK Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm, the arm of the disk moves outwards servicing requests until it reaches the highest request cylinder; then it jumps to the lowest request cylinder without servicing any request, and then it again starts moving outwards servicing the remaining requests.
It differs from the C-SCAN algorithm in that C-SCAN forces the disk arm to move till the last cylinder, regardless of whether any request is to be serviced on that cylinder or not.
Example
Consider the following disk request sequence for a disk with 200 tracks: 98, 137, 122, 183, 14, 133, 65, 78. The head pointer starts at 54. Find the number of head movements in cylinders using C-LOOK scheduling.
Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298