Operating System Unit 2


OPERATING SYSTEM

(UNIT-II)

Process Management
There are many processes in an operating system, so they need to be
managed properly. Process management is a critical component of any
operating system and the backbone of every one of them.
Processes are essential for managing system resources and ensuring
tasks are completed efficiently and effectively. Process management
refers to the activities involved in managing the execution of multiple
processes in an operating system. It includes creating, scheduling, and
terminating processes, as well as allocating system resources such as
CPU time, memory, and I/O devices.
If the operating system supports multiple users, the services in this category
are especially important: the operating system has to keep track of all
processes, schedule them, and dispatch them one after another.
Some of the system calls in this category are as follows.

1. Change the priority of the process


2. Block the process
3. Ready the process
4. Dispatch a process
5. Suspend a process
6. Resume a process
7. Delay a process
8. Terminate a process

Process
A process is a program in execution: a running program that serves as the
foundation for all computation. A process is created, executed, and
terminated, and it passes through several states along the way.
A process is not the same as the program code, although the two are closely
related. In contrast to the program, which is regarded as a 'passive' entity, a
process is an 'active' entity.
Processes usually fall into two categories:
1. System processes
System processes are started by the operating system, and they usually
have something to do with running the OS itself.
2. User processes
Although the operating system handles various system processes, it
mainly handles the execution of user code.
BCA, 3rd Semester Page 1

Explanation of Process
Text Section: Contains the program code. A process also includes the
current activity, represented by the value of the Program Counter.
Stack: Contains temporary data, such as function parameters, return
addresses, and local variables.
Data Section: Contains the global variables.
Heap Section: Contains memory dynamically allocated to the process
during its run time.

Attributes or Characteristics of a Process


A process has the following attributes.
Process Id: A unique identifier assigned by the operating system
Process State: Can be ready, running, etc.
CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU)
Accounting information: Amount of CPU used for process execution, time
limits, execution ID, etc.
I/O status information: For example, devices allocated to the process, open
files, etc
CPU scheduling information: For example, Priority (Different processes may
have different priorities, for example, a shorter process assigned high priority
in the shortest job first scheduling)
All of the above attributes of a process are also known as the context of the
process. Every process has its own process control block(PCB), i.e. each
process will have a unique PCB. All of the above attributes are part of the
PCB.

States of Process
A process is in one of the following states:
New: The process is being created.
Ready: After creation, the process moves to the ready state, i.e. it is ready
for execution.
Run: The process is currently executing on the CPU (on a single processor,
only one process can be under execution at a time).
Wait (or Block): The process is waiting for I/O.
Complete (or Terminated): The process has finished its execution.


Suspended Ready: When the ready queue becomes full, some processes are
moved to a suspended ready state.
Suspended Block: When the waiting queue becomes full, some blocked
processes are moved to a suspended block state.
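The states above can be sketched as a small transition table; this is a minimal sketch of the model described in this unit, and the exact set of allowed transitions is an assumption made here for illustration:

```python
# Allowed process-state transitions, following the states described above.
TRANSITIONS = {
    "new": {"ready"},                                # admitted by the OS
    "ready": {"running", "suspended ready"},         # dispatched or swapped out
    "running": {"ready", "waiting", "terminated"},   # preempted, blocked, or done
    "waiting": {"ready", "suspended block"},         # I/O completes or swapped out
    "suspended ready": {"ready"},
    "suspended block": {"waiting"},
}

def move(state, nxt):
    # Reject any transition the model does not allow.
    if nxt not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # the process ends its life cycle in the terminated state
```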

Operation on a Process

The execution of a process is a complex activity that involves various
operations. The following operations are performed during the execution of
a process:

Creation
This is the initial step of the process execution activity. Process creation
means the construction of a new process for execution. It might be
performed by the system, by a user, or by an existing process. Several
events can lead to process creation. Some of them are the following:
1. When we start the computer, the system creates several background
processes.
2. A user may request the creation of a new process.
3. A process can itself create a new process while executing.
4. The batch system initiates a batch job.
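The creation events above can be tried out directly. In this sketch a parent process asks the OS to create a child process (a second Python interpreter, chosen here purely for illustration) and then waits for it to terminate:

```python
import subprocess
import sys

def create_child():
    # The parent requests creation of a new process; run() waits for the
    # child to terminate and collects its output and exit status.
    completed = subprocess.run(
        [sys.executable, "-c", "print('hello from child')"],
        capture_output=True, text=True,
    )
    return completed.stdout.strip(), completed.returncode

out, code = create_child()
print(out, code)  # hello from child 0
```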

Scheduling/Dispatching
Scheduling (or dispatching) is the activity in which the state of the process
is changed from ready to running: the operating system moves the process
from the ready state into the running state. Dispatching is done by the
operating system when resources become free or when the process has a
higher priority than the currently running process.
There are various other cases in which the process in the running state is
preempted and the process in the ready state is dispatched by the operating
system.

Blocking
When a process invokes an input-output system call that blocks the process,
and operating system is put in block mode. Block mode is basically a mode
where the process waits for input-output. Hence on the demand of the process
itself, the operating system blocks the process and dispatches another process


to the processor. Hence, in the process-blocking operation, the operating
system puts the process in a 'waiting' state.

Preemption
When a timeout occurs, meaning the process has not finished within the
allotted time interval and the next process is ready to execute, the operating
system preempts the process. This operation is valid only where CPU
scheduling supports preemption. It typically happens in priority scheduling,
where the arrival of a high-priority process causes the ongoing process to be
preempted. Hence, in the preemption operation, the operating system puts
the process in a 'ready' state.

Process Termination
Process termination is the activity of ending a process: the release of the
computer resources taken by the process for its execution. As with creation,
several events may lead to process termination. Some of them are:
1. The process completes its execution and indicates to the OS that it has
finished.
2. The operating system itself terminates the process due to service errors.
3. A hardware problem terminates the process.
4. One process is terminated by another process.

Advantages of Process Management


1. Improved Efficiency: Process management can help organizations
identify bottlenecks and inefficiencies in their processes, allowing them to
make changes to streamline workflows and increase productivity.
2. Cost Savings: By identifying and eliminating waste and inefficiencies,
process management can help organizations reduce costs associated with
their business operations.
3. Improved Quality: Process management can help organizations improve
the quality of their products or services by standardizing processes and
reducing errors.
4. Increased Customer Satisfaction: By improving efficiency and quality,
process management can enhance the customer experience and increase
satisfaction.
5. Compliance with Regulations: Process management can help
organizations comply with regulatory requirements by ensuring that
processes are properly documented, controlled, and monitored.


Disadvantages of Process Management


1. Time and Resource Intensive: Implementing and maintaining process
management initiatives can be time-consuming and require significant
resources.
2. Resistance to Change: Some employees may resist changes to
established processes, which can slow down or hinder the implementation
of process management initiatives.
3. Overemphasis on Process: Overemphasis on the process can lead to a
lack of focus on customer needs and other important aspects of business
operations.
4. Risk of Standardization: Standardizing processes too much can limit
flexibility and creativity, potentially stifling innovation.
5. Difficulty in Measuring Results: Measuring the effectiveness of process
management initiatives can be difficult, making it challenging to determine
their impact on organizational performance.

Process Schedulers in Operating System


Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating
system. Such operating systems allow more than one process to be loaded into
the executable memory at a time and the loaded process shares the CPU using
time multiplexing.

Scheduling falls into one of two categories:


● Non-preemptive: Once resources are allocated to a process, they cannot
be taken away until the process finishes running or voluntarily transitions to
a waiting state.
● Preemptive: The OS assigns resources to a process for a predetermined
period of time, and a process may be switched from the running state to the
ready state, or from the waiting state to the ready state. This switching
happens because the CPU may give priority to other processes and replace
the currently running process with a higher-priority one.
There are three types of process schedulers.

I) Long Term or Job Scheduler


It brings the new process to the 'Ready State'. It controls the degree of
multiprogramming, i.e., the number of processes present in the ready state
at any point in time. It is important that the long-term scheduler makes a
careful selection of both I/O-bound and CPU-bound processes: I/O-bound
tasks spend much of their time in input and output operations, while
CPU-bound processes spend their time on the CPU. The job scheduler
increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing
systems.

II) Short-Term or CPU Scheduler


It is responsible for selecting one process from the ready state and
scheduling it into the running state. Note: the short-term scheduler only
selects the process to schedule; it does not itself load the process for
running. This is where the scheduling algorithms are used. The CPU
scheduler is responsible for ensuring that processes with high burst times
do not cause starvation. The dispatcher is responsible for loading the
process selected by the short-term scheduler onto the CPU (ready to
running state); context switching is done by the dispatcher only. The
dispatcher performs:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.

III) Medium-Term Scheduler


It is responsible for suspending and resuming the process. It mainly does
swapping (moving processes from main memory to disk and vice versa).
Swapping may be necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory, requiring memory
to be freed up. It is helpful in maintaining a perfect balance between the I/O
bound and the CPU bound. It reduces the degree of multiprogramming.

Comparison among Schedulers

Long-Term Scheduler:
● It is a job scheduler.
● Generally, its speed is less than that of the short-term scheduler.
● It controls the degree of multiprogramming.
● It is barely present or nonexistent in time-sharing systems.
● It can re-enter the process into memory, allowing for the continuation of
execution.

Short-Term Scheduler:
● It is a CPU scheduler.
● Its speed is the fastest among all of them.
● It gives less control over how much multiprogramming is done.
● It is minimal in time-sharing systems.
● It selects those processes which are ready to execute.

Medium-Term Scheduler:
● It is a process-swapping scheduler.
● Its speed lies in between the short-term and long-term schedulers.
● It reduces the degree of multiprogramming.
● It is a component of systems for time sharing.
● It can re-introduce the process into memory, and execution can be
continued.

Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule
processes. Here are some commonly used scheduling algorithms:
1. First-Come, First-Served (FCFS): This is the simplest scheduling
algorithm, where processes are executed in the order in which they arrive.
FCFS is non-preemptive, which means that once a process starts executing,
it continues until it finishes or waits for I/O.
2. Shortest Job First (SJF): SJF selects the process with the shortest burst
time, where the burst time is the time a process takes to complete its
execution. SJF minimizes the average waiting time of processes.
3. Round Robin (RR): RR is a preemptive scheduling algorithm that gives
each process a fixed amount of CPU time per round. If a process does not
complete its execution within that time slice, it is preempted and added to
the end of the ready queue. RR ensures a fair distribution of CPU time to all
processes and avoids starvation.
4. Priority Scheduling: This scheduling algorithm assigns a priority to each
process, and the process with the highest priority is executed first. Priority
can be set based on process type, importance, or resource requirements.
5. Multilevel Queue: This scheduling algorithm divides the ready queue into
several separate queues, each with a different priority. Processes are
queued based on their priority, and each queue uses its own scheduling
algorithm. This is useful in scenarios where different types of processes
have different priorities.
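As a small worked example of the first algorithm, the sketch below computes FCFS waiting times; the burst values are illustrative, not taken from this unit:

```python
def fcfs_waiting_times(burst_times):
    # Under FCFS each process waits for the sum of all earlier bursts.
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

bursts = [24, 3, 3]                  # arrival order P1, P2, P3
waits = fcfs_waiting_times(bursts)
average = sum(waits) / len(waits)
print(waits, average)                # [0, 24, 27] 17.0
```

Note how one long burst arriving first (24) drags the average up; running the short jobs first (SJF order) would give waits of [0, 3, 6] and a much lower average.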

Inter Process Communication (IPC)

A process can be of two types:


● Independent process.

● Co-operating process.
An independent process is not affected by the execution of other processes,
while a co-operating process can be affected by other executing processes.
Although one might think that processes running independently execute
very efficiently, in reality there are many situations where their co-operative
nature can be exploited to increase computational speed, convenience, and
modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions.
The communication between these processes can be seen as a method of
co-operation between them. Processes can communicate with each other
through both:

1. Shared Memory
2. Message passing

The figure below shows the basic structure of communication between
processes via the shared memory method and via the message passing
method. An operating system can implement both methods of
communication.

Shared memory method:


Communication between processes using shared memory requires processes
to share some variable, and it completely depends on how the programmer will
implement it. One way of communication using shared memory can be
imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from
another process. Process1 generates information about certain computations
or resources being used and keeps it as a record in shared memory. When
process2 needs to use the shared information, it will check in the record stored
in shared memory and take note of the information generated by process1 and
act accordingly. Processes can use shared memory for extracting
information as a record from another process, as well as for delivering
specific information to other processes.
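A minimal sketch of the shared memory method, assuming Python's multiprocessing module as the implementation vehicle: process1 (the producer) records a value in shared memory, and the parent plays the role of process2 reading the record back.

```python
from multiprocessing import Process, Value

def producer(shared):
    # process1 keeps its result as a record in the shared segment.
    with shared.get_lock():   # synchronize access to the shared variable
        shared.value = 42

if __name__ == "__main__":
    shared = Value("i", 0)    # one shared integer, initially 0
    p = Process(target=producer, args=(shared,))
    p.start()
    p.join()                  # wait until process1 has written its record
    print(shared.value)       # process2 reads the record: 42
```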

Message passing method:

In this method, processes communicate with each other without using any
kind of shared memory. If two processes p1 and p2 want to communicate
with each other, they proceed as follows:

● Establish a communication link (if a link already exists, there is no need
to establish it again).
● Start exchanging messages using basic primitives. We need at least two
primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be of fixed size or of variable size. If it is of fixed size, it
is easy for an OS designer but complicated for a programmer and if it is of
variable size then it is easy for a programmer but complicated for the OS
designer. A standard message has two parts: a header and a body.
The header is used for storing the message type, destination id, source id,
message length, and control information. The control information contains
details such as what to do if the buffer space runs out, the sequence
number, and the priority. Generally, messages are sent in FIFO order.
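The send/receive primitives above can be sketched with a message queue (Python's multiprocessing.Queue, as one possible implementation): the two processes share no variables, only messages carrying header-style fields and a body.

```python
from multiprocessing import Process, Queue

def sender(q):
    # send(message): the message carries header-style fields and a body.
    q.put({"type": "greeting", "source": "p1", "body": "hello"})

if __name__ == "__main__":
    q = Queue()               # the communication link
    p = Process(target=sender, args=(q,))
    p.start()
    message = q.get()         # receive(message): blocks until one arrives
    p.join()
    print(message["body"])    # hello
```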

Thread in Operating System


Within a program, a Thread is a separate execution path. It is a lightweight
process that the operating system can schedule and run concurrently with other
threads. The operating system creates and manages threads, and they share
the same memory and resources as the program that created them. This
enables multiple threads to collaborate and work efficiently within a single
program.
A thread is a single sequence stream within a process. Threads are also called
lightweight processes as they possess some of the properties of processes.
Each thread belongs to exactly one process. In an operating system that
supports multithreading, a process can consist of many threads. However,
threads can run truly in parallel only when there is more than one CPU;
otherwise, the threads have to context-switch on the single CPU.

Need Of Thread
● Threads run in parallel improving the application performance. Each such
thread has its own CPU state and stack, but they share the address space of
the process and the environment.
● Threads can share common data so they do not need to use interprocess
communication. Like the processes, threads also have states like ready,
executing, blocked, etc.
● Priority can be assigned to the threads just like the process, and the
highest priority thread is scheduled first.
● Each thread has its own Thread Control Block (TCB). Like the process, a
context switch occurs for the thread, and register contents are saved in
(TCB). As threads share the same address space and resources,
synchronization is also required for the various activities of the thread.

Multi-Threading
A thread is also known as a lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple threads:
one thread to format the text, another thread to process inputs, etc. More
advantages of multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the
performance and responsiveness of computer systems. Multithreading allows
multiple threads (i.e., lightweight processes) to share the same resources of a
single process, such as the CPU, memory, and I/O devices.
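A minimal multithreading sketch: several threads of one process write to the same list, showing that they share the process's memory and need only a lock, not inter-process communication.

```python
import threading

results = []                    # shared by all threads of this process
lock = threading.Lock()

def worker(name):
    with lock:                  # synchronize access to the shared data
        results.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for all threads to finish
print(sorted(results))          # ['t0', 't1', 't2']
```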

Difference between Process and Thread


The primary difference is that threads within
the same process run in a shared memory space,
while processes run in separate memory spaces.
Threads are not independent of one another like
processes are, and as a result, threads share with
other threads their code section, data section, and
OS resources (like open files and signals). But, like
a process, a thread has its own program counter
(PC), register set, and stack space.

Advantages of Thread
● Responsiveness: If a process is divided into multiple threads, then when
one thread completes its execution, its output can be returned immediately.
● Faster context switch: Context-switch time between threads is lower than
between processes; process context switching requires more overhead from
the CPU.
● Effective utilization of multiprocessor systems: If we have multiple
threads in a single process, we can schedule them on multiple processors,
which makes process execution faster.
● Resource sharing: Resources like code, data, and files can be shared
among all threads within a process. Note: stacks and registers can't be
shared among the threads; each thread has its own stack and registers.
● Communication: Communication between multiple threads is easier, as
the threads share a common address space, while processes must follow
specific techniques to communicate with each other.
● Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread's function is considered one job, then the number
of jobs completed per unit of time increases, thus increasing the throughput
of the system.

Introduction of Deadlock in Operating System


A process in an operating system uses resources in the following way:
1. Requests a resource
2. Uses the resource
3. Releases the resource
Deadlock is a situation where a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired by
some other process.

Consider an example when two trains are coming toward each other on the
same track and there is only one track, none of the trains can move once they
are in front of each other. A similar situation occurs in operating systems when
there are two or more processes that hold some resources and wait for
resources held by other(s). For example, in the below diagram, Process 1 is
holding Resource 1 and waiting for resource 2 which is acquired by process 2,
and process 2 is waiting for resource 1.

Examples of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive, and
each needs the other one.
2. Semaphores A and B are initialized to 1; P0 and P1 enter deadlock as
follows:
● P0 executes wait(A) and is then preempted.
● P1 executes wait(B).
● Now P0 and P1 are in deadlock.
P0: wait(A); wait(B);
P1: wait(B); wait(A);
3. Assume 200K bytes of space are available for allocation, and the
following sequence of events occurs:
P0: Request 80KB; Request 60KB;
P1: Request 70KB; Request 80KB;
Deadlock occurs if both processes progress to their second request.



Deadlock can arise if the following four conditions hold simultaneously
(necessary conditions):
1. Mutual Exclusion: Two or more resources are non-shareable (only one
process can use a resource at a time).
2. Hold and Wait: A process is holding at least one resource and waiting for
additional resources.
3. No Preemption: A resource cannot be taken from a process unless the
process releases the resource.
4. Circular Wait: A set of processes is waiting for each other in circular
form.
Methods for handling deadlock
There are three ways to handle deadlock.
1) Deadlock prevention or avoidance:

Prevention:
The idea is to not let the system enter a deadlock state. The system makes
sure that the four conditions mentioned above do not arise. These
techniques are very costly, so we use them in cases where our priority is
making the system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary
conditions for deadlock. It can be done in four different ways:
1. Eliminate mutual exclusion
2. Allow preemption
3. Solve hold and wait
4. Break circular wait

1. Eliminate Mutual Exclusion

Mutual exclusion, from the resource point of view, means that a resource
can never be used by more than one process simultaneously. That is fair
enough, but it is the main reason behind deadlock: if a resource could be
used by more than one process at the same time, processes would never
have to wait for it. However, if we can prevent resources from behaving in a
mutually exclusive manner, then deadlock can be prevented.


2. Hold and Wait

The hold and wait condition arises when a process holds one resource while
waiting for another resource to complete its task. Deadlock occurs because
more than one process can hold one resource while waiting for another in
cyclic order.

To prevent this, we have to find some mechanism by which a process either
doesn't hold any resource or doesn't wait. That means a process must be
assigned all the necessary resources before its execution starts, and it must
not wait for any resource once execution has begun.

This could be implemented if a process declared all of its resources initially.
Although this sounds practical, it can't be done in a computer system,
because a process cannot determine all the resources it will need in
advance.

A process is a set of instructions executed by the CPU, and each instruction
may demand multiple resources at multiple times. The need cannot be fixed
in advance by the OS.

The problems with this approach are:
1. It is practically not possible.
2. The possibility of starvation increases, because some process may hold a
resource for a very long time.

3. No Preemption


Deadlock arises from the fact that a resource can't be taken away from a
process once it has been allocated. However, if we take the resource away
from the process that is causing the deadlock, then we can prevent the
deadlock.

This is not a good approach in general: if we take away a resource that is
being used by a process, all the work it has done so far can become
inconsistent.

4. Circular Wait

To violate circular wait, we can assign a priority number to each resource. A
process can't request a resource with a lower priority number than one it
already holds. This ensures that no cycle of waiting processes can be
formed.

Among all the methods, violating circular wait is the only approach that can
be implemented practically.
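The circular wait solution above can be sketched by ranking locks and always acquiring them in ascending rank order; the rank numbers here are an assumption chosen for illustration.

```python
import threading

lock_rank = {"A": 1, "B": 2}                 # priority number per resource
locks = {name: threading.Lock() for name in lock_rank}

def acquire_in_order(*names):
    # Sorting by rank means no thread holds a high-rank lock while
    # waiting for a lower-rank one, so no cycle of waits can form.
    ordered = sorted(names, key=lock_rank.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release(names):
    # Release in reverse acquisition order.
    for name in reversed(names):
        locks[name].release()

held = acquire_in_order("B", "A")   # request order does not matter
print(held)                         # always acquired as ['A', 'B']
release(held)
```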

Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted if the
resulting state of the system doesn't cause deadlock in the system. The state of
the system will continuously be checked for safe and unsafe states.

In order to avoid deadlocks, the process must tell the OS the maximum
number of resources it may request to complete its execution.

The simplest and most useful approach states that the process should declare
the maximum number of resources of each type it may ever need. The Deadlock
avoidance algorithm examines the resource allocations so that there can never
be a circular wait condition.

Safe and Unsafe States


The resource allocation state of a system can be defined by the instances of
available and allocated resources, and the maximum instances of the
resources demanded by the processes.

A state of a system recorded at some random time is shown below.

Resources assigned:

Process   Type 1   Type 2   Type 3   Type 4
A         3        0        2        2
B         0        0        1        1
C         1        1        1        0
D         2        1        4        0

Resources still needed:

Process   Type 1   Type 2   Type 3   Type 4
A         1        1        0        0
B         0        1        1        2
C         1        2        1        0
D         2        1        1        2

1. E = (7 6 8 4)
2. P = (6 2 8 3)
3. A = (1 4 0 1)

The tables and the vectors E, P, and A above describe the resource
allocation state of a system with 4 processes and 4 types of resources.
Table 1 shows the instances of each resource assigned to each process, and
Table 2 shows the instances of each resource that each process still needs.
Vector E represents the total instances of each resource in the system,
vector P represents the instances of resources that have been assigned to
processes, and vector A represents the number of resources that are not in
use.

A state of the system is called safe if the system can allocate all the resources
requested by all the processes without entering into deadlock.

If the system cannot fulfill the request of all processes then the state of the
system is called unsafe.

The key to the deadlock avoidance approach is that when a request for
resources is made, it must be approved only if the resulting state is also a
safe state.

In prevention and avoidance, the user gets correctness of data, but
performance decreases.
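The safe-state check on the tables above can be sketched as the core loop of the Banker's algorithm: the Allocation and Need matrices are taken from the two tables, and Available is vector A.

```python
def is_safe(available, allocation, need):
    # Repeatedly find a process whose remaining need fits in `work`,
    # let it finish, and reclaim its allocation.
    work = list(available)
    finished = [False] * len(allocation)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append("ABCD"[i])
                progressed = True
    return all(finished), order

allocation = [[3, 0, 2, 2], [0, 0, 1, 1], [1, 1, 1, 0], [2, 1, 4, 0]]
need       = [[1, 1, 0, 0], [0, 1, 1, 2], [1, 2, 1, 0], [2, 1, 1, 2]]
available  = [1, 4, 0, 1]          # vector A = E - P

safe, order = is_safe(available, allocation, need)
print(safe, order)                 # True ['A', 'B', 'C', 'D']
```

With A = (1 4 0 1), only process A's remaining need fits at first; once A finishes and returns its allocation, B, C, and D can follow in turn, so the recorded state is safe.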


Avoidance is futuristic in nature: to use the avoidance strategy, we have to
make an assumption. We need to ensure that all information about the
resources the process will need is known to us before the process executes.

Deadlock detection and recovery: If deadlock prevention or avoidance is not
applied, we can handle deadlock by detection and recovery, which consists
of two phases:
1. In the first phase, examine the state of the processes and check whether
there is a deadlock in the system.
2. If a deadlock is found in the first phase, apply an algorithm to recover
from it.
In deadlock detection and recovery, the user gets correctness of data, but
performance decreases.

