Operating System Unit 2
(UNIT-II)
Process Management
An operating system runs many different processes, so there is a
need to manage them properly. Process management is a critical
component, indeed the backbone, of every operating system.
Processes are essential for managing system resources and ensuring
tasks are completed efficiently and effectively. Process management
refers to the activities involved in managing the execution of multiple
processes in an operating system. It includes creating, scheduling, and
terminating processes, as well as allocating system resources such as
CPU time, memory, and I/O devices.
If the operating system supports multiple users, then the services in this
category are very important. The operating system has to keep track of all the
created processes, schedule them, and dispatch them one after another.
The main concepts and operations in this category are described below.
Process
A process is a program in the execution phase: the activity of executing a
program. A process is a running program that serves as the foundation for all
computation. A process is created, executed, and terminated, and it passes
through a series of states along the way.
A process is not the same as the program code, although the two are closely
related. In contrast to the program, which is often regarded as a 'passive'
entity, a process is an 'active' entity.
Processes usually fall into two categories:
1. System processes
System processes are started by the operating system, and they usually
have something to do with running the OS itself.
2. User processes
Though the operating system handles various system processes, it mainly
handles the execution of user code.
Explanation of Process
Text Section: Contains the program code; it also includes the current
activity, represented by the value of the Program Counter.
Stack: Contains temporary data, such as function parameters, return
addresses, and local variables.
Data Section: Contains the global variables.
Heap Section: Memory that is dynamically allocated to the process
during its run time.
States of Process
A process is in one of the following states:
New: A newly created process, i.e. a process that is being created.
Ready: After creation, the process moves to the Ready state, i.e. the process
is ready for execution.
Run: The process currently executing on the CPU (on a single processor,
only one process can be in execution at a time).
Wait (or Block): The process is waiting, for example because it has
requested I/O access.
Complete (or Terminated): The process has completed its execution.
Suspended Ready: When the ready queue becomes full, some processes are
moved to the suspended ready state.
Suspended Block: When the waiting queue becomes full, some waiting
processes are moved to the suspended block state.
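As a rough sketch of how these states might be represented in code (the
names and the tiny control-block structure below are illustrative, not taken
from any real kernel):

#include <stdio.h>

/* Illustrative process states; real kernels use their own names. */
enum proc_state {
    STATE_NEW, STATE_READY, STATE_RUNNING, STATE_WAITING,
    STATE_TERMINATED, STATE_SUSPENDED_READY, STATE_SUSPENDED_BLOCKED
};

struct pcb {                   /* tiny stand-in for a process control block */
    int pid;
    enum proc_state state;
};

int main(void) {
    struct pcb p = {1, STATE_NEW};
    p.state = STATE_READY;     /* admitted after creation */
    p.state = STATE_RUNNING;   /* dispatched onto the CPU */
    p.state = STATE_TERMINATED;
    printf("process %d finished\n", p.pid);
    return 0;
}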
Operation on a Process
Creation
This is the initial step of the process execution activity. Process creation
means the construction of a new process for execution. This might be
performed by the system, by a user, or by an old process itself. Several
events can lead to process creation; some of them are the following (see the
fork() sketch after this list):
1. When we start the computer, the system creates several background
processes.
2. A user may request the creation of a new process.
3. A process can itself create a new process while executing.
4. The batch system takes up the initiation of a batch job.
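As a concrete illustration of case 3, a minimal sketch using the POSIX fork()
system call on a Unix-like system:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a new (child) process */
    if (pid < 0) {
        perror("fork failed");       /* creation can fail, e.g. no free PCB */
        exit(1);
    } else if (pid == 0) {
        printf("child process, PID %d\n", (int)getpid());
    } else {
        wait(NULL);                  /* parent waits for the child to finish */
        printf("parent process, PID %d\n", (int)getpid());
    }
    return 0;
}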
Scheduling/Dispatching
This is the event or activity in which the state of the process is changed from
ready to running: the operating system moves the process from the ready
state into the running state. Dispatching is done by the operating system
when the required resources are free or when the process has a higher
priority than the ongoing process. There are also various cases in which a
process in the running state is preempted and a process in the ready state is
dispatched by the operating system.
Blocking
When a process invokes an input-output system call, that call blocks the
process, and the operating system puts the process into block mode. Block
mode is basically a mode where the process waits for input-output. Hence, on
the demand of the process itself, the operating system blocks the process and
dispatches another process.
Preemption
When a timeout occurs, meaning the process has not finished within the
allotted time interval and the next process is ready to execute, the operating
system preempts the process. This operation is only valid where CPU
scheduling supports preemption. It also happens in priority scheduling, where
on the arrival of a higher-priority process the ongoing process is preempted.
In the preemption operation, the operating system puts the process back into
the 'ready' state.
Process Termination
Process termination is the activity of ending a process. In other words, it is
the release of the computer resources that the process took for its execution.
As with creation, several events may lead to process termination. Some of
them are:
1. The process completes its execution fully and indicates to the OS that it
has finished.
2. The operating system itself terminates the process due to service errors.
3. A problem in the hardware may terminate the process.
4. One process can be terminated by another process.
Process Schedulers
I/O-bound processes spend much of their time in input and output
operations, while CPU-bound processes spend their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two. Job
schedulers operate at a high level and are typically used in batch-processing
systems.
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. Dispatching involves:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Long-Term Scheduler: It is a job scheduler. It selects processes from the job
pool and loads them into memory for execution.
Short-Term Scheduler: It is a CPU scheduler. It selects, from among the
processes that are ready to execute, the one to run next.
Medium-Term Scheduler: It is a process-swapping scheduler. It can
re-introduce a swapped-out process into memory so that its execution can be
continued.
Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule
processes. Here are some commonly used scheduling algorithms:
1. First-Come, First-Served (FCFS): This is the simplest scheduling algorithm,
where processes are executed on a first-come, first-served basis. FCFS is
non-preemptive, which means that once a process starts executing, it
continues until it finishes or waits for I/O (see the sketch after this list).
2. Shortest Job First (SJF): SJF selects the process with the shortest burst
time, where the burst time is the time a process takes to complete its
execution. SJF minimizes the average waiting time of processes.
3. Round Robin (RR): RR is a preemptive scheduling algorithm that gives each
process a fixed time quantum in turn. If a process does not complete its
execution within its quantum, it is preempted and added to the end of the
ready queue. RR ensures a fair distribution of CPU time to all processes
and avoids starvation.
4. Priority Scheduling: This algorithm assigns a priority to each process, and
the process with the highest priority is executed first. The priority can be
set based on process type, importance, or resource requirements.
5. Multilevel Queue: This algorithm divides the ready queue into several
separate queues, each with a different priority. Processes are assigned to
queues based on their priority, and each queue can use its own scheduling
algorithm. This is useful in scenarios where different types of processes
have different priorities.
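To make FCFS concrete, here is a small C sketch that computes each
process's waiting and turnaround time when all processes arrive at time 0;
the burst times are made-up sample values:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* sample burst times, in FCFS order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];       /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat += tat;
        wait += burst[i];                /* next process starts after this one */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

With these sample bursts the averages come out to 17 and 27 time units,
which shows how one long job at the front of the queue penalizes everything
behind it (the convoy effect).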
Processes executing in an operating system are of two types:
● Independent process
● Co-operating process
An independent process is not affected by the execution of other processes,
while a co-operating process can be affected by other executing processes.
Though one might think that processes running independently will execute
very efficiently, in reality there are many situations when the co-operative
nature can be utilized to increase computational speed, convenience, and
modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions.
The communication between these processes can be seen as a method of
co-operation between them. Processes can communicate with each other
through both:
1. Shared Memory
2. Message Passing
Message Passing
In this method, processes communicate with each other without using any
kind of shared memory. If two processes p1 and p2 want to communicate
with each other, they first establish a communication link (if one does not
already exist) and then exchange messages over it using send and receive
primitives.
The message size can be fixed or variable. A fixed size is easy for the OS
designer but complicated for the programmer, while a variable size is easy for
the programmer but complicated for the OS designer. A standard message
has two parts: a header and a body. The header is used for storing the
message type, destination id, source id, message length, and control
information, such as sequence numbers, priority, and what to do if the buffer
space runs out.
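As a minimal sketch of message passing on a Unix-like system, a parent and
child process can exchange a small fixed-size message through a pipe, with
no shared memory between the sender and receiver logic (the message text is
illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];
    if (pipe(fd) < 0) return 1;          /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {                   /* child: the sender */
        close(fd[0]);
        write(fd[1], "hello", 6);        /* send a small fixed-size message */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                        /* parent: the receiver */
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}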
Need Of Thread
● Threads run in parallel, improving the application's performance. Each
thread has its own CPU state and stack, but the threads share the address
space of the process and the environment.
● Threads can share common data so they do not need to use interprocess
communication. Like the processes, threads also have states like ready,
executing, blocked, etc.
● Priority can be assigned to the threads just like the process, and the
highest priority thread is scheduled first.
● Each thread has its own Thread Control Block (TCB). As with a process, a
context switch occurs for a thread, and its register contents are saved in
the TCB. Since threads share the same address space and resources,
synchronization is also required for the various activities of the threads.
Multi-Threading
A thread is also known as a lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple threads:
one thread to format the text, another thread to process inputs, etc. More
advantages of multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the
performance and responsiveness of computer systems. Multithreading allows
multiple threads (i.e., lightweight processes) to share the same resources of a
single process, such as the CPU, memory, and I/O devices.
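A minimal POSIX threads sketch of this idea: two threads of one process
update the same global variable, which works precisely because they share
the process's address space (compile with -pthread; the names are
illustrative):

#include <stdio.h>
#include <pthread.h>

int shared = 0;                          /* visible to all threads of the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* synchronization is still required */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);     /* 200000 with the mutex in place */
    return 0;
}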
Advantages of Thread
● Responsiveness: If a process is divided into multiple threads, and one
thread completes its execution, its output can be returned immediately.
● Faster context switch: Context-switch time between threads is lower than
between processes; a process context switch requires more overhead from
the CPU.
● Effective utilization of multiprocessor systems: If we have multiple threads
in a single process, we can schedule them on multiple processors, making
process execution faster.
● Resource sharing: Resources like code, data, and files can be shared
among all threads within a process. Note: stacks and registers cannot be
shared; each thread has its own stack and registers.
● Communication: Communication between multiple threads is easier, as
the threads share a common address space, while communication between
two processes requires specific inter-process communication techniques.
● Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread's function is considered one job, the number of
jobs completed per unit of time increases, raising the throughput of the
system.
Deadlock
Consider an example where two trains are coming toward each other on the
same track and there is only one track: neither train can move once they are
in front of each other. A similar situation occurs in operating systems when
two or more processes each hold some resources and wait for resources held
by the other(s). For example, Process 1 may hold Resource 1 while waiting for
Resource 2, which has been acquired by Process 2, while Process 2 in turn
waits for Resource 1.
Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and
each needs the other one.
2. Semaphores A and B are each initialized to 1; P0 and P1 deadlock as
follows:
● P0 executes wait(A) and is then preempted.
● P1 executes wait(B).
● Now P0 waits for B while P1 waits for A, so P0 and P1 are deadlocked.
P0:           P1:
wait(A);      wait(B);
wait(B);      wait(A);
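The same scenario can be reproduced in C with two pthread mutexes
standing in for semaphores A and B; this sketch is meant to illustrate the
hazard, and the sleep() calls widen the timing window so that it deadlocks
reliably:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *p0(void *arg) {
    pthread_mutex_lock(&A);              /* P0: wait(A) */
    sleep(1);                            /* give P1 time to grab B */
    pthread_mutex_lock(&B);              /* P0: wait(B) -- blocks forever */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *p1(void *arg) {
    pthread_mutex_lock(&B);              /* P1: wait(B) */
    sleep(1);
    pthread_mutex_lock(&A);              /* P1: wait(A) -- blocks forever */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);              /* never returns: circular wait */
    pthread_join(t1, NULL);
    return 0;
}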
3. Assume the space available for allocation is 200K bytes, and two
processes P0 and P1 each first acquire part of this memory and then request
more than what remains free. Both processes then wait for the other to
release memory, and neither can proceed.
Prevention:
The idea is to never let the system enter a deadlock state. The system makes
sure that the four necessary conditions for deadlock (mutual exclusion, hold
and wait, no preemption, and circular wait) cannot all hold at the same time.
These techniques are very costly, so we use them in cases where our priority
is making the system deadlock-free.
One can look at each condition individually: prevention is done by negating
one of these necessary conditions, which can be attempted in four different
ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Break the circular wait
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can
never be used by more than one process simultaneously. That is fair enough,
but it is the main reason behind deadlock: if a resource could be used by
more than one process at the same time, a process would never have to wait
for any resource.
2. Hold and Wait
The hold-and-wait condition arises when a process holds one resource while
waiting for some other resource to complete its task. Deadlock occurs
because several processes can each hold one resource while waiting for
another in a cyclic order.
A process is a set of instructions executed by the CPU, and each instruction
may demand multiple resources at multiple times, so this need cannot be
fixed in advance by the OS.
3. No Preemption
Deadlock also arises from the fact that a process can't be stopped once it has
started. However, if we take a resource away from the process that is causing
the deadlock, we can prevent the deadlock.
This is not a good approach in general: if we take away a resource that is
being used by a process, all the work it has done so far can become
inconsistent.
4. Circular Wait
A circular wait can be prevented by imposing a total ordering on all resource
types and requiring every process to request resources only in increasing
order of that numbering; then no cycle of waiting processes can form.
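A common way to break the circular wait, sketched in C: give the locks a
fixed global order (always A before B), so no cycle of waiting threads can ever
form:

#include <stdio.h>
#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* Every thread acquires the locks in the same global order: A, then B,
   so no thread ever holds B while waiting for A. */
void *worker(void *arg) {
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("both threads finished without deadlock\n");
    return 0;
}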
Deadlock Avoidance
In deadlock avoidance, a request for a resource will be granted only if the
resulting state of the system does not cause a deadlock. The state of the
system is continuously checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum
number of resources it may request to complete its execution.
The simplest and most useful approach states that each process should
declare the maximum number of resources of each type it may ever need. The
deadlock avoidance algorithm then examines the resource allocations so that
a circular wait condition can never arise.
Resources Assigned
Process Type 1 Type 2 Type 3 Type 4
A 3 0 2 2
B 0 0 1 1
C 1 1 1 0
D 2 1 4 0
The above tables and the vectors E, P, and A describe the resource-allocation
state of a system with 4 processes and 4 types of resources. Table 1 shows
the instances of each resource assigned to each process, Table 2 shows the
instances of each resource that each process still needs, and vector E
represents the total instances of each resource in the system.
A state of the system is called safe if the system can allocate all the resources
requested by all the processes without entering a deadlock. If the system
cannot fulfill the requests of all processes, the state of the system is called
unsafe.
The key to the deadlock avoidance approach is that when a request is made
for resources, the request is approved only if the resulting state is also a safe
state.
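A sketch of the safety check at the heart of Banker's-style deadlock
avoidance; the two-process, two-resource numbers are made-up illustration
data, not the tables above:

#include <stdio.h>
#include <stdbool.h>

#define P 2   /* processes */
#define R 2   /* resource types */

/* Returns true if some ordering lets every process finish. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool done[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int pass = 0; pass < P; pass++) {
        for (int p = 0; p < P; p++) {
            if (done[p]) continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { fits = false; break; }
            if (fits) {                  /* p can finish; reclaim its resources */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                done[p] = true;
            }
        }
    }
    for (int p = 0; p < P; p++)
        if (!done[p]) return false;      /* some process can never finish */
    return true;
}

int main(void) {
    int avail[R]    = {1, 1};
    int alloc[P][R] = {{1, 0}, {0, 1}};  /* illustrative current allocation */
    int need[P][R]  = {{1, 1}, {1, 0}};  /* remaining maximum claims */
    printf("%s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}

A request is granted only if the state that would result still passes this
check; otherwise the requesting process is made to wait.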
With both prevention and avoidance, the user gets correctness of data, but
performance decreases.