Assignment 1
(PCO604C) Engineering
UNIT-2
PROCESS MANAGEMENT
PROCESS CONCEPT
■ A question that arises in discussing operating systems involves what to call all the CPU
activities. A batch system executes jobs, whereas a time-shared system has user programs, or
tasks. Even on a single-user system such as Microsoft Windows, a user may be able to run
several programs at one time: a word processor, a web browser, and an e-mail package. Even if
the user can execute only one program at a time, the operating system may need to support its
own internal programmed activities, such as memory management. In many respects, all these
activities are similar, so we call all of them processes. The terms job and process are used
almost interchangeably in this text. Although we personally prefer the term process, much of
operating-system theory and terminology was developed during a time when the major activity
of operating systems was job processing. It would be misleading to avoid the use of commonly
accepted terms that include the word job (such as job scheduling) simply because process has
superseded job.
THE PROCESS
■ Informally, as mentioned earlier, a process is a program in execution. A process is
more than the program code, which is sometimes known as the text section. It also
includes the current activity, as represented by the value of the program counter and
the contents of the processor’s registers. A process generally also includes the
process stack, which contains temporary data (such as function parameters, return
addresses, and local variables), and a data section, which contains global variables.
Fig 2.1
EXPLANATION OF PROCESS
1. Text Section: The program code of the process. The current activity is
represented by the value of the program counter and the contents of the
processor's registers.
2. Stack: The stack contains temporary data, such as function parameters, return
addresses, and local variables.
3. Data Section: Contains global variables.
4. Heap Section: Memory that is dynamically allocated to the process during its run time.
ATTRIBUTES OR CHARACTERISTICS OF A PROCESS
A process has the following attributes.
Process Id: A unique identifier assigned by the operating system
Process State: Can be ready, running, etc.
CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
Accounting information: Amount of CPU used for process execution, time limits,
execution ID, etc.
I/O status information: For example, devices allocated to the process, open files,
etc
CPU scheduling information: For example, priority (different processes may have
different priorities; for example, a shorter process may be assigned high priority
under shortest-job-first scheduling).
All of the above attributes of a process are also known as the context of the process.
Every process has its own process control block (PCB), i.e. each process has a
unique PCB. All of the above attributes are part of the PCB.
Fig 2.2
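As a rough illustration, the process context described above can be pictured as a C structure. The sketch below is hypothetical and heavily simplified; the field names are our own, and a real kernel PCB (for example, Linux's task_struct) holds far more state.

```c
#include <stdint.h>

/* A hypothetical, simplified process control block (PCB). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int         pid;            /* unique process identifier           */
    proc_state  state;          /* current process state               */
    uint64_t    pc;             /* saved program counter               */
    uint64_t    regs[16];       /* saved CPU registers                 */
    int         priority;      /* CPU-scheduling information          */
    uint64_t    cpu_time_used;  /* accounting information              */
    int         open_files[16]; /* I/O status: open file descriptors   */
    struct pcb *next;           /* link for a process scheduling queue */
} pcb;
```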
STATES OF PROCESS
A process is in one of the following states:
1. New: The process is being created.
2. Ready: After creation, the process moves to the ready state, i.e. the process is
ready for execution.
3. Run: The process is currently executing on the CPU (on a single processor, only
one process can be in execution at a time).
4. Wait (or Block): The process is waiting, for example for I/O access.
5. Complete (or Terminated): The process has completed its execution.
6. Suspended Ready: When the ready queue becomes full, some processes are moved
to the suspended ready state.
7. Suspended Block: When the waiting queue becomes full, some blocked processes
are moved to the suspended block state.
Fig 2.3
CRITICAL SECTION PROBLEM
The critical section is the part of a program that accesses shared resources. The
resource may be any resource in the computer, such as a memory location, a data
structure, the CPU, or an I/O device.
The critical section cannot be executed by more than one process at the same time;
the operating system faces difficulty in allowing and disallowing processes entry to
the critical section.
The critical section is a code segment where shared variables can be accessed.
Atomic action is required in a critical section, i.e. only one process can execute in its
critical section at a time; all the other processes have to wait to execute in their
critical sections.
A diagram that demonstrates the critical section is as follows:
Fig 2.4
In the above diagram, the entry section handles the entry into the critical section. It
acquires the resources needed for execution by the process. The exit section handles
the exit from the critical section. It releases the resources and also informs the other
processes that the critical section is free.
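A minimal runnable sketch of this structure, using a POSIX thread mutex to play the role of the entry and exit sections (the shared counter and loop bound are our own illustration):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;          /* the shared resource */

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* entry section    */
        shared_counter++;                /* critical section */
        pthread_mutex_unlock(&lock);     /* exit section     */
        /* remainder section: code not touching shared data  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter: %ld\n", shared_counter);  /* always 200000 */
    return 0;
}
```

Without the lock/unlock pair, the two threads would race on shared_counter and the final value would often fall short of 200000.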
SYNCHRONIZATION
Synchronization is the way by which processes that share the same memory space
are managed in an operating system. It helps maintain the consistency of data by
using variables or hardware so that only one process can make changes to the shared
memory at a time. There are various solutions for this, such as semaphores,
mutex locks, synchronization hardware, etc.
An operating system is software that manages all applications on a device and
helps in the smooth functioning of the computer. Because of this, the operating
system has to perform many tasks, sometimes simultaneously. This isn't usually a
problem unless these simultaneously occurring processes use a common resource.
For example, consider a bank that stores the account balance of each customer in the
same database. Now suppose you initially have x rupees in your account. Now, you
take out some amount of money from your bank account, and at the same time,
someone tries to look at the amount of money stored in your account. As you are
taking out some money from your account, after the transaction, the total balance left
will be lower than x. But the transaction takes time, so the other person still reads x
as your account balance, which leads to inconsistent data. If we could somehow
ensure that only one of these processes runs at a time, we could guarantee consistent data.
Fig 2.5
In the above image, if Process 1 and Process 2 run at the same time, User 2 will get
the wrong account balance Y, because Process 1's transaction began while the
balance was still X.
Data inconsistency can occur when various processes share a common resource in
a system, which is why there is a need for process synchronization in the operating
system.
For example, if process 1 is trying to read the data present in a memory location
while process 2 is trying to change the data at the same location, there is a high
chance that the data read by process 1 will be incorrect.
Fig 2.6
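A minimal pthreads sketch of this race (the account balance, withdrawal amount, and function names are made-up illustrations; which balance the reader sees depends on how the threads interleave):

```c
#include <pthread.h>
#include <stdio.h>

static long balance = 1000;        /* x rupees initially */

static void *withdraw(void *arg) {
    long tmp = balance;            /* read the balance          */
    /* ... the transaction takes time here ... */
    balance = tmp - 100;           /* write the reduced balance */
    return NULL;
}

static void *inquire(void *arg) {
    /* may print 1000 (stale) or 900, depending on interleaving */
    printf("balance seen: %ld\n", balance);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, inquire, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```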
Let us look at different elements/sections of a program:
Entry Section: The entry section handles a process's request to enter the critical
section and acquires the resources needed for execution.
Critical Section: The critical section ensures that only one process at a time is
modifying the shared data.
Exit Section: The exit section handles a process's departure from the critical
section; it allows the entry of other processes to the shared data after the execution
of one process.
Remainder Section: The remaining part of the code which is not categorized as
above is contained in the Remainder section.
SEMAPHORES
• Semaphores are integer variables used to coordinate the activities of multiple
processes in a computer system. They are used to enforce mutual exclusion, avoid
race conditions, and implement synchronization between processes.
• A semaphore provides two operations: wait (P) and signal (V). The wait operation
decrements the value of the semaphore, and the signal operation increments it.
When the value of the semaphore is zero, any process that performs a wait
operation is blocked until another process performs a signal operation (a conceptual
sketch follows this list).
• Semaphores are used to implement critical sections, which are regions of code that
must be executed by only one process at a time. By using semaphores, processes can
coordinate access to shared resources, such as shared memory or I/O devices.
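A conceptual sketch of wait (P) and signal (V) in C. This toy counting semaphore uses a pthread mutex and condition variable to obtain the atomicity the definition requires; it is a teaching aid, not a kernel implementation:

```c
#include <pthread.h>

/* Toy counting semaphore: the mutex/condvar pair makes wait() and
 * signal() behave atomically, as required. */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  c;
} sema;

void sema_init(sema *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void sema_wait(sema *s) {           /* P: block at zero, then decrement */
    pthread_mutex_lock(&s->m);
    while (s->value == 0)
        pthread_cond_wait(&s->c, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sema_signal(sema *s) {         /* V: increment and wake a waiter */
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);
    pthread_mutex_unlock(&s->m);
}
```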
SEMAPHORES ARE OF TWO TYPES:
BINARY SEMAPHORE –
This is also known as a mutex lock. It can have only two values, 0 and 1, and its
value is initialized to 1. It is used to implement solutions to critical-section
problems involving multiple processes.
COUNTING SEMAPHORE –
Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.
Fig 2.7
Some points regarding P and V operation:
P operation is also called wait, sleep, or down operation, and V operation is also
called signal, wake-up, or up operation.
Both operations are atomic, and the semaphore is typically initialized to one (for
a binary semaphore) or to the number of resource instances (for a counting
semaphore). Here atomic means that the read, modify, and update of the variable
happen together with no preemption, i.e. in between the read, modify, and
update, no other operation that may change the variable is performed.
A critical section is surrounded by both operations to implement process
synchronization. See the below image. The critical section of Process P is in
between P and V operation.
Fig 2.8
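In practice, POSIX semaphores expose these operations as sem_wait and sem_post. The sketch below guards a critical section with a binary semaphore; initializing the semaphore to a value N greater than one would instead give a counting semaphore for a resource with N instances:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t s;            /* the semaphore                      */
static int shared = 0;     /* shared data needing protection     */

static void *proc(void *arg) {
    sem_wait(&s);          /* P operation: entry section         */
    shared++;              /* critical section, between P and V  */
    sem_post(&s);          /* V operation: exit section          */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);    /* initial value 1 => binary semaphore */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, proc, NULL);
    pthread_create(&t2, NULL, proc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* always 2 */
    return 0;
}
```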
PROCESS GENERATION
Process generation typically refers to the creation or instantiation of new processes
within an operating system. Processes are fundamental units of execution in a
computer system, and they represent running programs. Here’s an overview of the
process generation in process management:
1. Request for a New Process: Process generation begins when a user or
application requests the execution of a new program or task. This request could
be initiated through various means, such as opening a new application or
launching a background service.
2. Process Creation: Once a request is received, the operating system's process
manager creates a new process (see the fork()/exec() sketch after this list). This
involves allocating the necessary resources and setting up the data structures
needed to manage the new process.
3. Resource Allocation: The operating system allocates various resources to the
new process, including CPU time, memory space, file handles, and
input/output devices. These resources are essential for the proper execution of
the process.
4. Initialization: The newly created process is initialized with the program code,
data, and other necessary information. This includes setting the program
counter to the starting point of the program and initializing data structures.
5. Process Scheduling: Depending on the scheduling algorithm used by the
operating system, the newly created process may be added to a queue for
execution. The scheduler decides when and for how long each process will run
on the CPU.
6. Execution: The process is now in the "ready to run" state and can execute its
code on the CPU when its turn comes, based on the scheduling algorithm.
7. Termination: Once the process completes its task or is terminated by the user or
system, its resources are deallocated, and the process is removed from the
system.
8. Clean-up: Any remaining resources or data associated with the terminated
process are cleaned up to prevent memory leaks and resource wastage.
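On UNIX-like systems, several of these steps (creation, initialization with a new program, and waiting for termination) are visible to programmers through the fork(), exec(), and wait() system calls. A minimal sketch follows; the choice of ls as the new program is just an example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* process creation             */
    if (pid < 0) {
        perror("fork");              /* creation failed              */
        exit(1);
    } else if (pid == 0) {
        execlp("ls", "ls", (char *)NULL);  /* initialize the child   */
        perror("execlp");            /* with a new program image;    */
        exit(1);                     /* reached only if exec fails   */
    } else {
        wait(NULL);                  /* parent waits for termination */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```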
Process generation is a crucial part of process management as it ensures that multiple
tasks or programs can run concurrently on a computer system without interfering
with each other. Properly managing the generation, execution, and termination of
processes is essential for efficient and stable system operation.
PROCESS SCHEDULING
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system.
Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
CATEGORIES IN SCHEDULING
Scheduling falls into one of two categories:
1. Non-preemptive: In this case, a resource cannot be taken from a process before
the process has finished running. The CPU is switched only when the running
process finishes or transitions to a waiting state.
2. Preemptive: In this case, the OS assigns the CPU to a process for a
predetermined period of time. A process may switch from the running state to the
ready state, or from the waiting state to the ready state, during resource allocation.
This switching happens because the CPU may give other processes priority and
substitute the currently active process with a higher-priority one.
There are three types of process schedulers.
LONG TERM OR JOB SCHEDULER
It brings new processes to the ready state. It controls the degree of
multiprogramming, i.e., the number of processes present in the ready state at any
point in time. It is important that the long-term scheduler make a careful selection
of both I/O-bound and CPU-bound processes.
I/O-bound tasks are those that spend most of their time on input and output
operations, while CPU-bound processes are those that spend their time computing
on the CPU. The job scheduler increases efficiency by maintaining a balance
between the two. Long-term schedulers operate at a high level and are typically
used in batch-processing systems.
SHORT-TERM OR CPU SCHEDULER
It is responsible for selecting one process from the ready state and scheduling it
onto the running state. Note that the short-term scheduler only selects the process
to schedule; it does not itself load the process for running. This is where all the
scheduling algorithms are applied.
The CPU scheduler is responsible for ensuring that processes with high burst
times do not cause starvation of others. The dispatcher is responsible for loading
the process selected by the short-term scheduler onto the CPU (moving it from
ready to running); context switching is done by the dispatcher only.
A dispatcher does the following:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded program.
MEDIUM-TERM SCHEDULER
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It helps
maintain a balance between I/O-bound and CPU-bound processes, and it reduces
the degree of multiprogramming.
Some Other Schedulers
I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations, such as reads and writes to disks or networks. They can use
various algorithms to determine the order in which I/O operations are executed,
such as FCFS (First-Come, First-Served) or RR (Round Robin).
Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize
and schedule tasks using various algorithms such as EDF (Earliest Deadline
First) or RM (Rate Monotonic).
PROCESS SCHEDULING QUEUES
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of
a process is changed, its PCB is unlinked from its current queue and moved to its
new state queue.
The Operating System maintains the following important process scheduling queues:
Job queue – This queue keeps all the processes in the system.
Ready queue – This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues – The processes which are blocked due to unavailability of an
I/O device constitute this queue.
Fig 2.9
The OS can use different policies to manage each queue (FIFO, round robin,
priority, etc.). The OS scheduler determines how to move processes between the
ready queue and the run queue, which can have only one entry per processor core
on the system; in the above diagram, the run queue has been merged with the CPU.
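A toy sketch of moving a PCB between state queues, as described above (the two-field pcb and the FIFO queue here are deliberately minimal illustrations):

```c
#include <stdio.h>

/* Unlinking a PCB from one queue and appending it to another
 * models a change of process state. */
typedef struct pcb { int pid; struct pcb *next; } pcb;
typedef struct { pcb *head, *tail; } queue;

void enqueue(queue *q, pcb *p) {     /* link PCB into a state queue */
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

pcb *dequeue(queue *q) {             /* unlink PCB from its queue   */
    pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    queue ready = {0}, device = {0};
    pcb p1 = {1, NULL};
    enqueue(&ready, &p1);            /* process becomes ready       */
    pcb *running = dequeue(&ready);  /* scheduler picks it to run   */
    enqueue(&device, running);       /* it blocks waiting for I/O   */
    printf("pid %d is now on the device queue\n", device.head->pid);
    return 0;
}
```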
CPU SCHEDULING
CPU scheduling is the process of allowing one process to use the CPU while the
execution of another is on hold (in the waiting state) due to the unavailability of
some resource such as I/O, thereby making full use of the CPU. The aim of CPU
scheduling is to make the system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler). The scheduler selects from among the
processes in memory that are ready to execute and allocates the CPU to one of them.
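As an example of what a scheduling algorithm computes, the sketch below works out FCFS waiting times for three processes that all arrive at time 0; the burst times 24, 3, and 3 are the classic textbook example values:

```c
#include <stdio.h>

/* FCFS: with all arrivals at time 0, each process waits for the
 * total burst time of the processes ahead of it. */
int main(void) {
    int burst[] = {24, 3, 3};            /* example burst times */
    int n = 3, wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d, then runs for %d\n", i + 1, wait, burst[i]);
        total_wait += wait;
        wait += burst[i];                /* later arrivals wait longer */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    /* prints 17.00 for bursts 24, 3, 3 */
    return 0;
}
```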
CPU SCHEDULING: DISPATCHER
Another component involved in the CPU scheduling function is the Dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program from
where it left off.
The dispatcher should be as fast as possible, given that it is invoked during every
process switch. The time taken by the dispatcher to stop one process and start another
process is known as the Dispatch Latency. Dispatch Latency can be explained using
the below figure:
Fig 2.10
TYPES OF CPU SCHEDULING
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O
request, or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
on completion of I/O).
4. When a process terminates.
DEADLOCK
A deadlock is a situation in which a set of processes is blocked because each
process holds a resource while waiting for a resource held by another. Deadlock
can arise if the following four conditions hold simultaneously (the necessary
conditions):
1. Mutual Exclusion: Two or more resources are non-shareable (only one process
can use a resource at a time).
2. Hold and Wait: A process is holding at least one resource while waiting for
additional resources.
3. No Preemption: A resource cannot be taken from a process unless the process
releases the resource.
4. Circular Wait: A set of processes wait for each other in circular form
(illustrated in the sketch after this list).
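All four conditions can be seen at once in a classic two-mutex program. The sketch below usually hangs because the two threads acquire the locks in opposite orders; it is a deliberately broken toy, shown only to illustrate the conditions:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* AB-BA deadlock: t1 holds A and waits for B, while t2 holds B and
 * waits for A. Mutual exclusion, hold and wait, no preemption, and
 * circular wait all hold. WARNING: this program usually never exits. */
static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    pthread_mutex_lock(&A);
    sleep(1);                    /* widen the window for the deadlock */
    pthread_mutex_lock(&B);      /* blocks forever if t2 holds B      */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg) {
    pthread_mutex_lock(&B);
    sleep(1);
    pthread_mutex_lock(&A);      /* blocks forever if t1 holds A      */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);       /* never returns once deadlocked     */
    pthread_join(y, NULL);
    puts("no deadlock this time");
    return 0;
}
```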
DEADLOCK PREVENTION:
■ The idea is to not let the system enter a deadlock state: the system ensures that at
least one of the four conditions mentioned above cannot arise. These techniques
can be very costly, so we use them in cases where our priority is keeping the
system deadlock-free.
■ Looking at each category individually, prevention is done by negating one of
the above-mentioned necessary conditions for deadlock. Prevention can be done in
four different ways:
MUTUAL EXCLUSION
Mutual exclusion, from the resource point of view, means that a resource can never
be used by more than one process simultaneously. That is fair enough, but it is the
main reason behind deadlock: if a resource could be used by more than one process
at the same time, processes would never have to wait for it.
However, if we can make resources behave in a shareable (non-mutually-exclusive)
manner, then deadlock can be prevented.
SPOOLING
For a device like a printer, spooling can work. There is memory associated with the
printer that stores jobs from each process. The printer then collects the jobs and
prints each one in FCFS order. With this mechanism, a process does not have to
wait for the printer and can continue with whatever it was doing; it collects its
output later, once the output has been produced.
Fig 2.12
Although spooling can be an effective approach to working around mutual
exclusion, it suffers from two kinds of problems:
1. It cannot be applied to every resource.
2. After some point in time, a race condition may arise between the processes
competing for space in the spool.
We cannot force a resource to be used by more than one process at the same time,
since that would not be fair and serious performance problems could arise.
Therefore, we cannot practically violate mutual exclusion.
Eliminate Hold and Wait: Allocate all required resources to the process before it
starts executing; this eliminates the hold-and-wait condition, but it leads to low
device utilization. For example, if a process requires a printer only at a later time
and we allocate the printer before execution starts, the printer will remain blocked
until the process has completed its execution. The process must request a new set
of resources only after releasing its current set. This solution may lead to
starvation.
Fig 2.13
Eliminate No Preemption: Preempt resources from a process when those resources
are required by other, higher-priority processes.
Eliminate Circular Wait: Each resource is assigned a number, and a process may
request resources only in increasing order of this numbering. For example, if
process P1 has been allocated resource R5, a later request by P1 for R4 or R3
(numbered lower than R5) will not be granted; only requests for resources
numbered higher than R5 will be granted (see the ordered-locking sketch below).
Fig 2.14
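Applied to the two-mutex sketch shown earlier, resource ordering means every thread must acquire lock A (the lower-numbered resource) before lock B. With that single change, a circular wait, and hence this deadlock, cannot occur:

```c
#include <pthread.h>
#include <stdio.h>

/* Circular-wait elimination: number the locks (A = 1, B = 2) and
 * always acquire them in increasing order. Both threads now take A
 * before B, so the AB-BA cycle is impossible. */
static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;  /* resource 1 */
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;  /* resource 2 */

static void *worker(void *arg) {
    pthread_mutex_lock(&A);      /* lower-numbered resource first */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, worker, NULL);
    pthread_create(&y, NULL, worker, NULL);
    pthread_join(x, NULL);
    pthread_join(y, NULL);
    puts("both threads finished; no deadlock is possible");
    return 0;
}
```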
DEADLOCK DETECTION AND RECOVERY
■ Deadlock detection and recovery is the process of detecting and resolving deadlocks
in an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.
■ There are two main approaches to deadlock detection and recovery:
Prevention: The operating system takes steps to prevent deadlocks from
occurring by ensuring that the system is always in a safe state, where deadlocks
cannot occur. This is achieved through resource allocation algorithms such as
the Banker’s Algorithm.
Detection and Recovery: If deadlocks do occur, the operating system must
detect and resolve them. Deadlock detection algorithms, such as the wait-for
graph, are used to identify deadlocks, and recovery techniques, such as rollback
and aborting processes, are used to resolve them. Recovery releases the
resources held by one or more processes, allowing the system to continue to
make progress.
Difference Between Prevention and Detection/Recovery: Prevention aims to avoid
deadlocks altogether by carefully managing resource allocation, while detection and
recovery aim to identify and resolve deadlocks that have already occurred.
Deadlock detection and recovery is an important aspect of operating system design
and management, as it affects the stability and performance of the system. The choice
of deadlock detection and recovery approach depends on the specific requirements of
the system and the trade-offs between performance, complexity, and risk tolerance.
The operating system must balance these factors to ensure that deadlocks are
effectively detected and resolved.
Deadlock prevention was discussed above, and deadlock avoidance is covered later
in this unit; here, the deadlock detection and recovery technique for handling
deadlock is discussed.
DEADLOCK DETECTION:
1. If resources have a single instance –
In this case, for deadlock detection we can run an algorithm to check for a
cycle in the resource-allocation graph. The presence of a cycle in the graph is
a necessary and sufficient condition for deadlock.
2. If there are multiple instances of resources –
Here, a cycle is a necessary but not a sufficient condition for deadlock; the
system may or may not be in deadlock, depending on the situation.
3. Wait-For Graph Algorithm –
The wait-for graph algorithm detects deadlock in systems where each resource
has a single instance. The wait-for graph is a directed graph, obtained by
collapsing the resource nodes of the resource-allocation graph, whose edges
represent which process is waiting for which other process; a cycle in this
graph indicates deadlock (see the DFS sketch after this list).
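A sketch of single-instance deadlock detection as a depth-first search for a cycle in the wait-for graph (the four-process adjacency matrix below is made-up example data encoding the cycle P0 -> P1 -> P2 -> P0):

```c
#include <stdio.h>

#define N 4                    /* number of processes (example) */

/* wait_for[i][j] = 1 means process i is waiting for process j. */
static int wait_for[N][N] = {
    {0, 1, 0, 0},
    {0, 0, 1, 0},
    {1, 0, 0, 0},
    {0, 0, 0, 0},
};

/* DFS colouring: 0 = unvisited, 1 = on current path, 2 = finished.
 * Reaching a node already on the current path means a cycle. */
static int color[N];

static int dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (color[v] == 1) return 1;          /* back edge: cycle */
        if (color[v] == 0 && dfs(v)) return 1;
    }
    color[u] = 2;
    return 0;
}

int main(void) {
    for (int i = 0; i < N; i++) {
        if (color[i] == 0 && dfs(i)) {
            puts("deadlock detected (cycle in wait-for graph)");
            return 0;
        }
    }
    puts("no deadlock");
    return 0;
}
```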
DEADLOCK RECOVERY:
Traditional operating systems such as Windows do not deal with deadlock
recovery, as it is a time- and space-consuming process; real-time operating systems
use deadlock recovery.
Killing the process
Either kill all the processes involved in the deadlock, or kill them one by one:
after killing each process, check for deadlock again and repeat until the system
recovers. Killing the processes one by one helps the system break the circular
wait condition.
Resource Preemption
Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes, so that the system has a
chance of recovering from the deadlock. In this case, however, the system may
go into starvation.
Concurrency Control
Concurrency control mechanisms are used to prevent data inconsistencies in systems
with multiple concurrent processes. These mechanisms ensure that concurrent
processes do not access the same data at the same time, which can lead to
inconsistencies and errors. Deadlocks can occur in concurrent systems when two or
more processes are blocked, waiting for each other to release the resources they need.
This can result in a system-wide stall, where no process can make progress.
Concurrency control mechanisms can help prevent deadlocks by managing access to
shared resources and ensuring that concurrent processes do not interfere with each
other.
ADVANTAGES AND DISADVANTAGES:
Advantages of Deadlock Detection and Recovery in Operating Systems:
Improved System Stability: Deadlocks can cause system-wide stalls, and detecting
and resolving deadlocks can help to improve the stability of the system.
Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.
Disadvantages of Deadlock Detection and Recovery in Operating Systems:
Performance Overhead: Deadlock detection and recovery algorithms can introduce
a significant overhead in terms of performance, as the system must regularly check
for deadlocks and take appropriate action to resolve them.
Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the resource-
allocation graph or timestamping.
False Positives and Negatives: Deadlock detection algorithms are not perfect and
may produce false positives or negatives, indicating the presence of deadlocks when
they do not exist or failing to detect deadlocks that do exist.
Risk of Data Loss: In some cases, recovery algorithms may require rolling back the
state of one or more processes, leading to data loss or corruption.
DEADLOCK AVOIDANCE
In complex systems involving multiple processes and shared resources, the potential
for deadlocks arises when processes wait for each other to release resources, causing
a standstill. The resulting deadlocks can cause severe issues in computer systems,
such as performance degradation and even system crashes.
To prevent such problems, the technique of deadlock avoidance is employed. It
entails scrutinizing the requests made by processes for resources and evaluating the
available resources to determine if the grant of such requests would lead to a
deadlock.
In cases where granting a request would result in a deadlock, the system denies the
request. Deadlock avoidance is a crucial aspect of operating system design and plays
an indispensable role in upholding the dependability and steadiness of computer
systems.
Safe State and Unsafe State
A safe state refers to a system state in which resources can be allocated to each
process in some order that avoids deadlock: there exists a sequence of resource
allocations under which all processes can run to completion.
Conversely, an unsafe state is a system state in which a deadlock may occur: the
successful completion of all processes is not assured, and the risk of deadlock is
high. The system is in an unsafe state when no sequence of resource allocations
guarantees the successful execution of all processes.
Deadlock Avoidance Algorithms
When resource categories have only single instances of their resources, the
Resource-Allocation Graph Algorithm is used; in this setting, a cycle is a necessary
and sufficient condition for deadlock.
When resource categories have multiple instances of their resources, the Banker's
Algorithm is used; with multiple instances, a cycle in the graph is a necessary but
not a sufficient condition for deadlock, so the graph algorithm no longer suffices.
Resource-Allocation Graph Algorithm
Resource Allocation Graph (RAG) is a popular technique used for deadlock
avoidance. It is a directed graph that represents the processes in the system, the
resources available, and the relationships between them. The RAG has two types
of edges: request edges and assignment edges. A request edge represents a request
by a process for a resource, while an assignment edge represents the assignment of
a resource to a process.
To determine whether the system is in a safe state or not, the RAG is analyzed to
check for cycles. If there is a cycle in the graph, it means that the system is in an
unsafe state, and granting a resource request can lead to a deadlock. In contrast, if
there are no cycles in the graph, it means that the system is in a safe state, and
resource allocation can proceed without causing a deadlock.
Fig 2.15
The RAG technique is straightforward to implement and provides a clear visual
representation of the processes and resources in the system. It is also an effective
way to identify the cause of a deadlock if one occurs. However, one of the main
limitations of the RAG technique is that it assumes that all resources in the system
are allocated at the start of the analysis. This assumption can be unrealistic in
practice, where resource allocation can change dynamically during system operation.
Therefore, other techniques such as the Banker’s Algorithm are used to overcome
this limitation.
Banker's Algorithm
The Banker's Algorithm handles resource categories with multiple instances. Each
process declares in advance the maximum number of instances of each resource it
may need; when a process makes a request, the system tentatively grants it and
checks whether a safe sequence in which every process can finish still exists. The
request is granted only if the resulting state is safe; otherwise, the process must
wait. These steps are repeated for each resource request made by any process in the
system. Overall, the Banker's Algorithm is an effective way to avoid deadlocks in
resource-constrained systems by carefully managing resource allocations and
predicting potential conflicts before they arise.
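A sketch of the safety check at the heart of the Banker's Algorithm (the allocation and maximum matrices and the available vector are made-up example data):

```c
#include <stdio.h>

#define P 3   /* number of processes (example)      */
#define R 2   /* number of resource types (example) */

/* Example state: alloc = currently held, maxm = declared maximum. */
static int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};
static int maxm [P][R] = {{3, 2}, {1, 2}, {2, 2}};
static int avail[R]    = {2, 1};

/* Returns 1 and prints a safe sequence if one exists, else 0. */
static int is_safe(void) {
    int work[R], finish[P] = {0}, done = 0;
    for (int j = 0; j < R; j++) work[j] = avail[j];
    while (done < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;                 /* can P_i's remaining need be met? */
            for (int j = 0; j < R; j++)
                if (maxm[i][j] - alloc[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                   /* pretend P_i runs to completion   */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = 1; done++; progressed = 1;
                printf("P%d ", i);
            }
        }
        if (!progressed) return 0;      /* no process can finish: unsafe    */
    }
    return 1;
}

int main(void) {
    printf("safe sequence: ");
    puts(is_safe() ? "-> state is SAFE" : "-> state is UNSAFE");
    return 0;
}
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.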