
Unit 2: Process Management

Process Concept
Process Scheduling
Operations
Inter Process Communication (IPC)
Multi Thread Programming Models
Process Scheduling Criteria
Algorithms and their Evaluation.

Process Concept:
Process
Process State
Process Control Block
Threads

Process:
-->A process is a program in execution.
-->A process is more than the program code, which is sometimes known as the text section.
-->It also includes the current activity, as represented by the value of the program counter and the
contents of the processor's registers.
-->A process generally also includes the process stack, which contains temporary data (such as
function parameters, return addresses, and local variables), and a data section, which contains
global variables.
-->A program is a passive entity, such as a file containing a list of instructions stored on disk
whereas a process is an active entity, with a program counter specifying the next instruction to
execute and a set of associated resources.
-->A program becomes a process when an executable file is loaded into memory.
Process State:
-->As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process.
--> Each process may be in one of the following states:
• New:The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
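
For reference, these five states map naturally onto a small enumeration in C. A minimal sketch (the names are illustrative, not tied to any particular operating system):

/* Illustrative process-state enumeration; the names are for this sketch only. */
enum proc_state {
    STATE_NEW,        /* the process is being created */
    STATE_READY,      /* waiting to be assigned to a processor */
    STATE_RUNNING,    /* instructions are being executed */
    STATE_WAITING,    /* waiting for an event such as I/O completion */
    STATE_TERMINATED  /* the process has finished execution */
};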

Process Control Block:


Each process is represented in the operating system by a process control block (PCB)—also called a
task control block. It contains many pieces of information associated with a specific process,
including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
•Program counter. The counter indicates the address of the next instruction to be executed.
• CPU registers. The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus any
condition-code information.
• CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such information as the value
of the base and limit registers, the page tables, or the segment tables, depending on the memory
system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time
limits, account members, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a
list of open files, and so on.
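
Taken together, the fields above suggest a structure like the following. This is a simplified, hypothetical layout for illustration only (it reuses the proc_state enum sketched earlier); a real kernel structure, such as Linux's task_struct, holds far more:

/* Simplified, illustrative PCB; all field names are hypothetical. */
struct pcb {
    int             pid;              /* unique process identifier */
    enum proc_state state;            /* new, ready, running, waiting, ... */
    unsigned long   program_counter;  /* address of the next instruction */
    unsigned long   registers[16];    /* saved CPU register contents */
    int             priority;         /* CPU-scheduling information */
    unsigned long   base, limit;      /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information */
    int             open_files[32];   /* I/O status: open file descriptors */
    struct pcb     *next;             /* link field for scheduling queues */
};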
Threads:
-->The process model discussed so far has implied that a process is a program that performs a single
thread of execution.
-->For example, when a process is running a word-processor program, a single thread of
instructions is being executed.
-->This single thread of control allows the process to perform only one task at one time.
-->The user cannot simultaneously type in characters and run the spell checker within the same
process.
-->Many modern operating systems have extended the process concept to allow a process to have
multiple threads of execution and thus to perform more than one task at a time.

Process Scheduling:
Scheduling Queues
Schedulers
Context Switch

Scheduling Queues:
-->As processes enter the system, they are put into a job queue, which consists of all processes in
the system.
-->The processes that are residing in main memory and are ready and waiting to execute are kept on
a list called the ready queue.
-->This queue is generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list.
-->Each PCB includes a pointer field that points to the next PCB in the ready queue.
-->The system also includes other queues. When a process is allocated the CPU, it executes for a
while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as
the completion of an I/O request.
-->Suppose the process makes an I/O request to a shared device, such as a disk.
-->Since there are many processes in the system, the disk may be busy with the I/O request of some
other process.
--> The process therefore may have to wait for the disk. The list of processes waiting for a particular
I/O device is called a device queue.
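
Since the ready queue is described above as a linked list with pointers to the first and final PCBs, its enqueue and dequeue operations can be sketched as follows (building on the illustrative struct pcb above; the names are hypothetical):

/* Illustrative ready queue: a singly linked list of PCBs with head and tail. */
struct ready_queue {
    struct pcb *head;   /* first PCB in the list */
    struct pcb *tail;   /* final PCB in the list */
};

/* Append a PCB at the tail of the ready queue. */
void rq_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head; returns NULL if the queue is empty. */
struct pcb *rq_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;
}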

Schedulers:
Short Term Scheduler
Long Term Scheduler
Medium Term Scheduler

 Schedulers are special system software that handles process scheduling in various ways.
 Their main task is to select the jobs to be submitted into the system and to decide which
process to run.
 Schedulers are of three types
• Long Term Scheduler
• Short Term Scheduler
• Medium Term Scheduler

Long Term Scheduler

• It is also called job scheduler.


• The long term scheduler determines which programs are admitted to the system for processing.
• The job scheduler selects processes from the queue and loads them into memory for execution.
• The process is loaded into memory for CPU scheduling.
• On some systems, the long term scheduler may be absent or minimal.
• Time-sharing operating systems have no long term scheduler.
• When a process changes state from new to ready, the long term scheduler is used.

Short Term Scheduler

 It is also called CPU scheduler.


 Its main objective is to increase system performance in accordance with the chosen set of
criteria.
 It handles the change of a process from the ready state to the running state.
 The CPU scheduler selects a process from among those that are ready to execute and allocates
the CPU to it.
 The short term scheduler, also known as the dispatcher, executes most frequently and makes the
fine-grained decision of which process to execute next.
 The short term scheduler is faster than the long term scheduler.

Medium Term Scheduler

-->The key idea behind a medium-term scheduler is that sometimes it can be advantageous to
remove processes from memory (and from active contention for the CPU) and thus reduce the
degree of multiprogramming.
-->Later, the process can be reintroduced into memory, and its execution can be continued where it
left off. This scheme is called swapping.
-->The process is swapped out, and is later swapped in, by the medium-term scheduler.

Context Switch :
-->When an interrupt occurs, the system needs to save the current context of the process currently
running on the CPU so that it can restore that context when its processing is done, essentially
suspending the process and then resuming it.
-->The context is represented in the PCB of the process; it includes the value of the CPU registers,
the process state , and memory-management information.
-->Generically, we perform a state save of the current state of the CPU, be it in kernel or user
mode, and then a state restore to resume operations.
-->Switching the CPU to another process requires performing a state save of the current process and
a state restore of a different process. This task is known as a context switch.
-->When a context switch occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
-->Context-switch time is pure overhead, because the system does no useful work while switching.
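
Conceptually, the sequence reads like the sketch below. This is pseudo-C only: the actual state save and restore happen in architecture-specific assembly, and save_cpu_state()/restore_cpu_state() are hypothetical helpers standing in for that code:

/* Conceptual sketch; save_cpu_state() and restore_cpu_state() are
 * hypothetical stand-ins for architecture-specific assembly. */
void context_switch(struct pcb *old, struct pcb *new)
{
    save_cpu_state(old);        /* state save: registers, PC into old's PCB */
    old->state = STATE_READY;   /* or STATE_WAITING, depending on the cause */
    new->state = STATE_RUNNING;
    restore_cpu_state(new);     /* state restore: reload registers and PC */
    /* execution now continues where the new process left off */
}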

Operations on Processes:
Process Creation
Process Execution
Process Termination

Process Creation:

-->A process may create several new processes, via a create-process system call, during the course
of execution.
-->The creating process is called a parent process, and the new processes are called the children of
that process.
-->Each of these new processes may in turn create other processes, forming a tree of processes.
-->Most operating systems identify processes according to a unique process identifier (or pid), which
is typically an integer number.

[Figure: A tree of processes on a typical Solaris system]

-->In general, a process will need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task.
-->When a process creates a sub process, that sub process may be able to obtain its resources
directly from the operating system, or it may be constrained to a subset of the resources of the
parent process.
-->The parent may have to partition its resources among its children, or it may be able to share
some resources (such as memory or files) among several of its children.
-->When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

-->There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process
2. The child process has a new program loaded into it.
Process Execution:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>   /* fork() */
#define MAX_COUNT 10

void ChildProcess(void);  /* child process prototype */
void ParentProcess(void); /* parent process prototype */

int main(void)
{
    pid_t pid;
    pid = fork();        /* create a child process */
    if (pid == 0)
        ChildProcess();  /* child: fork() returned 0 */
    else
        ParentProcess(); /* parent: fork() returned the child's pid */
    return 0;
}

void ChildProcess(void)
{
    int i;
    for (i = 1; i <= MAX_COUNT; i++)
        printf("This line is from child, value = %d\n", i);
    printf("*** Child process is done ***\n");
}

void ParentProcess(void)
{
    int i;
    for (i = 1; i <= MAX_COUNT; i++)
        printf("This line is from parent, value = %d\n", i);
    printf("*** Parent is done ***\n");
}
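
When this program is compiled and run, the parent's and child's lines may interleave in any order, since after fork() the two processes execute concurrently and the scheduler decides which runs when.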

Process Termination:
-->A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
-->A process terminates when it finishes executing its final statement and asks the operating system
to delete it by using the exit () system call.
-->At that point, the process may return a status value (typically an integer) to its parent process
(via the wait() system call).
-->All the resources of the process—including physical and virtual memory, open files, and I/O
buffers—are deallocated by the operating system.
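
The exit()/wait() interaction can be demonstrated with a short POSIX program in the same style as the fork() example above; the status value 7 is arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                /* child process */
        printf("Child exiting with status 7\n");
        exit(7);                   /* return a status value to the parent */
    } else {                       /* parent process */
        int status;
        wait(&status);             /* block until the child terminates */
        if (WIFEXITED(status))
            printf("Parent: child %d returned %d\n",
                   (int) pid, WEXITSTATUS(status));
    }
    return 0;
}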
Interprocess Communication:
Introduction
Shared Memory System
Message Passing System

Introduction:
-->Processes executing concurrently in the operating system may be either independent processes
or cooperating processes.
--> A process is independent if it cannot affect or be affected by the other processes executing in the
system.
-->Any process that does not share data with any other process is independent.
--> Clearly, any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:
• Information sharing: Since several users may be interested in the same piece of information, we
must provide an environment to allow concurrent access to such information.
•Computation speedup: If we want a particular task to run faster, we must break it into subtasks,
each of which will be executing in parallel with the others. Notice that such a speedup can
be achieved only if the computer has multiple processing elements (such as CPUs or I/O
channels).
•Modularity: We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
•Convenience: Even an individual user may work on many tasks at the same time. For instance, a
user may be editing, printing, and compiling in parallel.
-->Cooperating processes require an interprocess communication (IPC) mechanism that will allow
them to exchange data and information.
-->There are two fundamental models of interprocess communication:
(1) Shared Memory
(2) Message Passing.
-->In the shared-memory model, a region of memory that is shared by cooperating processes is
established. Processes can then exchange information by reading and writing data to the shared
region.
-->In the message- passing model, communication takes place by means of messages exchanged
between the cooperating processes.
[Figure: (a) the message-passing system; (b) the shared-memory system.]
Shared Memory System:
-->Interprocess communication using shared memory requires communicating processes to establish
a region of shared memory.
-->Typically, a shared-memory region resides in the address space of the process creating the
shared-memory segment.
-->Other processes that wish to communicate using this shared-memory segment must attach it to
their address space.
-->They can then exchange information by reading and writing data in the shared areas.
-->The form of the data and the location are determined by these processes and are not under the
operating system's control.
-->The processes are also responsible for ensuring that they are not writing to the same location
simultaneously.
-->To illustrate the concept of cooperating processes, let's consider the producer-consumer problem,
which is a common paradigm for cooperating processes.
-->A producer process produces information that is consumed by a consumer process.
-->One common solution uses a bounded buffer: a fixed-size circular array that both processes share.
-->Let's look more closely at how the bounded buffer can be used to enable processes to share
memory.
-->The following variables reside in a region of memory shared by the producer and consumer
processes:
Shared data:

#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The buffer is empty when in == out, and full when ((in + 1) % BUFFER_SIZE) == out.

Producer:

item next_produced;

while (true) {
    /* produce an item in next_produced */

    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */

    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer:

item next_consumed;

while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */

    /* remove an item from the buffer */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;

    /* consume the item in next_consumed */
}
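
On POSIX systems, such a shared region can be created with shm_open() and mapped with mmap(). The sketch below shows one process creating and attaching the region; a second process would attach it by calling shm_open() with the same name. The name "/bb_demo" is arbitrary, error handling is omitted, and on Linux the program links with -lrt:

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <sys/mman.h>   /* shm_open(), mmap() */
#include <unistd.h>     /* ftruncate() */

#define BUFFER_SIZE 10
typedef struct { int data; } item;   /* illustrative item type */

struct shared_region {
    item buffer[BUFFER_SIZE];
    int  in;
    int  out;
};

struct shared_region *attach_region(void)
{
    /* create (or open) the named region and size it */
    int fd = shm_open("/bb_demo", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(struct shared_region));

    /* map it into this process's address space */
    return mmap(NULL, sizeof(struct shared_region),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}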

Message-Passing Systems:
-->Message passing provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing the same address space.
-->It is useful in a distributed environment, where the communicating processes may reside on
different computers connected by a network.
-->For example, a chat program used on the World Wide Web could be designed so that chat
participants communicate with one another by exchanging messages.
-->A message-passing facility provides at least two operations: send(message) and
receive(message).
-->Messages sent by a process can be of either fixed or variable size.
-->If only fixed-sized messages can be sent, the system-level implementation is straightforward.
-->This restriction, however, makes the task of programming more difficult.
-->Conversely, variable-sized messages require a more complex system-level implementation, but
the programming task becomes simpler.
-->Here are several methods for logically implementing a link and the send()/receive() operations
(a small pipe-based sketch follows the list):
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
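
As a simple concrete instance of send() and receive(), a UNIX pipe provides a one-way message channel between a parent and child with no shared address space. A minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char msg[16] = {0};

    pipe(fd);                      /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {             /* child: receive(message) */
        read(fd[0], msg, sizeof(msg) - 1);
        printf("child received: %s\n", msg);
    } else {                       /* parent: send(message) */
        write(fd[1], "hello", 6);  /* 6 bytes includes the trailing '\0' */
    }
    return 0;
}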

Naming:
-->Processes that want to communicate must have a way to refer to each other. They can use either
direct or indirect communication.
-->Under direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication.
--> In this scheme, the send() and receive() primitives are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
-->A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to communicate.
• A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.
-->This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver
process must name the other to communicate.
-->A variant of this scheme employs asymmetry in addressing.
-->Here, only the sender names the recipient; the recipient is not required to name the sender.
-->In this scheme, the send() and receive () primitives are defined as follows:
send(P, message)—Send a message to process P.
receive(id, message)—Receive a message from any process.
-->The variable id is set to the name of the process with which communication has taken place.
-->With indirect communication, the messages are sent to and received from mailboxes, or ports.
-->Each mailbox has a unique identification. The send() and receive() primitives are defined as
follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
-->The operating system then must provide a mechanism that allows a process to do the following:
• Create a new mailbox.
• Send and receive messages through the mailbox.
• Delete a mailbox.
Synchronization:
Communication between processes takes place through calls to send() and receive() primitives.
There are different design options for implementing each primitive. Message passing may be either
blocking or nonblocking—also known as synchronous and asynchronous. A short POSIX sketch
follows the list below.
• Blocking send. The sending process is blocked until the message is received by the receiving
process or by the mailbox.
• Nonblocking send. The sending process sends the message and resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.
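
POSIX message queues illustrate both styles: mq_receive() blocks by default, while a queue opened with O_NONBLOCK returns immediately with errno set to EAGAIN when no message is waiting. A sketch (the queue name "/demo_mq" is arbitrary; on Linux the default message size is 8192 bytes and the program links with -lrt):

#include <errno.h>
#include <fcntl.h>    /* O_* flags */
#include <mqueue.h>
#include <stdio.h>

void receive_demo(void)
{
    char buf[8192];   /* must be at least the queue's message size */

    /* Blocking receive: waits until a message is available. */
    mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDONLY, 0666, NULL);
    mq_receive(q, buf, sizeof(buf), NULL);

    /* Nonblocking receive: returns -1 immediately if the queue is empty. */
    mqd_t qn = mq_open("/demo_mq", O_RDONLY | O_NONBLOCK);
    if (mq_receive(qn, buf, sizeof(buf), NULL) == -1 && errno == EAGAIN)
        printf("no message available (nonblocking)\n");
}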

Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes
reside in a temporary queue. Basically, such queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the
queue is not full when a new message is sent, the message is placed in the queue (either the message
is copied or a pointer to the message is kept), and the sender can continue execution without
waiting. The link's capacity is finite, however. If the link is full, the sender must block until space is
available in the queue.
• Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can
wait in it. The sender never blocks.

Multithreading Programming Models:


Difference between User Level and Kernel Level Threads
Many to One Model
One to One Model
Many to Many Model

Difference between User Level & Kernel Level Thread:


User Level Threads:
• Faster to create and manage.
• Implemented by a thread library at the user level.
• Generic; can run on any operating system.
• A multi-threaded application cannot take advantage of multiprocessing.

Kernel Level Threads:
• Slower to create and manage.
• Creation is supported directly by the operating system.
• Specific to the operating system.
• Kernel routines themselves can be multithreaded.

Many to one Model


The many-to-one model maps many user-level threads to one kernel thread. Thread management is
done by the thread library in user space, so it is efficient; but the entire process will block if a
thread makes a blocking system call. Also, because only one thread can access the kernel at a time,
multiple threads are unable to run in parallel on multiprocessors.
One to one Model.
The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a thread makes a
blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only
drawback to this model is that creating a user thread requires creating the corresponding kernel
thread.
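
On systems that use the one-to-one model (Linux and Windows, for example), each pthread_create() call produces a separately schedulable kernel thread. A minimal POSIX threads example (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    printf("thread %d running\n", *(int *) arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int id[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &id[i]);  /* one kernel thread each */
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                     /* wait for both to finish */
    return 0;
}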

Many to Many Model:

The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads. Developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking
system call, the kernel can schedule another thread for execution.
Process Scheduling Criteria:

Difference between Pre-emptive and Non Pre-emptive Scheduling


Introduction to Process Scheduling Criteria
Types of Process Scheduling

Difference between Pre-emptive and Non Pre-emptive Scheduling:


Non-Pre-emptive Scheduling:
Non-pre-emptive algorithms are designed in such a way that once a process enters the running
state, it is not removed from the processor until it has completed its service time.
Example: FCFS, SJF, Priority

Pre-emptive Scheduling
Pre-emptive algorithms are driven by the notion of prioritized computation. The process with the
highest priority should always be the one currently using the processor. If a process is currently
using the processor and a new process with a higher priority enters the ready list, the process on
the processor is preempted and moved back to the ready queue, and the new higher-priority
process is executed.
Examples: SJF, Priority, RR

Introduction to Process Scheduling Criteria:

Many criteria have been suggested for comparing CPU scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in which algorithm is
judged to be best. The criteria include the following:

• CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily used system).

• Throughput: If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be 10 processes per
second.

• Turnaround time: From the point of view of a particular process, the important criterion is how
long it takes to execute that process. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

•Waiting time: The CPU scheduling algorithm does not affect the amount of time during which a
process executes or does I/O; it affects only the amount of time that a process spends waiting in the
ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

•Response time: Response time is the time it takes to start responding, not the time it takes to
output the response. The turnaround time is generally limited by the speed of the output device.

Types of Process Scheduling:

FCFS (First Come First Serve Scheduling)


SJF(Shortest Job First Scheduling)
Priority Scheduling
RR (Round Robin Scheduling)
MQS (Multilevel Queue Scheduling)
MFQS (Multilevel Feedback-Queue Scheduling)

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
First Come First Serve (FCFS): The simplest scheduling algorithm; it schedules processes according to
their arrival times. The process that requests the CPU first is allocated the CPU first. FCFS can produce
long average waiting times when a process with a large burst time arrives first; this is known as the
convoy effect. The FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
Example: Consider all processes arrived at Time 0.
Process Burst Time
P1 24
P2 3
P3 3
Gantt Chart:
|   P1   | P2 | P3 |
0        24   27   30

Turn Around Time = Completion Time - Arrival Time
P1 = 24
P2 = 27
P3 = 30
Average TAT = (24 + 27 + 30)/3 = 27 ms

Waiting Time = Turn Around Time - Burst Time
P1 = 24 - 24 = 0
P2 = 27 - 3 = 24
P3 = 30 - 3 = 27
Average WT = (0 + 24 + 27)/3 = 17 ms
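
The same arithmetic can be checked with a few lines of C, a sketch specific to this example (all arrival times are 0):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};      /* P1, P2, P3; all arrive at time 0 */
    int n = 3, clock = 0, tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {
        clock += burst[i];         /* FCFS: each process runs to completion */
        int tat = clock;           /* completion time - arrival time (0) */
        int wt  = tat - burst[i];  /* turnaround time - burst time */
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
        tat_sum += tat;
        wt_sum  += wt;
    }
    printf("Average TAT = %d ms, Average WT = %d ms\n", tat_sum / n, wt_sum / n);
    return 0;
}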

Non Pre-emptive Shortest Job First


Consider the processes below, available in the ready queue for execution, with arrival time 0 for all
and the given burst times. [Table and Gantt chart omitted in the source.]
In that example, process P4 is picked up first as it has the shortest burst time, then P2, followed by
P3, and at last P1.

Problem with Non Pre-emptive SJF


If the arrival times of processes differ (that is, all processes are not available in the ready queue at
time 0 and some jobs arrive later), a process with a short burst time may have to wait for the current
process's execution to finish, because in non-pre-emptive SJF the arrival of a process with a short
duration does not halt the job already executing.
This leads to the problem of Starvation, where a shorter process has to wait for a long time until
the current longer process gets executed. This happens if shorter jobs keep coming, but this can be
solved using the concept of aging.

Pre-emptive Shortest Job First(Shortest Remaining Time First)

In Preemptive Shortest Job First Scheduling, jobs are put into ready queue as they arrive, but
as a process with short burst time arrives, the existing process is preempted or removed
from execution, and the shorter job is executed first.
In this example (the Gantt chart is omitted in the source), P1 arrives first, so its execution starts
immediately; but just after 1 ms, process P2 arrives with a burst time of 3 ms, which is less than the
remaining time of P1, so the process P1 (1 ms done, 20 ms left) is preempted and process P2 is
executed.
As P2 is executing, after 1 ms, P3 arrives, but it has a burst time greater than that of P2, so execution
of P2 continues. After another millisecond, P4 arrives with a burst time of 2 ms; as a result, P2 (2 ms
done, 1 ms left) is preempted and P4 is executed.
After the completion of P4, process P2 is picked up and finishes, then P3 gets executed, and at last P1.
The Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given
point of time, the job with the shortest remaining time is executed first.

Priority Scheduling
 Priority is assigned for each process.

 Process with highest priority is executed first and so on.

 Processes with same priority are executed in FCFS manner.

 Priority can be decided based on memory requirements, time requirements or any other resource requirement.
Round Robin Scheduling

 A fixed time, called a time quantum, is allotted to each process for execution.


 Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
 Context switching is used to save the states of preempted processes (a small simulation
sketch follows).
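
The sketch below simulates round robin over remaining burst times; the bursts and the 2 ms quantum are arbitrary values chosen for illustration:

#include <stdio.h>

int main(void)
{
    int remaining[] = {5, 3, 8};   /* arbitrary burst times for P1..P3 */
    int n = 3, quantum = 2, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;          /* this process has already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            clock += run;
            remaining[i] -= run;   /* preempt when the quantum expires */
            if (remaining[i] == 0) {
                done++;
                printf("P%d completes at time %d\n", i + 1, clock);
            }
        }
    }
    return 0;
}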
Multilevel Queue Scheduling:

--A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
--The processes are permanently assigned to one queue, generally based on some property of the
process, such as memory size, process priority, or process type. Each queue has its own scheduling
algorithm.
--For example, separate queues might be used for foreground and background processes.
--The foreground queue might be scheduled by an RR algorithm, while the background queue is
scheduled by an FCFS algorithm.
--Each queue has absolute priority over lower-priority queues.
--No process in the batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty.
--If an interactive editing process entered the ready queue while a batch process was running, the
batch process would be preempted.

Multilevel Feedback-Queue Scheduling

--The multilevel feedback-queue scheduling algorithm allows a process to move between queues.
--The idea is to separate processes according to the characteristics of their CPU bursts.
--If a process uses too much CPU time, it will be moved to a lower-priority queue.
--This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
--A process entering the ready queue is put in queue 0. A process in queue 0 is given a time
quantum of 8 milliseconds.
--If it does not finish within this time, it is moved to the tail of queue 1.
--If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds.
-- If it does not complete, it is preempted and is put into queue 2. Processes in queue 2 are run on
an FCFS basis but are run only when queues 0 and 1 are empty.
