4-Process and IPC

PROCESS

 A process can be thought of as a program in execution.
 A process is the unit of work in a modern time-sharing system.
 A process generally includes the process stack, which contains temporary
data (such as method parameters, return addresses, and local variables), and a
data section, which contains global variables.
Difference between program and process
 A program is a passive entity, such as the contents of a file stored on
disk, whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources.

Process States:
 As a process executes, it changes state.
 The state of a process is defined in part by the current activity of that
process.
 Each process may be in one of the following states:
 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.
Process Control Block
 Each process is represented in the operating system by a process control
block (PCB)-also called a task control block.
 A PCB defines a process to the operating system.
 It contains all the information about a process.
 Some of the information a PCB contains:
 Process state: The state may be new, ready, running, waiting, halted,
and so on.
 Program counter: The counter indicates the address of the next
instruction to be executed for this process.
 CPU registers: The registers vary in number and type, depending on
the computer architecture.
 CPU-scheduling information: This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
 Memory-management information: This information may include
such information as the value of the base and limit registers, the page
tables, or the segment tables, depending on the memory system used
by the operating system.
 Accounting information: This information includes the amount of
CPU and real time used, time limits, account numbers, job or process
numbers, and so on.
 I/O status information: This information includes the list of I/O devices
allocated to this process, a list of open files, and so on.
Process Scheduling
 The objective of multiprogramming is to have some process running at all
times, so as to maximize CPU utilization.

Scheduling Queues
There are three types of scheduling queues. They are:
1. Job Queue
2. Ready Queue
3. Device Queue
 As processes enter the system, they are put into a job queue.
 The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue.
 The list of processes waiting for a particular I/O device is kept in a device
queue for that device.

 A new process is initially put in the ready queue. It waits in the ready queue until
it is selected for execution (or dispatched).
 Once the process is assigned to the CPU and is executing, one of several events
could occur:
 The process could issue an I/O request, and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
 A common representation of process scheduling is a queueing diagram

Schedulers
 A process migrates between the various scheduling queues throughout its
lifetime.
 The operating system must select, for scheduling purposes, processes from
these queues in some fashion.
 The selection process is carried out by the appropriate scheduler. There are
three different types of schedulers. They are:

1. Long-term Scheduler or Job Scheduler
2. Short-term Scheduler or CPU Scheduler
3. Medium-term Scheduler

 The long-term scheduler, or job scheduler, selects processes from the pool
of processes on disk and loads them into memory for execution. It is invoked
very infrequently. It controls the degree of multiprogramming.
 The short-term scheduler, or CPU scheduler, selects from among the
processes that are ready to execute, and allocates the CPU to one of
them. It is invoked very frequently.
 Processes can be described as either I/O bound or CPU bound.
An I/O-bound process spends more of its time doing I/O than it spends
doing computations.
 A CPU-bound process, on the other hand, generates I/O requests
infrequently, using more of its time doing computation than an I/O-bound
process uses.
 The system with the best performance will have a combination of CPU-
bound and I/O-bound processes.
Medium term Scheduler
 Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling.

 The key idea is that the medium-term scheduler removes processes from
memory and thus reduces the degree of multiprogramming.
 At some later time, the process can be reintroduced into memory and its
execution can be continued where it left off. This scheme is called
swapping.

Context Switch
 Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process.
 This task is known as a context switch.
 Context-switch time is pure overhead, because the system does no useful work
while switching.
 Its speed varies from machine to machine, depending on the memory speed,
the number of registers that must be copied, and the existence of special
instructions.
Operations on Processes
1. Process Creation
 A process may create several new processes, during the course of execution.
 The creating process is called a parent process, whereas the new processes
are called the children of that process.
 When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have
terminated.
 There are also two possibilities in terms of the address space of the new
process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
 In UNIX, each process is identified by its process identifier, which is a
unique integer. A new process is created by the fork system call.

2. Process Termination

 A process terminates when it finishes executing its final statement and
asks the operating system to delete it by using the exit system call.
 At that point, the process may return data (output) to its parent process (via
the wait system call).
 A process can cause the termination of another process via an appropriate
system call.
 A parent may terminate the execution of one of its children for a variety
of reasons, such as these:
1. The child has exceeded its usage of some of the resources that it
has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a
child to continue if its parent terminates. On such systems, if a
process terminates (either normally or abnormally), then all its
children must also be terminated. This phenomenon, referred to
as cascading termination, is normally initiated by the operating
system.
Cooperating Processes
 The concurrent processes executing in the operating system may be either
independent processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other
processes executing in the system.
 A process is cooperating if it can affect or be affected by the other
processes executing in the system.
 Benefits of Cooperating Processes
1. Information sharing
2. Computation speedup
3. Modularity
4. Convenience
Example:
Producer – Consumer Problem

 A producer process produces information that is consumed by a consumer
process.
 For example, a print program produces characters that are consumed by
the printer driver. A compiler may produce assembly code, which is
consumed by an assembler.
 To allow producer and consumer processes to run concurrently, we must
have available a buffer of items that can be filled by the producer and
emptied by the consumer.
• unbounded-buffer: places no practical limit on the size of the
buffer.
• bounded-buffer : assumes that there is a fixed buffer size.
Shared Data

#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The shared buffer is implemented as a circular array with two logical
pointers: in and out. The variable in points to the next free position in the
buffer; out points to the first full position in the buffer. The buffer is empty
when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.

Producer Process

item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: the buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer Process

item nextConsumed;
while (1) {
    while (in == out)
        ; /* do nothing: the buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Interprocess Communication

 Operating systems provide the means for cooperating processes to
communicate with each other via an interprocess communication (IPC)
facility.
 IPC provides a mechanism to allow processes to communicate and to
synchronize their actions. IPC is best provided by a message-passing
system.
Basic Structure:
 If processes P and Q want to communicate, they must send messages
to and receive messages from each other; a communication link must
exist between them.
 Physical implementation of the link can be a hardware bus, a
network, etc.
 There are several methods for logically implementing a link and the
operations:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
Naming
 Processes that want to communicate must have a way to refer to each
other.
 They can use either direct or indirect communication.
1. Direct Communication
 Each process that wants to communicate must explicitly name the
recipient or sender of the communication.
 A communication link in this scheme has the following properties:
i. A link is established automatically between every pair of processes
that want to communicate. The processes need to know only each
other's identity to communicate.
ii. A link is associated with exactly two processes.
iii. Exactly one link exists between each pair of processes.
 There are two ways of addressing, namely:
 Symmetry in addressing
 Asymmetry in addressing
 In symmetry in addressing, the send and receive primitives are defined
as:
send(P, message): send a message to process P
receive(Q, message): receive a message from process Q
 In asymmetry in addressing, the send and receive primitives are defined
as:
send(P, message): send a message to process P
receive(id, message): receive a message from any process; id is set
to the name of the process with which communication has taken
place

2. Indirect Communication

 With indirect communication, the messages are sent to and received
from mailboxes, or ports.
 The send and receive primitives are defined as follows:
send(A, message): send a message to mailbox A
receive(A, message): receive a message from mailbox A

 A communication link has the following properties:
i. A link is established between a pair of processes only if both
members of the pair have a shared mailbox.
ii. A link may be associated with more than two processes.
iii. A number of different links may exist between each pair of
communicating processes, with each link corresponding to one
mailbox.
3. Buffering

 A link has some capacity that determines the number of messages that
can reside in it temporarily. This property can be viewed as a queue of
messages attached to the link.
 There are three ways that such a queue can be implemented.
 Zero capacity: The queue has maximum length 0, so no message can wait
in it. The sender must block until the recipient receives the
message. (This is a message system with no buffering.)
 Bounded capacity: The queue has finite length n; thus at most n
messages can reside in it.
 Unbounded capacity: The queue has potentially infinite length; thus
any number of messages can wait in it. The sender is never delayed.

4. Synchronization
 Message passing may be either blocking or non-blocking.
1. Blocking Send - The sender blocks itself till the message sent
by it is received by the receiver.
2. Non-blocking Send - The sender does not block itself after
sending the message but continues with its normal operation.
3. Blocking Receive - The receiver blocks itself until it receives
the message.
4. Non-blocking Receive – The receiver does not block itself.

Communication in client-server systems:

There are two levels of communication:
 Low-level form of communication, e.g., sockets
 High-level form of communication, e.g., RPC, RMI
