
UNIT II Process Management-I Ch1: Processes

Process concepts

Process: A process is a program in execution.


A process is not the same as its program code; it is much more than that. A process is an 'active' entity, as opposed to a program, which is a 'passive' entity.

A single program can give rise to many processes when it is run multiple times; for example, opening a .exe or binary file several times starts several instances, i.e. several processes are created.
Structure of a process
Process memory is divided into four sections for efficient working:
 The Text section is made up of the compiled program code, read in from non-volatile
storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and initialized
prior to executing main.
 The Heap is used for the dynamic memory allocation, and is managed via calls to new,
delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local variables
when they are declared.
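
As a concrete illustration, here is a minimal C sketch (the variable names and values are arbitrary) showing which section each kind of variable typically occupies:

#include <stdio.h>
#include <stdlib.h>

int initialized = 42;      /* Data section: initialized global variable */
static int counter;        /* Data (BSS) section: zero-initialized static */

int main(void)             /* the compiled code itself lives in the Text section */
{
    int local = 7;                           /* Stack: local variable */
    int *dynamic = malloc(sizeof *dynamic);  /* Heap: dynamic allocation */

    if (dynamic == NULL)
        return 1;
    *dynamic = 99;

    printf("data:  %p\n", (void *)&initialized);
    printf("bss:   %p\n", (void *)&counter);
    printf("stack: %p\n", (void *)&local);
    printf("heap:  %p\n", (void *)dynamic);

    free(dynamic);                           /* return heap memory */
    return 0;
}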

Process State

As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Each process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.


Process Control Block


Each process is represented in the operating system by a process control block (PCB)—also
called a task control block. It contains many pieces of information associated with a specific
process, including these:

 Process state. The state may be new, ready, running, waiting, halted, and so on.
 Program counter. The counter indicates the address of the next instruction to be
executed for this process.
 CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program
counter, this state information must be saved when an interrupt occurs, to allow the
process to be continued correctly afterwards.
 CPU-scheduling information. This information includes a process priority, pointers
to scheduling queues, and any other scheduling parameters.
 Memory-management information. This information may include such information
as the value of the base and limit registers, the page tables, or the segment tables,
depending on the memory system used by the operating system.
 Accounting information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
 I/O status information. This information includes the list of I/O devices allocated to
the process, a list of open files, and so on.
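
To make this concrete, the following C struct is a simplified, hypothetical sketch of what a PCB might hold; real kernels (for example, Linux's task_struct) contain many more fields:

/* A simplified, hypothetical PCB layout for illustration only */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier */
    proc_state_t   state;            /* current process state */
    unsigned long  program_counter;  /* address of the next instruction */
    unsigned long  registers[16];    /* saved CPU registers */
    int            priority;         /* CPU-scheduling information */
    unsigned long  base, limit;      /* memory-management registers */
    unsigned long  cpu_time_used;    /* accounting information */
    int            open_files[16];   /* I/O status: open file descriptors */
    struct pcb    *next;             /* link for a scheduling queue */
} pcb_t;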


Threads

A thread is a single sequential stream of execution within a process. Because threads share many
of the properties of a process, they are called lightweight processes. On a single processor,
threads execute one after another but give the illusion of executing in parallel.

The process model discussed so far has implied that a process is a program that performs a
single thread of execution. For example, when a process is running a word-processor program,
a single thread of instructions is being executed. This single thread of control allows the process
to perform only one task at a time.

A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving
operating-system performance by reducing switching overhead; in most other respects, a thread
behaves like a classical process.

Each thread belongs to exactly one process, and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been used successfully in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared-memory multiprocessors.
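
As an illustration, here is a minimal POSIX threads (pthreads) sketch in C: two threads run as separate flows of control inside one process while sharing its address space (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* each thread runs this function as its own flow of control */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running inside the same process\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tids[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&tids[i], NULL, worker, &ids[i]);

    for (int i = 0; i < 2; i++)
        pthread_join(tids[i], NULL);   /* wait for both threads to finish */

    return 0;
}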

Difference between Process and Thread

1. Process: A process is heavyweight and resource-intensive.
   Thread: A thread is lightweight, taking fewer resources than a process.

2. Process: Process switching needs interaction with the operating system.
   Thread: Thread switching does not need to interact with the operating system.

3. Process: In multiple-processing environments, each process executes the same code but
   has its own memory and file resources.
   Thread: All threads of a process can share the same set of open files and child processes.

4. Process: If one process is blocked, then no other process can execute until the first
   process is unblocked.
   Thread: While one thread is blocked and waiting, a second thread in the same task can run.

5. Process: Multiple processes without using threads use more resources.
   Thread: Multithreaded processes use fewer resources.

6. Process: Each process operates independently of the others.
   Thread: One thread can read, write, or change another thread's data.

Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
The act of determining which process in the ready state should be moved to
the running state is known as process scheduling.
The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs. To achieve this, the scheduler must apply
appropriate rules for swapping processes in and out of the CPU.
Scheduling falls into one of two general categories:

 Non-Pre-emptive Scheduling: When the currently executing process gives up the CPU
voluntarily.
 Pre-emptive Scheduling: When the operating system decides to favour another process,
pre-empting the currently executing process.

What are Scheduling Queues?

 All processes, upon entering into the system, are stored in the Job Queue.
 Processes in the Ready state are placed in the Ready Queue.
 Processes waiting for a device to become available are placed in Device Queues. There
are unique device queues available for each I/O device.

A new process is initially put in the ready queue. It waits there until it is selected
for execution (or dispatched). Once the process is assigned to the CPU and is executing, one
of several events can occur:

 The process could issue an I/O request, and then be placed in the I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.


In the first two cases, the process eventually switches from the waiting state to the ready
state and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.

Schedulers

A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.

A scheduler is a type of system software that handles process scheduling.

Schedulers are of three types:

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

 Long Term Scheduler: The long-term scheduler is also known as the job scheduler. It
selects processes from the job queue and loads them into memory for execution, and in
doing so it regulates the degree of multiprogramming. Its main goal is to offer a balanced
mix of jobs, such as CPU-bound and I/O-bound jobs, which keeps multiprogramming
manageable.
 Short Term Scheduler: The short-term scheduler is also known as the CPU scheduler.
Its main goal is to boost system performance according to a chosen set of criteria. It
selects one process from the group of processes that are ready to execute and allocates
the CPU to it. The dispatcher then gives control of the CPU to the process selected by
the short-term scheduler.
 Medium Term Scheduler: Medium-term scheduling is an important part of swapping.
It handles the swapped-out processes. A running process can become suspended if it
makes an I/O request; a suspended process cannot make any progress towards completion.
To remove such a process from memory and make space for other processes, the
suspended process is moved to secondary storage.

The key idea behind a medium-term scheduler is that sometimes it can be advantageous
to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of
multiprogramming. Later, the process can be reintroduced into memory, and its execution can
be continued where it left off. This scheme is called swapping. The process is swapped out,
and is later swapped in, by the medium-term scheduler. Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up.

Context Switch

When an interrupt occurs, the system needs to save the current context of the process running
on the CPU so that it can restore that context when its processing is done, essentially
suspending the process and then resuming it.

The context is represented in the PCB of the process; it includes the value of the CPU registers,
the process state and memory-management information.

Generically, we perform a state save of the current state of the CPU, be it in kernel or user
mode, and then a state restore to resume operations.

Switching the CPU to another process requires performing a state save of the current process
and a state restore of a different process. This task is known as a context switch.

Context switching can happen due to the following reasons:

 When a process of higher priority enters the ready state. In this case, the execution
of the running process should be stopped and the higher-priority process should be
given the CPU for execution.


 When an interrupt occurs, the process in the running state should be stopped
and the CPU should handle the interrupt before doing anything else.
 When a transition between user mode and kernel mode is required, a context
switch may have to be performed.

Context-switch time is pure overhead, because the system does no useful work while switching.
Its speed varies from machine to machine, depending on the memory speed, the number of
registers that must be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). Typical speeds are a few milliseconds.

Operations on Processes

Process Creation

A process may create several new processes, via a create-process system call, during the course
of execution.

The creating process is called a parent process, and the new processes are called the children
of that process.

Each of these new processes may in turn create other processes, forming a tree of processes.

When a process creates a new process, two possibilities exist for execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two possibilities for the address space of the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as
the parent).
2. The child process has a new program loaded into it.
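
The classic UNIX illustration of these possibilities uses fork(), exec(), and wait(). The sketch below (in C, on POSIX systems) creates a child that starts as a duplicate of the parent, loads a new program into the child, and has the parent wait for its termination:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();            /* create-process call: child duplicates the parent */

    if (pid < 0) {                 /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {         /* child: load a new program into it */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");          /* reached only if exec fails */
        exit(1);
    } else {                       /* parent: wait until the child has terminated */
        wait(NULL);
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}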

Process Termination

A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit() system call.

At that point, the process may return a status value (typically an integer) to its parent process
(via the wait() system call).

All the resources of the process—including physical and virtual memory, open files, and I/O
buffers—are deallocated by the operating system.

Termination can occur in other circumstances as well.

A process can cause the termination of another process via an appropriate system call. Usually,
such a system call can be invoked only by the parent of the process that is to be terminated.
Otherwise, users could arbitrarily kill each other’s jobs. Thus, when one process creates a new
process, the identity of the newly created process is passed to the parent.


A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
• The child has exceeded its usage of some of the resources that it has been allocated. (To
determine whether this has occurred, the parent must have a mechanism to inspect the state of
its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.

Interprocess Communication

Interprocess communication (IPC) refers to the mechanisms by which an operating system
allows processes to communicate with each other. This involves synchronizing their actions
and managing shared data.

Communication can be of two types:

 Between related processes originating from a single process, such as parent and child
processes.
 Between unrelated processes, i.e. two or more entirely separate processes.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to
such information.
• Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others.
• Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
• Convenience. Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.
There are two types of processes:

 Independent processes, which do not share data with other processes.
 Cooperating processes, which share data with other processes.

Cooperating processes require an interprocess communication (IPC) mechanism.

There are two fundamental models of interprocess communication:

(1) Shared Memory: In the shared-memory model, a region of memory that is shared by
cooperating processes is established. Processes can then exchange information by
reading and writing data to the shared region.

(2) Message Passing: In the message passing model, communication takes place by means
of messages exchanged between the cooperating processes.


Shared-Memory Systems: Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory.

Typically, a shared-memory region resides in the address space of the process creating the shared-
memory segment. Other processes that wish to communicate using this shared-memory segment must
attach it to their address space. The operating system tries to prevent one process from accessing another
process's memory.

Shared memory requires that two or more processes agree to remove this restriction. They can then
exchange information by reading and writing data in the shared areas. The form of the data and the
location are determined by these processes and are not under the operating system's control. The
processes are also responsible for ensuring that they are not writing to the same location simultaneously.
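
As a concrete sketch, the following C program uses the POSIX shared-memory API (shm_open and mmap); the segment name /demo_shm is an arbitrary example. A second process mapping the same name would see the data written here:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";    /* hypothetical segment name */
    const size_t size = 4096;

    /* create (or open) the shared-memory object and set its size */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, size);

    /* attach the segment to this process's address space */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* any process that maps "/demo_shm" sees this data */
    strcpy(ptr, "hello from the producer");

    munmap(ptr, size);
    close(fd);
    /* shm_unlink(name) removes the object once all processes are done */
    return 0;
}
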
To illustrate the concept of cooperating processes, let’s consider the producer–consumer
problem, which is a common paradigm for cooperating processes. A producer process
produces information that is consumed by a consumer process.

For example, a Web server produces (that is, provides) HTML files and images, which are
consumed (that is, read) by the client Web browser requesting the resource.

One solution to the producer–consumer problem uses shared memory. To allow producer and
consumer processes to run concurrently, we must have available a buffer of items that can be
filled by the producer and emptied by the consumer. This buffer will reside in a region of
memory that is shared by the producer and consumer processes. A producer can produce one
item while the consumer is consuming another item. The producer and consumer must be
synchronized, so that the consumer does not try to consume an item that has not yet been
produced.

Two types of buffers can be used.


- The unbounded buffer places no practical limit on the size of the buffer. The consumer
may have to wait for new items, but the producer can always produce new items.

- The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if
the buffer is empty, and the producer must wait if the buffer is full.
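
A minimal C sketch of the classic shared-memory bounded-buffer scheme: the buffer is circular, with in pointing to the next free slot and out to the next full slot. Because in == out is reserved to mean "empty", this version can hold at most BUFFER_SIZE - 1 items at once; the busy-waiting loops stand in for the synchronization mechanisms covered later.

#define BUFFER_SIZE 10

typedef struct {
    int value;                      /* the data being produced and consumed */
} item;

/* circular buffer residing in memory shared by both processes */
item buffer[BUFFER_SIZE];
int in = 0;                         /* index of the next free slot */
int out = 0;                        /* index of the next full slot */

/* producer: waits while the buffer is full, then deposits an item */
void produce(item next_produced)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                           /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* consumer: waits while the buffer is empty, then removes an item */
item consume(void)
{
    while (in == out)
        ;                           /* do nothing: buffer is empty */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}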


Message-Passing Systems: Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space. It is
particularly useful in a distributed environment, where the communicating processes may
reside on different computers connected by a network.

For example, a chat program used on the World Wide Web could be designed so that chat
participants communicate with one another by exchanging messages.

A message-passing facility provides at least two operations: send(message) and
receive(message).

Messages can be of fixed or variable size. Fixed-size messages are straightforward for the
OS designer to implement but complicate the programmer's task; variable-size messages are
easier for the programmer but complicate the OS designer's task.

If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them.

There are several ways of logically implementing a link and the send()/receive() operations:

• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering

NAMING: Processes that want to communicate must have a way to refer to each other. They
can use either direct or indirect communication.

Direct communication links are implemented when a process uses the specific identifier of
another process for communication. However, it can be hard to identify the other party ahead
of time; for example, a print server cannot know in advance which processes will send to it.

Under direct communication, each process that wants to communicate must explicitly name
the recipient or sender of the communication. In this scheme, the send() and receive()
primitives are defined as:

• send(P, message)—Send a message to process P.


• receive(Q, message)—Receive a message from process Q.

A communication link in this scheme has the following properties:

• A link is established automatically between every pair of processes that want to communicate.
The processes need to know only each other’s identity to communicate.
• A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.

With indirect communication, the messages are sent to and received from mailboxes, or
ports. Indirect communication is done via a shared mailbox (port), which consists of a
queue of messages. The sender places messages in the mailbox and the receiver picks them up.

Two processes can communicate only if the processes have a shared mailbox, however. The
send() and receive() primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.


• receive(A, message)—Receive a message from mailbox A.

In this scheme, a communication link has the following properties:

• A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, there may be a number of different links,
with each link corresponding to one mailbox.
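
As a concrete sketch of the mailbox idea, the following C program uses POSIX message queues (mq_open, mq_send, mq_receive); the queue name /mailbox_A is an arbitrary example that plays the role of mailbox A (link with -lrt on some systems):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* the named queue acts as the shared mailbox A */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, message): deposit a message in the mailbox */
    const char *msg = "ping";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(A, message): any process with the mailbox open can pick it up */
    char buf[64];
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/mailbox_A");       /* remove the mailbox when finished */
    return 0;
}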

Synchronization: Communication between processes takes place through calls to send() and
receive() primitives. There are different design options for implementing each primitive.

Message passing may be either blocking or nonblocking— also known as synchronous and
asynchronous.

• Blocking send. The sending process is blocked until the message is received by the receiving
process or by the mailbox.
• Nonblocking send. The sending process sends the message and resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.
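
A small C sketch of the nonblocking-receive case, reusing the hypothetical /mailbox_A queue from the earlier example: opening the queue with O_NONBLOCK makes mq_receive return immediately with errno set to EAGAIN (the "null" case) when no message is available:

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    /* open the hypothetical mailbox from the previous sketch, nonblocking */
    mqd_t mq = mq_open("/mailbox_A", O_RDONLY | O_NONBLOCK);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    char buf[64];                  /* must be at least the queue's mq_msgsize */
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("got a valid message: %s\n", buf);   /* the 'valid message' case */
    else if (errno == EAGAIN)
        printf("no message available\n");           /* the 'null' case */

    mq_close(mq);
    return 0;
}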

Buffering: Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such queues can be
implemented in three ways:

• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If
the queue is not full when a new message is sent, the message is placed in the queue (either the
message is copied or a pointer to the message is kept), and the sender can continue execution
without waiting. The link’s capacity is finite, however. If the link is full, the sender must block
until space is available in the queue.
• Unbounded capacity. The queue’s length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering; the
other cases are referred to as systems with automatic buffering.

