
Chapter 3

Process Management
Introduction:
 A running instance of a program is called a process. A process is the smallest unit
of work that is scheduled by the operating system.
 A process needs resources, such as CPU time, memory, files and I/O devices, to
accomplish its task. These resources are allocated either when the process is
created or while it is executing.
 The process concept helps to explain, understand and organize execution of
programs in an operating system.
 A user uses processes to achieve execution of programs in a sequential or concurrent
manner as desired.
 An OS uses processes to organize execution of programs. Use of the process
concept enables an OS to execute both sequential and concurrent programs equally
easily.
 Process management is the fundamental task of any modern operating system. The
OS must allocate resources to processes, enable processes to share and exchange
information, protect the resources of each process from other processes and enable
synchronization among processes.
 To meet these requirements, the OS must maintain a data structure for each process,
which describes the state and resource ownership of that process, and which enables
the OS to exert control over each process.
PROCESS:
 A process is defined as, "an entity which represents the
basic unit of work to be implemented in the system".
 A process is defined as, "a program under execution,
which competes for the CPU time and other resources."
 A process is a program in execution. A process is also
called a job, a task or a unit of work.
 A process is an instance of an executing program,
including the current values of the program counter,
registers and variables.
 Logically each process has its separate virtual CPU. In
actual, the real CPU switches from one process to another.
 A process is an activity and it has a program, input, output
and a state.
Each process has the following sections:
1. A Text section that contains the program code.
2. A Data section that contains global and static variables.
3. The heap, used for dynamic memory allocation, managed
via calls to new, delete, malloc, free, etc.
4. The stack, used for local variables. The process stack
contains temporary data (such as subroutine parameters,
return addresses, and temporary variables). Space on the
stack is reserved for local variables when they are declared
(at function entrance or elsewhere, depending on the
language), and the space is freed when the variables go
out of scope.
5. A program counter that contains the address of the next
instruction to execute, along with the contents of the
processor's registers.
Process model:
Process states:
New (Create) – The process is about to be created but has not yet been created; it is
the program present in secondary memory that will be picked up by the OS to
create the process.

Ready – New -> Ready to run. After creation, the process enters the ready state,
i.e. the process is loaded into main memory. The process here is ready to run and
is waiting to get CPU time for its execution. Processes that are ready for
execution by the CPU are maintained in a queue of ready processes.

Run – The process is chosen for execution and the instructions within the
process are executed by one of the available CPU cores.

Blocked or wait – Whenever the process requests I/O, needs input from the user,
or needs access to a critical region (whose lock is already acquired), it enters the
blocked or wait state. The process continues to wait in main memory and does
not require the CPU. Once the I/O operation is completed, the process goes back
to the ready state.

Terminated or completed – The process finishes execution and its PCB is deleted.


Process Control Block:
1. Process Number: Each process is identified by its process number, called
the Process Identification Number (PID). Every process has a unique
process id through which it is identified; the id is assigned by the OS.
No two processes can ever have the same process id, because the
process id is always unique.

2. Priority: Each process is assigned a certain level of priority that
corresponds to the relative importance of the work it performs. Process
priority is the preference of one process over another for execution.
Priority may be given by the user/system manager, or it may be assigned
internally by the OS. This field stores the priority of a particular process.

3. Process State: This information is about the current state of the process.
The state may be new, ready, running, waiting, halted, and so on.
4. Program Counter: The counter indicates the address of the next
instruction to be executed for this process.
5.CPU Registers: The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly
afterward.

6. CPU Scheduling Information: This information includes the
process priority, pointers to scheduling queues, and any other
scheduling parameters.

7. Memory Management Information: This may include the
values of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the
operating system.
8. Accounting Information: This information
includes the amount of CPU and real time used,
time limits, account numbers, job or process
numbers, and so on.
9. I/O Status Information: This information
includes the list of I/O devices allocated to the
process, a list of open files, and so on.
10. File Management: It includes information
about all open files, access rights, etc.
11. Pointer: The pointer points to another process
control block and is used for maintaining the
scheduling list.
Operations on Process:
1. Creation
• Once the process is created, it enters the ready queue (in main memory) and is
ready for execution.
2. Scheduling
• Out of the many processes present in the ready queue, the operating system
chooses one process and starts executing it. Selecting the process which is to be
executed next is known as scheduling.

3. Execution
• Once the process is scheduled for execution, the processor starts executing it. The
process may move to the blocked or wait state during execution; in that case the
processor starts executing other processes.

4. Deletion/killing
• Once the purpose of the process is served, the OS kills the process. The context
of the process (PCB) is deleted and the process is terminated by the
operating system.

5. Suspending and resuming
• A process may also be suspended (swapped out of main memory to secondary
storage) and later resumed (swapped back in and returned to the ready state).

Process scheduling:
There are three main parts of process scheduling:
1.Scheduling Queue.
2.Scheduler.
3.Context switch.
1. Scheduling queue:
a) Ready queue.
b) Device queue.
2. Scheduler:
a) Long term scheduler
b) Short term scheduler
c) Medium term scheduler
3. Context switch:
Inter Process Communication (IPC)
A process can be of two types:
• Independent process.
• Co-operating process.

Processes can communicate with each other in these two ways:
• Shared Memory
• Message passing
1.Shared Memory:
 Communication between processes using shared memory requires
the processes to share some variable, and it completely depends on
how the programmer implements it.
 One way of communication using shared memory can be
imagined like this:
 Suppose process1 and process2 are executing simultaneously and
share some resources, or use some information from each other.
Process1 generates information about certain computations or
resources being used and keeps it as a record in shared memory.
 When process2 needs to use the shared information, it checks the
record stored in shared memory, takes note of the information
generated by process1 and acts accordingly. Processes can use
shared memory to extract information recorded by another process
as well as to deliver specific information to another process.
Ex: Producer-Consumer problem:
 There are two processes: Producer and Consumer. The
Producer produces some item and the Consumer consumes
that item.
 The two processes share a common space or memory
location known as a buffer, where the item produced by the
Producer is stored and from where the Consumer
consumes it when needed.
 There are two versions of this problem: the first is known as
the unbounded buffer problem, in which the Producer can keep
producing items with no limit on the size of the buffer; the
second is known as the bounded buffer problem, in which the
Producer can produce up to a certain number of items and
then starts waiting for the Consumer to consume them.
 We will discuss the bounded buffer problem.
First, the Producer and the Consumer share
some common memory; then the Producer starts
producing items. If the number of produced items
equals the size of the buffer, the Producer waits
for items to be consumed by the Consumer.
 Similarly, the Consumer first checks for the
availability of an item; if no item is
available, the Consumer waits for the Producer to
produce one. If items are available, the Consumer
consumes them.
Shared Data between the two Processes:
#define buff_max 25
#define mod %    // 'mod' expands to the C modulus operator

struct item {
    // different members of the produced
    // or consumed data
    ---------
};

struct item shared_buff[buff_max];   // the shared bounded buffer

int free_index = 0;   // next slot the producer fills
int full_index = 0;   // next slot the consumer empties
Producer Process Code:
item nextProduced;

while (1) {
    // if the buffer is full, keep waiting
    // until the consumer frees a slot.
    while ((free_index + 1) mod buff_max == full_index);

    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) mod buff_max;
}
Consumer Process Code:
item nextConsumed;

while (1) {
    // if the buffer is empty, keep waiting
    // until the producer fills a slot.
    while (free_index == full_index);

    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) mod buff_max;
}
2) Message Passing Method:
• In this method, processes communicate with each
other without using any kind of shared memory. If
two processes p1 and p2 want to communicate
with each other, they proceed as follows:

1. Establish a communication link (if a link already
exists, there is no need to establish it again).
2. Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message,destination) or send(message)
– receive(message, host) or receive(message)
 The message size can be fixed or variable. If it is
of fixed size, it is easy for the OS designer but
complicated for the programmer; if it is of
variable size, it is easy for the programmer but
complicated for the OS designer.
 A standard message can have two parts: header
and body.
The header part is used for storing the message type,
destination id, source id, message length and
control information. The control information
includes things like what to do if the receiver runs out
of buffer space, a sequence number, and a priority.
Generally, messages are sent FIFO style.
 While implementing the link, there are some questions which need to be
kept in mind like :
1. How are links established?
2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating
processes?
4. What is the capacity of a link? Is the size of a message that the link can
accommodate fixed or variable?
5. Is a link unidirectional or bi-directional?
A link has a capacity that determines the number of messages that can
reside in it temporarily. Every link has an associated queue, which can be
of zero capacity, bounded capacity, or unbounded capacity. With zero
capacity, the sender waits until the receiver informs it that the message has
been received. In the non-zero capacity cases, the sender does not know
whether a message has been received after the send operation; for this, the
receiver must acknowledge explicitly. Depending on the situation, a link
can be implemented as either a direct communication link or an indirect
communication link.
Naming:
Direct and indirect
Direct: symmetry and asymmetry
• Direct communication links are implemented
when the processes use a specific process
identifier for the communication, but it is hard to
identify the sender ahead of time.
For example: the print server.

Indirect communication is done via a shared
mailbox (port), which consists of a queue of
messages. The sender keeps messages in the mailbox
and the receiver picks them up.
Thread:
• A thread is a flow of execution through the process
code, with its own program counter that keeps
track of which instruction to execute next, system
registers which hold its current working variables,
and a stack which contains the execution history.
• A thread shares information such as the code
segment, data segment and open files with its peer
threads. When one thread alters a shared memory
item, all other threads see the change.
• A thread is also called a lightweight process.
Threads provide a way to improve application
performance through parallelism.
• Each thread belongs to exactly one process and no
thread can exist outside a process.
• Each thread represents a separate flow of control.
Advantages of Thread:
• Threads minimize the context switching time.
• Use of threads provides concurrency within a
process.
• Efficient communication.
• It is more economical to create and context
switch threads.
• Threads allow utilization of multiprocessor
architectures to a greater scale and efficiency.
Types of Thread:
• Threads are implemented in the following two ways:
1. User Level Threads − User managed threads.
2. Kernel Level Threads − Operating System
managed threads acting on kernel, an
operating system core.
User Level Threads:
• In this case, the thread management kernel is not
aware of the existence of threads.
• The thread library contains code for creating and
destroying threads, for passing message and data
between threads, for scheduling thread execution
and for saving and restoring thread contexts.
• The application starts with a single thread.
Advantages:
• Thread switching does not require Kernel mode
privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user
level thread.
• User level threads are fast to create and manage.
Disadvantages:
• In a typical operating system most system calls are
blocking, so when one user-level thread makes a
blocking call the entire process is blocked.
• A multithreaded application cannot take advantage
of multiprocessing, because the kernel schedules the
process as a single unit.
Kernel Level Threads:
• In this case, thread management is done by the Kernel.
• There is no thread management code in the application area.
• Kernel threads are supported directly by the operating
system. Any application can be programmed to be
multithreaded.
• All of the threads within an application are supported
within a single process.
• The Kernel maintains context information for the process as
a whole and for individual threads within the process.
• Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and
management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.
Advantages:
• The Kernel can simultaneously schedule multiple threads
from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can
schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages:
• Kernel threads are generally slower to create and
manage than the user threads.
• Transfer of control from one thread to another
within the same process requires a mode switch to
the Kernel.
Benefits of Multithreading in Operating System:
1. Responsiveness :
Multithreading in an interactive application may allow a program
to continue running even if a part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user.
2. Resource Sharing:
Processes may share resources only through techniques such as:
 Message Passing
 Shared Memory
 Such techniques must be explicitly arranged by the programmer.
However, threads share the memory and the resources of the
process to which they belong by default.
 The benefit of sharing code and data is that it allows an application
to have several threads of activity within the same address space.
3. Economy:
 Allocating memory and resources for process creation is a costly
job in terms of time and space.
Since threads share the memory of the process to which they belong,
it is more economical to create and context-switch threads. Generally,
much more time is consumed in creating and managing processes
than threads.
4. Utilization of multiprocessor architectures:
 The benefits of multithreading greatly increase in a
multiprocessor architecture, where threads may run in parallel
on multiple processors. With only one thread, it is not
possible to divide a process into smaller tasks that different
processors can perform.
A single-threaded process can run on only one processor, regardless
of how many processors are available.
Multithreading on a multi-CPU machine increases parallelism.
Multithreading Models
• Some operating systems provide a combined user
level thread and Kernel level thread facility.
• Solaris is a good example of this combined approach.
• In a combined system, multiple threads within the
same application can run in parallel on multiple
processors and a blocking system call need not block
the entire process.
• Multithreading models are three types
1. Many to many relationship.
2. Many to one relationship.
3. One to one relationship.
One to One Model:
• There is one-to-one relationship of user-level
thread to the kernel-level thread.
• This model provides more concurrency than the
many-to-one model.
• It also allows another thread to run when a thread
makes a blocking system call. It allows multiple
threads to execute in parallel on multiprocessors.
• A disadvantage of this model is that creating a user
thread requires creating the corresponding kernel thread.
Example: OS/2, Windows NT and Windows 2000
use the one-to-one relationship model.
Many to One Model:
• Many-to-one model maps many user level threads to
one Kernel-level thread.
• Thread management is done in user space by the
thread library.
• When a thread makes a blocking system call, the entire
process will be blocked. Only one thread can access
the Kernel at a time, so multiple threads are unable to
run in parallel on multiprocessors.
• If user-level thread libraries are implemented on an
operating system whose kernel does not support
them, the kernel threads use the many-to-one
relationship mode.
Many to Many Model:
• The many-to-many model multiplexes any number of
user threads onto an equal or smaller number of kernel
threads.
• The following diagram shows the many-to-many
threading model, where 6 user-level threads are
multiplexed onto 6 kernel-level threads.
• In this model, developers can create as many user
threads as necessary and the corresponding Kernel
threads can run in parallel on a multiprocessor machine.
• This model provides the best concurrency; when a
thread performs a blocking system call, the
kernel can schedule another thread for execution.
EXECUTE PROCESS COMMANDS
1. ps Command:
 When a program runs on the system, it's referred to as a process. Linux
is a multitasking operating system, which means that more than one
process can be active at once.
 To know the status of a process, the ps command is used. The ps command
displays the characteristics of a process.

Syntax: ps [option]

Example: By default, the ps command shows only the processes that
belong to the current user and are running on the current terminal.
$ ps
PID TTY TIME CMD
30 01 0:03 sh
56 01 0:00 ps
2. Wait:
 wait is a built-in shell command which waits for a
given process to complete, and returns its exit status.
 wait waits for the process identified by process ID pid
and reports its termination status.
 If an ID is not given, wait waits for all currently active
child processes, and the return status is zero.
 If the ID is a job specification, wait waits for all
processes in the job's pipeline.
Syntax: wait pid
Example: wait 2112
Wait for process 2112 to terminate, and return its exit
status.
3. sleep Command:
 The sleep command is used to delay for a
specified amount of time.
 The sleep command pauses for an amount of time
defined by NUMBER. SUFFIX may be "s" for
seconds (the default), "m" for minutes, "h" for
hours, or "d" for days.
Syntax: sleep NUMBER[SUFFIX]
Example: sleep 10 delays for 10 seconds.
4. exit Command:
 The exit command terminates a script, just as in a
C program.
 It can also return a value, which is available to the
script's parent process. Issuing the exit command at
the shell prompt will cause the shell to exit.
 Syntax: exit
5. Kill command:
 The kill command will kill a process using the kill
signal and PID given by the user.
 The kill command allows you to send signals to
processes based on their PID.
• The command takes one or more PIDs as its
arguments.
Syntax: kill [signal] PID
kill pid1 pid2 ...
Example: kill 105
