Chapter 3 - Process Management
Process Management
10 Hours
22 Marks
[Figure 3.1: A process in memory — stack, heap, data and text sections]
Each process has an associated address space in main memory which can be read or written by that process. This address space (also referred to as the process in memory) is shown in figure 3.1. It is normally addressed from 0 to some maximum value. The address space contains a text section, which holds the executable program along with the value of the program counter and the contents of the processor's registers. It also includes the process stack, which holds temporary data such as function parameters, return addresses and local variables. The address space further contains a data section, which holds global variables, and it may include a heap, which is the memory area dynamically allocated during process run-time.
Several users may run different mail programs at a time, or a single user may invoke multiple copies of a browser at the same time. All of these are different processes. Such processes may have identical text sections, but the other sections (stack, data and heap) vary.
[Figure 3.2: Process state transition diagram — New (admit) → Ready (dispatch by scheduler) → Running (exit) → Terminated; Running moves to Waiting/Blocked on an I/O or event wait and back to Ready when it completes]
A process may pass through almost all of the states listed above. The state transition diagram is shown in figure 3.2.
3.2.1 New
This is the state of a process while it is being created.
3.2.2 Running
A process is in the running state when the processor is allocated to it and its instructions are being executed by the processor. In a single-processor system, there can be only one process in the running state at a time (but in a multi-processor system there may be multiple processes in the running state simultaneously). When the CPU time slice allotted to the running process expires or the running process is preempted by another process, its state changes to the ready state.
3.2.3 Waiting/ Blocked
This state is also called as suspended state sometimes. When a running
process requests for some I/O operation to be performed or for some event to be
occurred, the process changes its state to waiting or blocked. The process cannot
continue its execution until the requested I/O operation gets completed or the
event is occurred. i.e. the process gets blocked. When the I/O operation gets
completed or event (for which process was blocked) is occurred, the process goes
to ready state. There can be multiple waiting/ blocked processes in the system at
a time.
3.2.4 Ready
A process which is waiting for allotment of processor to it and a process
which is not waiting for any I/O operation to be completed or external event to be
occurred is said to be in ready state. A process which is newly created enters in
this state immediately on its admission. Operating system maintains a list of all
the processes which are in ready state. Whenever processor becomes free, one of
the processes from this list gets selected and is dispatched for the execution.
Basically, this function is done by scheduler depending on various scheduling
criteria and scheduling policy or algorithm used. When the process is dispatched
and its execution starts, it enters in running state. There can be multiple ready
processes at a time.
3.2.5 Terminated
When process has finished its execution, it is said to be in terminated state
(sometimes also called as halted state). In Unix this state is called as Zombie
state. This state is also called as Dormant state sometimes.
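The legal transitions between these states can be sketched as a small table. The following is an illustrative sketch only, not code from any real operating system:

```python
# Allowed process state transitions, keyed by (from_state, to_state),
# mapped to the event that causes each transition.
TRANSITIONS = {
    ("new", "ready"): "admit",
    ("ready", "running"): "dispatch",
    ("running", "ready"): "time slice expired / preempted",
    ("running", "waiting"): "I/O request or event wait",
    ("waiting", "ready"): "I/O completed / event occurred",
    ("running", "terminated"): "exit",
}

def can_move(src: str, dst: str) -> bool:
    """Return True if a process may move directly from src to dst."""
    return (src, dst) in TRANSITIONS

print(can_move("ready", "running"))    # True: the scheduler dispatches it
print(can_move("waiting", "running"))  # False: a blocked process must become ready first
```

Note that a blocked process can never go directly to the running state; it must pass through the ready state and be dispatched again.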
[Figure: Typical contents of a Process Control Block (PCB) — Process ID, Process State, Program Counter, Process Priority, Memory Management Information, Accounting Information, List of Open Files, Pointer to Other Resources, Other Information, Pointer to Other PCB]
3.3.11 Pointer to other PCB
This field points to the next PCB within a specific category. For example, the operating system maintains a list of ready processes; in that case this field holds the address of the next PCB whose process state is ready. Similarly, the operating system maintains a hierarchy of processes so that a parent process can traverse the PCBs of all its child processes.
Generally, each PCB has two such pointer fields, for maintaining a forward chain and a backward chain. In both cases, '*' indicates the end of the chain.
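Such a PCB chain can be sketched as a doubly linked list. The field names below are illustrative, not taken from any particular kernel, and None plays the role of the '*' end-of-chain marker:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PCB:
    pid: int
    state: str
    next: Optional["PCB"] = None  # forward chain ('*' becomes None)
    prev: Optional["PCB"] = None  # backward chain

def ready_pids(head: Optional[PCB]) -> list:
    """Walk the forward chain and collect the PIDs of ready processes."""
    pids = []
    while head is not None:
        if head.state == "ready":
            pids.append(head.pid)
        head = head.next
    return pids

# Build a small chain of three PCBs: ready, waiting, ready.
a = PCB(1, "ready")
b = PCB(2, "waiting")
c = PCB(3, "ready")
a.next, b.prev = b, a
b.next, c.prev = c, b

print(ready_pids(a))  # [1, 3]
```

The backward chain lets the operating system walk the same list in reverse, e.g. c.prev.pid gives 2.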
Discussed in Chapter 4
3.4.1 Scheduling Queues
Discussed in Chapter 4
3.4.2 Scheduler
Discussed in Chapter 4
3.4.3 Context Switch
There are many situations in which the CPU has to switch from one process to another. Some of these situations are:
- Expiration of the time slice allotted to the running process
- An I/O request by the running process
- Occurrence of an interrupt
- The running process has to wait for some event (such as completion of execution of a
child process)
[Figure: Context switch between processes P0 and P1 — while one process is executing the other is idle; at each switch the state of the outgoing process is saved to its PCB and the state of the incoming process is restored from its PCB]
A newly created process may require certain resources (such as CPU time, memory, files and I/O devices) for its execution. The child process may obtain its required resources directly from the operating system, or it may be restricted to a subset of the resources available to its parent. The second choice is better, as it prevents any process from overloading the system by creating too many child processes. Under this choice, a parent process may partition its resources among its multiple child processes.
When a child process is created, there are two possibilities regarding its
address space. They are
1. The child process is a duplicate of the parent process (i.e. it has the same
program and data as the parent process).
2. The child process has a different program loaded into its address space.
When a parent process creates a child process, there are two possibilities
regarding their execution. They are
1. The parent process continues its execution concurrently with its child
process.
2. The parent process waits until some or all of its child processes have
terminated (i.e. have completed their execution).
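On a Unix-like system these possibilities can be sketched with os.fork() (the child starts as a duplicate of the parent) and os.waitpid() (the parent waits for the child). This is a POSIX-only sketch; the exit code 7 is an arbitrary value chosen for illustration:

```python
import os

def run_child_and_wait() -> int:
    """Fork a child that is a duplicate of the parent, wait for it to
    terminate, and return the child's exit code."""
    pid = os.fork()   # address space possibility 1: child duplicates parent
    if pid == 0:
        # Child: it could instead load a different program here, e.g. with
        # os.execvp("ls", ["ls", "-l"])   (address space possibility 2)
        os._exit(7)   # here it simply terminates with exit code 7
    # Parent: execution possibility 2 — wait until the child terminates.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(run_child_and_wait())  # 7
```

Execution possibility 1 (concurrent execution) is simply the parent continuing its own work between the fork and the waitpid call.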
- Fatal Error (Involuntary)
- Killed by another process (Involuntary)
A process may terminate another process by using a system call such as
TerminateProcess (in Windows) or kill (in Unix). Generally, a parent process uses such a
system call to terminate its child process. A parent process may terminate one of its
child processes for various reasons, such as:
- The child process has exceeded the usage of some resources allocated to it.
- The task assigned to the child process is no longer needed.
- The parent is exiting, and the operating system does not allow child
processes to continue if the parent process is terminated.
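Terminating a child can be sketched on a Unix-like system with os.kill(), which sends a signal to the child (a POSIX-only sketch; it is not the Windows TerminateProcess call itself):

```python
import os
import signal
import time

def terminate_child() -> int:
    """Fork a child whose task is 'no longer needed', terminate it with
    SIGTERM, and return the signal number that ended it."""
    pid = os.fork()
    if pid == 0:
        time.sleep(60)  # child: pretend to work on a long task
        os._exit(0)
    os.kill(pid, signal.SIGTERM)   # parent terminates the child
    _, status = os.waitpid(pid, 0)
    return os.WTERMSIG(status)     # which signal ended the child

print(terminate_child() == signal.SIGTERM)  # True
```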
Processes which run concurrently in the system are either independent
processes or cooperating processes. Independent processes are processes which
neither affect any other process nor are affected by any other process executing in
the system; they do not share any data with any other process. On the other hand,
cooperating processes are processes which either affect other processes or are
affected by other processes running in the system. Any process that shares data
with another process is a cooperating process.
Some of the reasons for allowing process cooperation are:
- Information sharing
Several users may need the same piece of information, so a means of
information sharing must be provided.
- Computation speedup
To run a particular task faster, it can be broken into smaller subtasks
that execute concurrently, cooperating with each other.
- Modularity
Modular systems can have different modules running concurrently.
- Convenience
A single user may work on different tasks at a time, e.g. the user may
perform editing, printing and compiling in parallel.
[Figure: Communication using the shared memory model — processes 1 and 2 read and write a shared memory region directly; the kernel is not involved in each transfer]
The shared memory model can be illustrated with the producer-consumer
problem, which is a classic example of cooperating processes. A producer process
produces information that is consumed by a consumer process. This resembles the
client-server paradigm: the server represents a producer process, whereas the client
represents a consumer process. E.g. a web server produces web pages and the
client (web browser) consumes those web pages.
A buffer is made available which is filled by the producer and emptied by
the consumer process. This buffer resides in memory shared by the producer
process and the consumer process. There must be proper synchronization between
the producer and consumer processes (so that the consumer does not try to
consume an item that has not yet been produced by the producer).
The buffer can be of two types: unbounded buffer and bounded buffer. An
unbounded buffer places no restriction on the size of the buffer, i.e. the producer
process can produce any number of new items without waiting for the buffer to be
emptied; only the consumer process may have to wait for new items to be produced
by the producer process. A bounded buffer, on the other hand, assumes a fixed
buffer size. In this case, the consumer must wait if the buffer is empty and the
producer must wait if the buffer is full.
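A bounded buffer is usually implemented as a circular array. The following is a minimal single-process sketch of the in/out pointer logic only (the synchronization between producer and consumer is deliberately omitted, and one slot is kept empty to distinguish full from empty):

```python
BUFFER_SIZE = 5
buffer = [None] * BUFFER_SIZE
in_ptr = 0    # next free slot, advanced by the producer
out_ptr = 0   # next full slot, advanced by the consumer

def produce(item) -> bool:
    """Insert item into the buffer; return False if the buffer is full."""
    global in_ptr
    if (in_ptr + 1) % BUFFER_SIZE == out_ptr:   # full (one slot kept empty)
        return False
    buffer[in_ptr] = item
    in_ptr = (in_ptr + 1) % BUFFER_SIZE
    return True

def consume():
    """Remove and return the oldest item; return None if the buffer is empty."""
    global out_ptr
    if in_ptr == out_ptr:                       # empty
        return None
    item = buffer[out_ptr]
    out_ptr = (out_ptr + 1) % BUFFER_SIZE
    return item

produce("a"); produce("b")
print(consume())  # 'a' — items come out in FIFO order
```

With BUFFER_SIZE of 5, at most 4 items can be in the buffer at once; the fifth produce returns False, which is exactly the point at which a real producer process would block.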
Advantages
- It allows maximum speed of communication.
- It is convenient to use.
- Once shared memory is established, no assistance from the kernel is required.
Disadvantages
- It is difficult to implement when the communicating processes are on
different computers.
[Figure: Communication using the message passing model — process 1 sends message M to process 2 through the kernel]
For message passing, at least two operations, send and receive, must
be provided. Messages can be of either fixed or variable size. Fixed-size messages
are simpler to implement than variable-size messages.
When two processes want to communicate with each other, a
communication link must be established between them. Methods for logically
implementing the communication link are:
- Direct or indirect communication
- Synchronous or asynchronous communication
- Automatic or explicit buffering
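Message passing between a parent and a child process can be sketched with an ordinary Unix pipe, which plays the role of the communication link (a POSIX-only sketch):

```python
import os

def send_and_receive(message: bytes) -> bytes:
    """Child sends a message through a pipe; parent receives and returns it."""
    r, w = os.pipe()             # the communication link
    pid = os.fork()
    if pid == 0:
        os.close(r)              # child only sends
        os.write(w, message)     # send
        os._exit(0)
    os.close(w)                  # parent only receives
    data = os.read(r, 1024)      # receive (blocks until data arrives)
    os.waitpid(pid, 0)
    return data

print(send_and_receive(b"hello"))  # b'hello'
```

Here the receive is blocking (synchronous): the parent waits until the child's message arrives.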
- Bounded capacity
The queue has a finite length n, so at most n messages can reside
in it. If the queue is not full, a sent message is placed in the queue
and the sending process resumes its operation. If the queue is full, the
sending process is blocked until space becomes available in the queue.
- Unbounded capacity
(Automatic Buffering, Non-blocking communication)
The queue length is potentially infinite (limited only by available memory), so
any number of messages can be placed in it. The sender never blocks.
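Both capacities can be sketched with Python's thread-safe queue, where the maxsize parameter selects bounded or (practically) unbounded behaviour:

```python
import queue

bounded = queue.Queue(maxsize=2)    # bounded capacity: at most 2 messages
bounded.put("m1")
bounded.put("m2")
print(bounded.full())               # True: a further blocking put() would wait

try:
    bounded.put("m3", block=False)  # non-blocking send into a full queue
except queue.Full:
    print("sender would block")     # bounded capacity: sender must wait

unbounded = queue.Queue(maxsize=0)  # maxsize 0 means no size limit
for i in range(10_000):
    unbounded.put(i)                # sender never blocks
print(unbounded.qsize())            # 10000
```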
3.7 Threads
[Figure: A single-threaded process and a multithreaded process]
In this model, many user-level threads are mapped to a single kernel-level
thread, as shown in figure 3.11.
Advantage
- Thread management is done in user space, so this model is efficient.
Disadvantages
- Only one user thread can access the kernel thread at a time, so
multiple threads are unable to run in parallel.
- Also, if one thread makes a blocking call, the entire process gets blocked.
[Figure: One-to-one model — each user thread is mapped to a separate kernel thread (K)]
Advantage
- This model provides more concurrency than the many-to-one model. Even if a
thread blocks, it does not block the entire process; the other
threads continue their execution.
Disadvantage
- Creating a user thread requires creating a corresponding kernel thread.
The overhead of creating kernel threads may degrade performance.
[Figure: Many-to-many model — many user threads are multiplexed onto a smaller or equal number of kernel threads (K)]
Advantages
- This model provides very fast and efficient thread management, giving
better application performance and system throughput.
- User threads need not worry about the creation of kernel threads.
Disadvantage
- This model is complex to implement.
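In CPython on Linux, each threading.Thread is typically backed by its own kernel thread, i.e. a one-to-one mapping. A minimal sketch of creating and joining several threads:

```python
import threading

results = []
lock = threading.Lock()

def worker(n: int) -> None:
    """Each worker runs in its own thread; on Linux each Python thread
    is normally backed by a separate kernel thread (one-to-one model)."""
    with lock:                 # serialize access to the shared list
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait until every thread has terminated

print(sorted(results))         # [0, 1, 4, 9]
```

The lock is needed because the threads share the process's data section, exactly as described for cooperating threads within a process.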