Process: Unit 2 Operating System
Process
A process is a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in
the system.
To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into
four sections ─ stack, heap, text and data. The following image shows a simplified layout of a
process inside main memory –
1. Stack: The process stack contains temporary data such as method/function parameters,
return addresses and local variables.
2. Heap: This is memory dynamically allocated to a process during its run time.
3. Text: This is the compiled program code; the current activity is represented by the value of the
program counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.
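As a rough illustration (the exact addresses and placement are implementation-specific), the following C snippet labels which section each kind of variable would typically occupy:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;               /* data section: global variable        */
static const char *course = "os";     /* data section: static variable        */

int square(int x)                     /* text section: compiled machine code  */
{
    return x * x;
}

int main(void)
{
    int local = square(5);                        /* stack: local variable    */
    int *buffer = malloc(16 * sizeof *buffer);    /* heap: run-time allocation */

    printf("%d %s %d\n", global_counter, course, local);
    free(buffer);
    return 0;
}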
Concurrency
Concurrency is the execution of multiple instruction sequences at the same time. It occurs in an
operating system when several process threads run in parallel. Running threads communicate with
each other through shared memory or message passing. Because concurrency involves sharing
resources, it can lead to problems such as deadlocks and resource starvation.
Managing concurrency requires techniques for coordinating the execution of processes, allocating
memory and scheduling execution so as to maximize throughput.
Principles of Concurrency :
Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and
both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
• The activities of other processes
• The way the operating system handles interrupts
• The scheduling policies of the operating system
Problems in Concurrency :
• Sharing global resources –
Sharing global resources safely is difficult. If two processes both make use of a global variable
and both read and write that variable, then the order in which the various reads and
writes are executed is critical.
• Optimal allocation of resources –
It is difficult for the operating system to manage the allocation of resources optimally.
• Locating programming errors –
It is very difficult to locate a programming error because failures are usually not reproducible.
• Locking the channel –
It may be inefficient for the operating system to simply lock a channel and prevent its use by
other processes.
Advantages of Concurrency :
• Running of multiple applications –
Concurrency enables multiple applications to run at the same time.
• Better resource utilization –
Resources that are unused by one application can be used by other
applications.
• Better average response time –
Without concurrency, each application has to be run to completion before the next one can be
run.
• Better performance –
Concurrency enables better performance from the operating system. When one application uses only
the processor and another uses only the disk drive, the time to run both applications
concurrently to completion is shorter than the time to run each application
consecutively.
Drawbacks of Concurrency :
• It is required to protect multiple applications from one another.
• It is required to coordinate multiple applications through additional mechanisms.
• Additional performance overheads and complexities in operating systems are required for
switching among applications.
• Sometimes running too many applications concurrently leads to severely degraded
performance.
Issues of Concurrency :
• Non-atomic –
Operations that are non-atomic but interruptible by multiple processes can cause problems.
• Race conditions –
A race condition occurs when the outcome depends on which of several processes gets to a point
first (a small code sketch follows this list).
• Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time
waiting for input from a terminal. If the process is required to periodically update some data,
this would be very undesirable.
• Starvation –
It occurs when a process never obtains the service or resources it needs to make progress.
• Deadlock –
It occurs when two or more processes are each blocked waiting for a resource held by the other, so neither can proceed.
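For example, the following C sketch exhibits a race condition: two threads increment a shared counter without any synchronization, so the final value is usually less than expected. The iteration count and names are only illustrative.

#include <stdio.h>
#include <pthread.h>

static long counter = 0;                 /* shared global resource */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* non-atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* The result is frequently less than 2000000 because updates interleave. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}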
Critical Section
The critical section is a code segment where shared variables can be accessed. Atomic access is
required: only one process can execute in its critical section at a time, and all the other
processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows −
In the above diagram, the entry section handles the entry into the critical section. It acquires the
resources needed for execution by the process. The exit section handles the exit from the critical
section. It releases the resources and also informs the other processes that the critical section is free.
• Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If
any other processes require the critical section, they must wait until it is free.
• Progress
Progress means that if a process is not using the critical section, then it should not stop any
other process from accessing it. In other words, any process can enter a critical section if it is
free.
• Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.
As a real-world analogy, consider a store with a single changing room. Boy A decides upon some
clothes to buy and heads to the changing room to try them out. While boy A is inside the changing
room, there is an ‘occupied’ sign on it – indicating that no one else can come in. Girl B has to use
the changing room too, so she has to wait till boy A is done using the changing room.
Once boy A comes out of the changing room, the sign on it changes from ‘occupied’ to ‘vacant’ –
indicating that another person can use it. Hence, girl B proceeds to use the changing room, while the
sign displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and girl B are two different processes, while
the sign outside the changing room indicates the process synchronization mechanism being used.
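The same idea can be sketched in C with a POSIX mutex: locking the mutex acts as the entry section, unlocking it as the exit section, and the code in between is the critical section. The function and variable names here are only illustrative, not a prescribed interface.

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_balance = 0;            /* shared variable */

static void *deposit(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);        /* entry section: acquire access   */
        shared_balance += 1;              /* critical section: shared data   */
        pthread_mutex_unlock(&lock);      /* exit section: release, notify   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", shared_balance);   /* always 200000 */
    return 0;
}

Unlike the earlier race-condition sketch, mutual exclusion guarantees a correct final value here.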
Producer-Consumer problem
The producer-consumer problem is a synchronization problem. There is a fixed-size buffer; the
producer produces items and inserts them into the buffer, and the consumer removes items from the
buffer and consumes them.
The producer should not add items to the buffer while the consumer is removing an item from it,
and vice versa. So the buffer should be accessed by only one of them at a time.
The producer-consumer problem can be resolved using semaphores. The code for the producer and
consumer processes is given as follows −
Producer Process
The code that defines the producer process is given below −
do {
   // produce an item
   wait(empty);
   wait(mutex);
   // add the produced item to the buffer
   signal(mutex);
   signal(full);
} while(1);
In the above code, mutex, empty and full are semaphores. Here mutex is initialized to 1, empty is
initialized to n (maximum size of the buffer) and full is initialized to 0.
The mutex semaphore ensures mutual exclusion. The empty and full semaphores count the number of
empty and full spaces in the buffer.
After an item is produced, the wait operation is carried out on empty. This indicates that the number
of empty spaces in the buffer has decreased by 1. Then the wait operation is carried out on mutex so
that the consumer process cannot interfere.
After the item is put in the buffer, the signal operation is carried out on mutex and full. The former
indicates that the consumer process can now act, and the latter shows that the number of full spaces
in the buffer has increased by 1.
Consumer Process
The code that defines the consumer process is given below:
do {
   wait(full);
   wait(mutex);
   // remove an item from the buffer
   signal(mutex);
   signal(empty);
   // consume the item
} while(1);
The wait operation is carried out on full. This indicates that the number of full spaces in the buffer
has decreased by 1. Then the wait operation is carried out on mutex so that the producer process
cannot interfere.
Then the item is removed from the buffer. After that, the signal operation is carried out on mutex and
empty. The former indicates that the producer process can now act, and the latter shows that the
number of empty spaces in the buffer has increased by 1.
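Putting the two pieces together, a minimal runnable sketch using POSIX threads and semaphores might look as follows; the buffer size, item count and names are only illustrative.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                      /* buffer size                       */
#define ITEMS 20                 /* items produced/consumed in demo   */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;        /* counts empty slots, initialized to N */
static sem_t full_slots;         /* counts full slots, initialized to 0  */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);          /* wait(empty)               */
        pthread_mutex_lock(&mutex);      /* wait(mutex)               */
        buffer[in] = i;                  /* add the item to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex)             */
        sem_post(&full_slots);           /* signal(full)              */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);           /* wait(full)                */
        pthread_mutex_lock(&mutex);      /* wait(mutex)               */
        int item = buffer[out];          /* remove the item           */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex)             */
        sem_post(&empty_slots);          /* signal(empty)             */
        printf("consumed %d\n", item);   /* consume the item          */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}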
Interprocess communication
Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know
that some event has occurred or transferring of data from one process to another.
A diagram that illustrates interprocess communication is as follows –
Shared Memory Model
In this model, cooperating processes establish a region of shared memory and exchange information
by reading from and writing to it.
• All the processes that use the shared memory model need to make sure that they are not writing
to the same memory location at the same time.
• The shared memory model may create problems such as synchronization and memory protection
that need to be addressed.
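As an illustrative sketch (not part of the notes above), a parent and child process can communicate through a shared region created with mmap; the message text is only an example.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Anonymous shared mapping visible to both parent and child. */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;

    if (fork() == 0) {                       /* child: writes into shared memory */
        strcpy(region, "hello from child");
        _exit(0);
    }
    wait(NULL);                              /* parent: waits, then reads        */
    printf("parent read: %s\n", region);
    munmap(region, 4096);
    return 0;
}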
Message Passing Model
Multiple processes can read and write data to the message queue without being connected to each other.
Messages are stored on the queue until their recipient retrieves them. Message queues are quite useful
for interprocess communication and are used by most operating systems.
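Message passing can be built on several primitives (pipes, message queues, sockets). A minimal sketch between related processes using an ordinary pipe; the message text is only an example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)                 /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {                  /* child: the sender   */
        close(fd[0]);
        const char *msg = "event occurred";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                       /* parent: the receiver */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}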
1. Process Creation:
(i). When a new process is created, the operating system assigns a unique Process Identifier (PID) to it
and inserts a new entry in the primary process table.
(ii). Then the required memory space for all the elements of the process, such as program, data
and stack, is allocated, including space for its Process Control Block (PCB).
(iii). Next, the various values in the PCB are initialized:
1. The process identification part is filled with the PID assigned in step (i) and also its parent’s PID.
2. The processor register values are mostly filled with zeroes, except for the stack pointer and
program counter. The stack pointer is filled with the address of the stack allocated in step (ii), and
the program counter is filled with the address of the program entry point.
3. The process state information is set to ‘New’.
4. The priority is lowest by default, but the user can specify a priority during creation.
In the beginning, the process is not allocated any I/O devices or files. The user has to request them,
or if this is a child process it may inherit some resources from its parent.
(iv). Then the operating system links this process to the scheduling queue, and the process state is
changed from ‘New’ to ‘Ready’. Now the process is competing for the CPU.
(v). Additionally, the operating system creates some other data structures, such as log files or accounting
files, to keep track of process activity.
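On UNIX-like systems these steps are triggered by the fork() and exec() family of system calls; a minimal sketch (the command being run is only an example):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* OS assigns a new PID and builds a PCB */

    if (pid == 0) {                      /* child: replace its program image      */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(1);                        /* reached only if exec fails            */
    } else if (pid > 0) {                /* parent: learns the child's PID        */
        printf("created child with PID %d\n", pid);
        wait(NULL);                      /* wait until the child terminates       */
    }
    return 0;
}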
2. Process Deletion:
Processes terminate themselves when they finish executing their last statement; the process then uses
the exit( ) system call, and the operating system deletes its context. All the resources held by that
process, such as physical and virtual memory, I/O buffers and open files, are taken back by the operating
system. A process P can be terminated either by the operating system or by the parent process of P.
A parent may terminate a process for one of the following reasons:
(i). When task given to the child is not required now.
(ii). When child has taken more resources than its limit.
(iii). The parent of the process is exiting; as a result, all its children are deleted. This is called cascaded
termination.
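A small sketch of both cases in C: one child terminates itself with exit(), while the parent explicitly terminates another child (here with SIGKILL; the exact mechanism is system-specific and only illustrative):

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t worker = fork();
    if (worker == 0) {                    /* child terminates itself                 */
        _exit(0);                         /* OS reclaims its memory, buffers, files  */
    }

    pid_t looper = fork();
    if (looper == 0) {                    /* child that never finishes on its own    */
        for (;;)
            pause();
    }

    waitpid(worker, NULL, 0);             /* parent collects the finished child      */
    kill(looper, SIGKILL);                /* parent terminates the other child       */
    waitpid(looper, NULL, 0);
    printf("both children terminated\n");
    return 0;
}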