
Process in Operating System

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
A process is essentially running software, and its execution must occur in a specific order. A process represents the fundamental unit of work that must be implemented in the system.
In other words, we write computer programs as text files; when we run them, they become processes that carry out all of the tasks specified in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text, and data. The diagram below depicts a simplified representation of a process in main memory.

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed in the table below:

S.N. Information & Description

1. Process State
The current state of the process, i.e., whether it is ready, running, waiting, and so on.

2. Process Privileges
Required to allow or disallow access to system resources.

3. Process ID
Unique identification for each process in the operating system.

4. Pointer
A pointer to the parent process.

5. Program Counter
A pointer to the address of the next instruction to be executed for this process.

6. CPU Registers
The CPU registers whose contents must be saved when the process leaves the running state, so that it can resume execution later.

7. CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

8. Memory Management Information
Information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.

9. Accounting Information
The amount of CPU time used for process execution, time limits, execution ID, and so on.

10. I/O Status Information
A list of I/O devices allocated to the process.

The structure of a PCB depends entirely on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB:

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
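As an illustration, the PCB fields listed above can be collected into a single data structure. The following C sketch is purely illustrative: the field names, sizes, and the pcb_t type are assumptions made for this example, and real kernels (for instance, Linux's task_struct) store far more information.

#include <stdint.h>

/* Illustrative process states for this sketch only. */
typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate_t;

typedef struct pcb {
    int32_t      pid;              /* unique process ID                   */
    pstate_t     state;            /* current process state               */
    int32_t      parent_pid;       /* reference to the parent process     */
    uint64_t     program_counter;  /* address of the next instruction     */
    uint64_t     registers[16];    /* saved CPU register contents         */
    int32_t      priority;         /* CPU scheduling information          */
    void        *page_table;       /* memory management information       */
    uint64_t     cpu_time_used;    /* accounting information              */
    int32_t      open_files[16];   /* I/O status information              */
    struct pcb  *next;             /* link used by the scheduler's queues */
} pcb_t;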

Process States in Operating System


From start to finish, a process passes through a number of states. A minimum of five states is required; although a process is always in one of these states during its execution, the names of the states are not standardized across operating systems. The stages in a process's life cycle are:
New State

When a program in secondary memory is started for execution, the process is said to be in a new
state.

Ready State

After being loaded into main memory, a process transitions from the new state to the ready state. The process now waits in the ready state for the processor to execute it. Many processes may be in the ready state in a multiprogramming environment.

Run State

After being allotted the CPU for execution, a process passes from the ready state to the run state.

Terminate State

When a process's execution is finished, it goes from the run state to the terminate state. The operating system deletes the process control block (PCB) after the process enters the terminate state.

Block or Wait State

If a process requires an input/output operation or a resource that is currently unavailable during its execution, it changes from the run state to the block or wait state.

The process advances to the ready state after the I/O operation is completed or the resource
becomes available.

Suspend Ready State

If a process with a higher priority needs to be executed while main memory is full, a process in the ready state goes to the suspend ready state. Moving a lower-priority process from the ready state to the suspend ready state frees main memory for the higher-priority process.

The process stays in the suspend ready state until main memory becomes available. When main memory becomes accessible, the process is brought back to the ready state.

Suspend Wait State

If a process with a higher priority needs to be executed while main memory is full, a process in the wait state goes to the suspend wait state. Moving a lower-priority process from the wait state to the suspend wait state frees main memory for the higher-priority process.

The process moves to the suspend ready state once the resource it was waiting for becomes available. Once main memory is available, the process is shifted to the ready state.
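The states and transitions described above can be summarized in a small sketch. The enum values and helper functions below are illustrative names chosen for this example, not part of any real operating system's API; they simply encode the transitions described in this section.

#include <stdio.h>

/* Illustrative state names, including the two suspend states. */
typedef enum {
    S_NEW, S_READY, S_RUNNING, S_WAIT,
    S_SUSPEND_READY, S_SUSPEND_WAIT, S_TERMINATED
} state_t;

/* Dispatcher picks a ready process and gives it the CPU. */
state_t dispatch(state_t s)  { return s == S_READY ? S_RUNNING : s; }

/* The running process requests I/O or an unavailable resource. */
state_t block(state_t s)     { return s == S_RUNNING ? S_WAIT : s; }

/* An I/O completion or resource release wakes a waiting process. */
state_t event_complete(state_t s) {
    if (s == S_WAIT)         return S_READY;
    if (s == S_SUSPEND_WAIT) return S_SUSPEND_READY;
    return s;
}

int main(void) {
    state_t s = S_READY;
    s = dispatch(s);          /* ready -> running */
    s = block(s);             /* running -> wait  */
    s = event_complete(s);    /* wait -> ready    */
    printf("final state: %d\n", s);   /* prints 1 (S_READY) */
    return 0;
}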

Process State Transition


Applications that have strict real-time constraints might need to prevent processes from being
swapped or paged out to secondary memory. Here's a simplified overview of UNIX process
states and the transitions between states:
Figure 3-3 Process State Transition Diagram

An active process is normally in one of the five states in the diagram. The arrows show how it
changes states.

 A process is running if it is assigned to a CPU. A process is preempted, that is, removed from the running state, by the scheduler if a process with a higher priority becomes runnable. A process is also preempted if it consumes its entire time slice and a process of equal priority is runnable.

 A process is runnable in memory if it is in primary memory and ready to run, but is not
assigned to a CPU.

 A process is sleeping in memory if it is in primary memory but is waiting for a specific event before it can continue execution. For example, a process is sleeping if it is waiting for an I/O operation to complete, for a locked resource to be unlocked, or for a timer to expire. When the event occurs, the process is sent a wakeup; if the reason for its sleep is gone, the process becomes runnable.

 A process is runnable and swapped if it is not waiting for a specific event but has had its
whole address space written to secondary memory to make room in primary memory for
other processes.

 A process is sleeping and swapped if it is both waiting for a specific event and has had its
whole address space written to secondary memory to make room in primary memory for
other processes.

If a machine does not have enough primary memory to hold all its active processes, it must page
or swap some address space to secondary memory.

 When the system is short of primary memory, it writes individual pages of some
processes to secondary memory but still leaves those processes runnable. When a process
runs, if it accesses those pages, it must sleep while the pages are read back into primary
memory.
 When the system gets into a more serious shortage of primary memory, it writes all the
pages of some processes to secondary memory and marks those processes as swapped.
Such processes get back into a state where they can be scheduled only by being chosen
by the system scheduler daemon process, then read back into memory.

Both paging and swapping cause delay when a process is ready to run again. For processes that
have strict timing requirements, this delay can be unacceptable.
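On POSIX systems, one way a time-critical process can avoid these paging and swapping delays is to lock its address space into primary memory with mlockall(). The sketch below assumes a Unix-like system; locking memory typically requires appropriate privileges or a sufficient RLIMIT_MEMLOCK limit.

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* MCL_CURRENT locks pages that are already mapped;
       MCL_FUTURE also locks pages mapped later (heap growth, new mappings). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... time-critical work runs here without paging delays ... */

    munlockall();   /* release the locks when they are no longer needed */
    return 0;
}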

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resources cannot be taken away from a process until the process completes its execution. The switching of resources occurs only when the running process terminates or moves to the waiting state (a minimal example of non-preemptive scheduling follows this list).

2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During execution, a process may be switched from the running state to the ready state, or from the waiting state to the ready state. This switching occurs because the CPU may be given to other processes; a higher-priority process can replace the currently running process.
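The following sketch illustrates non-preemptive scheduling with the simplest policy, first-come first-served: each process keeps the CPU until its entire burst finishes. The burst times are made-up values used only for illustration.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};     /* illustrative CPU bursts, in time units */
    int n = 3;
    int clock = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        total_waiting += clock;   /* process i waited until the CPU was free */
        clock += burst[i];        /* runs to completion: no preemption       */
    }
    printf("average waiting time = %.2f\n", (double)total_waiting / n);
    return 0;
}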

Threads in Operating System (OS)


A thread is a single sequential flow of execution within a process, so it is also known as a thread of execution or thread of control. Threads execute inside a process, and a process can contain more than one thread. Each thread of the same process uses its own program counter and its own stack of activation records and control blocks. A thread is often referred to as a lightweight process.
 Traditional (heavyweight) processes have a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time.

 As shown in Figure 4.1, multithreaded applications have multiple threads within a single process, each having its own program counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files.
Figure 4.1 - Single-threaded and multithreaded processes
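As a concrete illustration of a multithreaded process, the POSIX threads sketch below creates two threads inside one process. The function and variable names (worker, shared_value) are chosen for this example only; compile with something like gcc demo.c -pthread.

#include <pthread.h>
#include <stdio.h>

static int shared_value = 0;          /* data shared by all threads in the process */

static void *worker(void *arg) {
    int id = *(int *)arg;             /* each thread has its own stack and PC */
    printf("thread %d sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    int ids[2] = {1, 2};

    shared_value = 42;                /* written once before the threads start */
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}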

Types of Threads
In the operating system, there are two types of threads:
1. Kernel-level threads
2. User-level threads
1) User-level thread
The operating system does not recognize user-level threads. User threads are easy to implement, and they are implemented entirely in user space. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages the process as if it were single-threaded. Examples: Java threads, POSIX threads, etc.
Advantages of User-level threads
1. User threads are easier to implement than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and mini thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without kernel intervention.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
2) Kernel-level thread
Kernel-level threads are recognized by the operating system. There is a thread control block for each thread and a process control block for each process in the system. Kernel-level threads are implemented by the operating system: the kernel knows about all the threads and manages them, and it offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context switch time is longer. However, if a kernel-level thread performs a blocking operation, another thread of the same process can continue execution. Examples: Windows, Solaris.

Advantages of Kernel-level threads


1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.
Disadvantages of Kernel-level threads
1. The kernel must manage and schedule all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.
Components of Threads
Any thread has the following components:
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
 Enhanced system throughput: The number of jobs completed per unit time increases
when the process is divided into numerous threads, and each thread is viewed as a job. As
a result, the system’s throughput likewise increases.
 Effective use of a Multiprocessor system: You can schedule multiple threads in
multiple processors when you have many threads in a single process.
 Faster context switch: The thread context switching time is shorter than the process
context switching time. The process context switch adds to the CPU’s workload.
 Responsiveness: When a process is divided into multiple threads, and one thread completes its work, its result can be returned immediately, so the process remains responsive.
 Communication: Multiple-thread communication is straightforward because the threads
use the same address space, while communication between two processes is limited to a
few exclusive communication mechanisms.
 Resource sharing: Code, data, and files, for example, can be shared among all threads within a process. Note that threads cannot share the stack or registers; each thread has its own stack and registers (see the sketch after this list).
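A short sketch of the communication and resource-sharing benefits: two threads update one counter in the shared address space, using a mutex so the updates do not interfere. The names counter and increment are illustrative only.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                          /* shared by both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* each thread has its own stack, but     */
        counter++;                    /* `counter` is one shared variable in    */
        pthread_mutex_unlock(&lock);  /* the common address space               */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);           /* prints 200000 */
    return 0;
}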
Difference between Process and Thread

S.N. Process / Thread

1. Process: A process is heavyweight, or resource intensive.
   Thread: A thread is lightweight, taking fewer resources than a process.

2. Process: Process switching needs interaction with the operating system.
   Thread: Thread switching does not need to interact with the operating system.

3. Process: In multiple processing environments, each process executes the same code but has its own memory and file resources.
   Thread: All threads can share the same set of open files and child processes.

4. Process: If one process is blocked, then no other process can execute until the first process is unblocked.
   Thread: While one thread is blocked and waiting, a second thread in the same task can run.

5. Process: Multiple processes without using threads use more resources.
   Thread: Multithreaded processes use fewer resources.

6. Process: In multiple processes, each process operates independently of the others.
   Thread: One thread can read, write, or change another thread's data.
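To make rows 1 and 6 of the table concrete, the sketch below creates a child process with fork() on a Unix-like system. The child gets its own copy of the data, so its write is not visible to the parent; threads of one process, by contrast, would see the same variable (as in the earlier pthread sketches). The variable name value is illustrative.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 10;                       /* copied into the child at fork()  */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                   /* child: separate address space    */
        value = 99;
        printf("child sees value = %d\n", value);   /* 99       */
        return 0;
    }
    wait(NULL);                       /* parent waits for the child       */
    printf("parent sees value = %d\n", value);      /* still 10 */
    return 0;
}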
