OSY Chapter 3
Chapter 3
Process Management
Define process.
A process is a program in execution. A process is also called a job, task, or unit of work. The execution of a
process must progress in a sequential fashion. A process is an active entity.
To put it simply, we write our computer programs in a text file, and when we execute the program, it
becomes a process that performs all the tasks described in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections:
stack, heap, text, and data.
[Figure: simplified layout of a process in main memory]
Process memory is divided into four sections for efficient working:
The Text section is made up of the compiled program code, read in from non-volatile storage when the
program is launched.
The Data section is made up of the global and static variables, allocated and initialized before main()
executes.
The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc,
free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when they are
declared.
Process States
New: The process being created is in the new state. It is kept in the new state because the system has not
yet permitted it to enter the ready state, owing to the limited memory available for the ready queue. When
memory becomes available, the process moves from the new state to the ready state.
Ready State: A process that is not waiting for any external event (such as an I/O operation) and is not
running is said to be in the ready state. It is not in the running state because some other process is already
running; it is waiting for its turn to be given the CPU.
Running State: The process that is currently executing and has control of the CPU is in the running state.
On a system with a single CPU, only one process can be in the running state at a time; on a multiprocessor
system, one process per CPU may be running.
Blocked State: A process that is waiting for an external event, such as an I/O operation, is said to be in the
blocked state. After the I/O operation completes, the process moves from the blocked state to the ready
state, and from the ready state it returns to the running state when its turn comes.
Terminated State: A process whose execution is complete moves from the running state to the terminated
state. In the terminated state, the memory occupied by the process is released.
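The transitions between these states can be sketched as a small lookup table. The event names below (admit, dispatch, and so on) are our own illustrative labels, not operating system calls:

```python
# Hypothetical event names; the five states match the state model above.
transitions = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "io wait"): "blocked",
    ("blocked", "io complete"): "ready",
    ("running", "timeout"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Return the state a process moves to when `event` occurs."""
    return transitions[(state, event)]

print(next_state("new", "admit"))        # ready
print(next_state("running", "io wait"))  # blocked
```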
Each process is represented by a process control block (PCB) in the operating system. It contains
information associated with a specific process.
In general, a PCB may contain information regarding:
1. Process Number: Each process is identified by its process number, called process identification
number (PID).
2. Priority: Each process is assigned a certain level of priority that corresponds to the relative importance
of the event that it services.
3. Process State: This information records the current state of the process, i.e., whether the process is in
the new, ready, running, waiting, or terminated state.
4. Program Counter: This contains the address of the next instruction to be executed for this process.
5. CPU Registers: CPU registers vary in number and type depending on the computer architecture. They
include index registers, stack pointers, general-purpose registers, etc. When an interrupt occurs, the
current values of these registers, along with the program counter, are saved. This information is necessary
to allow the process to continue correctly after the interrupt is handled.
6. CPU Scheduling Information: This information includes a process priority, pointers to scheduling
queues and any other scheduling parameters.
7. Memory Management Information: This information may include such information as the value of
base and limit registers, the page table or the segment table depending upon the memory system used by
operating system.
8. Accounting: This includes the actual CPU time used in executing the process, in order to charge
individual users for processor time.
9. I/O Status: It includes outstanding I/O request, allocated devices information, pending operation and so
on.
10. File Management: It includes information about all open files, access rights etc.
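As a rough illustration, the PCB fields listed above can be modelled as a simple record; the field names below are our own, not an actual operating system structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Illustrative fields only; a real PCB holds many more (see the list above).
    pid: int                                   # process identification number
    state: str = "new"                         # new / ready / running / waiting / terminated
    priority: int = 0                          # relative importance of the process
    program_counter: int = 0                   # address of the next instruction
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

pcb = PCB(pid=1234)
pcb.state = "ready"                            # e.g. after the process is admitted
print(pcb.pid, pcb.state)
```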
Why is process creation necessary? State the role of the fork process in this context.
Process creation: When a new process is to be added to those currently being managed, the operating
system builds the data structures used to manage the process and allocates address space in main memory
to it. This is the creation of a new process: the operating system creates a new process with the specified
or default attributes and identifier. A process may itself create several new sub-processes.
In UNIX and Linux, process creation is done through the fork() system call. When the operating system
creates a process at the explicit request of another process, the action is referred to as process spawning.
When one process spawns another, the former is referred to as the parent process and the spawned
process as the child process. The parent may have to partition its resources among its children, or it may
be able to share some resources among several of its children. A sub-process may also be able to obtain
its resources directly from the operating system. The exec system call is used after a fork to replace the
process's memory space with a new program.
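A minimal sketch of fork() and wait on a UNIX-like system, using Python's os module (the exit code 7 is an arbitrary choice for the demonstration):

```python
import os

def spawn_and_wait():
    """Fork a child process; the parent waits for it and returns its exit code."""
    pid = os.fork()                      # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        # Child: a real program would often call an exec function here, e.g.
        # os.execvp("ls", ["ls", "-l"]), replacing its memory space with a new program.
        os._exit(7)                      # terminate the child with exit code 7
    _, status = os.waitpid(pid, 0)       # parent waits for the child to terminate
    return os.waitstatus_to_exitcode(status)

print(spawn_and_wait())                  # 7
```

Note that os.fork (and os.waitstatus_to_exitcode, Python 3.9+) is only available on UNIX-like systems.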
Process termination: a process is terminated and releases the CPU after completing its execution. Most
operating systems use the exit() system call to terminate a process. Among the main causes of process
termination is normal completion: the process completes all its tasks and releases the CPU.
Scheduling Queues
Scheduling queues are queues of processes or devices. When a process enters the system, it is put into a
job queue, which consists of all processes in the system. The operating system also maintains other
queues, such as the device queues.
Schedulers
Schedulers are special system software that handles process scheduling in various ways. A process
migrates between the various scheduling queues throughout its lifetime, and the operating system must
select processes from these queues in some fashion. The selection is carried out by the appropriate
scheduler: a system program that schedules processes from the scheduling queues. Its main task is to
select the jobs to be submitted to the system and to decide which process to run.
Context switch.
A CPU switch from one process to another is referred to as a context switch. A context switch is a
mechanism that stores and restores the state, or context, of the CPU in the process control block so that
the execution of a process can be resumed from the same point at a later time. When the scheduler
switches the CPU from one process to another, the context switch saves the contents of all the registers
of the process being removed from the CPU in its process control block.
A context switch involves two operations: state save and state restore. The state save operation stores the
current information of the running process into its PCB; the state restore operation loads the information
of the process to be executed from its PCB. Switching the CPU from one process to another thus requires
a state save for the currently executing process and a state restore for the process that is about to run.
1. Shared Memory:
In this model, a region of memory residing in the address space of the process that creates a shared
memory segment can be accessed by all processes that want to communicate. All processes using the
shared memory segment must attach it to their address space, after which they can exchange information
by reading and/or writing data in the segment. The form and location of the data are determined by the
communicating processes themselves and are not under the control of the operating system. The
processes are also responsible for ensuring that they do not write to the same location simultaneously.
After the shared memory segment is established, all accesses to it are treated as routine memory
accesses, without assistance from the kernel.
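As an illustrative sketch, Python's multiprocessing.shared_memory module exposes this model. For brevity, the "writer" and "reader" below run in one process, but a second process would attach to the segment by its name in exactly the same way:

```python
from multiprocessing import shared_memory

def demo():
    # Writer side: create a segment; other processes can attach to it by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        shm.buf[:5] = b"hello"                               # write into the segment
        # Reader side: attach to the same segment by name.
        reader = shared_memory.SharedMemory(name=shm.name)
        data = bytes(reader.buf[:5])                         # plain memory access, no kernel call
        reader.close()
        return data
    finally:
        shm.close()
        shm.unlink()                                         # remove the segment

print(demo())                                                # b'hello'
```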
2. Message Passing:
In this model, communication takes place by exchanging messages between cooperating processes. It
allows processes to communicate and synchronize their actions without sharing the same address space,
which is particularly useful in a distributed environment where the communicating processes may reside
on different computers connected by a network. Communication requires sending and receiving messages
through the kernel. The processes that want to communicate must have a communication link between
them; between each pair of communicating processes there is exactly one link.
Each process contains two sections. One is the critical section, where the process may need to access
common variables or objects; the other is the remainder section, containing instructions for processing
sharable or local objects. Each process must request permission to enter its critical section. The section of
code implementing this request is the entry section. If a process gets permission in the entry section, it
enters the critical section and works with the common data; meanwhile, all other processes wanting the
same data must wait. The critical section is followed by an exit section: once the process completes its
task, it releases the common data there. The remaining code, placed in the remainder section, is then
executed.
Two processes cannot execute their critical sections at the same time. The critical-section problem is to
design a protocol that the processes can use to cooperate, i.e. one that allows only one process at a time
inside the critical section. Before entering its critical section, each process must request permission.
while (true)
{
    entry section
    critical section
    exit section
    non-critical section
}
Solution 1
In this solution, lock is a global variable initialized to false. A thread sets lock to true to indicate that it is
entering the critical section.
Solution 1 is incorrect!
Solution 2
The threads use a global array intendToEnter to indicate their intention to enter the critical section.
boolean intendToEnter[] = {false, false};
Solution 2 is incorrect!
Solution 3
The global variable turn is used to indicate the next process to enter the critical section. The initial value of
turn can be 0 or 1.
int turn = 1;
Solution 3 is incorrect!
Solution 4
When a thread finds that the other thread also intends to enter its critical section, it sets its own
intendToEnter flag to false and waits until the other thread exits its critical section.
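For reference, Peterson's algorithm, which combines the intendToEnter flags of Solution 2 with the turn variable of Solution 3, is a known correct software solution for two threads. A sketch in Python (workable under CPython's interpreter lock; production code would use real locks):

```python
import threading
import time

flag = [False, False]        # flag[i] is True when thread i intends to enter
turn = 0                     # when both intend to enter, `turn` decides who yields
counter = 0                  # shared variable the protocol protects

def worker(i, n):
    global turn, counter
    other = 1 - i
    for _ in range(n):
        flag[i] = True                        # entry section: announce intention
        turn = other                          # defer to the other thread
        while flag[other] and turn == other:
            time.sleep(0)                     # busy-wait, yielding the CPU
        counter += 1                          # critical section
        flag[i] = False                       # exit section

def run(n=1000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(i, n)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter                            # 2 * n when mutual exclusion holds

print(run())
```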
Working of a semaphore to solve a synchronization problem: Consider two concurrently running
processes P1 and P2, where P1 contains statement S1 and P2 contains statement S2. When we want
statement S2 to execute only after statement S1, we can implement this by sharing a common semaphore
synch between the two processes, initialized to 0. To enforce the sequence, the code for P1 and P2 is
modified as follows.
Process P1 contains:
    S1;
    signal(synch);
Process P2 contains:
    wait(synch);
    S2;
Because synch is initialized to 0, process P2 will wait while process P1 executes. Once P1 completes
statement S1, it performs the signal() operation, which increments the value of synch. The wait()
operation in P2 then sees the incremented value, and execution of statement S2 begins.
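The same sequencing can be demonstrated with Python's threading.Semaphore standing in for the wait()/signal() operations:

```python
import threading

synch = threading.Semaphore(0)   # counting semaphore initialized to 0, as above
order = []                       # records the execution order of S1 and S2

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch): increments the semaphore

def p2():
    synch.acquire()              # wait(synch): blocks while the semaphore is 0
    order.append("S2")           # statement S2, guaranteed to run after S1

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first to show that it really waits
t1.start()
t1.join()
t2.join()
print(order)                     # ['S1', 'S2']
```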
Thread benefits: The benefits of multithreaded programming can be broken down into four major
categories.
1. Responsiveness: Multithreading an interactive application may allow a program to continue running
even if part of it is blocked or performing a lengthy operation, increasing responsiveness to the user.
2. Resource sharing: By default, threads share the memory and the resources of the process to which they
belong. The benefit of sharing code and data is that it allows an application to have several different
threads of activity all within the same address space. For example, a multithreaded word processor allows
all of its threads to have access to the document being edited.
3. Economy: Because threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads than to create and context-switch processes, which is
much more time-consuming. For example, in Sun OS Solaris 2, creating a process is about 30 times slower
than creating a thread, and context switching between processes is about five times slower than switching
between threads.
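The resource-sharing benefit can be seen directly in code: threads created within one process all read and write the same variables, with no inter-process communication needed. A small sketch:

```python
import threading

shared = []                          # one list in the process's address space
lock = threading.Lock()              # protects concurrent appends

def worker(tag):
    with lock:
        shared.append(tag)           # every thread sees and modifies the same list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))                # [0, 1, 2, 3]
```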
Describe Multithreading:
Multithreading is the ability of a central processing unit (CPU) or a single core in a multicore processor to
execute multiple processes or threads concurrently, appropriately supported by the operating system.
Process vs. thread:
Data and code sharing: Processes have independent data and code segments. A thread shares the data
segment, code segment, files, etc. with its peer threads.
Treatment by OS: All the different processes are treated separately by the operating system. All user-level
peer threads are treated as a single task by the operating system.
Time for creation: Processes require more time for creation. Threads require less time for creation.
Time for termination: Processes require more time for termination. Threads require less time for
termination.
Kernel Threads:
In systems that use kernel-level threads, the operating system itself is aware of each individual
thread.
Kernel threads are supported and managed directly by the operating system.
A context switch between kernel threads belonging to the same process requires only the registers,
program counter, and stack to be changed; the overall memory management information does not
need to be switched since both of the threads share the same address space. Thus context switching
between two kernel threads is slightly faster than switching between two processes.
Kernel threads can be expensive because system calls are required to switch between threads. Also,
since the operating system is responsible for scheduling the threads, the application does not have
any control over how its threads are managed.
User-level threads are generally fast to create and manage; kernel-level threads are slower and less
efficient. For instance, kernel-thread operations can be hundreds of times slower than user-level thread
operations.
User-level threads are generic and can run on any operating system; kernel-level threads are specific to
the operating system.
Example: user-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.
Examples of systems supporting kernel threads: Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64
UNIX (formerly Digital UNIX).
OR
2. One-to-One: This model maps each user-level thread to a kernel-level thread. Even if one thread makes
a blocking call, another thread can continue running on its own kernel thread.
Advantages:-
It allows multiple threads to run in parallel on multiprocessors.
Disadvantages:-
Creating a user thread requires creating the corresponding kernel thread, and creating kernel threads can
affect the performance of an application.
Example: - Linux, Windows OS including Win 95, 98, NT 2000, and XP implement the
one-to-one model.
OR
3. Many-to-Many:
This model maps many user level threads to a smaller or equal number of kernel threads. Number of
kernel threads may be specific to either a particular application or particular machine.
OR
Linux provides a utility called ps (an abbreviation of "process status") for viewing information about the
processes on a system. The ps command lists the currently running processes and their PIDs, along with
other information depending on the options used.
Syntax –
ps [options]
Options for ps Command :
1. Simple process selection: shows the processes for the current shell –
[root@rhel7 ~]# ps
PID TTY TIME CMD
12330 pts/0 00:00:00 bash
21621 pts/0 00:00:00 ps
Result contains four columns of information.
Where,
PID – the unique process ID
TTY – terminal type that the user is logged into
TIME – amount of CPU in minutes and seconds that the process has been running
CMD – name of the command that launched the process.
Note: Sometimes when we execute the ps command, it shows TIME as 00:00:00. This column is the total
accumulated CPU time for the process, and 00:00:00 indicates that the kernel has granted it no CPU time
so far. In the example above, bash has accumulated no CPU time: it is just the parent process for other
processes that run under it, and bash itself has not used any CPU time yet.
2. View processes: to view all running processes, use either of the following options with ps –
[root@rhel7 ~]# ps -A
[root@rhel7 ~]# ps -e
3. View Processes not associated with a terminal : View all processes except both session leaders and
processes not associated with a terminal.
[root@rhel7 ~]# ps -a
PID TTY TIME CMD
27011 pts/0 00:00:00 man
27016 pts/0 00:00:00 less
27499 pts/1 00:00:00 ps
4. View all the processes except session leaders :
[root@rhel7 ~]# ps -d
5. View all processes except those that fulfill the specified conditions (negates the selection) :
Example – If you want to see only session leader and processes not associated with a terminal. Then,
run
[root@rhel7 ~]# ps -a -N
OR