
OPERATING SYSTEM BCA III SEM (UNIT 2)

PROCESS MANAGEMENT
Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be


implemented in the system.
To put it in simple terms, we write our computer programs in a text file, and
when we execute the program it becomes a process which performs all the
tasks mentioned in the program.
When a program is loaded into memory and becomes a process, its address
space can be divided into four sections: stack, heap, text and data, described
below.

From the Desk of Mr. Manjunatha Balluli, S.M.D College, BALLARI. Page 1 of 12

SL.No Component & Description
1. Stack
The process stack contains temporary data such as function parameters,
return addresses and local variables.
2. Heap
Memory that is dynamically allocated to the process during its run time.
3. Text
The compiled program code. The program counter keeps track of the
current instruction being executed within this section.
4. Data
This section contains the global and static variables.

Process Life Cycle:


When a process executes, it passes through different states. These states may
differ across operating systems, and their names are not standardized.
In general, a process can be in one of the following five states at a time.
SL.No State & Description
1. Start
This is the initial state, entered when a process is first created.
2. Ready
The process is waiting to be assigned to a processor. Ready processes
are waiting for the operating system to allocate the processor to them
so that they can run. A process may enter this state after the Start
state, or after running, when it is interrupted by the scheduler so that
the CPU can be assigned to another process.
3. Running
Once the process has been assigned to a processor by the OS
scheduler, its state is set to running and the processor executes its
instructions.
4. Waiting
The process moves into the waiting state when it needs to wait for a
resource, such as user input or a file becoming available.
5. Terminated or Exit
Once the process finishes its execution, or is terminated by the
operating system, it moves to the terminated state, where it waits to
be removed from main memory.


Process Control Block (PCB):


A Process Control Block is a data structure maintained by the Operating System
for every process. The PCB is identified by an integer process ID (PID). A PCB
keeps all the information needed to keep track of a process as listed below in the
table

SL.No Information & Description
1. Process State
The current state of the process, i.e. whether it is ready, running,
waiting, and so on.
2. Process privileges
Required to allow or disallow access to system resources.
3. Process ID
A unique identifier for each process in the operating system.
4. Pointer
A pointer to the parent process.
5. Program Counter
A pointer to the address of the next instruction to be executed for this
process.
6. CPU registers
The contents of the CPU registers, which must be saved when the
process leaves the running state so that it can resume later.
7. CPU Scheduling Information
Process priority and other scheduling information required to
schedule the process.
8. Memory management information
Information such as the page table, memory limits and segment table,
depending on the memory system used by the operating system.
9. Accounting information
The amount of CPU time used by the process, time limits, etc.
10. I/O status information
A list of I/O devices allocated to the process.
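The table translates naturally into a C structure; the field names below are illustrative, since real kernels (for example, Linux's `task_struct`) define their own, much larger, layouts:

```c
#include <stddef.h>

#define NREGS 16

/* Illustrative PCB layout mirroring the ten entries above. */
typedef struct pcb {
    int            state;            /* 1. process state (ready/running/...) */
    unsigned int   privileges;       /* 2. allowed access to resources       */
    int            pid;              /* 3. unique process ID                 */
    struct pcb    *parent;           /* 4. pointer to the parent process     */
    unsigned long  program_counter;  /* 5. address of the next instruction   */
    unsigned long  regs[NREGS];      /* 6. saved CPU register contents       */
    int            priority;         /* 7. CPU scheduling information        */
    void          *page_table;       /* 8. memory-management information     */
    unsigned long  cpu_time_used;    /* 9. accounting information            */
    int            io_devices;       /* 10. I/O status information           */
} pcb;
```

The kernel allocates one such record per process at creation time and frees it at termination.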


The architecture of a PCB is completely dependent on the operating system
and may contain different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.

Process Scheduling:
Definition
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process
is changed, its PCB is unlinked from its current queue and moved to its new
state queue.


The Operating System maintains the following important process scheduling


queues -

Job queue - This queue keeps all the processes in the system.
Ready queue - This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
Device queues - The processes which are blocked due to unavailability of an
I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready queue and the run queue, which can have only one entry per processor
core on the system.
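The unlink-and-relink step that moves a PCB between state queues can be sketched with singly linked lists; the queue indices and function name below are invented for illustration:

```c
#include <stddef.h>

typedef struct pcb {
    int pid;
    struct pcb *next;
} pcb;

/* One queue head per state: 0 = ready queue, 1 = device (waiting) queue. */
static pcb *queues[2];

/* Unlink p from whichever queue holds it, then push it onto queue q. */
void move_to_queue(pcb *p, int q) {
    for (int i = 0; i < 2; i++) {
        for (pcb **cur = &queues[i]; *cur; cur = &(*cur)->next) {
            if (*cur == p) { *cur = p->next; break; }
        }
    }
    p->next = queues[q];
    queues[q] = p;
}
```

A state change is thus just pointer surgery on the PCB: the process itself, and its memory, never move.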

Two-State Process Model:


Two-state process model refers to running and non-running states which are
described below

Sl.No State & Description
1. Running
The process that is currently being executed on the CPU.
2. Not Running
Processes that are not running are kept in a queue, waiting for their
turn to execute; a newly created process also starts here until the
dispatcher selects it. Each entry in the queue is a pointer to a
particular process, and the queue is implemented using a linked list.
The dispatcher works as follows: when a process is interrupted, it is
transferred to the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a
process from the queue to execute.
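A toy FIFO dispatcher for the Not Running queue might look like this (all names are illustrative):

```c
#include <stddef.h>

typedef struct proc { int pid; struct proc *next; } proc;

static proc *head, *tail;          /* the Not Running queue (FIFO)        */

/* A new or interrupted process goes to the back of the queue. */
void enqueue(proc *p) {
    p->next = NULL;
    if (tail) tail->next = p; else head = p;
    tail = p;
}

/* The dispatcher picks the process at the front to run next. */
proc *dispatch(void) {
    proc *p = head;
    if (p) { head = p->next; if (!head) tail = NULL; }
    return p;
}
```

Interrupted processes re-enter via `enqueue`, so every process eventually reaches the front and gets the CPU again.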

Schedulers:
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types -
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler


Long Term Scheduler:


It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from
the job queue and loads them into memory for execution, making them
available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O-bound and processor-bound ones. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must equal the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal;
time-sharing operating systems have no long-term scheduler. The long-term
scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with a chosen set of criteria. It handles the change
from the ready state to the running state: the CPU scheduler selects one process
from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to
execute next. Short-term schedulers run far more frequently than long-term
schedulers, and are therefore faster.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The
medium-term scheduler is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion, so to remove
it from memory and make space for other processes, the suspended process is
moved to secondary storage. This is called swapping, and the process is said to
be swapped out or rolled out. Swapping may be necessary to improve the
process mix.

Comparison among Scheduler:

Sl.No Long-Term vs Short-Term vs Medium-Term Scheduler
1. The long-term scheduler is a job scheduler; the short-term scheduler is a
CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The short-term scheduler is the fastest of the three; the long-term
scheduler is the slowest; the medium-term scheduler lies in between.
3. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler provides less control over it; the medium-term
scheduler reduces it.
4. The long-term scheduler is almost absent or minimal in time-sharing
systems; the short-term scheduler is also minimal there; the medium-term
scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the job pool and loads
them into memory for execution; the short-term scheduler selects from
processes that are ready to execute; the medium-term scheduler can
re-introduce a swapped-out process into memory so that its execution can
continue.

Context Switch:
A context switch is the mechanism for storing and restoring the state, or
context, of a CPU in a Process Control Block so that a process's execution can
be resumed from the same point at a later time. Using this technique, a context
switcher enables multiple processes to share a single CPU. Context switching is
an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored in its process
control block. Then the state of the process to run next is loaded from its own
PCB and used to set the program counter, registers, etc. At that point, the
second process can start executing.


Context switches are computationally intensive, since register and memory state
must be saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers. When a process is
switched out, the following information is stored for later use.

• Program counter
• Scheduling information
• Base and limit register values
• Currently used registers
• Changed state
• I/O state information
• Accounting information
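In miniature, the save/restore step can be modelled as plain structure copies between the CPU state and a slot in the PCB; a real kernel does this in assembly, and all names here are invented for illustration:

```c
#include <string.h>

#define NREGS 8

typedef struct {                   /* the CPU state that must be preserved */
    unsigned long pc;              /* program counter                      */
    unsigned long regs[NREGS];     /* general-purpose registers            */
} cpu_context;

/* On a switch, the running process's context goes into its PCB slot... */
void save_context(cpu_context *pcb_slot, const cpu_context *cpu) {
    memcpy(pcb_slot, cpu, sizeof *cpu);
}

/* ...and the next process's saved context is loaded back onto the CPU. */
void restore_context(cpu_context *cpu, const cpu_context *pcb_slot) {
    memcpy(cpu, pcb_slot, sizeof *pcb_slot);
}
```

Because save followed by restore reproduces the context exactly, the switched-out process cannot tell that it ever lost the CPU.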

Multi-Threading:
What is Thread?
A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, its own registers
which hold its current working variables, and its own stack which contains its
execution history.
A thread shares with its peer threads information such as the code segment, the
data segment and open files. When one thread alters a shared memory item, all
other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism: they are a software approach to
improving operating-system performance by reducing overhead, since a thread
behaves much like a classical process while being cheaper to create and switch.
Each thread belongs to exactly one process, and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
successfully used in implementing network servers and web servers. They also
provide a suitable foundation for the parallel execution of applications on
shared-memory multiprocessors.


Difference between Process and Thread:

Sl.No Process Thread
1. A process is heavyweight, or resource-intensive. A thread is lightweight,
taking fewer resources than a process.
2. Process switching needs interaction with the operating system. Thread
switching does not need to interact with the operating system.
3. In multiple-processing environments, each process executes the same code
but has its own memory and file resources. All threads can share the same
set of open files and child processes.
4. If one process is blocked, no other process can execute until the first is
unblocked. While one thread is blocked and waiting, a second thread in the
same task can run.
5. Multiple processes without threads use more resources. Multiple-threaded
processes use fewer resources.
6. Each process operates independently of the others. One thread can read,
write or change another thread's data.


Advantages of Thread:
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale
and efficiency.

Types of Thread:
Threads are implemented in the following two ways:

• User Level Threads − User managed threads.


• Kernel Level Threads − Operating System managed threads acting on
kernel, an operating system core.

User Level Threads:


In this case, the kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing messages
and data between threads, for scheduling thread execution, and for saving and
restoring thread contexts. The application starts with a single thread.

ADVANTAGES

• Thread switching does not require Kernel mode privileges.


• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.

DISADVANTAGES

• In a typical operating system, most system calls are blocking, so when one
user-level thread blocks, the kernel blocks the entire process.

• A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads:


In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly
by the operating system. Any application can be programmed to be
multithreaded. All of the threads within an application are supported within a
single process.


The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a
thread basis. The Kernel performs thread creation, scheduling and management
in Kernel space. Kernel threads are generally slower to create and manage than
user threads.

ADVANTAGES

• The Kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another
thread of the same process.
• Kernel routines themselves can be multithreaded.

DISADVANTAGES

• Kernel threads are generally slower to create and manage than user
threads.
• Transfer of control from one thread to another within the same process
requires a mode switch to the Kernel.

Multithreading Models:
Some operating systems provide a combined user-level and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on
multiple processors, and a blocking system call need not block the entire
process. There are three multithreading models:

• Many to many relationship.


• Many to one relationship.
• One to one relationship.

Many to Many Model:


The many-to-many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads.

In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor machine.
This model provides the best level of concurrency: when a thread performs a
blocking system call, the kernel can schedule another thread for execution.


Many to One Model:


The many-to-one model maps many user-level threads to one kernel-level
thread. Thread management is done in user space by the thread library. When a
thread makes a blocking system call, the entire process is blocked. Only one
thread can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.

If user-level thread libraries are implemented on an operating system whose
kernel does not support threads, the many-to-one model is used.

One to One Model:


In the one-to-one model there is a one-to-one relationship between user-level
threads and kernel-level threads. This model provides more concurrency than
the many-to-one model. It allows another thread to run when a thread makes a
blocking system call, and it lets multiple threads execute in parallel on
multiprocessors.

The disadvantage of this model is that creating a user thread requires creating
the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the
one-to-one model.

Difference between User-Level & Kernel-Level Thread:

SL.No User-Level Threads Kernel-Level Threads
1. User-level threads are faster to create and manage. Kernel-level threads
are slower to create and manage.
2. Implementation is by a thread library at the user level. The operating
system supports creation of kernel threads.
3. A user-level thread is generic and can run on any operating system. A
kernel-level thread is specific to the operating system.
4. Multi-threaded applications cannot take advantage of multiprocessing.
Kernel routines themselves can be multithreaded.
