Unit 3 OS Notes

Process Concept

 A batch system executes jobs, whereas a time-shared system has user
programs, or tasks. The terms job and process are used almost
interchangeably.
 A process is a program in execution. A process is more than the program
code, which is sometimes known as the text section. It also includes the
current activity, as represented by the value of the program counter and the
contents of the processor's registers.
 A process generally also includes the process stack, which contains
temporary data (such as function parameters, return addresses, and local
variables), and a data section, which contains global variables.

Process State

As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process. Each process may be in one of the following
states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
It is important to realize that only one process can be running on any processor at
any instant.
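The five states and the legal transitions between them can be sketched as a small model (illustrative only; the state names mirror the list above, the transition table is an assumption drawn from the usual state diagram):

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions implied by the usual process-state diagram.
TRANSITIONS = {
    State.NEW: {State.READY},                  # admitted by the OS
    State.READY: {State.RUNNING},              # scheduler dispatch
    State.RUNNING: {State.READY,               # interrupt / time slice over
                    State.WAITING,             # I/O or event wait
                    State.TERMINATED},         # exit
    State.WAITING: {State.READY},              # I/O or event completion
    State.TERMINATED: set(),                   # final state
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]
```

Note that a process never moves directly from Waiting to Running: it must first be made Ready and then be dispatched.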

Process Control Block

Each process is represented in the operating system by a process control block
(PCB), also called a task control block. A PCB is shown in Figure. It contains
many pieces of information associated with a specific process, including these:

• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack pointers,
and general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on. In brief, the PCB simply
serves as the repository for any information that may vary from process to process.
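A toy PCB mirroring the fields listed above might look like the following (illustrative only; a real PCB is a kernel data structure, such as Linux's task_struct, with many more fields):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    base: int = 0                                   # memory-management: base register
    limit: int = 0                                  # memory-management: limit register
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=1)   # a newly created process starts in the "new" state
```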

Process Scheduling

The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The objective of time sharing is to switch the CPU
among processes so frequently that users can interact with each program while it is
running. To meet these objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for program execution
on the CPU. For a single-processor system, there will never be more than one
running process. If there are more processes, the rest will have to wait until the
CPU is free and can be rescheduled.

Scheduling Queues
As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.
The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence
of a particular event, such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since
there are many processes in the system, the disk may be busy with the I/O request
of some other process. The process therefore may have to wait for the disk. The list
of processes waiting for a particular I/O device is called a device queue. Each
device has its own device queue.
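The ready queue and device queues can be sketched with simple FIFO queues (a toy model; the device names and PIDs are hypothetical, and real kernels link PCBs directly rather than storing PIDs):

```python
from collections import deque

ready_queue = deque()
device_queues = {"disk0": deque(), "tty0": deque()}   # one queue per device

def admit(pid):
    ready_queue.append(pid)            # new process joins the ready queue

def dispatch():
    return ready_queue.popleft()       # scheduler picks the head of the queue

def request_io(pid, device):
    device_queues[device].append(pid)  # process waits in the device queue

def io_complete(device):
    ready_queue.append(device_queues[device].popleft())  # back to ready

admit(1); admit(2)
running = dispatch()          # process 1 is dispatched
request_io(running, "disk0")  # it issues an I/O request and blocks
io_complete("disk0")          # I/O done: process 1 becomes ready again
```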

A common representation for a discussion of process scheduling is a queueing
diagram, such as that in Figure. Each rectangular box represents a queue. Two
types of queues are present: the ready queue and a set of device queues. The circles
represent the resources that serve the queues, and the arrows indicate the flow of
processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected
for execution, or is dispatched. Once the process is allocated the CPU and is
executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new subprocess and wait for the subprocess's
termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the
ready state and is then put back in the ready queue. A process continues this cycle
until it terminates, at which time it is removed from all queues and has its PCB and
resources de-allocated.

Schedulers
A process migrates among the various scheduling queues throughout its lifetime.
The operating system must select, for scheduling purposes, processes from these
queues in some fashion. The selection process is carried out by the appropriate
scheduler.

The long-term scheduler, or job scheduler, selects processes from the pool of
processes spooled to mass storage and loads them into memory for execution.

The short-term scheduler, or CPU scheduler, selects from among the processes
that are ready to execute and allocates the CPU to one of them.

Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling. This medium-term scheduler is
diagrammed in Figure 3.8. The key idea behind a medium-term scheduler is that
sometimes it can be advantageous to remove processes from memory (and from
active contention for the CPU) and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping. The process is
swapped out, and is later swapped in, by the medium-term scheduler. Swapping
may be necessary to improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed
up.

The primary distinction between these two schedulers lies in frequency of
execution. The short-term scheduler must select a new process for the CPU
frequently. A process may execute for only a few milliseconds before waiting for
an I/O request.
The long-term scheduler executes much less frequently; minutes may separate the
creation of one new process and the next. The long-term scheduler controls the
degree of multiprogramming (the number of processes in memory). If the degree of
multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.

Context Switch
 Interrupts cause the operating system to change a CPU from its current task
and to run a kernel routine. Such operations happen frequently on general-
purpose systems.
 When an interrupt occurs, the system needs to save the current context of the
process currently running on the CPU so that it can restore that context when
its processing is done, essentially suspending the process and then resuming
it.
 The context is represented in the PCB of the process; it includes the value of
the CPU registers, the process state and memory-management information.
 Generically, we perform a state save of the current state of the CPU, be it in
kernel or user mode, and then a state restore to resume operations.
 Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known
as a context switch. When a context switch occurs, the kernel saves the
context of the old process in its PCB and loads the saved context of the new
process scheduled to run. Context-switch time is pure overhead, because the
system does no useful work while switching. Its speed varies from machine
to machine, depending on the memory speed, the number of registers that
must be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). Typical speeds are a few
milliseconds.
 Context-switch times are highly dependent on hardware support. For
instance, some processors (such as the Sun UltraSPARC) provide multiple
sets of registers. A context switch here simply requires changing the pointer
to the current register set. Of course, if there are more active processes than
there are register sets, the system resorts to copying register data to and from
memory, as before. Also, the more complex the operating system, the more
work must be done during a context switch. Advanced memory-management
techniques may require extra data to be switched with each context. For
instance, the address space of the current process must be preserved as the
space of the next task is prepared for use. How the address space is
preserved, and what amount of work is needed to preserve it, depend on the
memory-management method of the operating system.
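The state-save/state-restore sequence can be illustrated with a toy context switch (everything here is hypothetical; a real switch operates on hardware registers in kernel mode):

```python
# The CPU's live "register" state, reduced to a program counter and one
# accumulator for illustration.
cpu = {"pc": 0, "acc": 0}

def context_switch(old_pcb, new_pcb):
    old_pcb["context"] = dict(cpu)   # state save: live registers into old PCB
    cpu.update(new_pcb["context"])   # state restore: new PCB's saved context

p1 = {"context": {"pc": 100, "acc": 1}}
p2 = {"context": {"pc": 200, "acc": 2}}

cpu.update(p1["context"])    # p1 is running
cpu["pc"] = 104              # ... p1 executes a few instructions
context_switch(p1, p2)       # interrupt occurs: switch to p2
```

After the switch, the CPU holds p2's context, and p1's PCB records exactly where p1 left off, ready to be resumed later.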
Scheduling Criteria
Various criteria or characteristics that help in designing a good scheduling
algorithm are:
 CPU Utilization − A scheduling algorithm should be designed to keep the
CPU as busy as possible. It should make efficient use of the CPU.
 Throughput − Throughput is the amount of work completed in a unit of
time; in other words, it is the number of processes completed per unit of
time. The scheduling algorithm must try to maximize the number of jobs
processed per time unit.
 Response time − Response time is the time taken to start responding to the
request. A scheduler must aim to minimize response time for interactive
users.
 Turnaround time − Turnaround time refers to the time between the
moment of submission of a job/ process and the time of its completion.
Thus how long it takes to execute a process is also an important factor.
 Waiting time − It is the time a job waits for resource allocation when
several jobs are competing in multiprogramming system. The aim is to
minimize the waiting time.
 Fairness − A good scheduler should make sure that each process gets its
fair share of the CPU.
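For example, the waiting-time and turnaround-time criteria can be computed for a simple first-come, first-served schedule (the burst times are hypothetical, and all processes are assumed to arrive at time 0):

```python
bursts = [24, 3, 3]   # CPU bursts of P1, P2, P3 in ms

waiting, clock = [], 0
for b in bursts:
    waiting.append(clock)   # time spent in the ready queue before running
    clock += b              # advance the clock by this process's burst
turnaround = [w + b for w, b in zip(waiting, bursts)]

avg_wait = sum(waiting) / len(waiting)              # (0 + 24 + 27) / 3 = 17.0
avg_turnaround = sum(turnaround) / len(turnaround)  # (24 + 27 + 30) / 3 = 27.0
```

Running the short jobs first would cut the average waiting time sharply, which is exactly the kind of trade-off these criteria are used to evaluate.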

Thread
 A thread is a flow of execution through the process code, with its own
program counter that keeps track of which instruction to execute next,
system registers which hold its current working variables, and a stack which
contains the execution history.
 A thread shares with its peer threads some information, such as the code
segment, data segment and open files. When one thread alters a shared
data item, all other threads see the change.
 A thread is also called a lightweight process. Threads provide a way to
improve application performance through parallelism. Threads represent a
software approach to improving operating-system performance by reducing
overhead; in other respects, a thread is equivalent to a classical process.
 Each thread belongs to exactly one process and no thread can exist outside
a process. Each thread represents a separate flow of control. Threads have
been successfully used in implementing network servers and web servers.
They also provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
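The sharing described above can be demonstrated with Python's threading module (the worker function and data are illustrative):

```python
import threading

# Two threads of one process share the data section (the `totals` list),
# while each has its own stack and local variables.
totals = []
lock = threading.Lock()

def worker(name, items):
    s = sum(items)        # local variables live on this thread's own stack
    with lock:            # shared data is protected while being updated
        totals.append((name, s))

t1 = threading.Thread(target=worker, args=("t1", [1, 2, 3]))
t2 = threading.Thread(target=worker, args=("t2", [10, 20]))
t1.start(); t2.start()
t1.join(); t2.join()
```

Both threads run in the same address space, so `totals` is visible to each; two separate processes would need an explicit inter-process communication mechanism instead.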

Difference between Process and Thread


S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.

Advantages of Thread

 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale
and efficiency.
Types of Thread
Threads are implemented in following two ways −
 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System managed threads acting on
kernel, an operating system core.
Difference between User-Level & Kernel-Level Thread

S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 | User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
4 | Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

User Level Threads


In this case, the thread management kernel is not aware of the existence of
threads. The thread library contains code for creating and destroying threads, for
passing messages and data between threads, for scheduling thread execution and
for saving and restoring thread contexts. The application starts with a single
thread.

Advantages

 Thread switching does not require Kernel mode privileges.
 User level threads can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages

 In a typical operating system, most system calls are blocking.


 Multithreaded application cannot take advantage of multiprocessing.
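A minimal sketch of user-level threading, using Python generators as cooperative threads and a plain loop as the user-space scheduler (illustrative only; a real thread library saves and restores machine contexts, but the key point is the same: the kernel sees only one flow of control, and all switching happens in user space):

```python
from collections import deque

def run(threads):
    """A round-robin user-space scheduler over generator 'threads'."""
    ready = deque(threads)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run the thread until it yields
            ready.append(t)         # then move it to the back of the queue
        except StopIteration:
            pass                    # thread finished; drop it
    return trace

def thread(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"         # yielding = voluntary context switch

trace = run([thread("a", 2), thread("b", 1)])
```

Because switching is voluntary, one thread making a blocking system call would stall every thread in the process, which is exactly the first disadvantage listed above.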

Kernel Level Threads


In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by
the operating system. Any application can be programmed to be multithreaded.
All of the threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a
thread basis. The Kernel performs thread creation, scheduling and management in
Kernel space. Kernel threads are generally slower to create and manage than the
user threads.

Advantages

 Kernel can simultaneously schedule multiple threads from the same process
on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread
of the same process.
 Kernel routines themselves can be multithreaded.

Disadvantages

 Kernel threads are generally slower to create and manage than the user
threads.
 Transfer of control from one thread to another within the same process
requires a mode switch to the Kernel.
System Model
A process must request a resource before using it and must release the resource
after using it.
A process may request as many resources as it requires to carry out its designated
task. Obviously, the number of resources requested may not exceed the total
number of resources available in the system. In other words, a process cannot
request three printers if the system has only two.
Under the normal mode of operation, a process may utilize a resource in only the
following sequence:
1. Request. If the request cannot be granted immediately (for example, if the
resource is being used by another process), then the requesting process must wait
until it can acquire the resource.
2. Use. The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).
3. Release. The process releases the resource.
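The request/use/release sequence above can be sketched with a lock standing in for a single-instance resource such as a printer (a minimal illustration; the names are hypothetical):

```python
import threading

printer = threading.Lock()   # one instance of the "printer" resource
log = []

def print_job(name):
    printer.acquire()        # 1. Request: wait until the resource is free
    log.append(name)         # 2. Use: operate on the resource
    printer.release()        # 3. Release: give the resource back

print_job("job1")
print_job("job2")
```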

A system model or structure consists of a fixed number of resources to be circulated
among some competing processes. The resources are partitioned into several
types, each consisting of some number of identical instances. Memory space,
CPU cycles, directories and files, and I/O devices like keyboards, printers and
CD/DVD drives are prime examples of resource types. If a system has 2 CPUs,
then the resource type CPU has two instances.

Deadlock Characterization
A deadlock state can occur when the following four circumstances hold simultaneously
within a system:

 Mutual exclusion: At least one resource must be held in a non-sharable
mode; i.e., only a single process at a time can utilize the resource. If another
process requests that resource, the requesting process must be delayed until
the resource is released.
 Hold and wait: A process must be holding at least one resource and waiting
to acquire additional resources that are currently being held by other
processes.
 No preemption: Resources cannot be preempted; i.e., a resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
 Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that
P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2,
and so on, with Pn waiting for a resource held by P0. The circular-wait
condition implies the hold-and-wait condition, so the four conditions are not
completely independent.

Methods for Handling Deadlocks


Normally you can deal with deadlock issues in one of the three ways mentioned
below:

 You can employ a protocol for preventing or avoiding deadlocks, ensuring
that the system will never enter a deadlock state.
 You can allow the system to enter a deadlock state, detect it, and then
recover.
 You can ignore the problem altogether and assume that deadlocks never
occur in the system.

It is generally recommended to deal with deadlock using the first option.

Deadlock Prevention And Avoidance


Deadlock Characteristics
As discussed in the previous post, deadlock has following characteristics.
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait
Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion


It is not possible to violate mutual exclusion because some resources,
such as the tape drive and printer, are inherently non-shareable.
Eliminate Hold and wait
1. Allocate all required resources to the process before the start of its
execution; this way the hold-and-wait condition is eliminated, but it leads to
low device utilization. For example, if a process requires a printer at a later
time and we have allocated the printer before the start of its execution, the
printer will remain blocked until the process has completed its execution.
2. The process will make a new request for resources after releasing the
current set of resources. This solution may lead to starvation.

Eliminate No Preemption
Preempt resources from a process when those resources are required by other
higher-priority processes.

Eliminate Circular Wait


Each resource will be assigned a numerical number. A process can
request resources only in increasing order of numbering.
For example, if process P1 is allocated resource R5, then a subsequent request
by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests
for resources numbered higher than R5 will be granted.
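The resource-ordering rule can be sketched with locks acquired in increasing index order (illustrative; the numbering scheme follows the R1 < R2 < ... example above):

```python
import threading

# Numbered resources. Every process acquires locks in increasing index
# order, so a cycle of waits (circular wait) cannot form.
locks = {1: threading.Lock(), 2: threading.Lock(), 5: threading.Lock()}

def acquire_in_order(*indices):
    for i in sorted(indices):    # always request in increasing order
        locks[i].acquire()
    return sorted(indices)

def release_all(indices):
    for i in reversed(indices):  # release in the opposite order
        locks[i].release()

held = acquire_in_order(5, 1, 2)   # requested out of order, acquired as R1, R2, R5
release_all(held)
```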

Banker’s Algorithm
The banker’s algorithm is a resource-allocation and deadlock-avoidance
algorithm that tests for safety by simulating allocation up to the predetermined
maximum possible amounts of all resources, then makes a “safe-state” check
to test for possible activities, before deciding whether allocation should be
allowed to continue.
Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of
resources types.
Available :
 It is a 1-d array of size ‘m’ indicating the number of available
resources of each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj
Max :
 It is a 2-d array of size ‘n*m’ that defines the maximum demand of
each process in a system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of
resource type Rj.
Allocation :
 It is a 2-d array of size ‘n*m’ that defines the number of resources of
each type currently allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently
allocated ‘k’ instances of resource type Rj
Need :
 It is a 2-d array of size ‘n*m’ that indicates the remaining resource
need of each process.
 Need [ i, j ] = k means process Pi currently needs ‘k’ instances of
resource type Rj for its execution.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
 Allocationi specifies the resources currently allocated to process Pi, and
Needi specifies the additional resources that process Pi may still
request to complete its task.
 Banker’s algorithm consists of Safety algorithm and Resource request
algorithm
 Safety Algorithm
 The algorithm for finding out whether or not a system is in a safe state
can be described as follows:
 1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false for i = 1, 2, 3, ..., n
 2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, go to step (4).
 3) Work = Work + Allocation[i]
Finish[i] = true
Go to step (2).
 4) If Finish[i] = true for all i,
then the system is in a safe state.
 Resource-Request Algorithm
 Let Requesti be the request array for process Pi. Requesti [j] = k means
process Pi wants k instances of resource type Rj. When a request for
resources is made by process Pi, the following actions are taken:
 1) If Requesti <= Needi
Goto step (2) ; otherwise, raise an error condition, since the process
has exceeded its maximum claim.
 2) If Requesti <= Available
Goto step (3); otherwise, Pi must wait, since the resources are not
available.
 3) Have the system pretend to have allocated the requested resources
to process Pi by modifying the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
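The safety algorithm above can be implemented directly (the example matrices are the classic 5-process, 3-resource illustration; any consistent Allocation/Max/Available would do):

```python
def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)            # step 1: Work = Available
    finish = [False] * n
    safe_seq = []
    while True:
        found = False
        for i in range(n):            # step 2: find i with Finish[i] = false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):    # step 3: Work = Work + Allocation[i]
                    work[j] += allocation[i][j]
                finish[i] = True
                safe_seq.append(i)
                found = True
        if not found:                 # step 4: safe iff all Finish[i] = true
            return all(finish), safe_seq

available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, seq = is_safe(available, max_need, allocation)
```

Here `safe` is True, meaning there exists an order in which every process can obtain its maximum demand and finish; `seq` records one such safe sequence.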

Deadlock Detection
1. If resources have a single instance:
In this case, for deadlock detection we can run an algorithm to check for a
cycle in the Resource Allocation Graph. The presence of a cycle in the graph
is a sufficient condition for deadlock.
In the diagram above, resource 1 and resource 2 have single instances, and
there is a cycle R1 → P1 → R2 → P2 → R1. So, deadlock is confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not sufficient condition for
deadlock; in this case, the system may or may not be in deadlock,
depending on the situation.
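For the single-instance case, detection reduces to finding a cycle in the wait-for graph, which a depth-first search can do (a sketch; the two example graphs are hypothetical, with an edge P → Q meaning "P waits for a resource held by Q"):

```python
def has_cycle(graph):
    """Depth-first search for a cycle in a directed wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {p: WHITE for p in graph}

    def dfs(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q) == GRAY:              # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in graph)

# The R1 -> P1 -> R2 -> P2 -> R1 situation collapses to P1 <-> P2.
deadlocked = has_cycle({"P1": ["P2"], "P2": ["P1"]})
ok = has_cycle({"P1": ["P2"], "P2": []})
```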
Deadlock Recovery
A traditional operating system such as Windows doesn’t deal with deadlock
recovery, as it is a time- and space-consuming process. Real-time operating
systems use deadlock recovery.
Recovery methods
1. Killing processes: kill all the processes involved in the deadlock, or kill
the processes one by one, checking for deadlock after each kill and
repeating until the system recovers from deadlock.
2. Resource Preemption: resources are preempted from the processes
involved in the deadlock, and the preempted resources are allocated to other
processes so that there is a possibility of recovering the system from
deadlock. In this case, the system may go into starvation.

Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In
both methods, the system reclaims all resources allocated to the terminated
processes.
• Abort all deadlocked processes. This method clearly will break the
deadlock cycle, but at great expense; the deadlocked processes may have
computed for a long time, and the results of these partial computations
must be discarded and probably will have to be recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated. This
method incurs considerable overhead, since, after each process is aborted,
a deadlock-detection algorithm must be invoked to determine whether
any processes are still deadlocked.
Many factors may affect which process is chosen, including:
1. What the priority of the process is
2. How long the process has computed and how much longer the process
will compute before completing its designated task
3. How many and what type of resources the process has used (for example,
whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch

Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes until
the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to
be addressed:
1. Selecting a victim. Which resources and which processes are to be
preempted? As in process termination, we must determine the order of
preemption to minimize cost. Cost factors may include such parameters
as the number of resources a deadlocked process is holding and the
amount of time the process has thus far consumed during its execution.
2. Rollback. If we preempt a resource from a process, what should be done
with that process? Clearly, it cannot continue with its normal execution; it
is missing some needed resource. We must roll back the process to some
safe state and restart it from that state.
Since, in general, it is difficult to determine what a safe state is, the
simplest solution is a total rollback: Abort the process and then restart
it. Although it is more effective to roll back the process only as far as
necessary to break the deadlock, this method requires the system to keep
more information about the state of all running processes.
3. Starvation. How do we ensure that starvation will not occur? That is,
how can we guarantee that resources will not always be preempted from
the same process?
In a system where victim selection is based primarily on cost factors,
it may happen that the same process is always picked as a victim. As
a result, this process never completes its designated task, a starvation
situation that must be dealt with in any practical system. Clearly, we
must ensure that a process can be picked as a victim only a (small) finite
number of times. The most common solution is to include the number of
rollbacks in the cost factor.
