
POWER SYSTEM OPERATION AND CONTROL (18EE81)

Module-5: Power System Security and State Estimation of Power Systems


Power System Security:
Introduction: All equipment in a power system is designed such that it can be disconnected from the network. The reasons for these disconnections are generally divided into two categories: scheduled outages and forced outages.
Scheduled outages are typically done to perform maintenance or replacement of equipment, and, as the name implies, the time of disconnect is scheduled by operators to minimize the impact on the reliability of the system.
Forced outages are those that happen at random and may be due to internal component failures or outside influences such as lightning, wind storms, ice build-up, etc.
Because the specific times at which forced outages occur are unpredictable, the system must be operated at all times in such a way that the system will not be left in a dangerous condition should any credible outage event occur. Since power system equipment is designed to be operated within certain limits, most pieces of equipment are protected by automatic devices that can cause equipment to be switched out of the system if these limits are violated. If a forced outage occurs on a system that leaves it operating with limits violated on other components, the event may be followed by a series of further actions that switch other equipment out of service. If this process of cascading failures continues, the entire system or large parts of it may completely collapse. This is usually referred to as a system blackout.
System Security: System security can be divided into three major functions that are carried out in an operations control centre:
1. System monitoring,
2. Contingency analysis, and
3. Security-constrained optimal power flow.
1. System monitoring: It provides the operators of the power system with pertinent up-to-date information on the conditions on the power system. It is the most important function of the three. From the time that utilities went beyond systems of one unit supplying a group of loads, effective operation of the system required that critical quantities be measured and the values of the measurements be transmitted to a central location. Such systems of measurement and data transmission, called telemetry systems or energy management systems (EMS), have evolved to schemes that can monitor voltages, currents, power flows, and the status of circuit breakers and switches in every substation in a power system transmission network. In addition, other critical information such as frequency, generator unit outputs and transformer tap positions can also be telemetered. With so much information telemetered simultaneously, digital computers are usually installed in operations control centers to gather the telemetered data, process them, and place them in a database from which operators can display information on large display monitors. More importantly, the computer can check incoming information against pre-stored limits and alarm the operators in the event of an overload or out-of-limit voltage.
State estimation is often used in such systems to combine telemetered system data with system models to produce the best estimate (in a statistical sense) of the current power system conditions or state. Such systems are usually combined with supervisory control systems that allow operators to control circuit breakers and disconnect switches and transformer taps remotely. Together, these systems are often referred to as SCADA (supervisory control and data acquisition) systems. The SCADA system allows a few operators to monitor the generation and high-voltage transmission systems and to take action to correct overloads or out-of-limit voltages.
2. Contingency analysis/evaluation: It is much more demanding and is normally performed in three distinct stages - contingency definition, selection and evaluation. Contingency definition gives the list of contingencies
to be processed whose probability of occurrence is high. Usually this list is large and hence, these
contingencies are ranked in rough order of severity employing contingency selection algorithms to shorten the
list. Contingency evaluation is then performed (using AC power flow) on the successive individual cases in
descending order of severity. The evaluation process is continued up to the point where no post-contingency
violations are encountered.



The results of this type of analysis allow systems to be operated defensively (in a protected manner). Many of the problems that occur on a power system can cause serious trouble within such a quick time period that the operator could not take action fast enough. This is often the case with cascading failures. Because of this, modern operations computers are equipped with contingency analysis programs that model possible system troubles before they arise. These programs are based on a model of the power system and are used to study outage events and alarm the operators to any potential overloads or out-of-limit voltages.
For example, the simplest form of contingency analysis can be put together with a standard power
flow program (such as the Gauss-Seidel, N-R, or FDPF methods) together with procedures to set up the power-flow
data for each outage to be studied by the power-flow program. Several variations of this type of contingency
analysis scheme involve fast solution methods, automatic contingency event selection, and automatic
initializing of the contingency power flows using actual system data and state estimation procedures.
3. Security-constrained optimal power flow: In this function, a contingency analysis is combined with an
optimal power flow which seeks to make changes to the optimal dispatch of generation, as well as other
adjustments, so that when a security analysis is run, no contingencies result in violations. To show how this
can be done, we shall divide the power system into four operating states.
Optimal dispatch: This is the state that the power system is in prior to any contingency. It is optimal
with respect to economic operation, but it may not be secure.
Post contingency: This is the state of the power system after a contingency has occurred. We shall
assume here that this condition has a security violation (line or transformer beyond its flow limit, or a
bus voltage outside the limit).
Secure dispatch: This is the state of the system with no contingency outages, but with corrections to
the operating parameters to account for security violations.
Secure post-contingency: This is the state of the system when the contingency is applied to the base
operating condition, with corrections.
Example: Suppose the power system consisting of two generators, a load, and a double-circuit line is to be operated with both generators supplying the load as shown below (ignore losses):

[Figure: NORMAL DISPATCH. Unit 1 supplies 500 MW and Unit 2 supplies 700 MW to a 1200 MW load; each circuit of the double-circuit line carries 250 MW.]

Assume that the system as shown is in economic dispatch, that is, the 500 MW from unit 1 and the 700 MW from unit 2 is the optimum dispatch. Each circuit of the double-circuit line can carry a maximum of 400 MW, so that there is no loading problem in the base operating condition.
Now, we shall assume that one of the two circuits making up the transmission line has been opened because of a failure. This results in:

[Figure: POST-CONTINGENCY STATE. Unit 1: 500 MW, Unit 2: 700 MW, load 1200 MW; the remaining circuit carries 500 MW, which is an overload.]

Now there is an overload on the remaining circuit. We shall assume for this example that we do not want this condition to arise and that we will correct the condition by lowering the generation on unit 1 to 400 MW. The generation on unit 2 is increased to 800 MW. The secure dispatch is:

[Figure: SECURE DISPATCH. Unit 1: 400 MW, Unit 2: 800 MW, load 1200 MW; each circuit carries 200 MW.]

Now, if the same contingency analysis is done, the post-contingency condition is:

[Figure: SECURE POST-CONTINGENCY STATE. Unit 1: 400 MW, Unit 2: 800 MW, load 1200 MW; the remaining circuit carries 400 MW, which is within its limit.]
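The arithmetic behind these four states can be checked with a short script. This is only an illustrative sketch: it assumes a lossless system in which unit 1's entire output crosses the double-circuit line to the load bus and splits equally between the identical circuits in service.

```python
# Sketch: verify circuit loadings for the four operating states of the
# two-generator, double-circuit example (lossless, identical circuits).
CIRCUIT_LIMIT_MW = 400.0

def circuit_flow(unit1_mw, circuits_in_service):
    """All of unit 1's output crosses the line and splits equally
    among the identical circuits still in service."""
    return unit1_mw / circuits_in_service

cases = {
    "normal dispatch":         (500.0, 2),   # 250 MW per circuit
    "post-contingency":        (500.0, 1),   # 500 MW -> overload
    "secure dispatch":         (400.0, 2),   # 200 MW per circuit
    "secure post-contingency": (400.0, 1),   # 400 MW -> just at the limit
}

for name, (unit1_mw, n_circuits) in cases.items():
    flow = circuit_flow(unit1_mw, n_circuits)
    status = "OK" if flow <= CIRCUIT_LIMIT_MW else "OVERLOAD"
    print(f"{name:25s}: {flow:6.1f} MW per circuit  [{status}]")
```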



By adjusting the generation on unit 1 and unit 2, we have prevented the post-contingency operating state from having an overload. This is the essence of what is called security corrections. Programs which can make control adjustments to the base or pre-contingency operation to prevent violations in the post-contingency conditions are called security-constrained optimal power flows, or SCOPF. These programs can take account of many contingencies and calculate adjustments to generator MW, generator voltages, transformer taps, interchange, etc.
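As a toy illustration of such a security correction (not a real SCOPF, which would solve an optimal power flow with contingency constraints), the sketch below shifts just enough generation from unit 1 to unit 2 so that the single-circuit outage no longer causes an overload; the lossless assumptions of the example are retained.

```python
# Toy "security correction" for the two-unit example: with one circuit out,
# the remaining circuit carries all of unit 1's output, so the only
# post-contingency constraint modelled is unit1_mw <= CIRCUIT_LIMIT_MW.
CIRCUIT_LIMIT_MW = 400.0
load_mw  = 1200.0
unit1_mw = 500.0                     # economic (pre-contingency) dispatch
unit2_mw = load_mw - unit1_mw        # 700 MW

shift = max(0.0, unit1_mw - CIRCUIT_LIMIT_MW)   # MW to move from unit 1 to unit 2
unit1_mw -= shift
unit2_mw += shift

print(f"secure dispatch: unit 1 = {unit1_mw:.0f} MW, unit 2 = {unit2_mw:.0f} MW "
      f"(shifted {shift:.0f} MW)")
```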
Factors Affecting Power System Security:
As a consequence of many widespread blackouts in interconnected power systems, the priorities for the operation of modern power systems have evolved to the following:
1. Operate the system in such a way that power is delivered reliably.
2. Within the constraints placed on the system operation by reliability considerations, the system will be operated most economically.
In power systems, transmission and generation systems have been designed with reliability in mind. This means that adequate generation has been installed to meet the load and that adequate transmission has been installed to deliver the generated power to the load. If operation of the system went on without sudden failures or without experiencing unanticipated operating states, then there would be no reliability problems. However, any piece of equipment in the system can fail, either due to internal causes or due to external causes such as lightning strikes, objects hitting transmission towers, or human errors in setting relays. It is highly uneconomical, if not impossible, to build a power system with so much redundancy (i.e., extra transmission lines, reserve generation, etc.) that failures never cause load to be dropped on a system. Rather, systems are designed so that the probability of dropping load is acceptably small. Thus, most power systems are designed to have sufficient redundancy to withstand all major failure events, but this does not guarantee that the system will be 100% reliable.
Within the design and economic limitations, it is the job of the operators to try to maximize the reliability of the system they have at any given time. Operators concentrate on the possible consequences and remedial actions required by two major types of failure events: transmission-line outages and generation-unit failures. Transmission-line failures cause changes in the flows and voltages on the transmission equipment remaining connected to the system. Therefore, the analysis of transmission failures requires methods to predict these flows and voltages so as to be sure they are within their respective limits. Generation failures can also cause flows and voltages to change in the transmission system, with the addition of dynamic problems involving system frequency and generator output.

Contingency Analysis: Detection of Network Problems


Operations personnel must know which line or generation outages will cause flows or voltages to fall outside limits. To predict the effects of outages, contingency analysis techniques are used. Contingency analysis procedures model single failure events (i.e., one-line outage or one-generator outage) or multiple equipment failure events (i.e., two transmission lines, one transmission line plus one generator, etc.), one after another in sequence until "all credible outages" have been studied. For each outage tested, the contingency analysis procedure checks all lines and voltages in the network against their respective limits. The simplest form of such a contingency analysis technique is shown in Fig. 1.
The most difficult methodological problem to cope with in contingency analysis is the speed of solution of the model used. The most difficult logical problem is the selection of "all credible outages." If each outage case studied were to solve in 1 sec and several thousand outages were of concern, it would take close to 1 hr before all cases could be reported. This would be useful if the system conditions did not change over
that period of time. However, power systems are constantly undergoing changes and the operators usually
need to know if the present operation of the system is safe, without waiting too long for the answer.
Contingency analysis execution times of less than 1 min for several thousand outage cases are typical of
computer and analytical technology as of 1995.
One way to gain speed of solution in a contingency analysis procedure is to use an approximate model
of the power system. For many systems, the use of DC load flow models provides adequate capability with
sufficient accuracy with respect to the MW flows. For other systems, voltage is a concern and full AC load
flow analysis is required.
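As a rough illustration of the DC load flow approximation referred to above, the sketch below solves the linear relation B'θ = P for a small hypothetical 3-bus network and recovers the branch MW flows; numpy is assumed, and the line reactances and injections are invented purely for illustration.

```python
import numpy as np

# Minimal DC load flow sketch for a hypothetical 3-bus network (bus 0 = slack).
# Line data: (from_bus, to_bus, reactance in p.u.); injections are illustrative.
lines = [(0, 1, 0.1), (0, 2, 0.2), (1, 2, 0.25)]
P = np.array([0.0, 1.5, -2.0])        # net p.u. injections; the slack balances the rest

n = 3
B = np.zeros((n, n))                  # DC susceptance matrix B'
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Solve B' * theta = P with the slack bus angle fixed at zero.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Branch MW flow: f_line = (theta_from - theta_to) / x
for f, t, x in lines:
    print(f"flow {f}->{t}: {(theta[f] - theta[t]) / x:+.3f} p.u.")
```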
Linear Sensitivity Factors:
The problem of studying thousands of possible outages becomes very difficult to solve if it is desired to present the results quickly. One of the easiest ways to provide a quick calculation of possible overloads is to use linear sensitivity factors. These factors show the approximate change in line flows for changes in generation on the network configuration and are derived from the DC load flow. These factors can be derived in a variety of ways and basically come down to two types:
1. Power Transfer Distribution Factors (PTDFs) or generation shift factors.
2. Line Outage Distribution Factors (LODFs).

[Fig. 1: Full AC power flow contingency analysis procedure (flowchart): set the system model to initial conditions; simulate an outage of each generator in turn using the system model and, after each, check whether any line flows exceed their limits or any bus voltages are outside their limits, displaying an alarm message if so; when the last generator is done, simulate an outage of each line in turn and repeat the same checks until the last line is done.]

1. Power Transfer Distribution Factors: The PTDF factors are designated PTDF_(l,ij) and have the following definition:

PTDF_(l,ij) = Δf_l / ΔP_i

where l = line index
i = bus where power is injected
j = bus where power is taken out
Δf_l = change in megawatt power flow on line l when a change in generation, ΔP_i, occurs at bus i (and is absorbed at bus j)
ΔP_i = change in generation at bus i

The PTDF factor then represents the sensitivity of the flow on line l to a shift of power from i to j. Suppose one wanted to study the outage of a large generating unit and it was assumed that all the generation lost would be made up by the reference generation. If the generator in question was generating P_i^0 MW and it was lost, we would represent ΔP_i as

ΔP_i = -P_i^0

and the new power flow on each line in the network could be calculated using a pre-calculated set of PTDF factors as follows:

f̂_l = f_l^0 + PTDF_(l,i,ref) · ΔP_i    for l = 1 ... L

where f̂_l = flow on line l after the generator on bus i fails, f_l^0 = flow before the failure.
Note that in this case we substitute "ref" for "j" to indicate that the shift is from bus i to the reference bus. The outage flow, f̂_l, on each line can be compared to its limit and those exceeding their limit flagged for alarming. This would tell the operations personnel that the loss of the generator on bus i would result in an overload on line l.
The PTDF factors are linear estimates of the change in flow on a line with a shift in power from one
bus to another. Therefore, the effects of simultaneous changes on several generating buses can be calculated
using superposition. Suppose, for example, that the loss of the generator on bus i were compensated by
governor action on machines throughout the interconnected system. One frequently used method assumes that
the remaining generators pick up in proportion to their maximum MW rating. Thus, the proportion of
generation pickup from unit j (j ≠ i) would be

γ_(ji) = P_j^max / Σ_(k≠i) P_k^max

where P_k^max = maximum MW rating for generating unit k
γ_(ji) = proportionality factor for the pickup on generating unit j when unit i fails
Then, to test for the flow on line l under the assumption that all the generators in the interconnection participate in making up the loss, use the following:

f̂_l = f_l^0 + PTDF_(l,i,ref) · ΔP_i − Σ_(j≠i) PTDF_(l,j,ref) · γ_(ji) · ΔP_i

Note that this assumes that no unit will actually hit its maximum. If this is apt to be the case, a more detailed generation pickup algorithm that took account of generation limits would be required.
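A sketch of how PTDF factors might be obtained from the DC model and used to screen a generator outage is shown below. It reuses the same hypothetical 3-bus data as the earlier DC load flow sketch and assumes, as in the text, that the lost generation is picked up entirely at the reference bus; all limits are made up for illustration.

```python
import numpy as np

# Hypothetical 3-bus DC model (bus 0 = reference/slack), same data as the
# DC load flow sketch above: (from_bus, to_bus, reactance) in per unit.
lines  = [(0, 1, 0.1), (0, 2, 0.2), (1, 2, 0.25)]
limits = np.array([1.0, 1.2, 1.1])          # illustrative flow limits (p.u.)

n = 3
B = np.zeros((n, n))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Sensitivity (reactance) matrix X, with the reference bus row/column left at zero.
X = np.zeros((n, n))
X[1:, 1:] = np.linalg.inv(B[1:, 1:])

def ptdf_to_ref(line, bus_i):
    """PTDF of `line` for a 1 p.u. shift of power from bus_i to the reference bus."""
    f, t, x = lines[line]
    return (X[f, bus_i] - X[t, bus_i]) / x

# Base case: 1.5 p.u. generation at bus 1, 2.0 p.u. load at bus 2 (slack covers the rest).
P = np.array([0.0, 1.5, -2.0])
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])
f0 = np.array([(theta[f] - theta[t]) / x for f, t, x in lines])

# Outage of the generator at bus 1, its output made up by the reference bus:
dP = -1.5
for l in range(len(lines)):
    f_hat = f0[l] + ptdf_to_ref(l, 1) * dP      # f_hat_l = f_l^0 + PTDF_(l,i,ref) * dP_i
    tag = "  <-- would overload" if abs(f_hat) > limits[l] else ""
    print(f"line {l}: {f0[l]:+.3f} -> {f_hat:+.3f} p.u. (limit {limits[l]}){tag}")
```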
2. Line Outage Distribution Factors: The LODF factors are used in a similar manner, only they apply to the testing for overloads when transmission circuits are lost. By definition, the line outage distribution factor has the following meaning:

LODF_(l,k) = Δf_l / f_k^0

where LODF_(l,k) = line outage distribution factor when monitoring line l after an outage on line k
Δf_l = change in MW flow on line l
f_k^0 = original flow on line k before it was outaged (opened)

If one knows the power flow on line l and line k, the flow on line l with line k out can be determined using LODF factors:

f̂_l = f_l^0 + LODF_(l,k) · f_k^0

where f_l^0, f_k^0 = pre-outage flows on lines l and k, respectively
f̂_l = flow on line l with line k out.

By pre-calculating the LODFs, a very fast procedure can be set up to test all lines in the network for overload for the outage of a particular line. Furthermore, this procedure can be repeated for the outage of each line in turn, with overloads reported to the operations personnel in the form of alarm messages.
Using the generator and line outage procedures described earlier, one can program a digital computer to execute a contingency analysis study of the power system as shown in Fig. 2.

[Fig. 2: Contingency analysis using sensitivity factors (flowchart): read the existing system conditions; for each generator outage, use the PTDF factors to check all lines for overload after generator outages, displaying alarm messages as needed; when the last generator is done, use the LODF factors to check all lines for overload after each line outage, displaying alarm messages, until the last line is done.]
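A companion sketch for the LODF factors is given below. It derives each LODF from the PTDFs of the same hypothetical 3-bus network using the standard identity LODF_(l,k) = PTDF_(l,ij) / (1 − PTDF_(k,ij)), where i and j are the terminal buses of line k (this identity is not derived in this module), and then screens every single-line outage. Base flows and limits are illustrative.

```python
import numpy as np

# Same hypothetical 3-bus DC model as above (bus 0 = reference).
lines  = [(0, 1, 0.1), (0, 2, 0.2), (1, 2, 0.25)]
limits = np.array([1.5, 1.2, 1.2])              # illustrative flow limits (p.u.)
f0     = np.array([-0.5, 1.0, 1.0])             # base-case flows from the DC solution above

n = 3
B = np.zeros((n, n))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b
X = np.zeros((n, n))
X[1:, 1:] = np.linalg.inv(B[1:, 1:])

def ptdf(line, i, j):
    """Sensitivity of the flow on `line` to a 1 p.u. transfer from bus i to bus j."""
    f, t, x = lines[line]
    return (X[f, i] - X[t, i] - X[f, j] + X[t, j]) / x

def lodf(l, k):
    """LODF_(l,k): change in flow on line l per p.u. of pre-outage flow on line k
    when line k is opened (l != k); i, j are the terminal buses of line k."""
    i, j, _ = lines[k]
    return ptdf(l, i, j) / (1.0 - ptdf(k, i, j))

# Screen every single-line outage. (In this small triangle the LODFs come out as
# +/-1, since all of the outaged line's flow must reroute over the one remaining path.)
for k in range(len(lines)):
    for l in range(len(lines)):
        if l == k:
            continue
        f_hat = f0[l] + lodf(l, k) * f0[k]      # f_hat_l = f_l^0 + LODF_(l,k) * f_k^0
        if abs(f_hat) > limits[l]:
            print(f"outage of line {k}: line {l} would carry {f_hat:+.2f} p.u. "
                  f"(limit {limits[l]})")
```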

AC Power Flow Methods:


The calculations made by network sensitivity methods are faster than those made by AC power flow methods and therefore find wide use in operations control systems. However, there are many power systems where voltage magnitudes are the critical factor in assessing contingencies. There are some systems where VAR flows predominate on some circuits, such as underground cables, and an analysis of only the MW flows will not be adequate to indicate overloads. When such situations present themselves, the network sensitivity methods may not be adequate and the operations control system will have to incorporate a full AC power flow for contingency analysis.
When an AC power flow is to be used to study each contingency case, the speed of solution and the
number of cases to be studied are critical. Most operations control centers that use an AC power flow program
for contingency analysis use either a Newton-Raphson or the decoupled power flow. These solution algorithms are used because of their speed of solution and the fact that they are reasonably reliable in convergence when solving difficult cases. The decoupled power flow has the further advantage that a matrix alteration formula can be incorporated into it to simulate the outage of transmission lines without reinverting the system Jacobian matrix at each iteration.
The simplest AC security analysis procedure consists of running an AC power flow analysis for each possible generator, transmission line, and transformer outage as shown in Fig. 3. This procedure will determine the overloads and voltage limit violations accurately.

[Fig. 3: AC power flow security analysis (flowchart): for each outage i in the list of possible outages, remove that component from the power flow model, run an AC power flow on the model updated to reflect the outage, test for overloads and voltage limit violations, and report all limit violations in an alarm list; repeat until the last outage is done.]

Fast, but inaccurate, methods involving the PTDF and LODF factors can be used to give rapid analysis of the system, but they cannot give information about MVAR flows and voltages. Slower, full AC power flow methods give full accuracy but take too long.
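The limit-checking and alarm-list step of Fig. 3 amounts to the loop sketched below. The post-outage flows and voltages are assumed to have come from whatever AC power flow program the control centre runs for each case; the solver itself is not shown, and the data structure, names, and limits used here are hypothetical.

```python
# Sketch of the limit-check / alarm-list step of Fig. 3. The `results` dictionary
# is a hypothetical stand-in for the post-outage line flows and bus voltage
# magnitudes produced by an AC power flow run for each outage case.
MVA_LIMIT = {"line 1": 100.0, "line 2": 80.0}
V_MIN, V_MAX = 0.95, 1.05                      # per-unit bus voltage band

results = {
    "outage of line 1": {"flows": {"line 2": 95.0},                 "voltages": {"bus 3": 0.93}},
    "outage of gen 2":  {"flows": {"line 1": 60.0, "line 2": 70.0}, "voltages": {"bus 3": 0.99}},
}

alarm_list = []
for case, res in results.items():
    for line, mva in res["flows"].items():
        if mva > MVA_LIMIT.get(line, float("inf")):
            alarm_list.append(f"{case}: overload on {line} ({mva:.1f} MVA)")
    for bus, v in res["voltages"].items():
        if not (V_MIN <= v <= V_MAX):
            alarm_list.append(f"{case}: voltage at {bus} outside limits ({v:.3f} p.u.)")

print("\n".join(alarm_list) if alarm_list else "no violations")
```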
Because of the way the power system is designed and operated, very few of the outages will actually cause trouble. That is, most of the time spent running AC power flows will go for solutions of the power flow model that discover that there are no problems. Only a few of the power flow solutions will conclude that an overload or voltage violation exists. The solution is to find a way to select contingencies in such a way that only those that are likely to result in an overload or voltage limit violation will actually be studied in detail and the other cases will go unanalysed. This process is shown in the flowchart of Fig. 4.

[Fig. 4: AC power flow security analysis with contingency case selection (flowchart): select the likely bad cases from the full outage case list and store them in a short list; then, for each outage in the short list, remove that component from the power flow model, run an AC power flow on the model updated to reflect the outage, test for overloads and voltage limit violations, and report all limit violations in an alarm list, repeating until the last outage is done.]

Selecting the highest-impact contingency cases from the full outage case list is not an exact procedure. Two sources of error can arise:
1. Placing too many cases on the short list: This is essentially the "conservative" approach and simply leads to longer run times for the security analysis procedure to execute.
2. Skipping cases: Here, a case that would have shown a problem is not placed on the short list, possibly resulting in that outage taking place and causing trouble without the operators being warned.
Contingency Selection and Ranking: An overload performance index (PI) is used to find how much a particular outage might affect the power system. The definition for the overload performance index is as follows:

PI = Σ_(all branches) (P_flow,l / P_l^max)^(2n)

(as n → ∞: P_flow,l < P_l^max ⇒ PI ↓, and P_flow,l > P_l^max ⇒ PI ↑)

If n (a positive integer) is a large number, the PI will be a small number if all flows are within limit, and it will be large if one or more lines are overloaded. The problem then is how to use this performance index.
Various techniques have been tried to obtain the value of PI when a branch is taken out. These calculations can be made exactly if n = 1; that is, a table of PI values, one for each line in the network, can be calculated quite quickly. The selection procedure then involves ordering the PI table from largest value to least. The lines corresponding to the top of the list are then picked and placed on the short list. However, when n = 1, the PI does not change suddenly from near zero to near infinity as the branch exceeds its limit. Instead, it rises as a quadratic function. A line that is just below its limit contributes to the PI almost as much as one that is just over its limit.
The result is a PI that may be large when many lines are loaded just below their limit. Thus, the PI's ability to distinguish or detect bad cases is limited when n = 1. Trying to develop an algorithm that can quickly calculate PI when n = 2 or larger has proven extremely difficult.
One way to perform an outage case selection is to perform what has been called the 1P1Q method. Here, a decoupled power flow is used. As shown in Fig. 5, the solution procedure is interrupted after one iteration (one P-θ calculation and one Q-V calculation; thus, the name 1P1Q). With this procedure, the PI can use as large an n value as desired, say n = 5. There appears to be sufficient information in the solution at the end of the first iteration of the decoupled power flow to give a reasonable PI. Another advantage of this procedure is the fact that the voltages can also be included in the PI. Thus, a different PI can be used, such as:

PI = Σ_(all branches) (P_flow,l / P_l^max)^(2n) + Σ_(all buses) (Δ|E_i| / Δ|E|^max)^(2m)

(one entry in the PI list for each outage case)

where Δ|E_i| is the difference between the voltage magnitude as solved at the end of the 1P1Q procedure and the base-case voltage magnitude, and Δ|E|^max is a value set by utility engineers indicating how much they wish to limit a bus voltage from changing on one outage case.

[Fig. 5: 1P1Q contingency selection procedure (flowchart): begin the power flow solution and build the B' and B'' matrices; for each case in the full outage case list, model the outage, solve the P-θ equation for the Δθ's, solve the Q-V equation for the Δ|E|'s, calculate the flows and voltages for the case, calculate its PI, add it to the PI list (one entry per outage case), and pick the next outage case.]

To complete the security analysis, the PI list is sorted so that the largest PI appears at the top. The security analysis can then start by executing full power flows with the case which is at the top of the list, then solve the case which is second, and so on down the list. This continues until either a fixed number of cases is solved, or until a predetermined number of cases are solved which do not have any alarms.
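The ranking step can be illustrated with a short sketch that evaluates the combined overload-and-voltage PI for a few outage cases and sorts them into a short list. In practice the post-outage flow and voltage-change estimates would come from the 1P1Q solution; here they are simply made-up numbers for illustration.

```python
import numpy as np

def performance_index(flows, limits, dV, dV_max, n=5, m=1):
    """PI = sum_l (|P_flow,l| / P_l^max)^(2n) + sum_i (|d|E_i|| / d|E|^max)^(2m)."""
    return (np.sum((np.abs(flows) / limits) ** (2 * n))
            + np.sum((np.abs(dV) / dV_max) ** (2 * m)))

# Hypothetical 1P1Q-style estimates for three outage cases:
# (post-outage line MW flows, bus voltage-magnitude changes).
limits = np.array([100.0, 80.0, 60.0])       # line MW limits
dV_max = 0.05                                # allowed |V| change per outage (utility setting)
cases = {
    "line 1 out": (np.array([ 0.0, 85.0, 40.0]), np.array([0.02, 0.01])),
    "line 2 out": (np.array([70.0,  0.0, 30.0]), np.array([0.01, 0.00])),
    "gen 3 out":  (np.array([90.0, 60.0, 55.0]), np.array([0.04, 0.06])),
}

ranked = sorted(cases,
                key=lambda c: performance_index(cases[c][0], limits, cases[c][1], dV_max),
                reverse=True)
print("short list, worst case first:", ranked)
# Full AC power flows would then be run from the top of this list downward until
# a predetermined number of consecutive cases produce no alarms.
```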
State Estimation of Power Systems: Introduction:
State estimation is the process of assigning a value to an unknown system state variable based on
measurements from that system according to some criteria. Usually, the process involves imperfect
measurements that are redundant, and the process of estimating the system states is based on a statistical
criterion that estimates the true value of the state variables to minimize or maximize the selected criterion. A
commonly used and familiar criterion is that of minimizing the sum of the squares of the differences between
the estimated and "true" (i.e., measured) values of a function. State estimators may be both static and dynamic.
Both types of estimators have been developed for power systems.
In a power system, the state variables are the voltage magnitudes and relative phase angles at the
system nodes. Measurements are required in order to estimate the system performance in real time for both
system security control and constraints on economic dispatch. The inputs to an estimator are imperfect power
system measurements of voltage magnitudes and power, VAR, or ampere-flow quantities. The estimator
is designed to produce the "best estimate" of the system voltage and phase angles, recognizing that there are
errors in the measured quantities and that there may be redundant measurements. The output data are then
used in system control centers in the implementation of the security-constrained dispatch and control of the
system.

Linear Least Square Estimation:


Statistical estimation refers to a procedure where one uses samples to calculate the value of one or
more unknown parameters in a system. Since the samples (or measurements) are inexact, the estimate obtained
for the unknown parameter is also inexact. This leads to the problem of how to formulate a best estimate of
the unknown parameters given the available measurements.
The development of the notions of state estimation may proceed along several lines, depending on the
statistical criterion selected. Of the many criteria that have been examined and used in various applications,
the following three are perhaps the most commonly encountered.
a. The maximum likelihood criterion, where the objective is to maximize the probability that the
estimate of the state variable, x̂, is the true value of the state variable vector, x (i.e., maximize P(x̂ = x)).
b. The weighted least-squares criterion, where the objective is to minimize the sum of the squares of
the weighted deviations of the estimated measurements, ẑ, from the actual measurements, z.
c. The minimum variance criterion, where the objective is to minimize the expected value of the sum of
the squares of the deviations of the estimated components of the state variable vector from the corresponding
components of the true state variable vector.

When normally distributed, unbiased meter error distributions are assumed, each of these approaches results in identical estimators. Thus, we utilize the maximum likelihood approach because the method introduces the measurement error weighting matrix [R] in a straightforward manner.
The maximum likelihood procedure asks the following question: What is the probability (or likelihood) that I will get the measurements I have obtained? This probability depends on the random error in the measuring device (transducer) as well as the unknown parameters to be estimated. Therefore, a reasonable procedure would be one that simply chose the estimate as the value that maximizes this probability. The maximum likelihood estimator assumes that we know the probability density function (PDF) of the random errors in the measurement.
The least-squares estimator does not require that we know the PDF for the sample or measurement errors. However, if we assume that the PDF of the sample or measurement errors is a normal (Gaussian) distribution, we will end up with the same estimation formula. Hence, we proceed to develop the estimation formula using the maximum likelihood criterion, assuming normal distributions for the measurement errors. The result will be a least-squares or, more precisely, a weighted least-squares estimation formula.
First, we introduce the concept of random measurement error. The measurements are assumed to be in
error: that is, the value obtained from the measurement device is close to the true value of the parameter being
measured but differs by an unknown error. Mathematically, this can be modelled as follows.
Let z^meas be the value of a measurement as received from a measurement device.
Let z^true be the true value of the quantity being measured. Finally, let η be the random measurement error. Then the measured value can be represented as

z^meas = z^true + η    (5.1)
The random number, η, serves to model the uncertainty in the measurements. If the measurement error is unbiased, the PDF of η is usually chosen as a normal distribution with zero mean. Note that other measurement PDFs will also work in the maximum likelihood method. The PDF of η is

PDF(η) = (1 / (σ√(2π))) exp(−η² / 2σ²)    (5.2)

where σ is called the standard deviation and σ² is called the variance of the random number. PDF(η) describes the behavior of η. A plot of PDF(η) is shown in Fig. 6.

[Fig. 6: The normal distribution]

Note that σ, the standard deviation, provides a way to model the seriousness of the random measurement error. If σ is large, the measurement is relatively inaccurate (i.e., a poor-quality measurement device), whereas a small value of σ denotes a small error spread (i.e., a higher-quality measurement device). The normal distribution is commonly used for modelling measurement errors since it is the distribution that will result when many factors contribute to the overall error.
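To tie (5.1) and (5.2) to the weighted least-squares idea introduced above, the sketch below simulates redundant, normally distributed measurements of a single quantity and forms the weighted least-squares (equivalently, maximum likelihood) estimate, weighting each meter by 1/σ². The numbers are purely illustrative and numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative true value of the quantity being measured (say, a line MW flow).
z_true = 100.0

# Three redundant meters of differing quality: their standard deviations (eq. 5.2).
sigmas = np.array([1.0, 2.0, 5.0])

# Measurements per eq. (5.1): z_meas = z_true + eta, with eta ~ N(0, sigma^2).
z_meas = z_true + rng.normal(0.0, sigmas)

# Weighted least-squares / maximum-likelihood estimate of the single "state" x:
# minimize sum_i (z_i - x)^2 / sigma_i^2  =>  x_hat = sum(z_i/sigma_i^2) / sum(1/sigma_i^2)
w = 1.0 / sigmas ** 2
x_hat = np.sum(w * z_meas) / np.sum(w)

print("measurements  :", np.round(z_meas, 2))
print("plain average :", round(z_meas.mean(), 3))
print("WLS estimate  :", round(x_hat, 3))    # the more accurate meters dominate
```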
