Understanding PSSE

The document discusses Power System State Estimation (PSSE), a method for estimating voltage magnitudes and angles in power grids based on metered measurements, which is crucial for accurate system monitoring and fault detection. It outlines various types of errors affecting state estimation, methods for bad data detection including the J(x)-test, r-test, and rN-test, and proposes practical recommendations for system operators and designers. The paper emphasizes the importance of redundancy in measurements for effective bad data identification and presents a case study demonstrating the effectiveness of the proposed methods.

1. Power System State Estimation (PSSE)

What is it?

 PSSE is a method used in power grids to estimate the state variables (voltage
magnitudes and angles at each bus) based on metered measurements (like power flows,
voltage readings, and injections).

 It helps grid operators monitor system conditions and detect potential issues.

Why is it important?

 Real-world measurements contain errors (due to meter inaccuracies, communication noise, etc.), so state estimation provides a more reliable view of the system’s actual state.

 A good state estimation ensures accurate power flow analysis, fault detection, and
optimal grid operation.

2. Types of Errors Considered in the Paper

The paper classifies four main types of errors that can affect state estimation:

Type of Error | Definition | Example
Measurement Errors | Small random variations in meter readings. | Noise in voltage sensors.
Parameter Errors | Errors in system parameters like line impedance. | Incorrect transmission line resistance value.
Bad Data Errors | Large, unexpected meter errors due to failures. | A faulty power flow sensor reading twice the actual value.
Structural Errors | Mistakes in system topology or breaker status. | A disconnected line mistakenly assumed to be in service.

3. Residual Analysis (Core of Bad Data Detection)

What is a residual?
 A residual is the difference between a measured value and the estimated value
calculated using the power system model.

ri = zi − h(x)

where:

o ri = residual for measurement i

o zi = actual measurement from the system

o h(x) = estimated measurement from the state estimation model

Why are residuals useful?

 If there is no bad data, residuals should be small and normally distributed (random
noise).

 If bad data is present, residuals will be unusually large, allowing us to detect faulty
measurements.
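
To make this concrete, here is a minimal Python sketch (mine, not the paper's) that assumes a linearized measurement model z ≈ Hx + noise with made-up numbers, computes a weighted least-squares state estimate, and then forms the residuals r = z − H·x̂:

```python
import numpy as np

# Hypothetical linearized (DC) measurement model: z = H x + noise.
# 5 measurements, 2 unknown state variables; all numbers are illustrative.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [0.5,  0.5],
              [1.0,  1.0]])
R = np.diag([0.02**2, 0.02**2, 0.03**2, 0.02**2, 0.01**2])  # measurement error covariance
z = np.array([1.02, 0.97, 0.06, 1.00, 1.98])                # metered values

# Weighted least-squares estimate: x_hat = (H^T W H)^-1 H^T W z, with W = R^-1
W = np.linalg.inv(R)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Residuals: measured minus estimated measurements, r_i = z_i - h_i(x_hat)
r = z - H @ x_hat
print("estimated state:", x_hat)
print("residuals:", r)
```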

4. Bad Data Detection Methods

The paper introduces different ways to test whether bad data is present using statistical
techniques:

1. J(x)-Test (Chi-Square Test)

o Calculates the sum of squared residuals and checks if it is too large.

o If the sum is too high, it suggests that one or more bad data points are present.

2. r-Test (Largest Residual Test)

o Checks if any individual residual is too large.

o More effective than the J(x)-test in large networks.

3. rN-Test (Normalized Residual Test)

o Similar to r-test but weights residuals based on measurement accuracy.

o More sensitive and reliable for identifying bad data.

Key Finding from the Paper:

 The rN-test performs best for single bad data detection, but J(x)-test can sometimes be
better for detecting multiple errors.
5. Bad Data Identification

After detecting bad data, how do we find its source?

 The paper proposes two key approaches:

1. Residual Search Methods

 Remove the measurement with the largest residual, then re-run state estimation.

 Repeat until the system passes the bad data detection test.

 Works well when bad data is isolated (non-interacting bad data).
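
A rough Python sketch of this residual-search loop (my own illustration, not the paper's algorithm): re-estimate the state, drop the measurement with the largest normalized residual, and stop once no residual exceeds an assumed 2.5 threshold. The `wls_estimate` helper assumes a linearized model.

```python
import numpy as np

def wls_estimate(H, W, z):
    """Weighted least-squares estimate and residuals for a linearized model (sketch)."""
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    return x_hat, z - H @ x_hat

def residual_search(H, sigma, z, threshold=2.5):
    """Drop the worst measurement until no normalized residual exceeds the threshold."""
    keep = np.arange(len(z))                      # indices of measurements still trusted
    while len(keep) > H.shape[1]:                 # keep at least as many measurements as states
        W = np.diag(1.0 / sigma[keep]**2)
        _, r = wls_estimate(H[keep], W, z[keep])
        rN = np.abs(r) / sigma[keep]              # normalized residuals
        worst = int(np.argmax(rN))
        if rN[worst] <= threshold:                # detection test passes: stop searching
            break
        keep = np.delete(keep, worst)             # remove the suspect measurement and re-estimate
    return keep
```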

2. Non-Quadratic Estimation Criteria

 Instead of using traditional Least Squares Estimation (LSE), use a modified cost function
that reduces the effect of bad data.

 Example: Instead of penalizing large errors quadratically (which can amplify bad data
effects), use an alternative penalty function to suppress large deviations.
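
As one illustration of such a penalty (a Huber-style function is shown here as an assumption; the paper's exact criteria may differ), the cost grows quadratically for small normalized residuals but only linearly beyond a breakpoint k, so a single gross error cannot dominate the fit:

```python
import numpy as np

def huber_cost(residuals, sigma, k=2.0):
    """Huber-style penalty: quadratic for small normalized residuals, linear beyond k,
    which limits the influence of a single gross error on the estimate."""
    rN = residuals / sigma
    quadratic = 0.5 * rN**2
    linear = k * np.abs(rN) - 0.5 * k**2
    return float(np.sum(np.where(np.abs(rN) <= k, quadratic, linear)))

# One gross error (10.0 MW) dominates the quadratic cost but not the Huber-style cost.
r = np.array([0.1, -0.2, 0.15, 10.0])
s = np.array([0.1, 0.1, 0.1, 0.1])
print("least-squares cost:", float(np.sum((r / s)**2)))
print("huber-style cost:", huber_cost(r, s))
```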

6. Local Redundancy & Bad Data Spreading

What is Redundancy?

 Redundancy (η) = ratio of the number of measurements to the number of unknown state variables.

 Local Redundancy (ηk) = redundancy for each bus in the network.

 High local redundancy improves bad data detection since multiple independent
measurements exist to validate the system state.
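
A tiny sketch of these ratios (the 4-bus measurement set below is hypothetical): global redundancy is simply m/n, and a crude local proxy counts how many measurements touch each bus.

```python
# Hypothetical 4-bus example: the buses each measurement touches.
measurement_buses = [
    {1, 2}, {2, 3}, {3, 4}, {1, 4},   # line power flows
    {1}, {2}, {3}, {4},               # bus power injections
    {1},                              # voltage magnitude at bus 1
]
n_states = 7   # e.g. 4 voltage magnitudes + 3 angles (slack angle fixed)

eta = len(measurement_buses) / n_states
print(f"global redundancy eta = {eta:.2f}")      # 9 / 7 ~ 1.29

# Crude local proxy: measurements incident to each bus.
for bus in range(1, 5):
    count = sum(bus in m for m in measurement_buses)
    print(f"bus {bus}: {count} incident measurements")
```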

What is Bad Data Spreading?

 If bad data affects only one measurement, its effect is localized.

 If measurements are highly correlated, bad data can spread across multiple buses,
making it harder to identify.

Key Finding from the Paper:

 Bad data generally has a limited spreading effect, meaning detection and correction can
often be localized instead of requiring system-wide changes.
7. Case Study: 25-Bus System

 The authors tested their methods on a 25-bus power system.

 They simulated bad data in different locations and evaluated how well each detection
method performed.

 The study confirmed:

o Residual tests are effective for detecting single errors.

o Non-quadratic estimation methods work better when multiple interacting bad data points exist.

8. Practical Recommendations from the Paper

 For System Operators:

o Use rN-test for routine bad data detection.

o If a bad data alarm is triggered, apply residual search methods to find the
source.

o If errors persist, consider using non-quadratic estimators to refine state estimation.

 For Power System Designers:

o Design measurement systems with high local redundancy to improve bad data
identification.

o Use network decomposition to isolate affected areas, reducing computational burden.

How This Paper Relates to Your Research

Since you're working on DNN-SVM hybrid models for bad data detection, you can take insights
from this paper to:

1. Use residual-based features as inputs for your Deep Neural Network (DNN) to enhance
feature extraction.

2. Incorporate redundancy information in your model to improve classification accuracy.


3. Validate your model’s predictions using statistical methods like J(x)-test or rN-test.

4. Compare your machine learning approach with the non-quadratic estimation techniques proposed in the paper to see which is more effective in handling interacting bad data.
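
As a hedged sketch of how these ideas could translate into model inputs (a hypothetical feature builder, not anything from the paper): each measurement is described by its residual, normalized residual, local redundancy, and the system-wide J(X), with labels taken from the rN threshold rule.

```python
import numpy as np

def build_features(r, sigma, W, local_redundancy, threshold=2.5):
    """Hypothetical feature/label construction for a learned bad-data classifier."""
    rN = r / sigma                        # normalized residuals (rN-test quantity)
    J = float(r @ W @ r)                  # weighted sum of squared residuals (J(X)-test quantity)
    features = np.column_stack([
        r,                                # raw residual
        rN,                               # normalized residual
        local_redundancy,                 # redundancy around the measurement's location
        np.full(len(r), J),               # global J(X), repeated for every measurement
    ])
    labels = (np.abs(rN) > threshold).astype(int)   # 1 = suspected bad data
    return features, labels
```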

Final Thoughts

This paper laid the foundation for modern bad data detection in power systems. Even though it
was published in 1975, its concepts remain highly relevant. Your work in integrating machine
learning with traditional state estimation can enhance accuracy and automation, making bad
data detection even more reliable.

Understanding the J(X)-Test (Chi-Square Test) for Bad Data Detection

The J(X)-test is a statistical method used in power system state estimation to detect bad data
(faulty measurements). It is based on the Chi-square (χ²) test, which checks whether the sum of
squared errors (residuals) is within a normal range or if there is an anomaly.

Step-by-Step Breakdown of the J(X)-Test

1. What is the goal of the J(X)-test?

 To determine whether the measurement data in the power system contains bad data
(gross errors).

 If J(X) is too large, it means some measurements do not match the expected values,
suggesting bad data.

2. What is J(X)?

It is a statistical test based on the sum of squared residuals:

J(X) = r^T W r

where:

 r = residual vector (the difference between actual measurements and estimated values).

 W = weighting matrix (based on measurement confidence, often the inverse of the covariance matrix).

 r^T W r is the weighted sum of squared residuals.

If there are no bad data points, J(X) follows a Chi-square (χ²) distribution.
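
A short numpy sketch of this quantity, assuming a diagonal W built from illustrative standard deviations (the numbers are made up, not from the paper):

```python
import numpy as np

r = np.array([0.5, -0.4, 1.2, 0.3, -0.2])      # residuals (illustrative values)
sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.1])    # measurement standard deviations
W = np.diag(1.0 / sigma**2)                    # weighting matrix = inverse covariance

J = float(r @ W @ r)                           # weighted sum of squared residuals
print(f"J(X) = {J:.2f}")
```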

3. How is the test applied?

1. Compute residuals:

o Use state estimation (like Weighted Least Squares (WLS)) to find estimated
values.

o Calculate residuals (difference between actual and estimated measurements).

2. Compute J(X):

o Apply the formula: J(X) = r^T W r

o This gives a single value representing how well the measurements match the
estimated state.

3. Compare J(X) to a threshold:

o The threshold is obtained from a Chi-square table: Threshold = χ²(m − n, α)

where:

 m = number of measurements.

 n = number of state variables (unknowns).

 m − n = degrees of freedom of the test.

 α = confidence level (typically 95% or 99%).

4. Decision Making:

o If J(X) < Threshold, the data is likely good (no bad data). ✅

o If J(X) > Threshold, bad data is detected (one or more faulty measurements). ❌

4. Why does this work? (Statistical Explanation)

 Normally distributed measurement errors (random noise) should keep residuals within
a certain range.
 The Chi-square distribution describes the expected sum of squared errors when there
are no gross errors.

 If the computed J(X) is much larger than expected, it means some measurements don’t
fit the normal error pattern, indicating bad data.

5. Example Calculation

Scenario:

Suppose a power system has 10 measurements and 4 state variables, meaning the system has
6 degrees of freedom (10 - 4 = 6).

Step 1: Compute J(X)

Let's say after running the state estimator, we find:

J(X) = 16.5

Step 2: Find the Chi-Square Threshold

 From a Chi-square table, for 6 degrees of freedom and a 95% confidence level, the
threshold is 12.6.

Step 3: Compare J(X) to the Threshold

 Since 16.5 > 12.6, we conclude that the data contains bad data.
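
This threshold lookup can be reproduced with scipy (a verification sketch under the same assumptions as the example): chi2.ppf(0.95, 6) returns about 12.59, matching the 12.6 table value.

```python
from scipy.stats import chi2

m, n = 10, 4                        # measurements and state variables
dof = m - n                         # 6 degrees of freedom
alpha = 0.95                        # 95% confidence level
J_x = 16.5                          # value from the state estimator

threshold = chi2.ppf(alpha, dof)    # ~12.59
print(f"threshold = {threshold:.2f}")
print("bad data detected" if J_x > threshold else "no bad data detected")
```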

6. Limitations of J(X)-Test

 It only tells us bad data is present but does not identify which measurement is bad.

 Multiple interacting bad data can sometimes remain undetected.

 It is sensitive to measurement redundancy (more measurements improve detection accuracy).

7. How It Relates to Your Work

Since you’re integrating Support Vector Machines (SVM) with Neural Networks (DNNs) for bad
data detection, you could use the J(X)-test as a feature to help your model decide when an
input measurement might contain bad data.
Understanding the r-Test (Largest Residual Test) for Bad Data Detection

The r-Test, also called the Largest Residual Test, is a method used in power system state
estimation to detect bad data by analyzing individual measurement residuals. It is a simpler
alternative to the J(X)-test and is often more effective in large power systems.

1. What is the Goal of the r-Test?

 To identify whether bad data is present by checking individual measurement errors (residuals) instead of considering all residuals together (like in the J(X)-test).

 If one or more residuals are too large, it suggests that the corresponding measurement
is incorrect (bad data).

2. What is a Residual?

A residual is the difference between an actual measurement and its estimated value from state
estimation:

ri = zi − h(x)

where:

 ri = residual of measurement i.

 zi = actual measurement.

 h(x) = estimated measurement based on state estimation.

A small residual means the measurement is close to expected (good data).
A large residual means the measurement is far from expected (possible bad data).

3. How is the r-Test Applied?

1. Compute Residuals
o Run state estimation (e.g., Weighted Least Squares, WLS).

o Compute the residuals for all measurements.

2. Compute Normalized Residuals

o To account for different measurement accuracies, we use normalized residuals:

rNi = ri / σi

where:

 rNi = normalized residual for measurement i.

 σi = standard deviation (uncertainty) of measurement i.

This step ensures that different units (MW, kV, etc.) do not affect the test.

3. Compare the Largest Residual to a Threshold

o Find the largest normalized residual: rNmax = max |rNi|

o Compare it to a statistical threshold (usually 2.5 to 3.0 for a 95% confidence level).

4. Decision Making

o If rNmax < Threshold → ✅ No bad data detected.

o If rNmax > Threshold → ❌ Bad data detected.

4. Why Does This Work? (Statistical Explanation)

 Normally distributed measurement errors (random noise) should produce small residuals (within ±2.5 standard deviations in 95% of cases).

 If a residual is too large, it means the measurement does not follow normal behavior,
suggesting bad data.
5. Example Calculation

Scenario:

A power system has 5 measurements with the following residuals:

Measurement | Residual (ri) | Standard Deviation (σi) | Normalized Residual
1 | 0.5 MW | 0.2 MW | 0.5/0.2 = 2.5
2 | −0.4 MW | 0.2 MW | −0.4/0.2 = −2.0
3 | 1.2 MW | 0.3 MW | 1.2/0.3 = 4.0
4 | 0.3 MW | 0.2 MW | 0.3/0.2 = 1.5
5 | −0.2 MW | 0.1 MW | −0.2/0.1 = −2.0

Step 1: Find the Largest Normalized Residual

 The largest value is 4.0 (from Measurement 3).

Step 2: Compare with the Threshold

 Suppose the threshold is 2.5.

 Since 4.0 > 2.5, bad data is detected in Measurement 3.

✅ Solution: Measurement 3 is likely bad and should be corrected or removed.
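
The same check can be reproduced with a short numpy sketch using the residuals and standard deviations from the table above (the 2.5 threshold is assumed):

```python
import numpy as np

r = np.array([0.5, -0.4, 1.2, 0.3, -0.2])      # residuals in MW
sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.1])    # standard deviations in MW
threshold = 2.5

rN = r / sigma                                 # 2.5, -2.0, 4.0, 1.5, -2.0
worst = int(np.argmax(np.abs(rN)))
if abs(rN[worst]) > threshold:
    print(f"bad data detected in measurement {worst + 1} (rN = {rN[worst]:.1f})")
else:
    print("no bad data detected")
```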

6. r-Test vs. J(X)-Test: Which is Better?

Test | How it Works | Pros | Cons
J(X)-Test (Chi-Square Test) | Analyzes all residuals together | Good for detecting multiple interacting errors | Cannot tell which measurement is bad
r-Test (Largest Residual Test) | Checks individual residuals | Simple and pinpoints the bad data points | May fail if multiple bad data interact

🔹 Key Insight from the Paper:

 The r-Test is more effective for large networks because bad data usually affects a few
measurements at a time, and this test identifies them directly.
7. How This Relates to Your Work

Since you're working on DNN-SVM hybrid models for bad data detection, you can use r-Test
results as features in your machine learning model:

 Input: Residuals & Normalized Residuals.

 Label: "Good" (if within threshold) or "Bad" (if above threshold).

 Training Goal: Teach your model to learn patterns in bad data and predict errors
automatically.

Understanding the rN-Test (Normalized Residual Test) for Bad Data Detection

The rN-Test, or Normalized Residual Test, is an improved version of the r-Test (Largest Residual
Test). It accounts for the accuracy (confidence level) of different measurements, making it
more reliable for detecting bad data in power system state estimation.

1. What is the Goal of the rN-Test?

 To detect bad data by comparing measurement errors (residuals) while considering the
uncertainty of each measurement.

 Unlike the r-Test, which only looks at residual size, the rN-Test scales residuals by their
standard deviations to avoid bias toward large measurements.

2. What is a Residual?

A residual is the difference between a measured value and its estimated value from state
estimation:

ri = zi − h(x)

where:

 ri = residual of measurement i.

 zi = actual measurement.

 h(x) = estimated measurement from state estimation.

3. How Does the rN-Test Work?

1. Compute Residuals

o Use state estimation (e.g., Weighted Least Squares, WLS) to find residuals for
each measurement.

2. Normalize Residuals

o Each residual is divided by its standard deviation (σi) to account for different measurement accuracies:

rNi = ri / σi

where:

 rNi = normalized residual for measurement i.

 σi = standard deviation (uncertainty) of measurement i.

Why is this important?

 Measurements with higher accuracy (small σi) should have smaller normalized residuals.

 Measurements with lower accuracy (large σi) are expected to have larger residuals.

3. Compare to a Threshold

o Find the largest normalized residual: rNmax = max |rNi|

o Compare it to a statistical threshold (typically 2.5 to 3.0 for a 95% confidence level).

4. Decision Making

o If rNmax < Threshold → ✅ No bad data detected.

o If rNmax > Threshold → ❌ Bad data detected in that measurement.

4. Why Does This Work? (Statistical Explanation)

 Normally distributed measurement errors should have normalized residuals within ±2.5
standard deviations for 95% of cases.

 If a normalized residual is too large, it suggests that the measurement is not behaving
normally → bad data is present.

5. Example Calculation

Scenario:

A power system has 5 measurements, each with different levels of accuracy.

Measurement | Residual (ri) | Standard Deviation (σi) | Normalized Residual (rNi)
1 | 0.5 MW | 0.2 MW | 0.5/0.2 = 2.5
2 | −0.4 MW | 0.2 MW | −0.4/0.2 = −2.0
3 | 1.2 MW | 0.3 MW | 1.2/0.3 = 4.0
4 | 0.3 MW | 0.2 MW | 0.3/0.2 = 1.5
5 | −0.2 MW | 0.1 MW | −0.2/0.1 = −2.0

Step 1: Find the Largest Normalized Residual

 The largest value is 4.0 (from Measurement 3).

Step 2: Compare with the Threshold

 Suppose the threshold is 2.5.

 Since 4.0 > 2.5, bad data is detected in Measurement 3.

✅ Solution: Measurement 3 is likely bad and should be corrected or removed.


6. rN-Test vs. r-Test vs. J(X)-Test

Test | How it Works | Pros | Cons
J(X)-Test (Chi-Square Test) | Analyzes all residuals together | Detects multiple errors | Cannot tell which measurement is bad
r-Test (Largest Residual Test) | Checks the single largest residual | Simple & direct | Can be biased by measurement accuracy
rN-Test (Normalized Residual Test) | Scales residuals by measurement accuracy | More reliable than r-Test | Can still struggle with multiple interacting bad data

🔹 Key Insight from the Paper:

 The rN-Test is more accurate and fair than the r-Test because it considers measurement
accuracy.

 It is a better choice for large power systems where measurement quality varies.

7. How This Relates to Your Work

Since you're working on DNN-SVM hybrid models for bad data detection, you can use rN-Test
results as features in your machine learning model:

 Input: Normalized residuals.

 Label: "Good" (if within threshold) or "Bad" (if above threshold).

 Training Goal: Teach your model to learn patterns in bad data and predict errors
automatically.
