Wind Farm Presentation

The document discusses using a graph neural network to estimate power output from a wind farm. It represents the wind farm as a graph with turbines as nodes and their spatial relationships as edges. The graph neural network learns functions to update edge and node features to model interactions between turbines and estimate the overall farm output and individual turbine outputs. It aims to capture the relational structure of a wind farm better than methods that ignore spatial dependencies between turbines.


Wind Farm Power Prediction with Graph Neural Network


Junyoung Park
SYSTEMS INTELLIGENCE Lab
Industrial and Systems Engineering (ISysE)

<https://wall.alphacoders.com/big.php?i=526859>
Wind Farm Power Estimation Task

[Diagrams: a wind farm under wind direction 1 and under wind direction 2]

2
Wind Farm Power Estimation Task

• Farm-level power estimation
  Wind-farm power = ?? (under wind direction 1)

• Turbine-level power estimation
  Individual wind-turbine powers = ??

3
Wind Farm and Its Graph Representation

𝒢 = (𝑁, 𝐸, 𝑔)

Wind direction 1

Node features N = { free-flow wind speed at turbine i }, ∀ i ∈ turbine indices
Edge features E = { the downstream wake distance d, the radial wake distance r }, ∀ (i, j)*
Global features g = { free-flow wind speed }

i, j are turbine indices
* ∀ (i, j) ∈ interacting turbine pairs

5
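The graph construction above can be sketched in plain Python. This is an illustrative assumption, not the authors' code: the function name, the coordinate/angle convention, and the `max_dist` interaction cutoff are all placeholders.

```python
import math

def build_farm_graph(positions, wind_speed, wind_dir_deg, max_dist=2000.0):
    """Sketch: represent a wind farm as G = (N, E, g).

    positions: list of (x, y) turbine coordinates (hypothetical layout).
    Nodes carry the free-flow wind speed; each directed edge (i, j) from an
    upstream turbine i to a downstream turbine j carries the downstream wake
    distance d and the radial wake distance r."""
    theta = math.radians(wind_dir_deg)
    wind = (math.cos(theta), math.sin(theta))      # unit vector along the wind
    nodes = {i: {"free_flow_speed": wind_speed} for i in range(len(positions))}
    edges = {}
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = dx * wind[0] + dy * wind[1]        # projection onto wind direction
            r = abs(-dx * wind[1] + dy * wind[0])  # perpendicular component
            if 0 < d <= max_dist:                  # j lies downstream of i
                edges[(i, j)] = {"d": d, "r": r}
    g = {"free_flow_speed": wind_speed}
    return nodes, edges, g
```

With the wind blowing along +x, a turbine at (500, 0) sits 500 m directly downstream of one at (0, 0), so only the edge (0, 1) is created.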
Details on Edge Features

𝒢 = (𝑁, 𝐸, 𝑔)
Edge features E = { the downstream wake distance d, the radial wake distance r }, ∀ (i, j) ∈ interacting turbine pairs

Wind direction 1

6
Neural Network in EXTREMELY High Level View

ŷ = NeuralNetwork(x; θ)

Input data x, target y

A neural network is a function approximator with trainable parameters θ,
trained such that ŷ ≈ y as accurately as possible.

7
Why Graph Representation?

𝒢 = (𝑁, 𝐸, 𝑔)

vs.

        X coord.   Y coord.
T0      850        713
T1      303        587
T2      569        775
T3      642        290
T4      217        97

Matrix (Tensor) Representation

8
Why Graph Representation?

1. An MLP/CNN’s input size tends to be fixed.
   e.g., MNIST = [28 × 28]
   If we deploy one more turbine to the farm, the input dimension would change.

2. The input data has no natural order.
   e.g., a time series has a time index!
   Which turbine should be the first input?

9
Spatial/Temporal Adjacency does not imply ‘related’

The convolution operation presumes that ‘nearby pixels are somewhat related’,
since the convolution filters are shared.

RNNs presume that ‘nearby inputs are somewhat related’,
since the RNN blocks are shared.

Figure source <Left: https://github.com/vdumoulin/conv_arithmetic>, <Right: https://towardsdatascience.com/illustrated-guide-to-recurrent-neural-networks-79e5eb8049c9>

10
Graph Neural Network

𝒢𝑦 = 𝐺𝑟𝑎𝑝ℎ𝑁𝑒𝑢𝑟𝑎𝑙𝑁𝑒𝑡𝑤𝑜𝑟𝑘 (𝒢𝑥 ; 𝜃)
[Diagram: input graph 𝒢x with node features x0…x4 is mapped to an output graph 𝒢y with node features y0…y4]

- Graph Convolutional Networks (GCN)
- Attention-based approaches
- Relational inductive bias (GN block)
- …

Image source <https://becominghuman.ai/lets-build-a-simple-neural-net-f4474256647f?gi=743618029571>

11
Imposing Relational Inductive Bias

f(·): Edge update function
g(·): Node update function

[Diagram: edge e0,1 between nodes n0 and n1 is updated to e′0,1 by f]

Input Graph → Updated Graph

Share the edge update function f and the node update function g
for updating graph-represented data.

12
Imposing Relational Inductive Bias

f(·): Edge update function
g(·): Node update function

[Diagram: edge e0,4 between nodes n0 and n4 is updated to e′0,4 by f]

Input Graph → Updated Graph

Share the edge update function f and the node update function g
for updating graph-represented data.

13
Imposing Relational Inductive Bias

f(·): Edge update function
g(·): Node update function

[Diagram: all edges have been updated by f]

Input Graph → Updated Graph

Share the edge update function f and the node update function g
for updating graph-represented data.

14
Imposing Relational Inductive Bias

f(·): Edge update function
g(·): Node update function

[Diagram: node n1 is updated to n′1 by g, using its updated edges (e.g., e1,2, e1,4)]

Input Graph → Updated Graph

Share the edge update function f and the node update function g
for updating graph-represented data.

15
Imposing Relational Inductive Bias

f(·): Edge update function
g(·): Node update function

[Diagram: node n0 is updated to n′0 by g, using its updated edges (e.g., e0,1, e0,2, e0,4)]

Input Graph → Updated Graph

Share the edge update function f and the node update function g
for updating graph-represented data.

16
Physics-induced Graph Neural Network On Wind Power Estimations

17
GN (Graph Neural) Block

Graph Neural (GN) Block

[Diagram: input graph 𝒢 with node features N0…N4, edge features (e.g., Edge0,1),
and global features g is mapped to an updated graph 𝒢′ with features N′0…N′4,
Edge′0,1, and g′]

Edge update network: f(·; θ0)
Node update network: f(·; θ1)
Global update network: f(·; θ2)

18
GN Block – Edge update steps

Edge′0,1 = f(Edge0,1, Node1, Node0, g; θ0)

[Diagram: input graph 𝒢 with node features N0…N4, edge features Edge0,1, and global features g]

Update edge features with f(Edge features, Receiver features, Sender features, g; θ0)

19
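The edge-update step above can be sketched as a loop over directed edges. This is an illustrative sketch, not the authors' implementation: `update_edges` is a made-up name, and the plain callable `f_edge` stands in for the trainable edge-update network f(·; θ0).

```python
def update_edges(nodes, edges, g, f_edge):
    """One GN-block edge-update pass: Edge'_{i,j} = f(Edge_{i,j}, receiver, sender, g).

    nodes:  dict {node id: node feature}
    edges:  dict {(sender i, receiver j): edge feature}
    f_edge: stand-in for the edge-update network f(.; theta_0)."""
    return {
        (i, j): f_edge(e, nodes[j], nodes[i], g)  # receiver = node j, sender = node i
        for (i, j), e in edges.items()
    }
```

The same `f_edge` is applied to every edge, which is exactly the parameter sharing the earlier slides motivate.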
GN Block – Edge update steps

Edge′4,1 = f(Edge4,1, Node1, Node4, g; θ0)

[Diagram: input graph 𝒢]

Update edge features with f(Edge features, Receiver features, Sender features, g; θ0)

20
GN Block – Edge update steps

[Diagram: all edge features of the input graph 𝒢 have been updated]

Update edge features with f(Edge features, Receiver features, Sender features, g; θ0)

21
GN Block – Node update steps

Node′0 = f(Ē0; θ1)
Ē0 = mean over all incoming edges (i → 0) of concat(Edge′i,0, Node0, Nodei)

[Diagram: node N0 of the input graph 𝒢 is updated using its updated incoming edge features]

Aggregation function: any function that obeys the ‘input-order invariance’ and
‘input-number invariance’ properties, e.g., mean, max, min, etc.

22
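The node-update step above, with its permutation-invariant mean aggregation, can be sketched as follows. The function names are illustrative, and for brevity the aggregated quantity here is the raw updated edge feature rather than the concatenation with node features.

```python
def update_nodes(nodes, edges_updated, g, f_node):
    """One GN-block node-update pass: pool each node's incoming updated edges
    with a mean (order- and count-invariant), then apply f_node (.; theta_1)."""
    new_nodes = {}
    for i in nodes:
        incoming = [e for (s, r), e in edges_updated.items() if r == i]
        if incoming:
            agg = sum(incoming) / len(incoming)  # mean aggregation
        else:
            agg = 0.0                            # node with no incoming edges
        new_nodes[i] = f_node(agg, nodes[i], g)
    return new_nodes
```

Because the mean ignores the order and the number of incoming edges, the same `f_node` works for turbines with any number of upstream neighbors.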
GN Block – Node update steps

[Diagram: all node features of the input graph 𝒢 have been updated]

23
GN Block – Global feature update

[Diagram: the global features g of the input graph 𝒢 are updated to g′]

g′ = f(Ē′, N̄′, g; θ2)
Ē′ = mean of Edge′i,j over all edges (i, j)
N̄′ = mean of Node′i over all nodes i

24
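The global-feature update above can be sketched the same way: pool every updated edge and node with a mean, then apply the global-update network. Names are illustrative; `f_global` stands in for f(·; θ2).

```python
def update_global(nodes_updated, edges_updated, g, f_global):
    """GN-block global update: g' = f(mean of updated edges, mean of updated
    nodes, g; theta_2)."""
    e_bar = sum(edges_updated.values()) / len(edges_updated)
    n_bar = sum(nodes_updated.values()) / len(nodes_updated)
    return f_global(e_bar, n_bar, g)
```

The global feature is what a farm-level readout (e.g., total farm power) can be predicted from.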
Revisit Aggregation Method

Node′0 = f(Ē0; θ1)
Ē0 = mean over all incoming edges (i → 0) of concat(Edge′i,0, Node0, Nodei)

[Diagram: the node-update step on the input graph 𝒢, revisited]

Aggregation function: any function that obeys the ‘input-order invariance’ and
‘input-number invariance’ properties, e.g., mean, max, min, etc.

25
Weighted “__” ≈ Attention (in Deep Learning)

Figure source <Agile Amulet: Real-Time Salient Object Detection with Contextual Attention>

26
Consider weighted Aggregations

<Robot soccer> <Visualized weights>

Figure source <Left: https://www.youtube.com/watch?v=HHlN0TDgllE> , <Right: VAIN: Attentional Multi-agent Predictive Modeling>

27
How can we get the weights?

Learn to weight!

28
GN Block – Edge update steps Revisit

W4,1 = f(some possible inputs; θ3)
Edge′4,1 = W4,1 × f(Edge4,1, Node1, Node4, g; θ0)

[Diagram: input graph 𝒢]

Update edge features with f(Edge features, Receiver features, Sender features, g; θ0),
scaled by the learned weight W4,1.

29
Physics-induced Attention

J. Park and K. H. Law suggest the continuous deficit factor δu(d, r, α) as

δu(d, r) = 2α · (R0 / (R0 + κd))² · exp( −(r / (R0 + κd))² )

R0: rotor diameter
d: downstream wake distance
r: radial wake distance
α, κ: tunable parameters

Figure source <Cooperative wind turbine control for maximizing wind farm power using sequential convex programming by Jinkyoo Park, Kincho H.Law >

30
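The deficit factor is cheap to evaluate directly. A minimal sketch follows; the default values of α, κ, and R0 below are illustrative placeholders, not the paper's tuned parameters.

```python
import math

def wake_deficit(d, r, alpha=0.05, kappa=0.075, R0=40.0):
    """Continuous wake deficit delta_u(d, r) in the form suggested by Park & Law:
    2 * alpha * (R0 / (R0 + kappa*d))**2 * exp(-(r / (R0 + kappa*d))**2)."""
    spread = R0 + kappa * d  # wake width grows linearly with downstream distance
    return 2 * alpha * (R0 / spread) ** 2 * math.exp(-(r / spread) ** 2)
```

At the rotor (d = 0, r = 0) the deficit is 2α, and it decays both downstream and radially, which is what makes it usable as an edge-weighting factor.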
Physics-induced Attention

δu(d, r) = 2α · (R0 / (R0 + κd))² · exp( −(r / (R0 + κd))² )

δu(d, r) indicates how much a downstream turbine is affected
by the upstream turbines
→ a weighting factor W!

However, Park and Law tuned the parameters α, κ to the observed data.

Figure source <Cooperative wind turbine control for maximizing wind farm power using sequential convex programming by Jinkyoo Park, Kincho H.Law >

31
Physics-induced Attention

W4,1 = f(some possible inputs; θ3)
Edge′4,1 = W4,1 × f(Edge4,1, Node1, Node4, g; θ0)

[Diagram: input graph 𝒢]

f(some possible inputs; θ3) = f(r, d; α, κ, R0)
                            = 2α · (R0 / (R0 + κd))² · exp( −(r / (R0 + κd))² )

Let the neural network learn α, κ, R0!

32
Physics-induced Graph Neural Network On Wind Power Estimations

33
Graph Dense Layer

Graph Dense Layer: P̂0 = f(N′0; θ5)

[Diagram: the prediction network f(·; θ5) maps the updated node features
N′0…N′4 and global feature g′ to per-turbine power predictions P̂0…P̂4]

35
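The graph dense layer above amounts to mapping one shared prediction network over every node. A minimal sketch, with illustrative names; `f_pred` stands in for f(·; θ5).

```python
def graph_dense(nodes_updated, f_pred):
    """Graph dense layer: apply the shared prediction network f(.; theta_5)
    to each updated node feature to get a per-turbine power estimate."""
    return {i: f_pred(n) for i, n in nodes_updated.items()}
```

Because the network is shared across nodes, the layer works unchanged for farms with any number of turbines.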
Graph Dense Layer

P̂0 = f(N′0; θ5)

[Diagram: the shared prediction network is applied to node feature N′0]

36
Graph Dense Layer

P̂0 = f(N′0; θ5)

[Diagram: the shared prediction network is applied to every node, giving P̂0…P̂4]

37
How to train your PGNN

Loss: ‖P̂ − P‖²

[Diagram: predicted turbine powers P̂0…P̂4 are compared with the true powers P0…P4]

We use the mean-squared error as the loss function of PGNN.

39
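The loss above can be sketched as a plain per-turbine mean-squared error. This is an illustrative helper, not the authors' training code.

```python
def mse(pred, target):
    """Mean-squared error over per-turbine power predictions."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

In training, this scalar would be minimized with respect to all network parameters (θ0, θ1, θ2, θ3, θ5 and the physics parameters α, κ, R0).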
Lovely but Dreadful Exponential functions

f(x) = exp(x)

[Plot: exp(x) numerically under-flows for very negative x and over-flows for large positive x]

40
Simple approximation for exponential functions


exp(x) := Σ_{k=0}^{∞} x^k / k!
        ≈ Σ_{k=0}^{D} x^k / k!

We set D = 5.

41
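The truncated power series above can be sketched in a few lines, building each term x^k / k! incrementally so no factorial overflows; the function name is illustrative.

```python
def exp_series(x, D=5):
    """Degree-D truncated power series for exp(x) (the slides use D = 5),
    avoiding the over/under-flow of the true exponential for extreme inputs."""
    term, total = 1.0, 1.0
    for k in range(1, D + 1):
        term *= x / k  # term is now x**k / k!
        total += term
    return total
```

Near x = 0 the truncation error is tiny (on the order of x^(D+1) / (D+1)!), but it grows quickly for larger |x|, which is exactly the downside discussed next.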
Downside of the power-series approximation

The suggested approximation is (relatively) accurate only when |x| is small.

Question: “Why don’t you use a Taylor expansion?”
Answer: “You may encounter the exponential again!”

42
Scale-only normalization

Instead of using the raw downstream distance d and the radial wake distance r as inputs, use

d′ = d / σ(d) × max(0, s_d)        r′ = r / σ(r) × max(0, s_r)

where s_d, s_r are learnable parameters.

43
Dissect Scale-only normalization

Instead of using the raw downstream distance d and the radial wake distance r as inputs, use

d′ = d / σ(d) × max(0, s_d)

(1) Why not subtract the mean?
→ We want the scaled values to stay positive.
(2) What is max(0, s_d) for?
→ Since s_d is a learnable parameter, it could become negative without max(0, ·).
(3) How do you get σ(·)?
→ We employed an EWMA to estimate μ(·) and σ(·).
(4) Why multiply by max(0, s_d) at all?
→ If no scaling were best, the network can recover the original values
(the same intuition as Batch Normalization).
44
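Scale-only normalization can be sketched as below. This is a simplified, assumption-laden sketch: the class name is made up, `s` is a plain float rather than a gradient-trained parameter, the EWMA runs on scalars, and the momentum value is a placeholder.

```python
class ScaleOnlyNorm:
    """Scale-only normalization sketch: x' = x / sigma(x) * max(0, s), where
    sigma is an EWMA estimate of the standard deviation and s is a learnable
    scale. No mean subtraction, so positive inputs stay positive."""

    def __init__(self, s=1.0, momentum=0.99):
        self.s = s                    # learnable in a real implementation
        self.momentum = momentum
        self.mu, self.var = 0.0, 1.0  # EWMA running statistics

    def __call__(self, x):
        m = self.momentum
        self.mu = m * self.mu + (1 - m) * x
        self.var = m * self.var + (1 - m) * (x - self.mu) ** 2
        sigma = self.var ** 0.5
        return x / sigma * max(0.0, self.s)
```

Note how the max(0, s) clamp behaves: a negative learned scale simply zeroes the feature out instead of flipping its sign.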
Approximated weighting function

[Diagram: the downstream-wake distance d and the radial-wake distance r are scale-normalized,
then fed to the approximated weighting function f_w(·; θ3) to produce the weight w]
45
Training Procedure

[Diagram: sample a wind-farm layout with n ∈ {5, 10, 15, 20} turbines,
run power simulations with FLORIS under a sampled wind speed S and wind direction θ,
convert the result to the graph representation, encode it with the PGNN,
and minimize the MSE between P̂ and P]

Sample s ~ U(5.0 m/s, 15.0 m/s) and θ ~ U(0°, 360°).

46
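The environmental-condition sampling above can be sketched as follows; the function name is illustrative.

```python
import random

def sample_condition(rng=None):
    """Sample one training condition as in the procedure above:
    wind speed s ~ U(5.0, 15.0) m/s and wind direction theta ~ U(0 deg, 360 deg)."""
    rng = rng or random.Random()
    return rng.uniform(5.0, 15.0), rng.uniform(0.0, 360.0)
```

Sampling uniformly over speed and direction is what lets the generalization tests on the next slides probe unseen environmental conditions.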
Generalization Tests

[Diagram: varying wind directions θ, wind speeds S, and wind-farm layouts]

- Generalization over environmental factors: wind directions, wind speeds
- Generalization over wind-farm layouts
47
Generalization Over Environmental Factors
Wind speed = 8.0 m/s

[Plots: Error = 0.0172 (left), Error = 0.022 (right)]


48
Generalization Over Layouts

- Sample 20 wind-farm layouts and estimate the average estimation errors.
- Each layout contains 20 wind turbines.

49
Qualitative Analysis on Physics-induced Bias

W4,1 = f(inputs; θ3)
Edge′4,1 = W4,1 × f(Edge4,1, Node1, Node4, g; θ0)

DGNN: f(inputs; θ3) is another neural network.

PGNN: f(inputs; θ3) = f(r, d; α, κ, R0)
                    = 2α · (R0 / (R0 + κd))² · exp( −(r / (R0 + κd))² )

50
Qualitative Analysis on Physics-induced Bias

[Plot: training-data region vs. out-of-distribution region]

PGNN achieved an 11% smaller validation error than DGNN.

51
Case Study on Inferred Weights

[Figures: inferred weight values (left), ignored edges (right)]

52
Case Study on a Regularized Grid Layout

[Plots: Error = 0.0642 (left), Error = 0.0702 (right)]

53
Anyway the wind blows
Junyoung Park
SYSTEMS INTELLIGENCE Lab
Industrial and Systems Engineering (ISysE)

<https://wall.alphacoders.com/big.php?i=526859>
Normalizing powers

55
