State Estimation in Electric Power Systems Leveraging Graph Neural Networks

Ognjen Kundacina, The Institute for Artificial Intelligence Research and Development of Serbia, Novi Sad, Serbia, [email protected]
Mirsad Cosovic, Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina, [email protected]
Dejan Vukobratovic, Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia, [email protected]

Abstract—The goal of the state estimation (SE) algorithm is to estimate complex bus voltages as state variables based on the available set of measurements in the power system. Because phasor measurement units (PMUs) are increasingly being used in transmission power systems, there is a need for a fast SE solver that can take advantage of the high sampling rates of PMUs. This paper proposes training a graph neural network (GNN) to learn the estimates given the PMU voltage and current measurements as inputs, with the intent of obtaining fast and accurate predictions during the evaluation phase. The GNN is trained using synthetic datasets, created by randomly sampling sets of measurements in the power system and labelling them with solutions obtained using a linear SE with PMUs solver. The presented results display the accuracy of GNN predictions in various test scenarios and examine the sensitivity of the predictions to missing input data.

Index Terms—state estimation, graph neural networks, machine learning, power systems, real-time

I. INTRODUCTION

State estimation (SE), which estimates the set of power system state variables based on the available set of measurements, is an essential tool for power system monitoring and operation [1]. A fast state estimator is required to make full use of the high sampling rates of phasor measurement units (PMUs). In this paper, we propose training a graph neural network (GNN) for a regression problem on a dataset of SE inputs and outputs, to provide fast and accurate predictions during the evaluation phase. One of the main benefits of using GNNs in power systems instead of conventional deep learning methods is that the prediction model is not limited to training and test examples with a fixed power system topology. GNNs exploit the graph structure of the input data [2], [3], which results in a lower number of learning parameters and reduced memory requirements, and incorporates the connectivity information into the learning process. Furthermore, the inference phase of the trained GNN model can be distributed, since the prediction of the state variables of a node requires only the measurements in its K-hop neighbourhood.

This paper has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement number 856967.

II. RELATED RESEARCH AND CONTRIBUTIONS

Several lines of research suggest learning a computationally heavy algorithm's outputs using a deep learning model trained on a set of examples generated by the algorithm offline. In [4], a combination of recurrent and feed-forward neural networks is used to solve the power system SE problem, given the measurement data and the history of network voltages. An example of training a feed-forward neural network to initialise the network voltages for a Gauss-Newton distribution system SE solver is given in [5]. GNNs are beginning to be used for similar problems, as in [6], where power flows in the system are predicted based on power injection data labelled by a traditional power flow solver. In [7], the authors propose a combined model- and data-based approach in which GNNs are used for power system parameter and state estimation. Their model predicts power injections and consumptions in the nodes where voltages and phases are measured, but it includes neither branch measurements nor node measurement types other than voltage in the calculation. In [8], a GNN was trained to provide state variable initialisation for an existing SE solver by propagating simulated or measured voltages through the graph to learn the voltage labels from a historical dataset. However, that GNN also does not take into account measurement types other than node voltage measurements; these are handled in other parts of the algorithm.

In this paper, the proposed model operates on factor-graph-like structures, which enables trivial inclusion and exclusion of any type of measurement on the power system's buses and branches, by adding or removing the corresponding nodes in the graph. The trained model is tested thoroughly in various missing data scenarios, which include communication errors in the delivery of isolated phasor data or failures of complete PMUs, in which the power system becomes unobservable, and reports on the prediction quality are presented.

III. LINEAR STATE ESTIMATION WITH PMUS

The SE algorithm estimates the values of the state variables x based on the knowledge of the network topology and parameters, and the measured values obtained from measurement devices spread across the power system.
The power system network topology is described by the bus/branch model and can be represented using a graph G = (H, E), where the set of nodes H = {1, . . . , n} represents the set of buses, while the set of edges E ⊆ H × H represents the set of branches of the power network. The branches of the network are defined using the two-port π-model. More precisely, the branch (i, j) ∈ E between buses {i, j} ∈ H can be modelled using the complex expressions:

$$
\begin{bmatrix} I_{ij} \\ I_{ji} \end{bmatrix} =
\begin{bmatrix}
\frac{1}{\tau_{ij}^2}\left(y_{ij} + y_{s_{ij}}\right) & -\alpha_{ij}^{*}\, y_{ij} \\
-\alpha_{ij}\, y_{ij} & y_{ij} + y_{s_{ij}}
\end{bmatrix}
\begin{bmatrix} V_i \\ V_j \end{bmatrix}, \tag{1}
$$

where the parameter y_ij = g_ij + jb_ij represents the branch series admittance, and half of the total branch shunt admittance (i.e., charging admittance) is given as y_sij = jb_si. Further, the transformer complex ratio is defined as α_ij = (1/τ_ij) e^{−jϕ_ij}, where τ_ij is the transformer tap ratio magnitude, while ϕ_ij is the transformer phase shift angle. It is important to remember that the transformer is always located at the bus i of the branch described by (1). Using the branch model defined by (1), if τ_ij = 1 and ϕ_ij = 0 the system of equations describes a line. In-phase transformers are defined if ϕ_ij = 0 and y_sij = 0, while phase-shifting transformers are obtained if y_sij = 0. The complex expressions I_ij and I_ji define the branch currents from bus i to bus j, and from bus j to bus i, respectively. The complex bus voltages at buses {i, j} are given as V_i and V_j, respectively.
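To make the branch model concrete, the following minimal sketch evaluates (1) for a single branch. It is an illustration under our own naming (branch_currents, g, b, b_shunt, tap, shift), with placeholder numerical values rather than data from the paper.

```python
import numpy as np

def branch_currents(V_i, V_j, g, b, b_shunt, tap=1.0, shift=0.0):
    """Evaluate the unified branch model (1) for a single branch.

    V_i, V_j   -- complex bus voltages at buses i and j
    g, b       -- series admittance y_ij = g + jb
    b_shunt    -- half of the total branch shunt susceptance, y_s = j*b_shunt
    tap, shift -- transformer tap ratio magnitude and phase shift angle;
                  tap=1 and shift=0 reduce the model to a simple line
    """
    y = g + 1j * b
    y_s = 1j * b_shunt
    alpha = (1.0 / tap) * np.exp(-1j * shift)  # transformer complex ratio

    # 2x2 admittance matrix of the two-port pi-model, as in (1)
    Y = np.array([
        [(y + y_s) / tap**2, -np.conj(alpha) * y],
        [-alpha * y,         y + y_s],
    ])
    I_ij, I_ji = Y @ np.array([V_i, V_j])
    return I_ij, I_ji

# Placeholder values for a line between two buses
I_ij, I_ji = branch_currents(1.0 + 0.0j, 0.98 - 0.02j, g=5.0, b=-15.0, b_shunt=0.02)
```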
PMUs measure complex bus voltages and complex branch currents. More precisely, a phasor measurement provided by a PMU is formed by a magnitude, equal to the root mean square value of the signal, and a phase angle [9, Sec. 5.6]. A PMU placed at a bus measures the bus voltage phasor and the current phasors along all branches incident to the bus [10]. Thus, the PMU outputs phasor measurements in polar coordinates. In addition, PMU outputs can be observed in rectangular coordinates, with real and imaginary parts of the bus voltage and branch current phasors. In that case, the vector of state variables x can be given in rectangular coordinates x ≡ [V_re, V_im]^T, where we observe the real and imaginary components of the bus voltages as state variables:

$$
V_{\mathrm{re}} = \left[\Re(V_1), \ldots, \Re(V_n)\right], \qquad
V_{\mathrm{im}} = \left[\Im(V_1), \ldots, \Im(V_n)\right]. \tag{2}
$$

Using rectangular coordinates, we obtain a linear system of equations defined by the voltage and current measurements obtained from PMUs. The measurement functions corresponding to the bus voltage phasor measurement on the bus i ∈ H are simply equal to:

$$
f_{\Re\{V_i\}}(\cdot) = \Re\{V_i\}, \qquad
f_{\Im\{V_i\}}(\cdot) = \Im\{V_i\}. \tag{3}
$$

According to the unified branch model (1), the functions corresponding to a branch current phasor measurement vary depending on where the PMU is located. If the PMU is placed at the bus i, where the transformer is located, the functions are given as:

$$
\begin{aligned}
f_{\Re\{I_{ij}\}}(\cdot) &= q\,\Re\{V_i\} - w\,\Im\{V_i\} - (r - t)\,\Re\{V_j\} + (u + p)\,\Im\{V_j\} \\
f_{\Im\{I_{ij}\}}(\cdot) &= w\,\Re\{V_i\} + q\,\Im\{V_i\} - (u + p)\,\Re\{V_j\} - (r - t)\,\Im\{V_j\},
\end{aligned} \tag{4}
$$

where q = g_ij/τ_ij², w = (b_ij + b_si)/τ_ij², r = (g_ij/τ_ij) cos ϕ_ij, t = (b_ij/τ_ij) sin ϕ_ij, u = (b_ij/τ_ij) cos ϕ_ij, and p = (g_ij/τ_ij) sin ϕ_ij. In the case where the PMU is installed at the bus j, at the opposite side of the transformer, the measurement functions are:

$$
\begin{aligned}
f_{\Re\{I_{ji}\}}(\cdot) &= z\,\Re\{V_j\} - e\,\Im\{V_j\} - (r + t)\,\Re\{V_i\} + (u - p)\,\Im\{V_i\} \\
f_{\Im\{I_{ji}\}}(\cdot) &= e\,\Re\{V_j\} + z\,\Im\{V_j\} - (u - p)\,\Re\{V_i\} - (r + t)\,\Im\{V_i\},
\end{aligned} \tag{5}
$$

where z = g_ij and e = b_ij + b_si.
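Since (3)-(5) are linear in the state, each measurement contributes rows of a coefficient matrix acting on x = [V_re, V_im]. The sketch below assembles such rows; the state ordering and all names are our assumptions, not code from the paper.

```python
import numpy as np

def voltage_rows(n, i):
    """Two measurement rows for a bus voltage phasor at bus i, following (3)."""
    row_re = np.zeros(2 * n); row_re[i] = 1.0
    row_im = np.zeros(2 * n); row_im[n + i] = 1.0
    return row_re, row_im

def current_rows(n, i, j, g, b, b_sh, tap=1.0, shift=0.0):
    """Two measurement rows for a branch current phasor I_ij measured at
    bus i, following (4); assumed state ordering x = [V_re (n), V_im (n)]."""
    q = g / tap**2
    w = (b + b_sh) / tap**2
    r = (g / tap) * np.cos(shift)
    t = (b / tap) * np.sin(shift)
    u = (b / tap) * np.cos(shift)
    p = (g / tap) * np.sin(shift)
    row_re = np.zeros(2 * n)
    row_im = np.zeros(2 * n)
    # f_Re{Iij} = q Re{Vi} - w Im{Vi} - (r - t) Re{Vj} + (u + p) Im{Vj}
    row_re[[i, n + i, j, n + j]] = [q, -w, -(r - t), u + p]
    # f_Im{Iij} = w Re{Vi} + q Im{Vi} - (u + p) Re{Vj} - (r - t) Im{Vj}
    row_im[[i, n + i, j, n + j]] = [w, q, -(u + p), -(r - t)]
    return row_re, row_im

# One voltage and one current phasor on a two-bus system (n = 2)
n = 2
H = np.vstack([*voltage_rows(n, 0),
               *current_rows(n, 0, 1, g=5.0, b=-15.0, b_sh=0.02)])
```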
Recall that the presented model represents a system of linear equations, whose solution can be found by solving the linear weighted least-squares (WLS) problem:

$$
\left(\mathbf{H}^{\mathrm{T}} \mathbf{R}^{-1} \mathbf{H}\right) \mathbf{x} = \mathbf{H}^{\mathrm{T}} \mathbf{R}^{-1} \mathbf{z}, \tag{6}
$$

where the Jacobian matrix H ∈ R^{k×2n} is defined according to the measurement functions, k is the total number of linear equations, the covariance matrix is given as R ∈ R^{k×k}, and the vector z ∈ R^k contains the measurement values given in the rectangular coordinate system.
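As a minimal illustration of (6), the sketch below solves the WLS problem with dense linear algebra; H, R, and z here are toy placeholders, not the IEEE 30-bus quantities used later in the paper.

```python
import numpy as np

def solve_linear_wls(H, R, z):
    """Solve (H^T R^-1 H) x = H^T R^-1 z for the state vector x.

    H -- k x 2n measurement Jacobian (real-valued)
    R -- k x k measurement covariance matrix
    z -- k-dimensional vector of rectangular-coordinate measurements
    """
    R_inv = np.linalg.inv(R)
    A = H.T @ R_inv @ H           # gain matrix
    b = H.T @ R_inv @ z
    return np.linalg.solve(A, b)  # x = [V_re, V_im]

# Toy example: 3 measurements of a 2-variable state (n = 1 bus)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.diag([1e-4, 1e-4, 2e-4])   # measurement variances on the diagonal
z = np.array([1.01, -0.02, 0.99])
x_wls = solve_linear_wls(H, R, z)
```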
IV. GRAPH NEURAL NETWORK BASED STATE ESTIMATION

GNNs are a suitable tool for learning over graph-structured data, which is processed by following a recursive neighbourhood aggregation scheme, also known as the message passing procedure [2]. This results in an s-dimensional vector embedding h ∈ R^s of each node, which captures information about the node's position in the graph, as well as its own input features and those of the neighbouring nodes. The GNN layer, which implements one iteration of the recursive neighbourhood aggregation, consists of several functions that can be represented using a trainable set of parameters, usually in the form of feed-forward neural networks. These include the message function between two node embeddings, the aggregation function, which defines how incoming messages are combined, and the update function, which updates the node embedding based on the aggregated messages and the current node embedding value. The recursive neighbourhood aggregation scheme is repeated a predefined number of times K, also known as the number of GNN layers, where the initial node embedding values are equal to the l-dimensional node input features, linearly transformed to the initial node embedding h^0 ∈ R^s. The outputs of this process are the final node embeddings, which can be used for classification or regression over the nodes, edges, or the whole graph, or directly for unsupervised node or edge analysis of the graph. In the case of supervised learning over the nodes, the final embeddings are passed through an additional nonlinear function, creating the outputs that represent the predictions of the GNN model for the set of inputs fed into the nodes and their neighbours. The GNN model is trained by backpropagating the loss function between the labels and the predictions over the whole computational graph.
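The following sketch illustrates one round of the message passing described above, using a sum aggregator and single linear message and update functions with fixed random weights; the paper's actual layers, described below, instead use attention-based aggregation and gated recurrent unit updates.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 4  # node embedding size

# Fixed random stand-ins for the trainable weights of the message and
# update functions (in a real model these would be learned, per layer).
W_msg = rng.normal(size=(2 * s, s))
W_upd = rng.normal(size=(2 * s, s))

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer(h, edges):
    """One recursive neighbourhood aggregation step (one GNN layer).

    h     -- (num_nodes, s) array of node embeddings
    edges -- directed (src, dst) pairs; a message flows along each edge
    """
    agg = np.zeros_like(h)
    for src, dst in edges:
        # message function: depends on both endpoint embeddings
        msg = relu(np.concatenate([h[dst], h[src]]) @ W_msg)
        agg[dst] += msg  # sum aggregation of incoming messages
    # update function: combine previous embedding with aggregated messages
    return relu(np.concatenate([h, agg], axis=1) @ W_upd)

# K = 2 layers over a 3-node path graph 0 - 1 - 2
h = rng.normal(size=(3, s))  # initial embeddings (transformed input features)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
for _ in range(2):
    h = gnn_layer(h, edges)  # final embeddings after K iterations
```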
Inspired by recent work [11], in which the power system SE is modelled as a probabilistic graphical model, the initial version of the graph over which the GNN operates has the power system's factor graph topology: a bipartite graph consisting of factor and variable nodes and the edges between them. Variable nodes, two per power system bus, learn the s-dimensional representation of the state variables x, i.e., the real and imaginary parts of the bus voltages, ℜ(V_i) and ℑ(V_i), defined in (2). Factor nodes, two per measurement phasor, serve as inputs for the measurement values and variances, also given in rectangular coordinates, and their embedded values are sent to the variable nodes via GNN message passing. Feature augmentation using one-hot index encoding is performed for variable nodes only, to help the GNN model better represent the neighbourhood structure of a node, since variable nodes have no additional input features. The nodes connected by full lines in Fig. 1 represent a simple example of the factor graph for a two-bus power system, with a PMU on the first bus, containing one voltage and one current phasor measurement. Compared to approaches like [7], in which GNN nodes correspond to state variables only, we find the factor-graph-like GNN topology convenient for incorporating measurements into the GNN, because factor nodes can be added to or removed from any place in the graph, which makes it possible to simulate the inclusion of various types and quantities of measurements both on power system buses and branches.

Fig. 1: Example of the factor graph (full-line edges) and the augmented factor graph for a two-bus power system. Variable nodes are depicted as circles, and factor nodes as squares, coloured differently to distinguish between measurement types.

We extend this approach by augmenting the graph topology, connecting the variable nodes in the 2-hop neighbourhood, following the idea that the graph should stay connected even when simulating measurement loss by removing factor nodes, enabling messages to still be propagated in the whole K-hop neighbourhood of a variable node. In other words, a factor node corresponding to a branch current measurement can be removed while still preserving the physical connection that exists between the power system buses. This requires adding an additional set of trainable parameters for the variable-to-variable message function. We will still use the terms factor and variable nodes, although the second version of the graph over which the GNN operates, displayed in Fig. 1 with additional edges depicted with dashed lines, is not a factor graph, since it is no longer bipartite.
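For the two-bus example of Fig. 1, the graph construction just described might be sketched as follows; all node names, feature values, and edge-building conventions are illustrative assumptions on our part.

```python
import numpy as np

# Variable nodes: two per bus (real and imaginary part of the bus voltage).
variable_nodes = ["Re(V1)", "Im(V1)", "Re(V2)", "Im(V2)"]

# Factor nodes: two per measurement phasor, carrying the measurement value
# and variance (rectangular coordinates) as input features. Values here are
# placeholders.
factor_nodes = {
    "f_Re(V1)":  {"value": 0.99,  "variance": 1e-4},
    "f_Im(V1)":  {"value": -0.01, "variance": 1e-4},
    "f_Re(I12)": {"value": 0.35,  "variance": 1e-4},
    "f_Im(I12)": {"value": -0.12, "variance": 1e-4},
}

# Factor-to-variable edges: voltage factors connect to the variable nodes of
# their own bus; branch current factors connect to the variable nodes of
# both incident buses.
factor_edges = []
for f in ("f_Re(V1)", "f_Im(V1)"):
    factor_edges += [(f, "Re(V1)"), (f, "Im(V1)")]
for f in ("f_Re(I12)", "f_Im(I12)"):
    factor_edges += [(f, v) for v in variable_nodes]

# Augmentation: connect variable nodes within the 2-hop neighbourhood, so
# removing a current-measurement factor node leaves the graph connected.
variable_edges = [(u, v) for u in variable_nodes
                  for v in variable_nodes if u != v]

# Variable nodes carry no measurements; one-hot index encoding serves as
# their input feature instead.
variable_features = np.eye(len(variable_nodes))
```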
Since our GNN operates on a heterogeneous graph, in this work we employ two different types of GNN layers: Layer^f(·|θ^{Layer^f}): ℝ^{deg(f)+1} → ℝ^s for aggregation in factor nodes f, and Layer^v(·|θ^{Layer^v}): ℝ^{deg(v)+1} → ℝ^s for variable nodes v, so that their message, aggregation, and update functions are learned using separate sets of trainable parameters, denoted all together as θ^{Layer^f} and θ^{Layer^v}. Additionally, we use a different set of parameters for variable-to-variable and factor-to-variable node messages in the Layer^v(·|θ^{Layer^v}) layer. In both GNN layers, we use two-layer feed-forward neural networks as message functions, gated recurrent units as update functions, and the attention mechanism [12] in the aggregation function, through which the importance factor of each neighbour is learned. Furthermore, we apply a two-layer neural network Pred(·|θ^{Pred}): ℝ^s → ℝ on top of the final node embeddings h^K of variable nodes only, to create the state variable predictions x^pred. For the factor and variable nodes with indices f and v, the neighbourhood aggregation and state variable prediction can be described as:

$$
\begin{aligned}
\mathbf{h}_v^{k} &= \mathrm{Layer}^v\!\left(\{\mathbf{h}_i^{k-1} \mid i \in \{v\} \cup \mathcal{N}_v\} \,\middle|\, \theta^{\mathrm{Layer}^v}\right) \\
\mathbf{h}_f^{k} &= \mathrm{Layer}^f\!\left(\{\mathbf{h}_i^{k-1} \mid i \in \{f\} \cup \mathcal{N}_f\} \,\middle|\, \theta^{\mathrm{Layer}^f}\right), \quad k \in \{1, \ldots, K\} \\
\mathbf{x}_v^{\mathrm{pred}} &= \mathrm{Pred}\!\left(\mathbf{h}_v^{K} \,\middle|\, \theta^{\mathrm{Pred}}\right).
\end{aligned} \tag{7}
$$

All of the GNN trainable parameters θ are updated by applying gradient descent (i.e., backpropagation) to a loss function calculated over a whole mini-batch of graphs, as the mean squared difference between the state variable predictions and the labels x^label:

$$
L(\theta) = \frac{1}{2nB} \sum_{i=1}^{2nB} \left(x_i^{\mathrm{pred}} - x_i^{\mathrm{label}}\right)^2,
\qquad \theta = \theta^{\mathrm{Layer}^f} \cup \theta^{\mathrm{Layer}^v} \cup \theta^{\mathrm{Pred}}, \tag{8}
$$

where 2n is the total number of variable nodes in a graph and B is the number of graphs in the mini-batch.
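A minimal sketch of the mini-batch loss (8), with predictions and labels stacked per graph; in the actual training pipeline the gradient step over θ would be handled by the framework's automatic differentiation.

```python
import numpy as np

def mini_batch_loss(x_pred, x_label):
    """MSE loss (8) over a mini-batch of B graphs, each with 2n variable nodes.

    x_pred, x_label -- arrays of shape (B, 2n)
    """
    B, two_n = x_pred.shape
    return np.sum((x_pred - x_label) ** 2) / (two_n * B)

# Toy mini-batch: B = 2 graphs, n = 3 buses (2n = 6 variable nodes per graph)
rng = np.random.default_rng(1)
labels = rng.normal(loc=1.0, scale=0.05, size=(2, 6))
preds = labels + rng.normal(scale=0.01, size=(2, 6))
loss = mini_batch_loss(preds, labels)
```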
The inference process using the trained model can be computationally and geographically distributed, as long as all of the measurements within the K-hop neighbourhood in the augmented factor graph are fed into the computational module that generates the predictions. For arbitrary K, the PMUs required for the inference will be physically located within the ⌈K/2⌉-hop neighbourhood of the power system bus.
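To illustrate this locality argument, the sketch below collects the K-hop neighbourhood of a variable node with a breadth-first search on a toy augmented factor graph; only the measurements attached to these nodes are needed for that node's prediction. The graph and naming are our own toy example.

```python
from collections import deque

def k_hop_neighbourhood(adj, start, k):
    """Collect all nodes within k hops of `start` (breadth-first search)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand past k hops
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

# Toy augmented factor graph: variable nodes v*, factor nodes f*.
adj = {
    "v1": ["f1", "v2"], "v2": ["v1", "v3", "f2"], "v3": ["v2"],
    "f1": ["v1"], "f2": ["v2"],
}
# All inputs needed to predict the state at v1 with a K = 4 layer model:
needed = k_hop_neighbourhood(adj, "v1", k=4)
```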
V. NUMERICAL RESULTS

In this section, we describe the training setup used for both GNN models described in the previous section, and assess the prediction quality of the trained models in various test scenarios. The training, validation, and test sets are obtained using the WLS solutions of the system described in (6) for various measurement samples. The number and positions of the PMUs are fixed and determined using the optimal PMU placement algorithm [13], which finds the smallest set of PMU measurements that makes the system observable. The algorithm resulted in a total of 50 measurement phasors, 10 of which are voltage phasors and the rest current phasors. The IEEE 30-bus test case is the foundation for all of the datasets used in our experiments; it is enriched with measurements obtained by adding Gaussian noise to the exact power flow solutions, with each power flow calculation executed with a different load profile.

The training set consists of 10000 samples, while the validation and test sets both contain 100 samples. The GNN model for factor graphs, as well as the model for augmented factor graphs, were both trained for 100 epochs, which was sufficient to reach convergence for both the training and validation loss functions. We used the IGNNITION framework [14] for building and utilising the GNN models, with the hyperparameters presented in Table I, the first three of which were obtained by grid search hyperparameter optimisation using the Tune tool [15].

TABLE I: List of GNN hyperparameters.

Hyperparameter                Value
Node embedding size s         64
Learning rate                 4 × 10⁻⁴
Minibatch size B              32
Number of GNN layers K        4
Activation functions          ReLU
Gradient clipping value       0.5
Optimizer                     Adam
Batch normalization type      Mean
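The dataset generation described above can be sketched as follows: Gaussian noise is added to exact values, and the WLS solution (6) provides the label. The matrices below are tiny stand-ins for the IEEE 30-bus Jacobian and covariance of the 50 fixed PMU phasors, and the noise level is an assumed placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for the real setup: in the paper, H and R describe the 50 fixed
# PMU measurement phasors on the IEEE 30-bus system, and exact values come
# from a power flow solution per load profile.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.diag([1e-4, 1e-4, 2e-4])
exact_z = np.array([1.00, -0.02, 0.98])  # pretend exact power-flow values

def make_sample(sigma=1e-2):
    """One (noisy measurements, WLS label) pair for the training set."""
    z = exact_z + rng.normal(scale=sigma, size=exact_z.shape)  # Gaussian noise
    R_inv = np.linalg.inv(R)
    x_label = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ z)  # eq. (6)
    return z, x_label

dataset = [make_sample() for _ in range(10000)]  # training set of 10000 samples
```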
To assess the quality of the proposed GNN model in unobservable areas, we test the trained models by excluding various numbers of measurement phasors from the previously used test samples, making the system of equations that describes the SE problem underdetermined. Excluding a measurement phasor from a test sample is realised by removing its real and imaginary parts from the input data, which is equivalent to removing two factor nodes from the graph on which the GNN operates. Using the previously used 100-sample test set, we create 49 additional test sets by randomly removing between 1 and 49 measurement phasors from each sample, while preserving the same labels, obtained as SE solutions of the system with all of the measurements present. Fig. 2 summarises the quality of predictions of both GNN models on these test sets. For the test set with no measurements excluded, the average mean square error (MSE) between the predictions and the labels for the GNN models operating on factor graphs and on augmented factor graphs equals 1.056×10⁻⁵ and 1.051×10⁻⁵, respectively, while for the test set with 49 measurement phasors excluded, the corresponding values are 8.2235×10⁻² and 3.193×10⁻². Both models displayed very low MSE on test examples with no excluded measurements, but the second model performed better on test sets with excluded measurements. This confirms the assumption that additional connections between variable nodes in the graph preserve information exchange when factor nodes are removed. Therefore, we analyse the second model in detail in the rest of the paper.

Fig. 2: Average MSEs of test sets created by randomly excluding measurement phasors.
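Creating the additional test sets might look as follows in sketch form: for each sample, a random subset of phasors is dropped by removing both of their factor nodes from the GNN input, while the labels are kept. The data layout is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def exclude_phasors(factor_node_ids, n_excluded):
    """Randomly exclude n_excluded measurement phasors from one test sample.

    factor_node_ids -- list of (re_node, im_node) pairs, one per phasor;
    removing a phasor removes both of its factor nodes from the GNN input.
    """
    keep = rng.choice(len(factor_node_ids),
                      size=len(factor_node_ids) - n_excluded, replace=False)
    return [factor_node_ids[i] for i in sorted(keep)]

# 50 phasors -> factor node id pairs (0,1), (2,3), ...; labels stay unchanged
phasor_pairs = [(2 * i, 2 * i + 1) for i in range(50)]
test_sets = {k: [exclude_phasors(phasor_pairs, k) for _ in range(100)]
             for k in range(1, 50)}  # the 49 additional test sets
```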
A. Analysis of results for the GNN model operating on augmented factor graphs

Average MSEs and Pearson's correlations between the predictions and the labels over the whole test sets with various numbers of excluded measurement phasors are presented in Table II. To further depict the quality of the trained model, for one of the test samples, in Fig. 3 we present the predictions and the labels for each of the variable nodes, where the upper plot displays the results for the real parts of the complex node voltages, while the lower plot displays the imaginary parts. Fig. 3 also displays the minimal and the maximal value of the state variables for each node, taken over every sample in the training, validation, and test sets. The presented results indicate that GNNs can be used as accurate SE approximators when trained on a representative dataset.

TABLE II: Average MSEs and Pearson's correlations for test sets with various numbers of randomly excluded measurement phasors.

Excluded phasors    Pearson's correlation    MSE
0                   0.9999948                1.0512 × 10⁻⁵
2                   0.9986435                1.2259 × 10⁻⁵
10                  0.9947819                6.8697 × 10⁻³
25                  0.9880212                2.3002 × 10⁻²
49                  0.9697979                3.1927 × 10⁻²

Fig. 3: Predictions and labels for one test example with no measurement phasors removed.

Fig. 4 and Fig. 5 depict predictions and labels for the model evaluated on the same test example with 2 and 49 measurement phasors excluded, respectively. We can observe that GNN predictions are a decent fit for many of the node labels when small fractions of all PMU measurements are not delivered to the state estimator, demonstrating robustness in these unobservable scenarios. In the unlikely scenario of only one measurement phasor left in the power system, the proposed model generates predictions within the depicted bounds for most of the variable nodes, although the GNN inputs in this case differ significantly from the samples on which the proposed model was trained.

Fig. 4: Predictions and labels for one test example with two measurement phasors removed.

To further analyse the robustness of the proposed model, we observe the predictions for the scenario in which two neighbouring PMUs fail to deliver measurements to the state estimator, hence all 8 measurement phasors associated with the removed PMUs are excluded from the GNN inputs. The average Pearson's correlation and MSE for the test set of 100 samples, created by removing these measurements from the original test set used throughout this section, equal 0.9985532 and 1.3815×10⁻³. Predictions and labels per variable node index for one sample are shown in Fig. 6, in which vertical dashed lines indicate the indices of the variable nodes within the 1-hop neighbourhood of the removed PMUs. We stress that significant deviations from the labels occur for the neighbouring nodes only, with no greater effect on the predictions for the remaining nodes, making the proposed model a suitable tool for PMU failure scenarios.
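Our reading of how Tables II and III aggregate results is sketched below: the MSE and Pearson's correlation coefficient are computed between the concatenated predictions and labels of a whole test set. This is an assumption about the aggregation, not code from the paper.

```python
import numpy as np

def test_set_metrics(preds, labels):
    """Average MSE and Pearson's correlation over a whole test set.

    preds, labels -- arrays of shape (num_samples, 2n), flattened before
    computing a single MSE and correlation coefficient for the set.
    """
    p, l = preds.ravel(), labels.ravel()
    mse = np.mean((p - l) ** 2)
    pearson = np.corrcoef(p, l)[0, 1]
    return mse, pearson

# Synthetic stand-in data: 100 samples, 2n = 60 variable nodes (30-bus case)
rng = np.random.default_rng(3)
labels = rng.normal(loc=1.0, scale=0.05, size=(100, 60))
preds = labels + rng.normal(scale=0.003, size=labels.shape)
mse, rho = test_set_metrics(preds, labels)
```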
B. Sample efficiency analysis

To demonstrate the sample efficiency of the proposed method, it is trained on three training sets with sizes of 100, 1000, and 10000, for which the validation set losses converged after 10000, 1000, and 100 epochs, respectively. All three trained models are tested on the same test set of size 100, which contains no excluded measurements. The average Pearson's correlation and MSE between the labels and the predicted bus voltages over the entire test set are shown in Table III. It can be seen that the models trained with 1000 and 10000 samples produce results of similar quality during the test phase, whereas there is a more noticeable deterioration of results for the model trained with 100 samples. We assume that a further exponential increase in training set size would not be followed by significant improvements in the displayed performance indicators.

TABLE III: Test set results for different training set sizes.

Training set size    Pearson's correlation    MSE
100                  0.9999642                1.5329 × 10⁻⁴
1000                 0.9999909                1.4891 × 10⁻⁵
10000                0.9999948                1.0512 × 10⁻⁵

Fig. 5: Predictions and labels for one test example with 49 measurement phasors removed.

Fig. 6: Predictions and labels for one test example with phasors from two neighbouring PMUs removed.
VI. CONCLUSIONS

In this paper, we present a study on the possibilities of using GNNs as fast solvers for linear SE with PMUs. We propose a model with a graph attention network based architecture, which operates on heterogeneous graphs containing variable nodes, which output the predictions based on the WLS SE solutions, and factor nodes, which take PMU voltage and current measurements and variances as inputs and propagate them in the local neighbourhood. Evaluating the trained model on unseen data samples confirms that the proposed GNN approach can be used as a very accurate approximator of the WLS SE solutions. By testing the model on scenarios in which individual phasor measurements or whole PMUs fail to deliver measurement data to the proposed SE solver, we have found that adding variable-to-variable node connections in the training and test graphs significantly improves the predictions in cases when the system of equations defining the SE problem becomes underdetermined.
REFERENCES

[1] A. Monticelli, "Electric power system state estimation," Proceedings of the IEEE, vol. 88, no. 2, pp. 262–282, 2000.
[2] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, "Neural message passing for quantum chemistry," in Proceedings of the 34th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, D. Precup and Y. W. Teh, Eds., vol. 70. PMLR, 2017, pp. 1263–1272.
[3] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, "Geometric deep learning: Going beyond Euclidean data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, 2017.
[4] L. Zhang, G. Wang, and G. B. Giannakis, "Real-time power system state estimation and forecasting via deep unrolled neural networks," IEEE Transactions on Signal Processing, vol. 67, no. 15, pp. 4069–4077, 2019.
[5] A. S. Zamzam, X. Fu, and N. D. Sidiropoulos, "Data-driven learning-based optimization for distribution system state estimation," IEEE Transactions on Power Systems, vol. 34, no. 6, pp. 4796–4805, 2019.
[6] B. Donon, B. Donnot, I. Guyon, and A. Marot, "Graph neural solver for power systems," in Proc. IJCNN, 2019, pp. 1–8.
[7] L. Pagnier and M. Chertkov, "Physics-informed graphical neural network for parameter & state estimations in power systems," 2021.
[8] Q. Yang, A. Sadeghi, G. Wang, G. B. Giannakis, and J. Sun, "Robust PSSE using graph neural networks for data-driven and topology-aware priors," arXiv, vol. abs/2003.01667, 2020.
[9] J. De La Ree, V. Centeno, J. S. Thorp, and A. G. Phadke, "Synchronized phasor measurement applications in power systems," IEEE Transactions on Smart Grid, vol. 1, no. 1, pp. 20–27, 2010.
[10] A. Gomez-Exposito, A. Abur, P. Rousseaux, A. de la Villa Jaen, and C. Gomez-Quiles, "On the use of PMUs in power system state estimation," in Proceedings of the 17th PSCC, 2011.
[11] M. Cosovic and D. Vukobratovic, "Distributed Gauss–Newton method for state estimation using belief propagation," IEEE Transactions on Power Systems, vol. 34, no. 1, pp. 648–658, 2019.
[12] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph attention networks," in International Conference on Learning Representations, 2018.
[13] B. Gou, "Optimal placement of PMUs by integer linear programming," IEEE Transactions on Power Systems, pp. 1525–1526, 2008.
[14] D. Pujol-Perich, J. Suárez-Varela, M. Ferriol, S. Xiao, B. Wu, A. Cabellos-Aparicio, and P. Barlet-Ros, "IGNNITION: Bridging the gap between graph neural networks and networking systems," 2021.
[15] R. Liaw, E. Liang, R. Nishihara, P. Moritz, J. E. Gonzalez, and I. Stoica, "Tune: A research platform for distributed model selection and training," arXiv preprint arXiv:1807.05118, 2018.