Adversarially Regularized Graph Autoencoder for Graph Embedding
Shirui Pan1∗, Ruiqi Hu1∗, Guodong Long1, Jing Jiang1, Lina Yao2, Chengqi Zhang1
1 Centre for Artificial Intelligence, FEIT, University of Technology Sydney, Australia
2 School of Computer Science and Engineering, University of New South Wales, Australia
Figure 1: The architecture of the adversarially regularized graph autoencoder (ARGA). The upper tier is a graph convolutional autoencoder that reconstructs the graph A from an embedding Z, which is generated by an encoder that exploits the graph structure A and the node content matrix X. The lower tier is an adversarial network trained to discriminate whether a sample comes from the embedding or from a prior distribution. The adversarially regularized variational graph autoencoder (ARVGA) is identical to ARGA except that it employs a variational graph autoencoder in the upper tier (see Algorithm 1 for details).
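To make the two-tier training concrete, the sketch below shows one adversarial regularization step in PyTorch. It is a minimal illustration under our own assumptions, not the authors' released implementation: the Discriminator module, the layer sizes, and the standard-normal prior are choices made here for exposition, and `encoder` stands for any module mapping (A, X) to Z, such as the graph convolutional encoder sketched later in this section. The reconstruction loss of the upper tier is omitted to keep the focus on the adversarial regularizer.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """MLP that scores whether an embedding was drawn from the prior (real) or produced by the encoder (fake)."""
    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # raw logit; the sigmoid is folded into the loss
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def adversarial_step(encoder, disc, A_norm, X, opt_enc, opt_disc, embed_dim):
    """One illustrative ARGA-style step: update the discriminator, then regularize the encoder."""
    bce = nn.BCEWithLogitsLoss()
    n = X.size(0)

    # Discriminator update: samples from the prior p(z) = N(0, I) are labeled real (1),
    # embeddings produced by the encoder are labeled fake (0).
    z_fake = encoder(A_norm, X).detach()
    z_real = torch.randn(n, embed_dim)
    loss_d = bce(disc(z_real), torch.ones(n, 1)) + bce(disc(z_fake), torch.zeros(n, 1))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # Encoder (generator) update: push the embedding distribution toward the prior
    # by making the discriminator label the embeddings as real.
    z = encoder(A_norm, X)
    loss_g = bce(disc(z), torch.ones(n, 1))
    opt_enc.zero_grad()
    loss_g.backward()
    opt_enc.step()
    return loss_d.item(), loss_g.item()
```

In a full training loop this step would be interleaved with minimizing the reconstruction loss on A (and, for ARVGA, the variational objective), as Algorithm 1 describes.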
the content features associated with each node v_i. Given a graph G, our purpose is to map the nodes v_i ∈ V to low-dimensional vectors z_i ∈ R^d, formally: f : (A, X) → Z, where z_i^⊤ is the i-th row of the matrix Z ∈ R^{n×d}, n is the number of nodes, and d is the dimension of the embedding. We take Z as the embedding matrix; the embeddings should preserve both the topological structure A and the content information X.

Graph Convolutional Encoder Model G(X, A). To represent both the graph structure A and the node content X in a unified framework, we develop a variant of the graph convolutional network (GCN) [Kipf and Welling, 2016a] as the graph encoder. The GCN extends the operation of convolution to graph data in the spectral domain and learns a layer-wise transformation through a spectral convolution function f(Z^(l), A | W^(l)):

Z^(l+1) = f(Z^(l), A | W^(l)) = φ( D~^{-1/2} (A + I) D~^{-1/2} Z^(l) W^(l) ),

where Z^(l) is the input to the l-th layer (with Z^(0) = X), W^(l) is the layer's weight matrix, D~ is the degree matrix of A + I, and φ(·) is an activation function.
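As a concrete reading of this layer-wise rule, the following PyTorch sketch builds a two-layer graph convolutional encoder that maps (A, X) to the embedding matrix Z. It is an illustrative reimplementation under common GCN conventions, not the authors' code; the dense adjacency handling, the layer sizes, and the linear second layer are assumptions made for brevity.

```python
import torch
import torch.nn as nn

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + torch.eye(A.size(0))
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
    D_inv_sqrt = torch.diag(d_inv_sqrt)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

class GCNEncoder(nn.Module):
    """Two-layer spectral convolution encoder: Z^(l+1) = phi(A_norm Z^(l) W^(l))."""
    def __init__(self, in_dim: int, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.W0 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.W1 = nn.Linear(hidden_dim, embed_dim, bias=False)

    def forward(self, A_norm: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        H = torch.relu(A_norm @ self.W0(X))  # first layer with ReLU activation
        Z = A_norm @ self.W1(H)              # second, linear layer yields the embedding Z
        return Z

# Example usage on a toy 4-node graph with 3-dimensional node features.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.randn(4, 3)
encoder = GCNEncoder(in_dim=3, hidden_dim=8, embed_dim=2)
Z = encoder(normalize_adjacency(A), X)       # Z has shape (n, d) = (4, 2)
```

An inner-product decoder, sigma(z_i^⊤ z_j), can then score each edge to reconstruct A in the upper tier of Figure 1.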
Table 2: Results for Link Prediction. GAE∗ and VGAE∗ are variants of GAE and VGAE, respectively, that use only the topological structure, i.e., X = I.
Figure 3: Visualization comparison on the Cora dataset. From left to right: embeddings learned by our ARGA, VGAE, GAE, DeepWalk, and Spectral Clustering. Different colors represent different groups.
Clustering results on Cora and Citeseer:

Cora            Acc     NMI     F1      Precision  ARI
K-means         0.492   0.321   0.368   0.369      0.230
Spectral        0.367   0.127   0.318   0.193      0.031
GraphEncoder    0.325   0.109   0.298   0.182      0.006
DeepWalk        0.484   0.327   0.392   0.361      0.243
DNGR            0.419   0.318   0.340   0.266      0.142
RTM             0.440   0.230   0.307   0.332      0.169
RMSC            0.407   0.255   0.331   0.227      0.090
TADW            0.560   0.441   0.481   0.396      0.332
GAE             0.596   0.429   0.595   0.596      0.347
VGAE            0.609   0.436   0.609   0.609      0.346
ARGE            0.640   0.449   0.619   0.646      0.352
ARVGE           0.638   0.450   0.627   0.624      0.374

Citeseer        Acc     NMI     F1      Precision  ARI
K-means         0.540   0.305   0.409   0.405      0.279
Spectral        0.239   0.056   0.299   0.179      0.010
GraphEncoder    0.225   0.033   0.301   0.179      0.010
DeepWalk        0.337   0.088   0.270   0.248      0.092
DNGR            0.326   0.180   0.300   0.200      0.044
RTM             0.451   0.239   0.342   0.349      0.203
RMSC            0.295   0.139   0.320   0.204      0.049
TADW            0.455   0.291   0.414   0.312      0.228
GAE             0.408   0.176   0.372   0.418      0.124
VGAE            0.344   0.156   0.308   0.349      0.093
ARGE            0.573   0.350   0.546   0.573      0.341
ARVGE           0.544   0.261   0.529   0.549      0.245