Tutorial 7
&
Adversarially regularized VGAE (ARVGA)
Gabriele Santin
TABLE OF CONTENTS

01 Recap
02 Motivation ARGA & ARVGA
03 Ideas from adversarial models
04 ARGA & ARVGA: Architecture and algorithm
01 Recap

Input: graph with node features
● Adj. matrix A (structural information)
● Data matrix X (feature information)

Output: graph
● Approx. of A

Goal:
● Embedding
● Generation (continuous feature space)
● ...
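The input/output pairing above can be sketched concretely. Both GAE and VGAE reconstruct the adjacency matrix with an inner-product decoder, Â = σ(ZZᵀ); the random Z below is only a stand-in for what the encoder would produce from A and X:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: n nodes embedded in d latent dimensions.
rng = np.random.default_rng(0)
n, d = 4, 2
Z = rng.standard_normal((n, d))  # stand-in for the encoder's node embeddings

# Inner-product decoder: A_hat[i, j] = sigmoid(z_i . z_j),
# a dense approximation of the adjacency matrix A.
A_hat = sigmoid(Z @ Z.T)

print(A_hat.shape)  # (4, 4)
```

Each entry of `A_hat` lies in (0, 1) and can be read as an edge probability, which is why the reconstruction loss is a binary cross-entropy against A.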
GAE vs VGAE:
● Embedding on nodes
● Each node is mapped to its latent representation

GAE:
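The GAE/VGAE contrast can be sketched with the reparameterization trick that VGAE uses to sample each node's latent representation; the `mu` and `logstd` values below are illustrative stand-ins for what a GCN encoder would output:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 2

# In VGAE the encoder outputs, per node, a mean and a log-std
# (random stand-ins here; a GCN would compute them from A and X).
mu = rng.standard_normal((n, d))
logstd = rng.standard_normal((n, d)) * 0.1

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through mu and logstd during training.
eps = rng.standard_normal((n, d))
Z = mu + np.exp(logstd) * eps

# GAE, by contrast, is deterministic: Z = mu (one point per node, no sampling).
```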
02 Motivation ARGA & ARVGA
Motivation:
The importance of the latent representation
02 Motivation ARGA & ARVGA
Motivation:
The importance of the latent representation
02 Motivation ARGA & ARVGA
Motivation:
The importance of the latent representation
02 Motivation ARGA & ARVGA
Motivation:
The importance of the latent representation
02 Motivation ARGA & ARVGA
Motivation:
The importance of the latent representation
S. Pan, R. Hu, G. Long, J. Jiang, L. Yao, and C. Zhang, Adversarially regularized graph autoencoder for graph embedding. In Proc. of IJCAI, 2018, pp. 2609–2615.
We take a look at adversarial training.
03 Ideas from adversarial models
Goal: generate fake objects (e.g. images) similar to real ones
Idea: play an adversarial game with two agents
I. Goodfellow et al., Generative Adversarial Nets. In Proc. of NIPS, 2014, pp. 2672–2680.
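The adversarial game between the two agents (generator G, discriminator D) is the minimax objective from the cited paper:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

D is trained to tell real objects from generated ones; G is trained to produce fakes that D misclassifies as real.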
04 ARGA & ARVGA

Architecture as in GAE/VGAE:
- Encoder: 2-layer GCN (with 2x outputs, for mean and logstd, in VGAE)
- Decoder: inner product

→ Same loss as GAE/VGAE:
- GAE: reconstruction loss
- VGAE: rec. + KL regularization

→ Adversarial loss (working on the latent space → continuous values!):
- Real: samples from N(0, 1)
- Fake: samples from the latent encoding
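A minimal sketch of the adversarial term on the latent space, assuming a toy logistic discriminator in place of the paper's MLP and random stand-ins for the encoder's latent codes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred, target):
    # Binary cross-entropy, clipped for numerical safety.
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

rng = np.random.default_rng(0)
n, d = 8, 2

# "Real" samples: drawn from the prior N(0, 1).
real = rng.standard_normal((n, d))
# "Fake" samples: latent codes Z from the encoder
# (random stand-ins here; in ARGA/ARVGA they come from the GCN encoder).
fake = rng.standard_normal((n, d)) * 2.0 + 1.0

# Toy logistic discriminator (a small MLP in the paper).
w = rng.standard_normal(d)
d_real = sigmoid(real @ w)
d_fake = sigmoid(fake @ w)

# Discriminator loss: classify prior samples as 1, encoder outputs as 0.
disc_loss = bce(d_real, np.ones(n)) + bce(d_fake, np.zeros(n))
# Encoder's adversarial loss: fool the discriminator into labeling Z as real,
# pushing the latent encoding toward the N(0, 1) prior.
enc_adv_loss = bce(d_fake, np.ones(n))
```

This term works because the latent space is continuous, so gradients flow from the discriminator back into the encoder; the full training objective adds it to the GAE/VGAE loss above.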
Jupyter Notebook