Tutorial 7

The document discusses Adversarially Regularized Graph Autoencoder (ARGA) and Adversarially Regularized Variational Graph Autoencoder (ARVGA), focusing on their architecture, algorithms, and the motivation behind using adversarial training for improved latent representations. It highlights the differences between traditional Graph Autoencoders (GAE) and Variational Graph Autoencoders (VGAE) compared to ARGA and ARVGA, which incorporate adversarial techniques for better performance. The document also outlines the architecture of the encoders and discriminators used in these models.


Adversarially regularized GAE (ARGA)

&
Adversarially regularized VGAE (ARVGA)
Gabriele Santin

MobS Lab, Fondazione Bruno Kessler, Trento, Italy


TABLE OF CONTENTS

01 Recap: GAE & VGAE
02 Motivation: ARGA & ARVGA
03 Ideas from adversarial models
04 ARGA & ARVGA: Architecture and algorithm
05 ARGA & ARVGA: Practice
01 Recap

Input:
Graph with node features
● Adjacency matrix A → structural information
● Data matrix X → feature information

Output:
Graph
● Approximation of A

Goal:
● Embedding
● Generation (in a continuous feature space)
● ...

GAE vs VGAE:
● Embedding of the nodes
● Each node is mapped to its latent representation

GAE: Z = GCN(X, A), Â = σ(Z Zᵀ)
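To fix notation before moving on, here is a minimal dense sketch of the GAE pipeline in PyTorch: a 2-layer GCN encoder Z = GCN(X, A) and an inner-product decoder σ(Z Zᵀ). All class names, variable names and layer sizes are ours, not from the tutorial.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One GCN layer on a dense normalized adjacency: H' = A_hat @ H @ W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.lin(h)

class GAE(nn.Module):
    """Graph autoencoder: 2-layer GCN encoder, inner-product decoder."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.gcn1 = DenseGCNLayer(in_dim, hid_dim)
        self.gcn2 = DenseGCNLayer(hid_dim, lat_dim)

    def encode(self, a_hat, x):
        h = torch.relu(self.gcn1(a_hat, x))
        return self.gcn2(a_hat, h)           # Z: one latent row per node

    def decode(self, z):
        return torch.sigmoid(z @ z.t())      # approximation of A

def normalize_adj(a):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = a + torch.eye(a.size(0))
    d = a.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)
```

With A the adjacency matrix and X the feature matrix, training compares decode(encode(normalize_adj(A), X)) against A through a binary cross-entropy reconstruction loss.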
02 Motivation: ARGA & ARVGA

Motivation: the importance of the latent representation

AE and GAE: only a reconstruction loss
VAE and VGAE: regularized to obtain a continuous latent representation

ARGA & ARVGA improve on this:
● Adversarially regularized graph autoencoder (ARGA)
● Adversarially regularized variational graph autoencoder (ARVGA)

To understand them, we first have a look at adversarial training.

S. Pan, R. Hu, G. Long, J. Jiang, L. Yao, and C. Zhang, Adversarially regularized graph autoencoder for graph embedding. In Proc. of IJCAI, 2018, pp. 2609–2615.
03 Ideas from adversarial models

Goal: generate fake objects (e.g. images) similar to real ones
Idea: play an adversarial game with two agents

Generator: maps noise z to a fake object x
Discriminator: maps an object x to the probability that it is real (vs. fake)

Game: the generator tries to fool the discriminator;
the discriminator tries to detect the fake objects
The two agents play a minimax game on the value function

V(D, G) = E_{x ∼ p_data}[log D(x)] + E_{z ∼ p_z}[log(1 − D(G(z)))]

The Discriminator wants to max V(D, G):

- Recall that D(x) is in [0, 1]
- First term:
→ large if D(x) is close to 1
→ assigns high probability to real objects
- Second term:
→ large if 1 − D(G(z)) is close to 1
→ large if D(G(z)) is close to 0
→ assigns low probability to fake objects
The Generator wants to min V(D, G):

- Only the second term depends on G:
→ small if 1 − D(G(z)) is close to 0
→ small if D(G(z)) is close to 1
→ fool the discriminator into assigning high probability to fake objects

I. Goodfellow et al., Generative Adversarial Nets. In Proc. of NIPS, 2014, pp. 2672–2680.
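As an illustration only, here is how the game could be implemented for flat vectors in PyTorch. The network sizes are arbitrary, and, as in the practical recipe of the GAN paper, the generator maximizes log D(G(z)) (the non-saturating variant) rather than literally minimizing log(1 − D(G(z))).

```python
import torch
import torch.nn as nn

# Sketch of the adversarial game on flat vectors (all sizes are arbitrary).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))               # noise z -> fake x
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())  # x -> prob(real)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCELoss()  # cross-entropy on D's output reproduces the two log terms of V(D, G)

def gan_step(real_x):                              # real_x: (batch, 32) real objects
    batch = real_x.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    fake_x = G(torch.randn(batch, 16)).detach()    # detach: do not update G here
    loss_d = bce(D(real_x), ones) + bce(D(fake_x), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: non-saturating variant, maximize log D(G(z))
    loss_g = bce(D(G(torch.randn(batch, 16))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```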
04 ARGA & ARVGA

Architecture as in GAE/VGAE:
- Encoder: 2-layer GCN (with two output heads, for mean and log-std, in the VGAE case)
- Decoder: inner product
→ Same loss as GAE/VGAE:
- GAE: reconstruction loss
- VGAE: reconstruction + KL regularization

Architecture of the discriminator:
- Standard fully connected NN with 3 layers
- It works on the latent space → continuous values!

→ Adversarial loss:
- Real: samples from N(0, 1)
- Fake: samples from the latent encoding

(Picture from S. Pan et al.)
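A hedged sketch of what such a discriminator and the two adversarial loss terms could look like in PyTorch; the paper only specifies a 3-layer fully connected net, so the layer sizes and names here are our assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """3-layer fully connected net on latent vectors (sizes are ours)."""
    def __init__(self, lat_dim, hid_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lat_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)  # probability that z is a "real" Gaussian sample

def adversarial_losses(disc, z_fake):
    """Real = samples from N(0, I); fake = rows of the latent encoding Z."""
    n = z_fake.size(0)
    z_real = torch.randn_like(z_fake)
    bce = nn.BCELoss()
    # Discriminator term: tell true Gaussians apart from latent codes
    d_loss = bce(disc(z_real), torch.ones(n, 1)) + \
             bce(disc(z_fake.detach()), torch.zeros(n, 1))
    # Encoder term: push the encoder to produce Gaussian-looking codes
    e_loss = bce(disc(z_fake), torch.ones(n, 1))
    return d_loss, e_loss
```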
04 ARGA & ARVGA: the algorithm (picture from S. Pan et al.)

- Z = E(X, A): the encoding step
- The usual GAE/VGAE losses are kept
- K training loops of the discriminator:
  - sample fake Gaussians (rows of the latent encoding Z)
  - sample true Gaussians (samples from N(0, 1))
  - update the discriminator
- Missing from the pseudocode: the update of the encoder (it is written in the text of the paper)
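Putting the pieces together, one training iteration could be sketched as follows, reusing the GAE and Discriminator sketches above; K, the optimizers and the dimensions are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def train_iteration(model, disc, a_hat, a, x, opt_enc, opt_disc, K=5):
    """One ARGA-style iteration: K discriminator loops, then the encoder
    update that the pseudocode leaves to the text. Names are ours."""
    bce = torch.nn.BCELoss()
    n = x.size(0)
    # K training loops of the discriminator
    for _ in range(K):
        z_fake = model.encode(a_hat, x).detach()  # fake samples: latent codes
        z_real = torch.randn_like(z_fake)         # true samples: N(0, 1)
        loss_d = bce(disc(z_real), torch.ones(n, 1)) + \
                 bce(disc(z_fake), torch.zeros(n, 1))
        opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()
    # Encoder/decoder update: reconstruction loss plus the adversarial
    # term that pushes latent codes to look like N(0, 1) samples.
    z = model.encode(a_hat, x)
    loss = F.binary_cross_entropy(model.decode(z), a) + \
           bce(disc(z), torch.ones(n, 1))
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()
    return loss.item()
```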

05 ARGA & ARVGA: Practice

Jupyter Notebook
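For the hands-on part, note that PyTorch Geometric ships ready-made ARGA/ARGVA model classes. A minimal setup could look like the sketch below; we assume a recent torch_geometric and a Planetoid-style `data` object, and the layer sizes are ours.

```python
import torch
from torch_geometric.nn import GCNConv, ARGA

class Encoder(torch.nn.Module):
    """2-layer GCN encoder, as in GAE."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, lat_dim)

    def forward(self, x, edge_index):
        return self.conv2(torch.relu(self.conv1(x, edge_index)), edge_index)

class Discriminator(torch.nn.Module):
    """3-layer fully connected net on latent codes (returns logits)."""
    def __init__(self, lat_dim, hid_dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(lat_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, 1))

    def forward(self, z):
        return self.net(z)

model = ARGA(Encoder(data.num_features, 32, 16), Discriminator(16))
z = model.encode(data.x, data.edge_index)
# Loss pieces provided by the class: model.recon_loss(z, data.edge_index),
# model.reg_loss(z) for the encoder, model.discriminator_loss(z) for D.
```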
