12 Variational Autoencoder v2.07
Course materials (pdf)
Videos (YouTube)
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
https://creativecommons.org/licenses/by-nc-nd/4.0/
(*) Procedure via Docker or pip
Remember to get the latest version!
Questions and answers:
https://fidle.cnrs.fr/q2a
Accompanied by:
the AI Support (dream) Team of IDRIS
Directed by:
Agathe, Baptiste and Yanis - UGA/DAPI
Thibaut, Kamel - IDRIS
https://fidle.cnrs.fr/listeinfo
Fidle information list
New!
http://fidle.cnrs.fr/agoria
AI exchange list
[email protected]
(*) ESR is Enseignement Supérieur et Recherche: French universities and public academic research organizations
https://listes.services.cnrs.fr/wws/info/devlog
List of ESR* « Software developers » group
https://listes.math.cnrs.fr/wws/info/calcul
List of ESR* « Calcul » group
Previously on Fidle!
Autoencoder
inputs → z (Encoder)
z = encoder(inputs)
Autoencoder
z → outputs (Decoder)
outputs = decoder(z)
Autoencoder
inputs → z → outputs
z = encoder(inputs)
outputs = decoder(z)
ae = keras.Model(inputs, outputs)
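The assembly above can be sketched end to end. A minimal numpy stand-in (the notebooks use real Keras models; the linear `encoder`/`decoder` here are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, input_dim = 2, 784          # MNIST images flattened to 784 values

# Hypothetical linear weights, standing in for the Keras sub-models
W_enc = rng.normal(size=(input_dim, latent_dim))
W_dec = rng.normal(size=(latent_dim, input_dim))

def encoder(x):
    # Compress each image to a latent_dim-dimensional code z
    return x @ W_enc

def decoder(z):
    # Reconstruct an image from its latent code
    return z @ W_dec

inputs = rng.normal(size=(16, input_dim))   # a batch of 16 fake images
z = encoder(inputs)                         # z = encoder(inputs)
outputs = decoder(z)                        # outputs = decoder(z)
print(z.shape, outputs.shape)               # (16, 2) (16, 784)
```

The autoencoder is just the composition of the two maps; training then minimizes the distance between `inputs` and `outputs`.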
Autoencoder
inputs → z → outputs
The code z lives in the latent space, between the encoder and the decoder.
Autoencoder
Example of the MNIST dataset distribution in its latent space (z = encoder(inputs)):
only two dimensions are represented, in abscissa and ordinate.
Regions of the « 1 », the « 0 » and the « 6 » are visible: clusters appear, but many of them are nested or very spread out.
How can we make our network better separate the different clusters?
Variational Autoencoder (VAE)
Variational Autoencoder (VAE)
Objectives:
implementing a VAE, using the Keras 3 functional API and model subclassing, with a real PyTorch backend!
Dataset: MNIST
VAE1 | VAE2 | VAE3
#1 — Objectives: VAE using a custom layer
Modules used: MNIST, ImagesCallBack, BestModelCallBack, Custom Layer
Pipeline: Parameters → Build models → Training → review → END
Parameters: latent_dim = 2, loss_weights = [1, .001], scale = .1
About 20' on a CPU with scale = 1
VAE1 | VAE2 | VAE3
Custom layers: SamplingLayer, VariationalLossLayer
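A SamplingLayer typically implements the reparameterization trick, and a VariationalLossLayer combines a reconstruction term with a KL divergence. A numpy sketch of both computations, assuming an MSE reconstruction loss and the loss_weights = [1, .001] weighting from the parameters (the actual notebooks wrap these in Keras custom layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling(z_mean, z_log_var):
    """Reparameterization trick: draw z ~ N(z_mean, exp(z_log_var))
    while keeping the randomness outside the differentiable path."""
    eps = rng.normal(size=z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

def vae_loss(x, x_hat, z_mean, z_log_var, loss_weights=(1, .001)):
    # Reconstruction term (MSE here, as an assumption) plus the
    # KL divergence of q(z|x) = N(z_mean, exp(z_log_var)) to N(0, I)
    r_loss = np.mean((x - x_hat) ** 2)
    kl_loss = -0.5 * np.mean(1 + z_log_var - z_mean**2 - np.exp(z_log_var))
    return loss_weights[0] * r_loss + loss_weights[1] * kl_loss

z_mean = np.zeros((4, 2))       # batch of 4, latent_dim = 2
z_log_var = np.zeros((4, 2))    # log-variance 0 → unit variance
z = sampling(z_mean, z_log_var)
print(z.shape)                  # (4, 2)
# Loss vanishes for a perfect reconstruction with a standard-normal posterior
print(vae_loss(np.zeros((4, 784)), np.zeros((4, 784)), z_mean, z_log_var))
```

The KL term is what pulls the latent codes toward a standard normal and makes the clusters better behaved than in a plain autoencoder.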
VAE1 | VAE2 | VAE3
#2 — Objectives: VAE using the MNIST dataset module
Pipeline: START → Parameters → Build model → Training → review → END
Parameters: latent_dim = 2, loss_weights = [1, .001]
VAE1 | VAE2 | VAE3
Modules used: SamplingLayer, VAE
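Wiring the pieces together, the VAE forward pass chains the encoder heads, the SamplingLayer and the decoder. A minimal numpy sketch, with hypothetical linear stand-ins for the real Keras sub-models:

```python
import numpy as np

rng = np.random.default_rng(123)
input_dim, latent_dim = 784, 2

# Hypothetical linear weights, standing in for the Keras encoder/decoder
W_mu  = rng.normal(size=(input_dim, latent_dim)) * 0.01
W_lv  = rng.normal(size=(input_dim, latent_dim)) * 0.01
W_dec = rng.normal(size=(latent_dim, input_dim)) * 0.01

def vae_forward(x):
    # Encoder heads: mean and log-variance of q(z|x)
    z_mean, z_log_var = x @ W_mu, x @ W_lv
    # SamplingLayer: reparameterization trick
    eps = rng.normal(size=z_mean.shape)
    z = z_mean + np.exp(0.5 * z_log_var) * eps
    # Decoder: reconstruct the input from the sampled code
    x_hat = z @ W_dec
    return x_hat, z_mean, z_log_var

x = rng.normal(size=(8, input_dim))       # a batch of 8 fake images
x_hat, z_mean, z_log_var = vae_forward(x)
print(x_hat.shape, z_mean.shape)          # (8, 784) (8, 2)
```

In the notebook version this forward pass lives in a `keras.Model` subclass, so the loss layers can see `z_mean` and `z_log_var` during training.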
VAE1 | VAE2 | VAE3
#3 — Objectives: reload a saved model and visualize
Modules used: MNIST, SamplingLayer
Parameters: scale = .1, seed = 123
Pipeline: START → Import and init → Reload model → Retrieve the dataset → Images reconstruction → Visualizing latent space → Generate from latent space → END
A few seconds!
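Generating from the latent space amounts to decoding points drawn around the origin of N(0, I); the scale and seed parameters above control the draw. A numpy sketch, with a hypothetical linear decoder standing in for the reloaded Keras model:

```python
import numpy as np

latent_dim, scale, seed = 2, .1, 123      # parameters from the notebook
rng = np.random.default_rng(seed)

# Hypothetical linear decoder, standing in for the reloaded model
W_dec = rng.normal(size=(latent_dim, 784))

def decoder(z):
    # Map a latent code back to a flattened 28x28 image
    return z @ W_dec

# Generate new images by decoding points drawn close to the origin
# of the latent space (scale controls how far we wander from it)
z = rng.normal(scale=scale, size=(12, latent_dim))
generated = decoder(z)
print(generated.shape)                    # (12, 784)
```

Because the KL term pushed training codes toward N(0, I), points sampled this way decode to plausible digits rather than noise.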
Next, on Fidle:
Thursday, March 21 at 2:00 pm
AI as a tool