Junction Tree Variational Autoencoder for Molecular Graph Generation

Wengong Jin 1 Regina Barzilay 1 Tommi Jaakkola 1

Abstract
We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, a task previously approached by generating linear SMILES strings instead of graphs. Our junction tree variational autoencoder generates molecular graphs in two phases, by first generating a tree-structured scaffold over chemical substructures, and then combining them into a molecule with a graph message passing network. This approach allows us to incrementally expand molecules while maintaining chemical validity at every step. We evaluate our model on multiple tasks ranging from molecular generation to optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a significant margin.

Figure 1. Two almost identical molecules with markedly different canonical SMILES in RDKit. The edit distance between the two strings is 22 (50.5% of the whole sequence).

1 MIT Computer Science & Artificial Intelligence Lab. Correspondence to: Wengong Jin <[email protected]>.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

1. Introduction

The key challenge of drug discovery is to find target molecules with desired chemical properties. Currently, this task takes years of development and exploration by expert chemists and pharmacologists. Our ultimate goal is to automate this process. From a computational perspective, we decompose the challenge into two complementary subtasks: learning to represent molecules in a continuous manner that facilitates the prediction and optimization of their properties (encoding); and learning to map an optimized continuous representation back into a molecular graph with improved properties (decoding). While deep learning has been extensively investigated for molecular graph encoding (Duvenaud et al., 2015; Kearnes et al., 2016; Gilmer et al., 2017), the harder combinatorial task of molecular graph generation from latent representation remains under-explored.

Prior work on drug design formulated the graph generation task as a string generation problem (Gómez-Bombarelli et al., 2016; Kusner et al., 2017) in an attempt to side-step direct generation of graphs. Specifically, these models start by generating SMILES (Weininger, 1988), a linear string notation used in chemistry to describe molecular structures. SMILES strings can be translated into graphs via deterministic mappings (e.g., using RDKit (Landrum, 2006)). However, this design has two critical limitations. First, the SMILES representation is not designed to capture molecular similarity. For instance, two molecules with similar chemical structures may be encoded into markedly different SMILES strings (e.g., Figure 1). This prevents generative models like variational autoencoders from learning smooth molecular embeddings. Second, essential chemical properties such as molecule validity are easier to express on graphs rather than linear SMILES representations. We hypothesize that operating directly on graphs improves generative modeling of valid chemical structures.
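The deterministic SMILES-to-graph mapping mentioned above can be made concrete with a minimal sketch (assuming RDKit is installed; the molecule below is an arbitrary example, not one of the two molecules in Figure 1):

```python
# Minimal illustration of the deterministic SMILES <-> graph mapping via RDKit.
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"            # aspirin, chosen only as an example
mol = Chem.MolFromSmiles(smiles)             # SMILES string -> molecular graph (None if invalid)

print(mol.GetNumAtoms(), mol.GetNumBonds())  # size of the graph: heavy atoms and bonds
print(Chem.MolToSmiles(mol))                 # molecular graph -> RDKit's canonical SMILES
```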
Our primary contribution is a new generative model of molecular graphs. While one could imagine solving the problem in a standard manner – generating graphs node by node – the approach is not ideal for molecules. This is because creating molecules atom by atom would force the model to generate chemically invalid intermediaries (see, e.g., Figure 2), delaying validation until a complete graph is generated. Instead, we propose to generate molecular graphs in two phases by exploiting valid subgraphs as components. The overall generative approach, cast as a junction tree variational autoencoder, first generates a tree-structured object (a junction tree) whose role is to represent the scaffold of subgraph components and their coarse relative arrangements. The components are valid chemical substructures automatically extracted from the training set using tree decomposition and are used as building blocks. In the second phase, the subgraphs (nodes in the tree) are assembled together into a coherent molecular graph.
Figure 2. Comparison of two graph generation schemes: the structure-by-structure approach is preferred as it avoids the invalid intermediate states (marked in red) encountered in the node-by-node approach.
We evaluate our model on multiple tasks ranging from molecular generation to optimization of a given molecule according to desired properties. As baselines, we utilize state-of-the-art SMILES-based generation approaches (Kusner et al., 2017; Dai et al., 2018). We demonstrate that our model produces 100% valid molecules when sampled from a prior distribution, outperforming the top-performing baseline by a significant margin. In addition, we show that our model excels in discovering molecules with desired properties, yielding a 30% relative gain over the baselines.

2. Junction Tree Variational Autoencoder

Our approach extends the variational autoencoder (Kingma & Welling, 2013) to molecular graphs by introducing a suitable encoder and a matching decoder. Deviating from previous work (Gómez-Bombarelli et al., 2016; Kusner et al., 2017), we interpret each molecule as having been built from subgraphs chosen out of a vocabulary of valid components. These components are used as building blocks both when encoding a molecule into a vector representation and when decoding latent vectors back into valid molecular graphs. The key advantage of this view is that the decoder can realize a valid molecule piece by piece, by utilizing the collection of valid components and how they interact, rather than trying to build the molecule atom by atom through chemically invalid intermediaries (Figure 2). An aromatic bond, for example, is chemically invalid on its own unless the entire aromatic ring is present. It would therefore be challenging to learn to build rings atom by atom rather than by introducing rings as part of the basic vocabulary.

Our vocabulary of components, such as rings, bonds and individual atoms, is chosen to be large enough so that a given molecule can be covered by overlapping components or clusters of atoms. The clusters serve a role analogous to cliques in graphical models, as they are expressive enough that a molecule can be covered by overlapping clusters without forming cluster cycles. In this sense, the clusters serve as cliques in a (non-optimal) triangulation of the molecular graph. We form a junction tree of such clusters and use it as the tree representation of the molecule. Since our choice of cliques is constrained a priori, we cannot guarantee that a junction tree exists with such clusters for an arbitrary molecule. However, our clusters are built on the basis of the molecules in the training set to ensure that a corresponding junction tree can be found. Empirically, our clusters cover most of the molecules in the test set.

Figure 3. Overview of our method: A molecular graph G is first decomposed into its junction tree T_G, where each colored node in the tree represents a substructure in the molecule. We then encode both the tree and graph into their latent embeddings z_T and z_G. To decode the molecule, we first reconstruct the junction tree from z_T, and then assemble the nodes in the tree back into the original molecule.

The original molecular graph and its associated junction tree offer two complementary representations of a molecule. We therefore encode the molecule into a two-part latent representation z = [z_T, z_G], where z_T encodes the tree structure and what the clusters are in the tree, without fully capturing how exactly the clusters are mutually connected, and z_G encodes the graph to capture the fine-grained connectivity. Both parts are created by tree and graph encoders q(z_T | T) and q(z_G | G). The latent representation is then decoded back into a molecular graph in two stages. As illustrated in Figure 3, we first reproduce the junction tree using a tree decoder p(T | z_T) based on the information in z_T. Second, we predict the fine-grained connectivity between the clusters in the junction tree using a graph decoder p(G | T, z_G) to realize the full molecular graph. The junction tree approach allows us to maintain chemical feasibility during generation.
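As a schematic illustration of the two-part code z = [z_T, z_G], the NumPy sketch below (all shapes, weights, and encoder outputs are random stand-ins, not the trained networks) samples each part with the reparameterization trick and concatenates them:

```python
# Illustrative sketch: sample z_T and z_G independently from Gaussian posteriors whose
# mean and log-variance come from separate affine layers, then concatenate them.
import numpy as np

rng = np.random.default_rng(0)
d_enc, d_z = 100, 28                         # 28 + 28 = 56-dimensional code, as in Section 3

def sample_part(h, W_mu, b_mu, W_ls, b_ls):
    """Reparameterized sample z ~ N(mu(h), sigma(h)^2)."""
    mu, log_sigma = h @ W_mu + b_mu, h @ W_ls + b_ls
    return mu + np.exp(log_sigma) * rng.standard_normal(d_z)

def affine():
    return rng.standard_normal((d_enc, d_z)) * 0.01, np.zeros(d_z)

h_T = rng.standard_normal(d_enc)             # stand-in for the tree encoding h_{T_G}
h_G = rng.standard_normal(d_enc)             # stand-in for the graph encoding h_G

z_T = sample_part(h_T, *affine(), *affine())
z_G = sample_part(h_G, *affine(), *affine())
z = np.concatenate([z_T, z_G])               # the full latent representation [z_T, z_G]
print(z.shape)                               # (56,)
```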
Notation A molecular graph is defined as G = (V, E), where V is the set of atoms (vertices) and E the set of bonds (edges). Let N(x) be the set of neighbors of x. We denote the sigmoid function by σ(·) and the ReLU function by τ(·). We use i, j, k for nodes in the tree and u, v, w for nodes in the graph.

2.1. Junction Tree

A tree decomposition maps a graph G into a junction tree by contracting certain vertices into a single node so that G becomes cycle-free. Formally, given a graph G, a junction tree T_G = (V, E, X) is a connected labeled tree whose node set is V = {C_1, ..., C_n} and edge set is E. Each node or cluster C_i = (V_i, E_i) is an induced subgraph of G, satisfying the following constraints:

1. The union of all clusters equals G. That is, ∪_i V_i = V and ∪_i E_i = E.
2. Running intersection: For all clusters C_i, C_j and C_k, V_i ∩ V_j ⊆ V_k if C_k is on the path from C_i to C_j.

Viewing induced subgraphs as cluster labels, junction trees are labeled trees with label vocabulary X. By our molecule tree decomposition, X contains only cycles (rings) and single edges. Thus the vocabulary size is limited (|X| = 780 for a standard dataset with 250K molecules).

Tree Decomposition of Molecules Here we present our tree decomposition algorithm tailored for molecules, which finds its root in chemistry (Rarey & Dixon, 1998). Our cluster vocabulary X includes chemical structures such as bonds and rings (Figure 3). Given a graph G, we first find all its simple cycles, and all its edges not belonging to any cycle. Two simple rings are merged together if they have more than two overlapping atoms, as they constitute a specific structure called a bridged compound (Clayden et al., 2001). Each of those cycles or edges is considered as a cluster. Next, a cluster graph is constructed by adding edges between all intersecting clusters. Finally, we select one of its spanning trees as the junction tree of G (Figure 3). As a result of ring merging, any two clusters in the junction tree have at most two atoms in common, facilitating efficient inference in the graph decoding phase. The detailed procedure is described in the supplementary material.
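A simplified sketch of this decomposition is given below, assuming RDKit and networkx are available; it uses RDKit's smallest set of smallest rings as the simple cycles and takes a maximum spanning tree of the overlap-weighted cluster graph, whereas the exact procedure (including edge weighting and corner cases) is the one described in the supplementary material.

```python
# Simplified tree decomposition sketch: rings and non-ring bonds become clusters,
# rings sharing more than two atoms are merged, and a spanning tree of the cluster
# graph serves as the junction tree.
import networkx as nx
from rdkit import Chem

def tree_decompose(smiles):
    mol = Chem.MolFromSmiles(smiles)
    clusters = [set(ring) for ring in Chem.GetSymmSSSR(mol)]       # simple cycles
    for bond in mol.GetBonds():                                    # edges outside any cycle
        if not bond.IsInRing():
            clusters.append({bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()})

    merged = True                                                  # merge bridged ring systems
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if len(clusters[i] & clusters[j]) > 2:
                    clusters[i] = clusters[i] | clusters[j]
                    clusters.pop(j)
                    merged = True
                    break
            if merged:
                break

    cluster_graph = nx.Graph()                                     # connect intersecting clusters
    cluster_graph.add_nodes_from(range(len(clusters)))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            overlap = len(clusters[i] & clusters[j])
            if overlap > 0:
                cluster_graph.add_edge(i, j, weight=overlap)
    return clusters, nx.maximum_spanning_tree(cluster_graph)

clusters, junction_tree = tree_decompose("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as an example
print(len(clusters), sorted(junction_tree.edges()))
```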
2.2. Graph Encoder

We first encode the latent representation of G by a graph message passing network (Dai et al., 2016; Gilmer et al., 2017). Each vertex v has a feature vector x_v indicating the atom type, valence, and other properties. Similarly, each edge (u, v) ∈ E has a feature vector x_uv indicating its bond type, and two hidden vectors ν_uv and ν_vu denoting the messages from u to v and vice versa. Due to the loopy structure of the graph, messages are exchanged in a loopy belief propagation fashion:

\nu_{uv}^{(t)} = \tau\Big( W_1^g x_u + W_2^g x_{uv} + W_3^g \sum_{w \in N(u) \setminus v} \nu_{wu}^{(t-1)} \Big)    (1)

where ν_uv^(t) is the message computed in the t-th iteration, initialized with ν_uv^(0) = 0. After T iterations, we aggregate those messages as the latent vector of each vertex, which captures its local graphical structure:

h_u = \tau\Big( U_1^g x_u + \sum_{v \in N(u)} U_2^g \nu_{vu}^{(T)} \Big)    (2)

The final graph representation is h_G = \sum_i h_i / |V|. The mean μ_G and log variance log σ_G of the variational posterior approximation are computed from h_G with two separate affine layers. z_G is sampled from the Gaussian N(μ_G, σ_G).
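The following NumPy sketch mirrors Eqs. (1)-(2) on a toy graph with random weights (all dimensions, the number of iterations T, and the features are illustrative):

```python
# Toy implementation of the loopy message passing of Eq. (1) and the vertex/graph
# readout of Eq. (2), with synchronous updates and randomly initialized weights.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

d_x, d_e, d_h, T = 8, 4, 16, 3
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}          # toy graph: a triangle with a pendant atom
x_atom = rng.standard_normal((4, d_x))                      # atom features x_u
x_bond = {(u, v): rng.standard_normal(d_e) for u in adj for v in adj[u]}

W1, W2, W3 = [rng.standard_normal(s) * 0.1 for s in ((d_x, d_h), (d_e, d_h), (d_h, d_h))]
U1, U2 = rng.standard_normal((d_x, d_h)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1

nu = {(u, v): np.zeros(d_h) for u in adj for v in adj[u]}   # nu_uv^(0) = 0
for _ in range(T):                                          # Eq. (1), T iterations
    nu = {(u, v): relu(x_atom[u] @ W1 + x_bond[(u, v)] @ W2 +
                       sum((nu[(w, u)] for w in adj[u] if w != v), np.zeros(d_h)) @ W3)
          for u in adj for v in adj[u]}

h = np.stack([relu(x_atom[u] @ U1 +                         # Eq. (2), per-vertex latent vectors
                   sum((nu[(v, u)] for v in adj[u]), np.zeros(d_h)) @ U2)
              for u in adj])
h_G = h.mean(axis=0)                                        # graph representation h_G
print(h_G.shape)
```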
2.3. Tree Encoder

We similarly encode T_G with a tree message passing network. Each cluster C_i is represented by a one-hot encoding x_i representing its label type. Each edge (C_i, C_j) is associated with two message vectors m_ij and m_ji. We pick an arbitrary leaf node as the root and propagate messages in two phases. In the first, bottom-up phase, messages are initiated from the leaf nodes and propagated iteratively towards the root. In the top-down phase, messages are propagated from the root to all the leaf nodes. Message m_ij is updated as:

m_{ij} = \mathrm{GRU}(x_i, \{m_{ki}\}_{k \in N(i) \setminus j})    (3)

where GRU is a Gated Recurrent Unit (Chung et al., 2014; Li et al., 2015) adapted for tree message passing:

s_{ij} = \sum_{k \in N(i) \setminus j} m_{ki}    (4)

z_{ij} = \sigma(W^z x_i + U^z s_{ij} + b^z)    (5)

r_{ki} = \sigma(W^r x_i + U^r m_{ki} + b^r)    (6)

\tilde{m}_{ij} = \tanh\Big( W x_i + U \sum_{k \in N(i) \setminus j} r_{ki} \odot m_{ki} \Big)    (7)

m_{ij} = (1 - z_{ij}) \odot s_{ij} + z_{ij} \odot \tilde{m}_{ij}    (8)

The message passing follows a schedule where m_ij is computed only when all its precursors {m_ki | k ∈ N(i)\j} have been computed. This architectural design is motivated by the belief propagation algorithm over trees and is thus different from the graph encoder.

After the message passing, we obtain the latent representation of each node h_i by aggregating its inward messages:

h_i = \tau\Big( W^o x_i + \sum_{k \in N(i)} U^o m_{ki} \Big)    (9)

The final tree representation is h_{T_G} = h_root, which encodes a rooted tree (T, root). Unlike the graph encoder, we do not apply node average pooling because it would confuse the tree decoder as to which node to generate first. z_{T_G} is sampled in a similar way as in the graph encoder. For simplicity, we abbreviate z_{T_G} as z_T from now on.

This tree encoder plays two roles in our framework. First, it is used to compute z_T, which only requires the bottom-up phase of the network. Second, after a tree T̂ is decoded from z_T, it is used to compute messages m̂_ij over the entire T̂, to provide essential context for every node during graph decoding. This requires both the top-down and bottom-up phases. We elaborate on this in Section 2.5.
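A NumPy sketch of a single application of the tree GRU in Eqs. (4)-(8) is shown below (weight shapes and inputs are illustrative):

```python
# One tree-GRU message update m_ij = GRU(x_i, {m_ki}), following Eqs. (4)-(8).
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
d_x, d_h = 8, 16

Wz, Uz, bz = rng.standard_normal((d_x, d_h)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1, np.zeros(d_h)
Wr, Ur, br = rng.standard_normal((d_x, d_h)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1, np.zeros(d_h)
W, U = rng.standard_normal((d_x, d_h)) * 0.1, rng.standard_normal((d_h, d_h)) * 0.1

def tree_gru(x_i, precursor_msgs):
    """Compute m_ij from node label features x_i and the messages {m_ki : k in N(i)\\j}."""
    s_ij = sum(precursor_msgs, np.zeros(d_h))                     # Eq. (4)
    z_ij = sigmoid(x_i @ Wz + s_ij @ Uz + bz)                     # Eq. (5): update gate
    gated = np.zeros(d_h)
    for m_ki in precursor_msgs:
        r_ki = sigmoid(x_i @ Wr + m_ki @ Ur + br)                 # Eq. (6): per-message reset gate
        gated += r_ki * m_ki
    m_tilde = np.tanh(x_i @ W + gated @ U)                        # Eq. (7)
    return (1.0 - z_ij) * s_ij + z_ij * m_tilde                   # Eq. (8)

x_i = rng.standard_normal(d_x)
m_ij = tree_gru(x_i, [rng.standard_normal(d_h) for _ in range(2)])
print(m_ij.shape)
```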

Algorithm 1 Tree decoding at sampling time
Require: Latent representation z_T
1: Initialize: Tree T̂ ← ∅
2: function SampleTree(i, t)
3:   Set X_i ← all cluster labels that are chemically compatible with node i and its current neighbors.
4:   Set d_t ← expand with probability p_t.    ▷ Eq. (11)
5:   if d_t = expand and X_i ≠ ∅ then
6:     Create a node j and add it to tree T̂.
7:     Sample the label of node j from X_i.    ▷ Eq. (12)
8:     SampleTree(j, t + 1)
9:   end if
10: end function

Figure 4. Illustration of the tree decoding process. Nodes are labeled in the order in which they are generated. 1) Node 2 expands child node 4 and predicts its label with message h_24. 2) As node 4 is a leaf node, the decoder backtracks and computes message h_42. 3) The decoder continues to backtrack as node 2 has no more children. 4) Node 1 expands node 5 and predicts its label.

2.4. Tree Decoder

We decode a junction tree T from its encoding z_T with a tree-structured decoder. The tree is constructed in a top-down fashion by generating one node at a time. As illustrated in Figure 4, our tree decoder traverses the entire tree from the root and generates nodes in their depth-first order. For every visited node, the decoder first makes a topological prediction: whether this node has children to be generated. When a new child node is created, we predict its label and recurse this process. Recall that cluster labels represent subgraphs in a molecule. The decoder backtracks when a node has no more children to generate.

At each time step, a node receives information from other nodes in the current tree for making those predictions. The information is propagated through message vectors h_ij as the tree is incrementally constructed. Formally, let Ẽ = {(i_1, j_1), ..., (i_m, j_m)} be the edges traversed in a depth-first traversal over T = (V, E), where m = 2|E| as each edge is traversed in both directions. The model visits node i_t at time t. Let Ẽ_t be the first t edges in Ẽ. The message h_{i_t,j_t} is updated through previous messages:

h_{i_t, j_t} = \mathrm{GRU}\big( x_{i_t}, \{ h_{k, i_t} \}_{(k, i_t) \in \tilde{\mathcal{E}}_t,\, k \neq j_t} \big)    (10)

where GRU is the same recurrent unit as in the tree encoder.

Topological Prediction When the model visits node i_t, it makes a binary prediction on whether it still has children to be generated. We compute this probability by combining z_T, the node features x_{i_t}, and the inward messages h_{k,i_t} via a one-hidden-layer network followed by a sigmoid function:

p_t = \sigma\Big( u^d \cdot \tau\big( W_1^d x_{i_t} + W_2^d z_T + W_3^d \sum_{(k, i_t) \in \tilde{\mathcal{E}}_t} h_{k, i_t} \big) \Big)    (11)

Label Prediction When a child node j is generated from its parent i, we predict its node label with

q_j = \mathrm{softmax}\big( U^l \tau( W_1^l z_T + W_2^l h_{ij} ) \big)    (12)

where q_j is a distribution over the label vocabulary X. When j is the root node, its parent i is a virtual node and h_ij = 0.

Learning The tree decoder aims to maximize the likelihood p(T | z_T). Let p̂_t ∈ {0, 1} and q̂_j be the ground truth topological and label values; the decoder minimizes the following cross entropy loss:¹

\mathcal{L}_c(T) = \sum_t \mathcal{L}^d(p_t, \hat{p}_t) + \sum_j \mathcal{L}^l(q_j, \hat{q}_j)    (13)

¹ The node ordering is not unique as the order within sibling nodes is ambiguous. In this paper we train our model with one ordering and leave this issue for future work.

Similar to sequence generation, during training we perform teacher forcing: after the topological and label prediction at each step, we replace them with their ground truth so that the model makes predictions given correct histories.

Decoding & Feasibility Check Algorithm 1 shows how a tree is sampled from z_T. The tree is constructed recursively, guided by topological predictions, without any of the external guidance used in training. To ensure the sampled tree can be realized into a valid molecule, we define the set X_i to be the cluster labels that are chemically compatible with node i and its current neighbors. When a child node j is generated from node i, we sample its label from X_i with a renormalized distribution q_j over X_i, obtained by masking out invalid labels.
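The following Python sketch mirrors the recursion of Algorithm 1 at a schematic level: the topological and label predictors of Eqs. (11)-(12) and the chemical compatibility check are replaced by random or trivial stand-ins.

```python
# Schematic depth-first tree sampling in the spirit of Algorithm 1. The real decoder
# scores expansion and labels with learned networks; here they are random placeholders.
import random

VOCAB = ["ring_A", "ring_B", "bond_CC", "bond_CO"]       # illustrative cluster labels

def predict_expand(depth):
    """Stand-in for the topological probability p_t of Eq. (11)."""
    return random.random() < 0.6 / depth                 # expansion becomes less likely with depth

def compatible_labels(node, tree):
    """Stand-in for X_i: labels chemically compatible with `node` and its current neighbors."""
    return VOCAB

def sample_label(labels):
    """Stand-in for sampling from the renormalized label distribution q_j of Eq. (12)."""
    return random.choice(labels)

def sample_tree(node, depth, tree):
    # Keep proposing children until the topological prediction says to backtrack.
    while True:
        labels = compatible_labels(node, tree)
        if not (labels and predict_expand(depth)):
            return                                        # backtrack to the parent
        child = len(tree)
        tree[child] = {"parent": node, "label": sample_label(labels)}
        sample_tree(child, depth + 1, tree)

random.seed(0)
tree = {0: {"parent": None, "label": random.choice(VOCAB)}}
sample_tree(0, 1, tree)
print(tree)
```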
2.5. Graph Decoder

The final step of our model is to reproduce the molecular graph G that underlies the predicted junction tree T̂ = (V̂, Ê).

Note that this step is not deterministic, since there are potentially many molecules that correspond to the same junction tree. The underlying degree of freedom pertains to how neighboring clusters C_i and C_j are attached to each other as subgraphs. Our goal here is to assemble the subgraphs (nodes in the tree) together into the correct molecular graph.

Let G(T) be the set of graphs whose junction tree is T. Decoding graph Ĝ from T̂ = (V̂, Ê) is a structured prediction:

\hat{G} = \arg\max_{G' \in \mathcal{G}(\hat{T})} f^a(G')    (14)

where f^a is a scoring function over candidate graphs. We only consider scoring functions that decompose across the clusters and their neighbors. In other words, each term in the scoring function depends only on how a cluster C_i is attached to its neighboring clusters C_j, j ∈ N_T̂(i), in the tree T̂. The problem of finding the highest scoring graph Ĝ – the assembly task – could be cast as a graphical model inference task in a model induced by the junction tree. However, for efficiency reasons, we assemble the molecular graph one neighborhood at a time, following the order in which the tree itself was decoded. In other words, we start by sampling the assembly of the root and its neighbors according to their scores. Then we proceed to assemble the neighbors and their associated clusters (removing the degrees of freedom set by the root assembly), and so on.

It remains to specify how each neighborhood realization is scored. Let G_i be the subgraph resulting from a particular merging of cluster C_i in the tree with its neighbors C_j, j ∈ N_T̂(i). We score G_i as a candidate subgraph by first deriving a vector representation h_{G_i} and then using f_i^a(G_i) = h_{G_i} · z_G as the subgraph score. To this end, let u, v specify atoms in the candidate subgraph G_i and let α_v = i if v ∈ C_i and α_v = j if v ∈ C_j \ C_i. The indices α_v are used to mark the positions of the atoms in the junction tree, and to retrieve the messages m̂_{i,j} summarizing the subtree under i along the edge (i, j), obtained by running the tree encoding algorithm. The neural messages pertaining to the atoms and bonds in subgraph G_i are obtained and aggregated into h_{G_i}, similarly to the encoding step, but with different (learned) parameters:

\mu_{uv}^{(t)} = \tau\big( W_1^a x_u + W_2^a x_{uv} + W_3^a \tilde{\mu}_{uv}^{(t-1)} \big)    (15)

\tilde{\mu}_{uv}^{(t-1)} =
\begin{cases}
\sum_{w \in N(u) \setminus v} \mu_{wu}^{(t-1)} & \alpha_u = \alpha_v \\
\hat{m}_{\alpha_u, \alpha_v} + \sum_{w \in N(u) \setminus v} \mu_{wu}^{(t-1)} & \alpha_u \neq \alpha_v
\end{cases}

The major difference from Eq. (1) is that we augment the model with tree messages m̂_{α_u,α_v} derived by running the tree encoder over the predicted tree T̂. The message m̂_{α_u,α_v} provides a tree-dependent positional context for bond (u, v) (illustrated as subtree A in Figure 5).

Figure 5. Decoding a molecule from a junction tree. 1) Ground truth molecule G. 2) Predicted junction tree T̂. 3) We enumerate different combinations between the red cluster C and its neighbors. Crossed arrows indicate combinations that lead to chemically infeasible molecules. Note that if we discard the tree structure during enumeration (i.e., ignoring subtree A), the last two candidates collapse into the same molecule. 4) Subgraphs are ranked at each node. The final graph is decoded by putting together all the predicted subgraphs.

Learning The graph decoder parameters are learned to maximize the log-likelihood of predicting the correct subgraphs G_i of the ground truth graph G at each tree node:

\mathcal{L}_g(G) = \sum_i \Big[ f^a(G_i) - \log \sum_{G_i' \in \mathcal{G}_i} \exp\big( f^a(G_i') \big) \Big]    (16)

where G_i (the calligraphic set) denotes the possible candidate subgraphs at tree node i. During training, we again apply teacher forcing, i.e., we feed the graph decoder with ground truth trees as input.
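The per-node term of Eq. (16) can be written compactly as a softmax cross-entropy over candidate scores, as in the NumPy sketch below (the candidate embeddings and z_G are random stand-ins for the message-passing outputs):

```python
# One node's contribution to L_g(G): score every candidate attachment against z_G with
# a dot product and apply a softmax cross-entropy on the ground-truth candidate.
import numpy as np

rng = np.random.default_rng(0)
d = 28

z_G = rng.standard_normal(d)                     # graph latent code
h_candidates = rng.standard_normal((4, d))       # stand-in embeddings h_{G_i'} of ~4 candidates
correct = 1                                      # index of the ground-truth attachment G_i

scores = h_candidates @ z_G                      # f^a(G_i') = h_{G_i'} . z_G
log_Z = np.log(np.exp(scores - scores.max()).sum()) + scores.max()   # stable log-sum-exp
node_loss = -(scores[correct] - log_Z)           # negative of one term in Eq. (16)
print(float(node_loss))
```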
Complexity By our tree decomposition, any two clusters share at most two atoms, so we only need to merge at most two atoms or one bond. By pruning chemically invalid subgraphs and merging isomorphic graphs, |G_i| ≈ 4 on average when tested on a standard ZINC drug dataset. The computational complexity of JT-VAE is therefore linear in the number of clusters, scaling nicely to large graphs.
3. Experiments

Our evaluation efforts measure various aspects of molecular generation. The first two evaluations follow previously proposed tasks (Kusner et al., 2017). We also introduce a third task: constrained molecule optimization.
Figure 6. Left: Random molecules sampled from the prior distribution N(0, I). Right: Visualization of the local neighborhood of the molecule in the center. The three molecules highlighted in the red dashed box have the same tree structure as the center molecule, but a different graph structure, as their clusters are combined differently. The same phenomenon emerges in another group of molecules (blue dashed box).

• Molecule reconstruction and validity We test the models on the task of reconstructing input molecules from their latent representations, and decoding valid molecules when sampling from the prior distribution. (Section 3.1)
• Bayesian optimization Moving beyond generating valid molecules, we test how well the model can produce novel molecules with desired properties. To this end, we perform Bayesian optimization in the latent space to search for molecules with specified properties. (Section 3.2)
• Constrained molecule optimization The task is to modify given molecules to improve specified properties, while constraining the degree of deviation from the original molecule. This is a more realistic scenario in drug discovery, where the development of new drugs usually starts with known molecules such as existing drugs (Besnard et al., 2012). Since it is a new task, we cannot compare to any existing baselines. (Section 3.3)

Below we describe the data, baselines and model configuration that are shared across the tasks. Additional setup details are provided in the task-specific sections.

Data We use the ZINC molecule dataset from Kusner et al. (2017) for our experiments, with the same training/testing split. It contains about 250K drug molecules extracted from the ZINC database (Sterling & Irwin, 2015).

Baselines We compare our approach with SMILES-based baselines: 1) Character VAE (CVAE) (Gómez-Bombarelli et al., 2016), which generates SMILES strings character by character; 2) Grammar VAE (GVAE) (Kusner et al., 2017), which generates SMILES following syntactic constraints given by a context-free grammar; 3) Syntax-Directed VAE (SD-VAE) (Dai et al., 2018), which incorporates both syntactic and semantic constraints of SMILES via an attribute grammar. For the molecule generation task, we also compare with GraphVAE (Simonovsky & Komodakis, 2018), which directly generates atom labels and adjacency matrices of graphs.

Model Configuration To be comparable with the above baselines, we set the latent space dimension to 56, i.e., the tree and graph representations h_T and h_G have 28 dimensions each. Full training details and model configurations are provided in the appendix.

3.1. Molecule Reconstruction and Validity

Setup The first task is to reconstruct and sample molecules from the latent space. Since both the encoding and decoding processes are stochastic, we estimate reconstruction accuracy by the Monte Carlo method used in Kusner et al. (2017): each molecule is encoded 10 times and each encoding is decoded 10 times. We report the portion of the 100 decoded molecules that are identical to the input molecule.

To compute validity, we sample 1000 latent vectors from the prior distribution N(0, I) and decode each of these vectors 100 times. We report the percentage of decoded molecules that are chemically valid (checked by RDKit). For the ablation study, we also report the validity of our model without the validity check in the decoding phase.
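A sketch of this Monte Carlo protocol is shown below, assuming RDKit is installed; `model.encode` and `model.decode` are hypothetical handles on a trained model's stochastic encoder and decoder and are not part of any released API.

```python
# Monte Carlo estimates of reconstruction accuracy (10 encodings x 10 decodings per
# molecule) and prior validity (1000 prior samples x 100 decodings), checked with RDKit.
import numpy as np
from rdkit import Chem

def reconstruction_accuracy(model, smiles_list, n_enc=10, n_dec=10):
    hits = total = 0
    for s in smiles_list:
        target = Chem.MolToSmiles(Chem.MolFromSmiles(s))        # canonical reference
        for _ in range(n_enc):
            z = model.encode(s)                                  # stochastic encoding (hypothetical API)
            for _ in range(n_dec):
                out = model.decode(z)                            # stochastic decoding (hypothetical API)
                mol = Chem.MolFromSmiles(out) if out else None
                hits += int(mol is not None and Chem.MolToSmiles(mol) == target)
                total += 1
    return hits / total

def prior_validity(model, latent_dim=56, n_samples=1000, n_dec=100, seed=0):
    rng = np.random.default_rng(seed)
    valid = total = 0
    for _ in range(n_samples):
        z = rng.standard_normal(latent_dim)                      # z ~ N(0, I)
        for _ in range(n_dec):
            out = model.decode(z)
            valid += int(out is not None and Chem.MolFromSmiles(out) is not None)
            total += 1
    return valid / total
```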

Table 1. Reconstruction accuracy and prior validity results. Baseline results are copied from Kusner et al. (2017); Dai et al. (2018); Simonovsky & Komodakis (2018).

Method               Reconstruction   Validity
CVAE                 44.6%            0.7%
GVAE                 53.7%            7.2%
SD-VAE               76.2%            43.5%
GraphVAE             -                13.5%
JT-VAE (w/o check)   76.4%            93.5%
JT-VAE (full)        76.7%            100.0%

Table 2. Best molecule property scores found by each method. Baseline results are from Kusner et al. (2017); Dai et al. (2018).

Method   1st    2nd    3rd
CVAE     1.98   1.42   1.19
GVAE     2.94   2.89   2.80
SD-VAE   4.04   3.50   2.96
JT-VAE   5.30   4.93   4.49

Results Table 1 shows that JT-VAE outperforms previous models in molecule reconstruction, and always produces valid molecules when sampled from the prior distribution. When the validity check is removed, our model still generates 93.5% valid molecules, which shows that our method does not rely heavily on prior knowledge. As shown in Figure 6, the sampled molecules have non-trivial structures, not just simple chains. We further sampled 5000 molecules from the prior and found that they are all distinct from the training set, so our model is not simply memorizing.

Analysis We qualitatively examine the latent space of JT-VAE by visualizing the neighborhood of molecules. Given a molecule, we follow the method in Kusner et al. (2017) to construct a grid visualization of its neighborhood. Figure 6 shows the local neighborhood of the same molecule visualized in Dai et al. (2018). In comparison, our neighborhood does not contain molecules with huge rings (more than 7 atoms), which rarely occur in the dataset. We also highlight two groups of closely resembling molecules that have identical tree structures but vary only in how the clusters are attached together. This demonstrates the smoothness of the learned molecular embeddings.

3.2. Bayesian Optimization

Setup The second task is to produce novel molecules with desired properties. Following Kusner et al. (2017), our target chemical property y(·) is the octanol-water partition coefficient (logP) penalized by the synthetic accessibility (SA) score and the number of long cycles.² To perform Bayesian optimization (BO), we first train a VAE and associate each molecule with a latent vector, given by the mean of the variational encoding distribution. After the VAE is learned, we train a sparse Gaussian process (SGP) to predict y(m) given its latent representation. Then we perform five iterations of batched BO using the expected improvement heuristic.

For comparison, we report 1) the predictive performance of the SGP trained on latent encodings learned by different VAEs, measured by log-likelihood (LL) and root mean square error (RMSE) with 10-fold cross validation; and 2) the top-3 molecules found by BO under each model.

² y(m) = logP(m) − SA(m) − cycle(m), where cycle(m) counts the number of rings that have more than six atoms.
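A sketch of this target, assuming RDKit is installed, is given below; the SA term relies on the `sascorer` module from RDKit's Contrib/SA_Score directory being on the Python path, and benchmark implementations typically also standardize each term over the training set, which is omitted here.

```python
# Penalized logP target y(m) = logP(m) - SA(m) - cycle(m) from the footnote above.
from rdkit import Chem
from rdkit.Chem import Descriptors
import sascorer                                        # RDKit Contrib/SA_Score/sascorer.py

def penalized_logp(smiles):
    mol = Chem.MolFromSmiles(smiles)
    log_p = Descriptors.MolLogP(mol)                   # octanol-water partition coefficient
    sa = sascorer.calculateScore(mol)                  # synthetic accessibility score
    n_long_cycles = sum(1 for ring in mol.GetRingInfo().AtomRings() if len(ring) > 6)
    return log_p - sa - n_long_cycles

print(penalized_logp("CC(=O)Oc1ccccc1C(=O)O"))         # arbitrary example molecule
```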
Figure 7. Best three molecules and their property scores found by JT-VAE using Bayesian optimization.

Results As shown in Table 2, JT-VAE finds molecules with significantly better scores than previous methods. Figure 7 lists the top-3 molecules found by JT-VAE. In fact, JT-VAE finds over 50 molecules with scores over 3.50 (the score of the second best molecule proposed by SD-VAE). Moreover, the SGP yields better predictive performance when trained on JT-VAE embeddings (Table 3).

3.3. Constrained Optimization

Setup The third task is to perform molecule optimization in a constrained scenario. Given a molecule m, the task is to find a different molecule m' that has the highest property value subject to the molecular similarity constraint sim(m, m') ≥ δ for some threshold δ. We use Tanimoto similarity with Morgan fingerprints (Rogers & Hahn, 2010) as the similarity metric, and the penalized logP coefficient as our target chemical property. For this task, we jointly train a property predictor F (parameterized by a feed-forward network) with JT-VAE to predict y(m) from the latent embedding of m. To optimize a molecule m, we start from its latent representation and apply gradient ascent in the latent space to improve the predicted score F(·), similar to Mueller et al. (2017). After applying K = 80 gradient steps, K molecules are decoded from the resulting latent trajectory, and we report the molecule with the highest F(·) that satisfies the similarity constraint. A modification succeeds if one of the decoded molecules satisfies the constraint and is distinct from the original.

To provide the greatest challenge, we selected the 800 molecules with the lowest property score y(·) from the test set. We report the success rate (how often a modification succeeds), and among successful cases the average improvement y(m') − y(m) and the molecular similarity sim(m, m') between the original and modified molecules m and m'.
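The constrained search can be sketched as below, assuming RDKit is installed; `model.encode_mean`, `model.decode`, `predict_score`, and `predict_grad` are hypothetical handles on a trained JT-VAE and its jointly trained predictor F, not a released API.

```python
# Similarity-constrained latent-space optimization: gradient ascent on the predicted
# property, keeping the best decoded molecule that stays within Tanimoto similarity delta.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def tanimoto(smiles_a, smiles_b, radius=2, n_bits=2048):
    """Tanimoto similarity between Morgan fingerprints of two molecules."""
    fp_a = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_a), radius, nBits=n_bits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_b), radius, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

def constrained_optimize(model, predict_score, predict_grad, smiles, delta, steps=80, lr=0.1):
    z = model.encode_mean(smiles)                      # latent code of the seed molecule (hypothetical API)
    best_score, best = float("-inf"), None
    for _ in range(steps):
        z = z + lr * predict_grad(z)                   # gradient ascent on the predicted property F
        candidate = model.decode(z)
        if not candidate or candidate == smiles:       # a success must be a distinct, valid molecule
            continue
        if tanimoto(smiles, candidate) >= delta and predict_score(z) > best_score:
            best_score, best = predict_score(z), candidate
    return best                                        # None means the modification failed
```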

Table 3. Predictive performance of sparse Gaussian processes trained on different VAEs. Baseline results are copied from Kusner et al. (2017) and Dai et al. (2018).

Method   LL                 RMSE
CVAE     −1.812 ± 0.004     1.504 ± 0.006
GVAE     −1.739 ± 0.004     1.404 ± 0.006
SD-VAE   −1.697 ± 0.015     1.366 ± 0.023
JT-VAE   −1.658 ± 0.023     1.290 ± 0.026

Table 4. Constrained optimization results of JT-VAE: mean and standard deviation of property improvement, molecular similarity, and success rate under the constraint sim(m, m') ≥ δ for varied δ.

δ     Improvement    Similarity     Success
0.0   1.91 ± 2.04    0.28 ± 0.15    97.5%
0.2   1.68 ± 1.85    0.33 ± 0.13    97.1%
0.4   0.84 ± 1.45    0.51 ± 0.10    83.6%
0.6   0.21 ± 0.71    0.69 ± 0.06    46.4%

Figure 8. A molecule modification that yields an improvement of 4.0 with molecular similarity 0.617 (modified part is in red).

Results Our results are summarized in Table 4. The unconstrained scenario (δ = 0) has the best average improvement, but often proposes dissimilar molecules. When we tighten the constraint to δ = 0.4, our model finds similar molecules about 80% of the time, with an average improvement of 0.84. This also demonstrates the smoothness of the learned latent space. Figure 8 illustrates an effective modification resulting in a similar molecule with a large improvement.

4. Related Work

Molecule Generation Previous work on molecule generation mostly operates on SMILES strings. Gómez-Bombarelli et al. (2016); Segler et al. (2017) built generative models of SMILES strings with recurrent decoders. Unfortunately, these models can generate invalid SMILES that do not correspond to any molecule. To remedy this issue, Kusner et al. (2017); Dai et al. (2018) complemented the decoder with syntactic and semantic constraints of SMILES via context-free and attribute grammars, but these grammars do not fully capture chemical validity. Other techniques such as active learning (Janz et al., 2017) and reinforcement learning (Guimaraes et al., 2017) encourage the model to generate valid SMILES through additional training signal. Very recently, Simonovsky & Komodakis (2018) proposed to generate molecular graphs by predicting their adjacency matrices, and Li et al. (2018) generated molecules node by node. In comparison, our method enforces chemical validity and is more efficient due to its coarse-to-fine generation.

Graph-structured Encoders The neural network formulation on graphs was first proposed by Gori et al. (2005); Scarselli et al. (2009), and later enhanced by Li et al. (2015) with gated recurrent units. For recurrent architectures over graphs, Lei et al. (2017) designed a Weisfeiler-Lehman kernel network inspired by graph kernels. Dai et al. (2016) considered a different architecture where graphs were viewed as latent variable graphical models, and derived their model from message passing algorithms. Our tree and graph encoders are closely related to this graphical model perspective, and to neural message passing networks (Gilmer et al., 2017). For convolutional architectures, Duvenaud et al. (2015) introduced a convolution-like propagation on molecular graphs, which was generalized to other domains by Niepert et al. (2016). Bruna et al. (2013); Henaff et al. (2015) developed graph convolution in the spectral domain via the graph Laplacian. For applications, graph neural networks are used in semi-supervised classification (Kipf & Welling, 2016), computer vision (Monti et al., 2016), and chemical domains (Kearnes et al., 2016; Schütt et al., 2017; Jin et al., 2017).

Tree-structured Models Our tree encoder is related to recursive neural networks and tree-LSTMs (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015). These models encode tree structures where nodes in the tree are bottom-up transformed into vector representations. In contrast, our model propagates information both bottom-up and top-down.

On the decoding side, tree generation naturally arises in natural language parsing (Dyer et al., 2016; Kiperwasser & Goldberg, 2016). Different from our approach, natural language parsers have access to input words and only predict the topology of the tree. For general-purpose tree generation, Vinyals et al. (2015); Aharoni & Goldberg (2017) applied recurrent networks to generate linearized versions of trees, but their architectures were entirely sequence-based. Dong & Lapata (2016); Alvarez-Melis & Jaakkola (2016) proposed tree-based architectures that construct trees top-down from the root. Our model is most closely related to Alvarez-Melis & Jaakkola (2016), which disentangles topological prediction from label prediction, but we generate nodes in a depth-first order and have additional steps that propagate information bottom-up. This forward-backward propagation also appears in Parisotto et al. (2016), but their model is node-based whereas ours is based on message passing.

5. Conclusion

In this paper we presented a junction tree variational autoencoder for generating molecular graphs. Our method significantly outperforms previous work in molecule generation and optimization. For future work, we plan to generalize our method to general low-treewidth graphs.

Acknowledgement

We thank Jonas Mueller, Chengtao Li, Tao Lei and the MIT NLP Group for their helpful comments. This work was supported by the DARPA Make-It program under contract ARO W911NF-16-2-0023.

References

Aharoni, R. and Goldberg, Y. Towards string-to-tree neural machine translation. arXiv preprint arXiv:1704.04743, 2017.

Alvarez-Melis, D. and Jaakkola, T. S. Tree-structured decoding with doubly-recurrent neural networks. 2016.

Besnard, J., Ruda, G. F., Setola, V., Abecassis, K., Rodriguiz, R. M., Huang, X.-P., Norval, S., Sassano, M. F., Shin, A. I., Webster, L. A., et al. Automated design of ligands to polypharmacological profiles. Nature, 492(7428):215–220, 2012.

Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Clayden, J., Greeves, N., Warren, S., and Wothers, P. Organic Chemistry. Oxford University Press, 2001.

Dai, H., Dai, B., and Song, L. Discriminative embeddings of latent variable models for structured data. In International Conference on Machine Learning, pp. 2702–2711, 2016.

Dai, H., Tian, Y., Dai, B., Skiena, S., and Song, L. Syntax-directed variational autoencoder for structured data. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SyqShMZRb.

Dong, L. and Lapata, M. Language to logical form with neural attention. arXiv preprint arXiv:1601.01280, 2016.

Duvenaud, D. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. P. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224–2232, 2015.

Dyer, C., Kuncoro, A., Ballesteros, M., and Smith, N. A. Recurrent neural network grammars. arXiv preprint arXiv:1602.07776, 2016.

Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.

Gómez-Bombarelli, R., Wei, J. N., Duvenaud, D., Hernández-Lobato, J. M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 2016. doi: 10.1021/acscentsci.7b00572.

Gori, M., Monfardini, G., and Scarselli, F. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN'05), volume 2, pp. 729–734. IEEE, 2005.

Guimaraes, G. L., Sanchez-Lengeling, B., Farias, P. L. C., and Aspuru-Guzik, A. Objective-reinforced generative adversarial networks (ORGAN) for sequence generation models. arXiv preprint arXiv:1705.10843, 2017.

Henaff, M., Bruna, J., and LeCun, Y. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.

Janz, D., van der Westhuizen, J., and Hernández-Lobato, J. M. Actively learning what makes a discrete sequence valid. arXiv preprint arXiv:1708.04465, 2017.

Jin, W., Coley, C., Barzilay, R., and Jaakkola, T. Predicting organic reaction outcomes with Weisfeiler-Lehman network. In Advances in Neural Information Processing Systems, pp. 2604–2613, 2017.

Kearnes, S., McCloskey, K., Berndl, M., Pande, V., and Riley, P. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30(8):595–608, 2016.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Kiperwasser, E. and Goldberg, Y. Easy-first dependency parsing with hierarchical tree LSTMs. arXiv preprint arXiv:1603.00375, 2016.

Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Kusner, M. J., Paige, B., and Hernández-Lobato, J. M. Grammar variational autoencoder. arXiv preprint arXiv:1703.01925, 2017.

Landrum, G. RDKit: Open-source cheminformatics. http://www.rdkit.org, 2006.

Lei, T., Jin, W., Barzilay, R., and Jaakkola, T. Deriving neural architectures from sequence and graph kernels. arXiv preprint arXiv:1705.09037, 2017.

Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.

Li, Y., Vinyals, O., Dyer, C., Pascanu, R., and Battaglia, P. Learning deep generative models of graphs. 2018. URL https://openreview.net/forum?id=Hy1d-ebAb.

Monti, F., Boscaini, D., Masci, J., Rodolà, E., Svoboda, J., and Bronstein, M. M. Geometric deep learning on graphs and manifolds using mixture model CNNs. arXiv preprint arXiv:1611.08402, 2016.

Mueller, J., Gifford, D., and Jaakkola, T. Sequence to better sequence: continuous revision of combinatorial structures. In International Conference on Machine Learning, pp. 2536–2544, 2017.

Niepert, M., Ahmed, M., and Kutzkov, K. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014–2023, 2016.

Parisotto, E., Mohamed, A.-r., Singh, R., Li, L., Zhou, D., and Kohli, P. Neuro-symbolic program synthesis. arXiv preprint arXiv:1611.01855, 2016.

Rarey, M. and Dixon, J. S. Feature trees: a new molecular similarity measure based on tree matching. Journal of Computer-Aided Molecular Design, 12(5):471–490, 1998.

Rogers, D. and Hahn, M. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742–754, 2010.

Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and Monfardini, G. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.

Schütt, K., Kindermans, P.-J., Felix, H. E. S., Chmiela, S., Tkatchenko, A., and Müller, K.-R. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems, pp. 992–1002, 2017.

Segler, M. H., Kogej, T., Tyrchan, C., and Waller, M. P. Generating focussed molecule libraries for drug discovery with recurrent neural networks. arXiv preprint arXiv:1701.01329, 2017.

Simonovsky, M. and Komodakis, N. GraphVAE: Towards generation of small graphs using variational autoencoders. arXiv preprint arXiv:1802.03480, 2018.

Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, 2013.

Sterling, T. and Irwin, J. J. ZINC 15 – ligand discovery for everyone. Journal of Chemical Information and Modeling, 55(11):2324–2337, 2015.

Tai, K. S., Socher, R., and Manning, C. D. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Vinyals, O., Kaiser, Ł., Koo, T., Petrov, S., Sutskever, I., and Hinton, G. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pp. 2773–2781, 2015.

Weininger, D. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences, 28(1):31–36, 1988.

Zhu, X., Sobihani, P., and Guo, H. Long short-term memory over recursive structures. In International Conference on Machine Learning, pp. 1604–1612, 2015.
