Stanford Graph Learning: Finance
Graph Learning in Financial Networks
Jiaxuan You
Stanford University
In collaboration with Tianyu Du, Fan-yun Sun, and Jure Leskovec
Financial Networks
§ Financial networks: Describe financial entities and their connections
§ Example: International banking
• Nodes: Countries
• Edges: Capital flows
§ Example: Bitcoin transactions
• Nodes: BTC wallets
• Edges: Transactions
Image credit: The Political Economy of Global Finance: A Network Model; https://dailyblockchain.github.io/
Jiaxuan You, Stanford University 2
Graph Learning in Financial Networks
§ Goal: A graph learning framework for financial networks
§ Applications: Fraud detection, Anti-money laundering, Anomaly detection
§ Solution: Graph representation learning!
§ Tasks: Edge-level prediction, e.g., fraudulent/anomalous transactions, …
§ Benefits:
§ Represents a transaction with a broader context
§ Requires less feature engineering
[Figure: a transaction network with time-stamped edges (e.g., $100 on 01/06, $400 on 01/05); node embeddings computed from the network feed edge-level prediction tasks]
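To make the edge-level setup concrete, here is a minimal sketch of scoring a transaction from learned node embeddings. All names and the Hadamard-product combination are illustrative assumptions, not the exact ROLAND prediction head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node embeddings for 5 accounts (illustrative 8-dim vectors;
# in practice these would come from a trained GNN).
node_emb = rng.normal(size=(5, 8))

def edge_embedding(u, v):
    """Combine two node embeddings into an edge embedding;
    the Hadamard (element-wise) product is one common choice."""
    return node_emb[u] * node_emb[v]

def edge_score(u, v, w):
    """Linear head + sigmoid: a probability-like score for the edge,
    e.g., how likely a transaction is fraudulent."""
    return float(1.0 / (1.0 + np.exp(-edge_embedding(u, v) @ w)))

w = rng.normal(size=8)       # stand-in for learned head weights
score = edge_score(0, 3, w)  # score in (0, 1)
```

Because the node embeddings aggregate each account's neighborhood, the edge score sees a broader context than hand-engineered per-transaction features would.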
ROLAND Model: From Static to Dynamic GNN
§ Idea: Recurrently update node embeddings at each layer
§ Introduce a new module, embedding update, to a static GNN:
§ Input:
• Previous embeddings from the same layer
• Current embeddings from the previous layer
§ Output: Updated embeddings
§ Benefits:
• Simple and effective
• Benefits from the SOTA designs of static GNNs
[Figure: left, a static GNN (Graph G, then GNN Layer 1, GNN Layer 2, Pred y); right, the dynamic GNN, where each snapshot G_{t-1}, G_t passes through the GNN layers and an embedding-update module carries each layer's embeddings across snapshots, producing Pred y_t and y_{t+1}]
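The recurrent per-layer update above can be sketched as follows. This is a hedged toy version with assumed names: the GNN layer is a plain mean-aggregation layer, and the update module is a moving average (the simplest of the update variants; gated/MLP updates are also possible):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: mean-aggregate neighbors, then transform."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    return np.tanh((A @ H) / deg @ W)

def embedding_update(H_prev_time, H_curr, tau=0.5):
    """Merge the previous snapshot's embeddings (same layer) with the
    current snapshot's embeddings (output of the previous layer).
    A moving average is one simple choice of update module."""
    return tau * H_prev_time + (1.0 - tau) * H_curr

rng = np.random.default_rng(0)
n, d = 6, 4
A_t = (rng.random((n, n)) < 0.4).astype(float)   # snapshot adjacency
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
X = rng.normal(size=(n, d))                      # node features
H_hist = [np.zeros((n, d)), np.zeros((n, d))]    # per-layer states

# Forward pass over one snapshot: each layer's embeddings are
# updated recurrently from the previous snapshot's state.
H = X
for layer, W in enumerate((W1, W2)):
    H = gnn_layer(A_t, H, W)
    H = embedding_update(H_hist[layer], H)
    H_hist[layer] = H                            # keep for the next snapshot
```

Note that only the stored per-layer states `H_hist` carry information across time, which is what lets any static GNN design be reused unchanged.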
ROLAND Training: Efficient Training
§ Live-update evaluation pipeline: at each time step, fine-tune on the newest snapshot, then evaluate on the next one
§ Incremental training:
§ Only keep these in GPU:
• GNN model GNN_t
• Historical node states H_{t-1}
• Incoming new graph snapshot G_t
§ Efficient and works well in practice
§ Meta training:
§ Train a meta-GNN that can quickly adapt to new data
§ Benefit: ROLAND does not need to be frequently retrained
[Figure: live-update pipeline at times t and t+1: train GNN_t on snapshot G_{t-1}, evaluate on G_t; a meta-model state is carried across time steps; only GNN_t, H_{t-1}, and G_t reside in GPU]
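The incremental live-update loop can be sketched as below. The model, training step, and metric are stubs under assumed names; the point is the memory discipline: at any time only the current model `W`, the node states `H`, and the incoming snapshot `G_t` are live, so memory stays constant as the stream grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4

def make_snapshot(n=6):
    """A toy graph snapshot: random adjacency (stand-in for G_t)."""
    return (rng.random((n, n)) < 0.3).astype(float)

def train_step(W, H, G):
    """Stub for fine-tuning on snapshot G from states H:
    one propagation step plus a placeholder weight update."""
    H_new = np.tanh(G @ H @ W)
    W_new = W * 0.99          # placeholder for a real gradient update
    return W_new, H_new

def evaluate(W, H, G):
    """Stub metric (a ranking metric such as MRR would go here)."""
    return float(np.mean(np.tanh(G @ H @ W)))

W = rng.normal(size=(d, d))   # model weights (meta-initialized GNN_0)
H = rng.normal(size=(n, d))   # historical node states H_{t-1}

metrics = []
for t in range(5):
    G_t = make_snapshot(n)                # incoming snapshot G_t
    metrics.append(evaluate(W, H, G_t))   # evaluate before training on it
    W, H = train_step(W, H, G_t)          # fine-tune; update node states
    # G_t can now be discarded or archived; only W and H persist.
```

Evaluating on each snapshot before training on it mirrors the live-update setting, where predictions must be made for data the model has not yet seen.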
Overview of This Talk
[Figure: transaction-network illustration]
§ Result: ROLAND significantly outperforms SOTA baselines
Analysis: ROLAND’s Performance over Time
[Figure: transaction amount, number of retraining epochs, test MRR, and Recall@1 over time; almost no retraining is needed]
[Figure: edge embeddings for node pairs (A, B) and (C, D) in the transaction network]
Quantitative Evaluation for Anomaly Detection
§ ROLAND improves on the deep learning baseline by +14.7% precision and +14.2% recall
[Figure: transaction-network illustration]