Avalanche: A PyTorch Library for Deep Continual Learning
Abstract
Abstract
Continual learning is the problem of learning from a non-stationary stream of data, a fundamental issue for the sustainable and efficient training of deep neural networks over time. Unfortunately, deep learning libraries only provide primitives for offline training, assuming that the model's architecture and data are fixed. Avalanche is an open-source library maintained by the ContinualAI non-profit organization that extends PyTorch with first-class support for dynamic architectures, streams of datasets, and incremental training and evaluation methods. Avalanche provides a large set of predefined benchmarks and training algorithms; it is modular and easy to extend, and it supports a wide range of continual learning scenarios. Documentation is available at https://avalanche.continualai.org.
Keywords: Continual Learning, lifelong learning, PyTorch, reproducibility, machine learning software
1. Introduction
Learning continually from non-stationary data streams is a long-sought goal in Artificial Intelligence. While most deep learning methods are trained offline, there is growing interest in Deep Continual Learning (CL) (Lesort et al., 2020) to improve the learning efficiency, robustness, and adaptability of deep networks. Deep learning libraries such as PyTorch (Paszke et al., 2019) and TensorFlow are designed to support offline training, making it difficult to implement continual learning methods. Avalanche, initially proposed in Lomonaco et al. (2021), provides a comprehensive library to support the development of research-oriented continual learning methods. The library is maintained by the ContinualAI non-profit organization. Compared to existing continual learning libraries (Douillard and Lesort, 2021; Wolczyk et al., 2021; Normandin et al., 2022; Mirzadeh and Ghasemzadeh, 2021; Masana et al.,
© 2023 Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, Vincenzo Lomonaco.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided
at http://jmlr.org/papers/v24/23-0130.html.
Carta, Pellegrini, Cossu, Hemati, Lomonaco
(Figure: overview of the Avalanche library. Benchmarks: streams, generators, and datasets such as CORe50, OpenLORIS, Stream51, EndlessCL, and Split/Permuted benchmarks. Training: strategies, plugins, templates, and replay policies. Evaluation: metrics, OOD/validation, and logging via TensorBoard and interactive loggers.)
experience, stream). Custom metrics can be defined, or they can easily be computed from the results of the existing metrics.
Logging Metrics are collected and serialized automatically by the logging system. The
EvaluationPlugin (doc) connects training strategies, metrics, and loggers, by collecting
all the metrics and dispatching them to all the registered loggers. TensorBoard, Weights
and Biases, CSV files, text files, and standard output loggers are available (doc), but the
logging interface can be easily extended with new loggers for custom needs.
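The collect-and-dispatch pattern behind this design can be sketched in a few lines of plain Python; the class and method names below are illustrative placeholders, not Avalanche's actual API:

```python
class Logger:
    """Base logger interface (hypothetical sketch, not Avalanche's classes)."""
    def log_metric(self, name, value):
        raise NotImplementedError

class TextLogger(Logger):
    """Collects formatted metric lines, standing in for a text-file logger."""
    def __init__(self):
        self.lines = []

    def log_metric(self, name, value):
        self.lines.append(f"{name}: {value:.4f}")

class EvaluationHub:
    """Collects each metric value and dispatches it to every registered logger,
    mirroring the role the text above describes for the EvaluationPlugin."""
    def __init__(self, *loggers):
        self.loggers = list(loggers)

    def update(self, name, value):
        for logger in self.loggers:
            logger.log_metric(name, value)

text_logger = TextLogger()
hub = EvaluationHub(text_logger)
hub.update("Top1_Acc/train_stream", 0.9312)
print(text_logger.lines[0])  # Top1_Acc/train_stream: 0.9312
```

Adding a new logger backend then only requires implementing the single logging method, which is the extension point the real interface exposes.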
Core Utilities Avalanche offers checkpointing functionality, allowing users to pause and resume experiments. All Avalanche components are serializable.
Dynamic and Standalone Components Avalanche extends many static PyTorch components into dynamic objects. For example, CL strategies may require changing the model's architecture, optimizer, losses, and datasets during training. In PyTorch, these are static objects that are not easy to update during training (e.g., the architecture of an nn.Module is fixed). Avalanche DynamicModules provide a simple API to update the model's architecture, ExemplarsBuffers manage replay buffers, regularization plugins update the loss function after each experience, and optimizers are also updated before each experience. Every component is automatically managed by Avalanche strategies, but each can also be used standalone in a custom training loop (example).
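The dynamic-expansion idea can be illustrated with a plain PyTorch module that grows its classification head when new classes appear. This is a minimal hand-rolled sketch of the concept, not Avalanche's actual DynamicModule API:

```python
import torch
import torch.nn as nn

class ExpandableClassifier(nn.Module):
    """Toy linear head that grows its output layer when new classes appear.
    Illustrates the idea behind dynamic architectures; NOT the library's API."""
    def __init__(self, in_features, initial_classes):
        super().__init__()
        self.classifier = nn.Linear(in_features, initial_classes)

    def adapt(self, num_classes):
        """Replace the head with a larger one, preserving learned weights."""
        old = self.classifier
        if num_classes <= old.out_features:
            return  # nothing to do: head is already large enough
        new = nn.Linear(old.in_features, num_classes)
        with torch.no_grad():
            # Copy the old units into the first rows of the new layer.
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        self.classifier = new  # nn.Module re-registers the submodule

    def forward(self, x):
        return self.classifier(x)

head = ExpandableClassifier(in_features=32, initial_classes=2)
head.adapt(5)  # e.g., three new classes observed in the next experience
out = head(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 5])
```

In a continual learning loop, such an adaptation step would run before training on each new experience, and the optimizer would be rebuilt so it tracks the freshly created parameters, which is why strategies must also update the optimizer.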
(Figure: excerpt of the training-loop callbacks exposed by Avalanche strategies, e.g., before_backward, after_training_iteration, after_training_epoch, after_training_exp, after_training.)
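Callbacks such as these let strategies invoke registered plugins at fixed points of the training loop. A minimal sketch of that hook pattern in plain Python follows; the class names are illustrative, not the library's actual templates:

```python
class BasePlugin:
    # Every callback is a no-op by default; concrete plugins override
    # only the hooks they need (e.g., a replay plugin hooks dataloading).
    def before_backward(self, strategy): pass
    def after_training_iteration(self, strategy): pass
    def after_training_epoch(self, strategy): pass

class Strategy:
    """Minimal training-loop skeleton that fires plugin callbacks at
    fixed points. A sketch of the hook pattern, not Avalanche's templates."""
    def __init__(self, plugins):
        self.plugins = plugins
        self.fired = []  # record of callback names, for illustration

    def _trigger(self, hook_name):
        self.fired.append(hook_name)
        for plugin in self.plugins:
            # Dispatch the named hook to every registered plugin.
            getattr(plugin, hook_name)(self)

    def train_epoch(self, n_iterations):
        for _ in range(n_iterations):
            self._trigger("before_backward")
            self._trigger("after_training_iteration")
        self._trigger("after_training_epoch")

strategy = Strategy(plugins=[BasePlugin()])
strategy.train_epoch(n_iterations=2)
print(strategy.fired[-1])  # after_training_epoch
```

Because plugins receive the strategy itself, a hook can inspect or modify the current loss, batch, or model, which is how replay and regularization methods compose without rewriting the loop.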
Testing Avalanche is thoroughly tested with a battery of unit tests. Each pull request is run against a subset of the unit tests by the continuous integration pipeline on GitHub. A subset of continual-learning-baselines (link) is executed at a regular cadence to ensure that Avalanche baselines remain in line with results reported in the literature.
4. Conclusion
Currently, Avalanche v0.3.1 constitutes the largest software library for deep continual learning. Its focus on fast prototyping, reproducibility, and portability makes it an ideal candidate for research-oriented projects. The library is the result of more than two years of development effort involving more than fourteen research organizations across the world. The MIT-licensed software and the support of ContinualAI ensure continuity and alignment with the continual learning research community at large. In the future, we plan to increase the number of available benchmarks and methods, keeping a strong focus on reproducibility and continual-learning-baselines (link). We also plan to provide better support for reinforcement learning (link), distributed training, and federated training, while bringing the toolkit to maturity and its first stable official release, v1.0.0.
Acknowledgments
Antonio Carta acknowledges support from the Ministry of University and Research (MUR)
as part of the FSE REACT-EU - PON 2014-2020 “Research and Innovation” resources –
Innovation Action - DM MUR 1062/2021.
References
Arthur Douillard and Timothée Lesort. Continuum: Simple Management of Complex Con-
tinual Learning Scenarios. arXiv:2102.06253 [cs], February 2021.
Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, and
Natalia Díaz-Rodríguez. Continual learning for robotics: Definition, framework, learning
strategies, opportunities and challenges. Information Fusion, 58:52–68, June 2020. ISSN
1566-2535. doi: 10.1016/j.inffus.2019.12.004.
Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti,
Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido M. van de
Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone
Calderara, German I. Parisi, Fabio Cuzzolin, Andreas S. Tolias, Simone Scardapane, Luca
Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost van de Weijer, Tinne
Tuytelaars, Davide Bacciu, and Davide Maltoni. Avalanche: An End-to-End Library for
Continual Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 3600–3610, 2021.
Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, and
Joost van de Weijer. Class-Incremental Learning: Survey and Performance Evaluation on
Image Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence,
pages 1–20, 2022. ISSN 1939-3539. doi: 10.1109/TPAMI.2022.3213473.
Seyed Iman Mirzadeh and Hassan Ghasemzadeh. CL-Gym: Full-Featured PyTorch Library
for Continual Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 3621–3627, 2021.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,
Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, An-
dreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank
Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An
Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances
in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.,
2019.
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirk-
patrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive Neural
Networks. arXiv:1606.04671 [cs], June 2016.
Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, and Piotr Miłoś. Continual World: A Robotic Benchmark For Continual Reinforcement Learning. In Thirty-Fifth Conference on Neural Information Processing Systems, May 2021.