
Chainer

Chainer is an open source deep learning framework written purely in Python on top of the NumPy and CuPy Python libraries. Development is led by the Japanese venture company Preferred Networks in partnership with IBM, Intel, Microsoft, and Nvidia.[4][5][6][7]

Chainer is notable for its early adoption of the "define-by-run" scheme, as well as its performance on large-scale systems.[1] The first version was released in June 2015, and the framework has gained large popularity in Japan since then.[1][2] Furthermore, in 2017 it was listed by KDnuggets among the top 10 open source machine learning Python projects.[8]

In December 2019, Preferred Networks announced the transition of its development effort from Chainer to PyTorch, and that it will only provide maintenance patches after releasing v7.[9]

Original author(s): Seiya Tokui
Developer(s): Community, Preferred Networks, Inc.
Initial release: June 9, 2015[1][2]
Stable release: 7.8.1[3] / 5 January 2022
Repository: github.com/chainer/chainer (https://github.com/chainer/chainer)
Written in: Python
Platform: cross-platform
Available in: Python
Type: Deep learning library
License: MIT
Website: chainer.org (https://chainer.org/)

Define-by-run

Chainer was the first deep learning framework to introduce the define-by-run approach.[10][11] The traditional procedure to train a network was in two phases: define the fixed connections between mathematical operations (such as matrix multiplication and nonlinear activations) in the network, and then run the actual training calculation. This is called the define-and-run or static-graph approach. Theano and TensorFlow are among the notable frameworks that took this approach. In contrast, in the define-by-run or dynamic-graph approach, the connections in a network are not determined when training starts; the network is determined during training as the actual calculation is performed.

One of the advantages of this approach is that it is intuitive and flexible.[12] If the network has complicated control flows such as conditionals and loops, the define-and-run approach requires specially designed operations for such constructs. In the define-by-run approach, the programming language's native constructs, such as if statements and for loops, can be used to describe such flows. This flexibility is especially useful for implementing recurrent neural networks.[13][14]
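The idea can be illustrated with a toy tape-style automatic differentiation sketch in plain Python (this is not the Chainer API; the Var class, the accumulate function, and the example values are all hypothetical): the computation graph is recorded while a native for loop executes, so the graph's depth follows the length of the input data, as in a recurrent network.

```python
# Minimal tape-based reverse-mode autodiff: a toy stand-in for a
# define-by-run framework (NOT the actual Chainer API).
class Var:
    def __init__(self, value, parents=(), grad_fn=None):
        self.value = value
        self.parents = parents   # nodes this one was computed from
        self.grad_fn = grad_fn   # maps upstream grad -> parent grads
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, (self, other),
                   lambda g: (g, g))

    def __mul__(self, other):
        return Var(self.value * other.value, (self, other),
                   lambda g: (g * other.value, g * self.value))

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)

def accumulate(w, xs):
    # The graph is built *while* this loop runs: its depth depends on
    # len(xs), which a fixed define-and-run graph cannot express with
    # plain Python control flow.
    h = Var(0.0)
    for x in xs:                 # native for loop shapes the graph
        h = h + w * Var(x)
    return h

w = Var(2.0)
out = accumulate(w, [1.0, 2.0, 3.0])  # out = w*1 + w*2 + w*3
out.backward()
print(out.value)   # 12.0
print(w.grad)      # d(out)/dw = 1 + 2 + 3 = 6.0
```

Running the same function on a longer input simply records a deeper graph; no special loop operator is needed.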

Another advantage is ease of debugging.[12] In the define-and-run approach, if an error (such as a numeric error) occurs in the training calculation, it is often difficult to inspect the fault, because the code written to define the network and the actual location of the error are separated. In the define-by-run approach, one can simply suspend the calculation with the language's built-in debugger and inspect the data flowing through the network code.
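A minimal sketch of this debugging workflow in plain Python (the forward function and its values are hypothetical, not Chainer code): because a define-by-run forward pass is ordinary imperative code, an assertion or a pdb.set_trace() call placed inside it halts exactly where a bad value first appears.

```python
import math

def forward(x, w):
    # Ordinary Python: a numeric fault can be inspected right here
    # with pdb.set_trace(), print, or a plain assertion.
    h = sum(xi * wi for xi, wi in zip(x, w))
    assert math.isfinite(h), "non-finite activation"
    return math.log(h)   # raises ValueError immediately if h <= 0

y = forward([1.0, 2.0], [3.0, 4.0])   # h = 1*3 + 2*4 = 11
print(round(y, 4))   # 2.3979
```

In a static-graph framework the failing log would instead surface later, inside the graph executor, far from the line that defined it.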
Define-by-run has gained popularity since its introduction by Chainer and is now implemented in many other frameworks, including PyTorch[15] and TensorFlow.[12]

Extension libraries
Chainer has four extension libraries: ChainerMN, ChainerRL, ChainerCV and ChainerUI. ChainerMN enables Chainer to be used on multiple GPUs with performance significantly faster than other deep learning frameworks.[1] A supercomputer running Chainer on 1024 GPUs processed 90 epochs of the ImageNet dataset on a ResNet-50 network in 15 minutes, four times faster than the previous record, held by Facebook.[16][17] ChainerRL adds state-of-the-art deep reinforcement learning algorithms, ChainerCV is a computer vision library, and ChainerUI is a management and visualization tool.

Applications
Chainer is used as the framework for PaintsChainer, a service that automatically colorizes black-and-white, line-only draft drawings with minimal user input.[18][19]

See also
Comparison of deep learning software
Machine learning
Artificial neural network

References
1. "Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs" (https://www.theregister.
co.uk/2017/04/07/intel_chainer_ai_day/). The Register. 2017-04-07. Retrieved 2017-12-24.
2. "Deep Learning のフレームワーク Chainer を公開しました " (https://research.preferred.jp/20
15/06/deep-learning-chainer/) (in Japanese). 2015-06-09. Retrieved 2017-12-24.
3. "Release 7.8.1" (https://github.com/chainer/chainer/releases/tag/v7.8.1). 5 January 2022.
Retrieved 3 October 2022.
4. "Chainer Homepage" (https://chainer.org/#parters). Retrieved 2017-12-24.
5. "IBM Wants to be "Red Hat" of Deep Learning" (https://www.hpcwire.com/2017/01/26/ibm-w
ants-red-hat-deep-learning/). HPCwire. 2017-01-26. Retrieved 2017-09-08.
6. "Intel Collaborating with Preferred Networks in Japan on Deep Learning" (https://newsroom.i
ntel.com/news/intel-collaborating-preferred-networks-japan-deep-learning/). 2017-04-06.
Retrieved 2017-12-24.
7. "Microsoft partners with Preferred Networks to bring Chainer deep learning technology to
Azure - MSPoweruser" (https://mspoweruser.com/microsoft-partners-with-preferred-networks
-to-bring-chainer-deep-learning-technology-to-azure/). MSPoweruser. 2017-05-23.
Retrieved 2017-09-08.
8. "Top 20 Python Machine Learning Open Source Projects" (https://www.kdnuggets.com/201
6/11/top-20-python-machine-learning-open-source-updated.html). KDnuggets. 2017-11-24.
9. "Preferred Networks Migrates its Deep Learning Research Platform to PyTorch" (https://prefe
rred.jp/en/news/pr20191205/). Preferred Networks, Inc. 2019-12-05. Retrieved 2019-12-27.
10. Tokui, Seiya; et al. (2015). "Chainer: a next-generation open source framework for deep
learning". 29th Annual Conference on Neural Information Processing Systems (NIPS). 5.
11. Shimada, Naoki (September 14, 2017). Deep Learning with Chainer. Gijutsu-Hyohron. p. 61.
ISBN 4774191868.
12. "Eager Execution: An imperative, define-by-run interface to TensorFlow" (https://research.go
ogleblog.com/2017/10/eager-execution-imperative-define-by.html). Google Research Blog.
13. "Deep Learning With Dynamic Computation Graphs (ICLR 2017)" (http://muratbuffalo.blogsp
ot.jp/2017/01/deep-learning-with-dynamic-computation.html). Metadata.
14. Hido, Shohei (8 November 2016). "Complex neural networks made easy by Chainer" (http
s://www.oreilly.com/learning/complex-neural-networks-made-easy-by-chainer). O'Reilly
Media. Retrieved 26 June 2018.
15. Perez, Carlos E. (20 January 2017). "PyTorch, Dynamic Computational Graphs and Modular
Deep Learning" (https://medium.com/intuitionmachine/pytorch-dynamic-computational-graph
s-and-modular-deep-learning-7e7f89f18d1). Medium.
16. "Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes" (https://w
ww.preferred-networks.jp/docs/imagenet_in_15min.pdf) (pdf). Retrieved 2017-12-24.
17. Greene, Tristan (20 November 2017). "Facebook's nerds bested by Japan's in the race to
train AI" (https://thenextweb.com/artificial-intelligence/2017/11/20/researchers-did-in-15-min
utes-what-takes-facebook-an-hour/). The Next Web. Retrieved 24 November 2017.
18. Know, Now You (2017-02-15). "This neural network-based software will add colour to your
drawings for free" (http://www.techly.com.au/2017/02/15/neural-network-based-software-will-
add-colour-drawings-free/). Techly. Retrieved 2017-09-08.
19. "Drawing app "pixiv Sketch" and automatic coloring service "PaintsChainer" collaborate to
provide a new function for automatic coloring of illustrations!" (https://www.preferred-network
s.jp/en/news/pr20170524). 2017-05-24. Retrieved 2017-12-24.

External links
Official website (http://chainer.org/)

