Today, we build ever larger networks on top of previous generations of network topologies. Since neural networks are inherently compatible with one another, we can combine and adapt them for new purposes. When you tackle a new problem, there are no clear guidelines for choosing an appropriate network topology. The most common approaches are to look at the work of others who attempted to solve similar problems, or to design an entirely new topology yourself. Such a design is often inspired by classical methods, but it is up to the network and the training data to learn weights that converge to a plausible solution. As such, there are even networks that learn well-known functions, such as the Fourier transform, from scratch. Because the discrete Fourier transform is a matrix multiplication, it is often modeled as a fully connected layer. With this approach, two disadvantages are immediately apparent: First, the fully connected layer introduces a large number of free parameters that may just as well model entirely different functions. Second, this approach can never reach the computational efficiency of a fast Fourier transform.
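To make this concrete, here is a minimal NumPy sketch (our illustration, not code from any of the cited works) showing that the DFT fits exactly into a fully connected layer, along with the two costs just mentioned:

```python
import numpy as np

# The discrete Fourier transform is a matrix multiplication X = F @ x,
# so a fully connected layer with weight matrix F represents it exactly.
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N)  # the N x N DFT matrix

x = np.random.randn(N)
assert np.allclose(F @ x, np.fft.fft(x))  # identical to the FFT result

# The two disadvantages from the text:
# 1. N*N free (complex) parameters that training could just as well
#    drive toward an entirely different function.
# 2. O(N^2) multiplications per transform, whereas the FFT needs only
#    O(N log N).
print("free parameters:", F.size)
```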
Interestingly, this piece of theory was only published in 2018. It was developed for the theoretical analysis of embedding prior physical knowledge into neural networks. The observations also explain very nicely why convolutional neural networks and pooling layers are so tremendously successful. In analogy to biology, we could argue that convolution and pooling operations are prior knowledge about perception. Recent work goes even further: there are approaches that embed complicated filter functions, such as the vesselness filter or the guided filter, into a neural network.
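As a rough sketch of this idea, the following PyTorch snippet (our illustration; the cited works embed far more involved operators than this) freezes a known filter inside an otherwise trainable network:

```python
import torch
import torch.nn as nn

class KnownOperatorNet(nn.Module):
    """Toy example of embedding a known operator as a fixed layer.

    A 3x3 Sobel edge filter acts as frozen prior knowledge; only the
    surrounding layers learn. The cited works apply the same idea to
    more complex operators such as vesselness or guided filtering.
    """

    def __init__(self):
        super().__init__()
        sobel = torch.tensor([[-1., 0., 1.],
                              [-2., 0., 2.],
                              [-1., 0., 1.]]).view(1, 1, 3, 3)
        self.known = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.known.weight.data = sobel
        self.known.weight.requires_grad = False  # the known operator stays fixed
        self.head = nn.Conv2d(1, 1, 1)           # trainable part

    def forward(self, x):
        return self.head(torch.relu(self.known(x)))

net = KnownOperatorNet()
out = net(torch.randn(1, 1, 32, 32))
print(out.shape)  # torch.Size([1, 1, 32, 32])
```

Only the trainable head receives gradient updates; the frozen layer contributes no free parameters, which is exactly what makes embedded prior knowledge attractive.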
The theoretical analysis also shows that modeling errors in earlier layers are amplified by subsequent layers. This observation is in line with the importance of feature extraction in classical machine learning and pattern analysis. Combining feature extraction and classification, as is done in deep learning, allows us to optimize both processes jointly and thereby reduces the expected error after training.
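The intuition behind this amplification can be written as a simple composition bound (the notation below is ours and only sketches the published analysis):

```latex
% Let f = f_L \circ \dots \circ f_1 be the network, let each layer f_j be
% approximated by \hat f_j with error at most \epsilon_j, and let each
% f_j be Lipschitz with constant \ell_j. A telescoping argument yields
\[
  \bigl\| f(x) - \hat f(x) \bigr\|
  \;\le\; \sum_{j=1}^{L} \epsilon_j \prod_{k=j+1}^{L} \ell_k .
\]
```

An error made in an early layer is thus multiplied by the Lipschitz constants of every layer that follows it, so replacing early layers with known, exact operators removes their terms from the bound entirely.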
We think that these new approaches are interesting for the deep learning community, which today goes well beyond modeling only perceptual tasks. To us, it is exciting to see that traditional approaches are inherently compatible with everything that is done in deep learning today. Hence, we believe that many more new developments are to come in machine learning and deep learning in the near future, and it will be exciting to follow up on them.
If you find these observations interesting and exciting, we recommend reading our gentle introduction to deep learning as a follow-up to this article, or our free online video course.
Text and images of this article are licensed under the Creative Commons Attribution 4.0 License, so feel free to reuse and share any part of this work.