Deep Learning
Deep learning (also known as deep structured learning) is part of a
broader family of machine learning methods based on artificial neural
networks with representation learning. Learning can
be supervised, semi-supervised or unsupervised.[1][2][3]
Deep learning architectures such as deep neural networks, deep belief
networks, recurrent neural networks and convolutional neural
networks have been applied to fields including computer vision, speech
recognition, natural language processing, audio recognition, social
network filtering, machine translation, bioinformatics, drug design,
medical image analysis, material inspection and board game programs,
where they have produced results comparable to and in some cases
surpassing human expert performance.[4][5][6]
Artificial neural networks (ANNs) were inspired by information
processing and distributed communication nodes in biological systems.
ANNs differ from biological brains in various ways. Specifically, neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog.[7][8][9]
The adjective "deep" in deep learning comes from the use of multiple
layers in the network. Early work showed that a
linear perceptron cannot be a universal classifier, and then that a
network with a nonpolynomial activation function with one hidden layer
of unbounded width can on the other hand so be. Deep learning is a
modern variation which is concerned with an unbounded number of
layers of bounded size, which permits practical application and
optimized implementation, while retaining theoretical universality under
mild conditions. In deep learning the layers are also permitted to be
heterogeneous and to deviate widely from biologically
informed connectionist models, for the sake of efficiency, trainability
and understandability, whence the "structured" part.
Definition[edit]
Representing images on multiple layers of abstraction in deep learning[10]
Deep learning is a class of machine learning algorithms that[11](pp. 199–200) uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human such as digits, letters or faces.
Overview[edit]
Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNNs), although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.[12]
In deep learning, each level learns to transform its input data into a
slightly more abstract and composite representation. In an image
recognition application, the raw input may be a matrix of pixels; the first
representational layer may abstract the pixels and encode edges; the
second layer may compose and encode arrangements of edges; the
third layer may encode a nose and eyes; and the fourth layer may
recognize that the image contains a face. Importantly, a deep learning
process can learn which features to optimally place in which level on
its own. (Of course, this does not completely eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.)[1][13]
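As an illustrative sketch only (the layer sizes and the 10-class output below are arbitrary assumptions, written here in PyTorch), a stack of convolutional layers of this kind lets earlier layers respond to edge-like patterns and later layers to more composite ones:

import torch
import torch.nn as nn

# A small, hypothetical stack of layers; each stage works on a slightly
# more abstract representation of the image than the one before it.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # layer 1: low-level features such as edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # layer 2: arrangements of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 3: object parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # layer 4: scores for 10 classes (e.g. digits)
)

x = torch.randn(1, 1, 28, 28)    # a dummy 28x28 grayscale image
print(model(x).shape)            # torch.Size([1, 10])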
The word "deep" in "deep learning" refers to the number of layers
through which the data is transformed. More precisely, deep learning
systems have a substantial credit assignment path (CAP) depth. The
CAP is the chain of transformations from input to output. CAPs
describe potentially causal connections between input and output. For
a feedforward neural network, the depth of the CAPs is that of the
network and is the number of hidden layers plus one (as the output
layer is also parameterized). For recurrent neural networks, in which a
signal may propagate through a layer more than once, the CAP depth
is potentially unlimited.[2] No universally agreed upon threshold of depth
divides shallow learning from deep learning, but most researchers
agree that deep learning involves CAP depth greater than 2. A CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function.[14] Beyond that, more layers do not add to the function-approximation ability of the network. Deep models (CAP > 2) are able to extract better features than shallow models, and hence the extra layers help in learning features effectively.
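As a toy illustration of the CAP-depth convention for a plain feedforward network (the layer sizes below are made up), the depth is simply the number of hidden layers plus one:

# Hypothetical helper: CAP depth of a feedforward network described
# only by its layer sizes (input, hidden..., output).
def cap_depth(layer_sizes):
    hidden_layers = len(layer_sizes) - 2
    return hidden_layers + 1              # the output layer is also parameterized

print(cap_depth([784, 10]))               # 1: no hidden layer
print(cap_depth([784, 128, 10]))          # 2: one hidden layer, still "shallow"
print(cap_depth([784, 128, 64, 10]))      # 3: deep by the CAP > 2 convention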
Deep learning architectures can be constructed with a greedy layer-by-
layer method.[15] Deep learning helps to disentangle these abstractions
and pick out which features improve performance.[1]
For supervised learning tasks, deep learning methods eliminate feature
engineering, by translating the data into compact intermediate
representations akin to principal components, and derive layered
structures that remove redundancy in representation.
Deep learning algorithms can be applied to unsupervised learning
tasks. This is an important benefit because unlabeled data are more
abundant than the labeled data. Examples of deep structures that can
be trained in an unsupervised manner are neural history
compressors[16] and deep belief networks.[1][17]
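The following is a minimal sketch of the greedy layer-by-layer idea, written with simple autoencoders in PyTorch rather than the restricted Boltzmann machines used in deep belief networks; the data, layer sizes and epoch count are placeholders:

import torch
import torch.nn as nn

def pretrain_layer(data, in_dim, hidden_dim, epochs=10):
    # Train a one-hidden-layer autoencoder on unlabeled data.
    encoder = nn.Linear(in_dim, hidden_dim)
    decoder = nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        code = torch.sigmoid(encoder(data))
        loss = nn.functional.mse_loss(decoder(code), data)
        loss.backward()
        opt.step()
    with torch.no_grad():                     # freeze and feed the codes to the next layer
        return encoder, torch.sigmoid(encoder(data))

data = torch.rand(256, 784)                   # placeholder unlabeled data
encoders, x = [], data
for in_dim, hidden_dim in zip([784, 256], [256, 64]):
    enc, x = pretrain_layer(x, in_dim, hidden_dim)
    encoders.append(enc)                      # stacked layers; fine-tune later with backpropagation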
Interpretations[edit]
Deep neural networks are generally interpreted in terms of
the universal approximation theorem[18][19][20][21][22][23] or probabilistic
inference.[11][12][1][2][17][24][25]
The classic universal approximation theorem concerns the capacity
of feedforward neural networks with a single hidden layer of finite size
to approximate continuous functions.[18][19][20][21][22] In 1989, the first proof
was published by George Cybenko for sigmoid activation
functions[19] and was generalised to feed-forward multi-layer
architectures in 1991 by Kurt Hornik.[20] Recent work also showed that universal approximation holds for unbounded activation functions such as the rectified linear unit.[26]
The universal approximation theorem for deep neural
networks concerns the capacity of networks with bounded width but the
depth is allowed to grow. Lu et al.[23] proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue-integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.
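Stated informally in its classic single-hidden-layer form, the theorem says that for a suitable (nonpolynomial) activation function \sigma, any continuous function f on a compact set K \subset \mathbb{R}^n can be approximated arbitrarily well by a finite sum of sigmoidal terms: for every \varepsilon > 0 there exist N and parameters v_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^n such that

\sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \Big| < \varepsilon .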
The probabilistic interpretation[24] derives from the field of machine
learning. It features inference,[11][12][1][2][17][24] as well as
the optimization concepts of training and testing, related to fitting
and generalization, respectively. More specifically, the probabilistic
interpretation considers the activation nonlinearity as a cumulative
distribution function.[24] The probabilistic interpretation led to the
introduction of dropout as regularizer in neural networks.[27] The
probabilistic interpretation was introduced by researchers
including Hopfield, Widrow and Narendra and popularized in surveys
such as the one by Bishop.[28]
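As a small numerical illustration of the two points above (assuming the standard logistic distribution and the common "inverted" formulation of dropout, neither of which is prescribed by the sources cited here):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The logistic sigmoid activation coincides with the CDF of the standard
# logistic distribution, i.e. the activation can be read as a cumulative
# distribution function.
x = np.linspace(-5.0, 5.0, 11)
logistic_cdf = 1.0 / (1.0 + np.exp(-x))
assert np.allclose(sigmoid(x), logistic_cdf)

def dropout(activations, p=0.5, training=True):
    # Randomly zero activations during training and rescale the rest,
    # acting as a regularizer; do nothing at test time.
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) >= p
    return activations * mask / (1.0 - p)

h = sigmoid(np.random.randn(4, 8))
print(dropout(h).shape)        # (4, 8)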
History[edit]
The first general, working learning algorithm for supervised, deep,
feedforward, multilayer perceptrons was published by Alexey
Ivakhnenko and Lapa in 1967.[29] A 1971 paper already described a deep network with eight layers, trained by the group method of data handling algorithm.[30] Other working deep learning architectures,
specifically those built for computer vision, began with
the Neocognitron introduced by Kunihiko Fukushima in 1980.[31]
The term Deep Learning was introduced to the machine learning
community by Rina Dechter in 1986,[32][16] and to artificial neural
networks by Igor Aizenberg and colleagues in 2000, in the context
of Boolean threshold neurons.[33][34]
In 1989, Yann LeCun et al. applied the standard backpropagation
algorithm, which had been around as the reverse mode of automatic
differentiation since 1970,[35][36][37][38] to a deep neural network with the
purpose of recognizing handwritten ZIP codes on mail. While the
algorithm worked, training required 3 days.[39]
By 1991 such systems were used for recognizing isolated 2-D hand-
written digits, while recognizing 3-D objects was done by matching 2-D
images with a handcrafted 3-D object model. Weng et al. suggested
that a human brain does not use a monolithic 3-D object model and in
1992 they published Cresceptron,[40][41][42] a method for performing 3-D
object recognition in cluttered scenes. Because it directly used natural
images, Cresceptron marked the beginning of general-purpose visual learning for natural 3D worlds. Cresceptron is a cascade of layers
similar to Neocognitron. But while Neocognitron required a human
programmer to hand-merge features, Cresceptron learned an open
number of features in each layer without supervision, where each
feature is represented by a convolution kernel. Cresceptron segmented
each learned object from a cluttered scene through back-analysis
through the network. Max pooling, now often adopted by deep neural
networks (e.g. in ImageNet tests), was first used in Cresceptron to reduce the position resolution by a factor of (2×2) to 1 through the cascade for better generalization.
In 1994, André de Carvalho, together with Mike Fairhurst and David
Bisset, published experimental results of a multi-layer boolean neural
network, also known as a weightless neural network, composed of a three-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features of growing complexity relative to the previous layer.[43]
In 1995, Brendan Frey demonstrated that it was possible to train (over
two days) a network containing six fully connected layers and several
hundred hidden units using the wake-sleep algorithm, co-developed
with Peter Dayan and Hinton.[44] Many factors contributed to the slow speed, including the vanishing gradient problem analyzed in 1991 by Sepp Hochreiter.[45][46]
Simpler models that use task-specific handcrafted features such
as Gabor filters and support vector machines (SVMs) were a popular
choice in the 1990s and 2000s, because of the computational cost of artificial neural networks (ANNs) and a lack of understanding of how the brain wires its biological networks.
Both shallow and deep learning (e.g., recurrent nets) of ANNs have
been explored for many years.[47][48][49] These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively.[50] Key difficulties have been analyzed, including diminishing gradients[45] and weak temporal correlation structure in neural predictive models.[51][52] Additional difficulties were the lack of training data and limited computing power.
Most speech recognition researchers moved away from neural nets to
pursue generative modeling. An exception was at SRI International in
the late 1990s. Funded by the US government's NSA and DARPA, SRI
studied deep neural networks in speech and speaker recognition. The
speaker recognition team led by Larry Heck reported significant
success with deep neural networks in speech processing in the
1998 National Institute of Standards and Technology Speaker
Recognition evaluation.[53] The SRI deep neural network was then
deployed in the Nuance Verifier, representing the first major industrial
application of deep learning.[54]
The principle of elevating "raw" features over hand-crafted optimization
was first explored successfully in the architecture of a deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s,[54] showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of
speech, waveforms, later produced excellent larger-scale results.[55]
Many aspects of speech recognition were taken over by a deep
learning method called long short-term memory (LSTM), a recurrent
neural network published by Hochreiter and Schmidhuber in 1997.[56] LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks[2] that require memories of events that happened thousands of discrete time steps before, which is important for speech. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks.[57] Later it was combined with connectionist temporal classification (CTC)[58] in stacks of LSTM RNNs.[59] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search.[60]
In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero
and Teh[61][62][63] showed how a many-layered feedforward neural
network could be effectively pre-trained one layer at a time, treating
each layer in turn as an unsupervised restricted Boltzmann machine,
then fine-tuning it using supervised backpropagation.[64] The papers
referred to learning for deep belief nets.
Deep learning is part of state-of-the-art systems in various disciplines,
particularly computer vision and automatic speech recognition (ASR).
Results on commonly used evaluation sets such as TIMIT (ASR)
and MNIST (image classification), as well as a range of large-
vocabulary speech recognition tasks have steadily improved.[65][66][67] Convolutional neural networks (CNNs) were superseded for ASR by CTC[58] for LSTM,[56][60][68][69][70][71][72] but are more successful in computer vision.
The impact of deep learning in industry began in the early 2000s, when
CNNs already processed an estimated 10% to 20% of all the checks
written in the US, according to Yann LeCun.[73] Industrial applications of
deep learning to large-scale speech recognition started around 2010.
The 2009 NIPS Workshop on Deep Learning for Speech
Recognition[74] was motivated by the limitations of deep generative
models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets (DNNs) might become practical. It was believed that pre-training DNNs using
generative models of deep belief nets (DBN) would overcome the main
difficulties of neural nets.[75] However, it was discovered that replacing
pre-training with large amounts of training data for straightforward
backpropagation when using DNNs with large, context-dependent
output layers produced error rates dramatically lower than then-state-
of-the-art Gaussian mixture model (GMM)/Hidden Markov Model
(HMM) and also than more-advanced generative model-based
systems.[65][76] The nature of the recognition errors produced by the two
types of systems was characteristically different,[77][74] offering technical
insights into how to integrate deep learning into the existing highly
efficient, run-time speech decoding system deployed by all major
speech recognition systems.[11][78][79] Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for
speech recognition,[77][74] eventually leading to pervasive and dominant
use in that industry. That analysis was done with comparable
performance (less than 1.5% in error rate) between discriminative
DNNs and generative models.[65][77][75][80]
In 2010, researchers extended deep learning from TIMIT to large
vocabulary speech recognition, by adopting large output layers of the
DNN based on context-dependent HMM states constructed by decision
trees.[81][82][83][78]
Advances in hardware have enabled renewed interest in deep learning.
In 2009, Nvidia was involved in what was called the “big bang” of deep
learning, “as deep-learning neural networks were trained with
Nvidia graphics processing units (GPUs).”[84] That year, Google
Brain used Nvidia GPUs to create capable DNNs. While there, Andrew
Ng determined that GPUs could increase the speed of deep-learning
systems by about 100 times.[85] In particular, GPUs are well-suited for
the matrix/vector computations involved in machine learning.[86][87][88] GPUs speed up training algorithms by orders of magnitude,
reducing running times from weeks to days.[89][90] Further, specialized
hardware and algorithm optimizations can be used for efficient
processing of deep learning models.[91]
Deep learning revolution[edit]
Neural networks[edit]
Artificial neural networks[edit]
Main article: Artificial neural network
Artificial neural networks (ANNs) or connectionist systems are
computing systems inspired by the biological neural networks that
constitute animal brains. Such systems learn (progressively improve
their ability) to do tasks by considering examples, generally without
task-specific programming. For example, in image recognition, they
might learn to identify images that contain cats by analyzing example
images that have been manually labeled as "cat" or "no cat" and using
the analytic results to identify cats in other images. They have found
most use in applications difficult to express with a traditional computer
algorithm using rule-based programming.
An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each
connection (synapse) between neurons can transmit a signal to
another neuron. The receiving (postsynaptic) neuron can process the
signal(s) and then signal downstream neurons connected to it.
Neurons may have state, generally represented by real numbers,
typically between 0 and 1. Neurons and synapses may also have a
weight that varies as learning proceeds, which can increase or
decrease the strength of the signal that it sends downstream.
Typically, neurons are organized in layers. Different layers may
perform different kinds of transformations on their inputs. Signals travel
from the first (input), to the last (output) layer, possibly after traversing
the layers multiple times.
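A minimal sketch of this layered signal flow (the layer sizes, weights and input below are placeholder values):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
layer_sizes = [4, 5, 3, 2]          # input layer, two hidden layers, output layer
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    activation = x
    for W, b in zip(weights, biases):
        # Each connection carries a weighted signal to the next layer;
        # the sigmoid keeps neuron states roughly between 0 and 1.
        activation = sigmoid(activation @ W + b)
    return activation

print(forward(rng.random(4)))       # activations of the output layer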
The original goal of the neural network approach was to solve
problems in the same way that a human brain would. Over time,
attention focused on matching specific mental abilities, leading to
deviations from biology such as backpropagation, or passing
information in the reverse direction and adjusting the network to reflect
that information.
Neural networks have been used on a variety of tasks, including
computer vision, speech recognition, machine translation, social
network filtering, playing board and video games and medical
diagnosis.
As of 2017, neural networks typically have a few thousand to a few
million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing "Go"[106]).
Deep neural networks[edit]
A deep neural network (DNN) is an artificial neural network (ANN) with
multiple layers between the input and output layers. [12][2] The DNN finds
the correct mathematical manipulation to turn the input into the output,
whether it be a linear relationship or a non-linear relationship. The
network moves through the layers calculating the probability of each
output. For example, a DNN that is trained to recognize dog breeds will
go over the given image and calculate the probability that the dog in
the image is a certain breed. The user can review the results and
select which probabilities the network should display (above a certain
threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks.
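A rough sketch of this thresholding step (the breed names, scores and threshold are invented for the example, and the softmax used here is just one common way of turning raw scores into probabilities):

import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

breeds = ["beagle", "husky", "poodle", "terrier"]
logits = np.array([2.3, 0.1, 1.7, -0.5])      # hypothetical raw network outputs
probs = softmax(logits)

threshold = 0.25
for breed, p in zip(breeds, probs):
    if p > threshold:                          # only display sufficiently likely labels
        print(f"{breed}: {p:.2f}")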
DNNs can model complex non-linear relationships. DNN architectures
generate compositional models where the object is expressed as a
layered composition of primitives.[107] The extra layers enable
composition of features from lower layers, potentially modeling
complex data with fewer units than a similarly performing shallow
network.[12]
Deep architectures include many variants of a few basic approaches.
Each architecture has found success in specific domains. It is not
always possible to compare the performance of multiple architectures,
unless they have been evaluated on the same data sets.
DNNs are typically feedforward networks in which data flows from the
input layer to the output layer without looping back. At first, the DNN
creates a map of virtual neurons and assigns random numerical
values, or "weights", to connections between them. The weights and
inputs are multiplied and return an output between 0 and 1. If the
network did not accurately recognize a particular pattern, an algorithm
would adjust the weights.[108] That way the algorithm can make certain
parameters more influential, until it determines the correct
mathematical manipulation to fully process the data.
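The sketch below illustrates the weight-adjustment idea with a single sigmoid unit trained by gradient descent on made-up data; it is an illustration of the principle, not the specific procedure referenced above:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((100, 3))                       # placeholder inputs
y = (X.sum(axis=1) > 1.5).astype(float)        # placeholder binary targets
w, b, lr = rng.standard_normal(3), 0.0, 0.1    # random initial weights

for _ in range(1000):
    out = sigmoid(X @ w + b)                   # outputs between 0 and 1
    grad = (out - y) * out * (1 - out)         # how much each output should change
    w -= lr * X.T @ grad / len(y)              # adjust the weights accordingly
    b -= lr * grad.mean()

print(np.mean((sigmoid(X @ w + b) > 0.5) == y))  # fraction of patterns now recognized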
Recurrent neural networks (RNNs), in which data can flow in any
direction, are used for applications such as language modeling.[109][110]
[111][112][113]
Long short-term memory is particularly effective for this use.
[56][114]
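A hedged sketch of an LSTM used this way for language modeling (the vocabulary size, dimensions and random token batch are placeholders):

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

class TinyLSTMLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                       # next-token logits at every position

model = TinyLSTMLanguageModel()
tokens = torch.randint(0, vocab_size, (2, 16))   # a batch of two 16-token sequences
print(model(tokens).shape)                       # torch.Size([2, 16, 1000])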
Applications[edit]
Automatic speech recognition[edit]
Main article: Speech recognition
Large-scale automatic speech recognition is the first and most
convincing successful case of deep learning. LSTM RNNs can learn
"Very Deep Learning" tasks[2] that involve multi-second intervals
containing speech events separated by thousands of discrete time
steps, where one time step corresponds to about 10 ms. LSTM with
forget gates[114] is competitive with traditional speech recognizers on
certain tasks.[57]
The initial success in speech recognition was based on small-scale
recognition tasks based on TIMIT. The data set contains 630 speakers
from eight major dialects of American English, where each speaker
reads 10 sentences.[124] Its small size lets many configurations be tried.
More importantly, the TIMIT task concerns phone-sequence
recognition, which, unlike word-sequence recognition, allows weak
phone bigram language models. This lets the strength of the acoustic
modeling aspects of speech recognition be more easily analyzed. The
error rates, including these early results and measured as percent phone error rates (PER), have been summarized since 1991.
The debut of DNNs for speaker recognition in the late 1990s, speech recognition around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major areas.[11][80][78]
Image recognition[edit]
Main article: Computer vision
A common evaluation set for image classification is the MNIST
database data set. MNIST is composed of handwritten digits and
includes 60,000 training examples and 10,000 test examples. As with
TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[132]
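A minimal sketch of loading MNIST to try out a configuration, assuming the torchvision package (the paths and batch size are arbitrary):

import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
print(len(train_set), len(test_set))   # 60000 training examples, 10000 test examples

loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(loader))
print(images.shape)                    # torch.Size([64, 1, 28, 28])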
Deep learning-based image recognition has become "superhuman",
producing more accurate results than human contestants. This first
occurred in 2011.[133]
Deep learning-trained vehicles now interpret 360° camera views.[134]
Another example is Facial Dysmorphology Novel Analysis (FDNA)
used to analyze cases of human malformation connected to a large
database of genetic syndromes.
Visual art processing[edit]
Closely related to the progress that has been made in image
recognition is the increasing application of deep learning techniques to
various visual art tasks. DNNs have proven themselves capable, for
example, of (a) identifying the style period of a given painting, (b) Neural Style Transfer, capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video, and (c) generating striking imagery based on random visual input fields.[135][136]
Natural language processing[edit]
Main article: Natural language processing
Neural networks have been used for implementing language models
since the early 2000s.[109][137] LSTM helped to improve machine
translation and language modeling.[110][111][112]
Other key techniques in this field are negative sampling[138] and word embedding. Word embedding, such as word2vec, can be thought of as
a representational layer in a deep learning architecture that transforms
an atomic word into a positional representation of the word relative to
other words in the dataset; the position is represented as a point in
a vector space. Using word embedding as an RNN input layer allows
the network to parse sentences and phrases using an effective
compositional vector grammar. A compositional vector grammar can be
thought of as probabilistic context free grammar (PCFG) implemented
by an RNN.[139] Recursive auto-encoders built atop word embeddings
can assess sentence similarity and detect paraphrasing.[139] Deep neural architectures provide the best results for constituency parsing,[140] sentiment analysis,[141] information retrieval,[142][143] spoken language understanding,[144] machine translation,[110][145] contextual entity linking,[145] writing style recognition,[146] text classification and others.[147]
Recent developments generalize word embedding to sentence
embedding.
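A small sketch of the embedding idea (the vocabulary, dimensions and untrained vectors below are placeholders; a trained word2vec-style model would supply meaningful positions):

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab = {"king": 0, "queen": 1, "apple": 2}            # placeholder vocabulary
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

ids = torch.tensor([vocab["king"], vocab["queen"], vocab["apple"]])
vectors = embedding(ids)                                # each word becomes a point in a vector space

# After training, related words end up close together; here the vectors
# are random, so the similarity is meaningless and only shows the idea.
print(float(F.cosine_similarity(vectors[0], vectors[1], dim=0)))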
Google Translate (GT) uses a large end-to-end long short-term
memory network.[148][149][150][151][152][153] Google Neural Machine Translation
(GNMT) uses an example-based machine translation method in which
the system "learns from millions of examples."[149] It translates "whole
sentences at a time, rather than pieces." Google Translate supports
over one hundred languages.[149] The network encodes the "semantics
of the sentence rather than simply memorizing phrase-to-phrase
translations".[149][154] GT uses English as an intermediate between most
language pairs.[154]
Drug discovery and toxicology[edit]
For more information, see Drug discovery and Toxicology.
A large percentage of candidate drugs fail to win regulatory approval.
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.[155][156] Research has explored use of deep learning to predict the biomolecular targets,[92][93] off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.[94][95][96]
Commercial activity[edit]
Facebook's AI lab performs tasks such as automatically tagging
uploaded pictures with the names of the people in them.[194]
Google's DeepMind Technologies developed a system capable of
learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player.[195][196][197]
Google Translate uses a neural network to translate between more
than 100 languages.
In 2015, Blippar demonstrated a mobile augmented reality application
that uses deep learning to recognize objects in real time.[198]
In 2017, Covariant.ai was launched, which focuses on integrating deep
learning into factories.[199]
As of 2008,[200] researchers at The University of Texas at Austin (UT)
developed a machine learning framework called Training an Agent
Manually via Evaluative Reinforcement, or TAMER, which proposed
new methods for robots or computer programs to learn how to perform
tasks by interacting with a human instructor.[177] Building on TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[177] Using Deep TAMER, a robot learned a task with a
human trainer, watching video streams or observing a human perform
a task in-person. The robot later practiced the task with the help of
some coaching from the trainer, who provided feedback such as “good
job” and “bad job.”[201]