Introduction to Deep Learning
The perceptron
▲Figure. A perceptron with two inputs, x1 = 0.9 and x2 = 0.7, weighted by w1 = 0.2 and w2 = 0.9.
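A perceptron multiplies each input by its weight, sums the products, and passes the sum through an activation function to decide whether to fire. Here is a minimal Python sketch using the values in the figure above (the step threshold of 0.5 and the absence of a bias term are assumptions; the figure does not specify them):

# Perceptron forward pass for the example values above.
x = [0.9, 0.7]   # inputs x1, x2
w = [0.2, 0.9]   # weights w1, w2

weighted_sum = sum(xi * wi for xi, wi in zip(x, w))  # 0.9*0.2 + 0.7*0.9 = 0.81
output = 1 if weighted_sum >= 0.5 else 0             # assumed step activation
print(weighted_sum, output)                          # roughly 0.81, then 1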
Training a perceptron
Multilayer perceptrons
A single neuron is capable of learning simple patterns, but
when many neurons are connected together, their abilities
increase dramatically. Each of the 100 billion neurons in the
human brain has, on average, 7,000 connections to other
neurons. It has been estimated that the brain of a three-year-
old child has about one quadrillion connections between
neurons. And, theoretically, there are more possible neural
connections in the brain than there are atoms in the
universe.
▲Figure 6. A multilayer perceptron has multiple layers of neurons
between the input and output. Each neuron’s output is connected to
every neuron in the next layer.
A multilayer perceptron (MLP) is an artificial neural network
with multiple layers of neurons between input and output.
MLPs are also called feedforward neural networks.
Feedforward means that data flow in one direction from the
input to the output layer. Typically, every neuron’s output is
connected to every neuron in the next layer. Layers that
come between the input and output layers are referred to as
hidden layers (Figure 6).
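A rough sketch in Python of this layer-by-layer flow (the layer sizes, random initial weights, and sigmoid activation here are illustrative assumptions, not values taken from the article):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# An illustrative feedforward MLP: 2 inputs -> 3 hidden neurons -> 1 output.
W_hidden = rng.normal(size=(3, 2))   # every input feeds every hidden neuron
W_output = rng.normal(size=(1, 3))   # every hidden neuron feeds the output

x = np.array([1.0, 1.0])             # input vector
h = sigmoid(W_hidden @ x)            # hidden-layer activations
y = sigmoid(W_output @ h)            # output-layer activation
print(y)

Because every neuron's output feeds every neuron in the next layer, each layer reduces to a matrix-vector product followed by an activation function.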
Imagine you are at the top of a hill and you need to get to the
bottom of the hill in the quickest way possible. One approach
could be to look in every direction to see which way has the
steepest downhill grade, and then step in that direction. If you repeat
this process, you will gradually go farther and farther
downhill. That is how gradient descent works: If you can
define a function over all weights that reflects the difference
between the desired output and calculated output, then the
function will be lowest (i.e., the bottom of the hill) when the
MLP’s output matches the desired output. Moving toward
this lowest value then becomes a matter of calculating the
gradient (or derivative) of the function and taking a small
step in the opposite direction, since the gradient points uphill.
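A bare-bones sketch of gradient descent in Python on a one-variable function (the function f(w) = (w - 3)**2 and the step size are illustrative choices, not taken from the article):

# Gradient descent on f(w) = (w - 3)**2, whose minimum sits at w = 3.
# The derivative (gradient) is f'(w) = 2 * (w - 3).
w = 0.0                  # starting point: the top of the hill
learning_rate = 0.1      # size of each step
for step in range(50):
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient   # step opposite the gradient, i.e., downhill
print(w)                 # approaches 3.0, the bottom of the hill

In an MLP the idea is the same, except that w is the full set of weights and f is the error function over those weights.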
▲Figure 12. An example MLP with three layers accepts an input of [1,
1] and computes an output of 0.77.
Consider a simple MLP with three layers (Figure 12): two
neurons in the input layer (Xi1, Xi2) connected to three
neurons (Xh1, Xh2, Xh3) in the hidden layer via weights W1–W6,
which are connected to a single output neuron (Xo) via
weights W7–W9. Assume that we are using the sigmoid
activation function and that the initial weights are randomly
assigned; with the input values [1, 1], the network computes
an output of 0.77.
Back propagation then works out, for each weight, how much
the error would change if that weight changed, and nudges the
weight a small step in the direction that reduces the error.
Upon repeating this update for all weights, the new output
in this example becomes 0.68, which is a little closer to the
ideal value (0) than what we started with (0.77). By
performing just one such iteration of forward and back
propagation, the network is already learning!
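The weights behind the 0.77 and 0.68 figures are not reproduced in the text, so this Python sketch substitutes random placeholder weights for the same 2-3-1 architecture; the squared-error loss and the learning rate of 0.5 are also assumptions. The exact numbers will differ from the article's, but a single forward and backward pass should likewise pull the output toward the target of 0:

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-3-1 network as in Figure 12; the weights are random placeholders.
W1 = rng.normal(size=(3, 2))   # W1-W6: input -> hidden
W2 = rng.normal(size=(1, 3))   # W7-W9: hidden -> output

x = np.array([1.0, 1.0])       # input [1, 1]
target = 0.0                   # desired output

# Forward pass.
h = sigmoid(W1 @ x)            # hidden activations Xh1, Xh2, Xh3
y = sigmoid(W2 @ h)            # output Xo
print("before:", y)

# Backward pass: gradients of the squared error 0.5 * (y - target)**2.
delta_out = (y - target) * y * (1 - y)         # output-layer error term
delta_hid = (W2.T @ delta_out) * h * (1 - h)   # hidden-layer error terms

learning_rate = 0.5
W2 -= learning_rate * np.outer(delta_out, h)   # update W7-W9
W1 -= learning_rate * np.outer(delta_hid, x)   # update W1-W6

# Second forward pass: the output moves toward the target.
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)
print("after:", y)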
Looking forward
Even with all the amazing progress in AI, such as self-driving
cars, the technology is still very narrow in its
accomplishments and far from autonomous. Today, 99% of
machine learning requires human work and large amounts of
data that need to be normalized and labeled (e.g., this is a
dog; this is a cat). People also need to supply and fine-tune
the appropriate algorithms. All of this relies on manual labor.
Last but not least, we will discuss social and ethical aspects,
as the recent explosion of progress in AI has created fear
that it will evolve from being a benefit to human society to
taking control. Even Stephen Hawking, who was one of
Britain’s most distinguished scientists, warned of AI’s
threats. “The development of full artificial intelligence could
spell the end of the human race,” said Hawking (17).