Module 4, Chapter 2
ARTIFICIAL NEURAL NETWORKS
INTRODUCTION
Artificial neural networks (ANNs) provide a general, practical method for learning real-valued,
discrete-valued, and vector-valued target functions.
Biological Motivation
• The study of artificial neural networks (ANNs) has been inspired by the observation that biological learning systems are built of very complex webs of interconnected neurons.
• The human information-processing system consists of the brain, whose basic building block is the neuron: a cell that communicates information to and from various parts of the body.
ANN learning is well-suited to problems in which the training data corresponds to noisy,
complex sensor data, such as inputs from cameras and microphones.
• One type of ANN system is based on a unit called a perceptron. A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, and outputs 1 if the result is greater than some threshold and −1 otherwise; it is a single-layer neural network.
Figure: A perceptron
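As an illustration, the following short Python sketch (hypothetical names, not from the text) computes a perceptron's output, treating w0 as the threshold weight paired with a fixed input x0 = 1:

```python
def perceptron_output(weights, inputs):
    # weights[0] is the threshold weight w0, paired with a constant input x0 = 1;
    # output is 1 if w0 + w1*x1 + ... + wn*xn > 0, and -1 otherwise.
    total = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if total > 0 else -1
```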
Learning a perceptron involves choosing values for the weights w0, …, wn. Therefore, the space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors:

H = { w⃗ : w⃗ ∈ ℝ^(n+1) }
Representational Power of Perceptrons
Perceptrons can represent all of the primitive Boolean functions AND, OR, NAND (¬AND), and NOR (¬OR), as the worked example below illustrates.
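For example, weights w0 = −0.8 and w1 = w2 = 0.5 implement AND over inputs in {0, 1}, since the weighted sum exceeds 0 only when x1 = x2 = 1 (changing w0 to −0.3 yields OR). Using the perceptron_output sketch above, this can be checked directly:

```python
# AND with w0 = -0.8, w1 = w2 = 0.5: the weighted sum exceeds 0 only for (1, 1).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron_output([-0.8, 0.5, 0.5], [x1, x2]))
```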
Some Boolean functions cannot be represented by a single perceptron, such as the XOR function, whose value is 1 if and only if x1 ≠ x2.
The learning problem is to determine a weight vector that causes the perceptron to produce the correct +1 or −1 output for each of the given training examples. One way to learn such a weight vector is the perceptron training rule, which revises the weight wi associated with input xi after each example:

wi ← wi + Δwi, where Δwi = η (t − o) xi

Here t is the target output, o is the perceptron's output, and η is the learning rate.
• The role of the learning rate is to moderate the degree to which weights are changed at
each step. It is usually set to some small value (e.g., 0.1) and is sometimes made to decay
as the number of weight-tuning iterations increases
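A minimal sketch of one pass of this rule in Python (hypothetical function names; examples are assumed to be (inputs, target) pairs with targets in {−1, +1}, and perceptron_output is the sketch above):

```python
def perceptron_rule_epoch(weights, examples, eta=0.1):
    # One pass of the perceptron training rule: wi <- wi + eta * (t - o) * xi.
    # Weights change only on misclassified examples, since (t - o) is 0 otherwise.
    for inputs, target in examples:
        o = perceptron_output(weights, inputs)
        weights[0] += eta * (target - o)  # threshold weight, with x0 = 1
        for i, x in enumerate(inputs, start=1):
            weights[i] += eta * (target - o) * x
    return weights
```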
Drawback:
The perceptron rule finds a successful weight vector when the training examples are linearly separable, but it can fail to converge if the examples are not linearly separable.
Gradient Descent and the Delta Rule
• If the training examples are not linearly separable, the delta rule converges toward a
best-fit approximation to the target concept.
• The key idea behind the delta rule is to use gradient descent to search the hypothesis
space of possible weight vectors to find the weights that best fit the training examples.
To understand the delta training rule, consider the task of training an unthresholded perceptron, that is, a linear unit for which the output o is given by

o(x⃗) = w⃗ · x⃗
To derive a weight learning rule for linear units, first specify a measure for the training error of a hypothesis (weight vector), relative to the training examples:

E(w⃗) = ½ Σd∈D (td − od)²

where
• D is the set of training examples,
• td is the target output for training example d,
• od is the output of the linear unit for training example d, and
• E(w⃗) is simply half the squared difference between the target output td and the linear unit output od, summed over all training examples.
Gradient descent search determines a weight vector that minimizes E by starting with an arbitrary initial weight vector, then repeatedly modifying it in small steps. At each step, the weight vector is altered in the direction that produces the steepest descent along the error surface; for a linear unit this surface is parabolic with a single global minimum. The process continues until that global minimum error is reached.
How can the direction of steepest descent along the error surface be calculated? It can be found by computing the derivative of E with respect to each component of the vector w⃗. This vector derivative is called the gradient of E with respect to w⃗, written ∇E(w⃗):

∇E(w⃗) = [ ∂E/∂w0, ∂E/∂w1, …, ∂E/∂wn ]
Since the gradient specifies the direction of steepest increase of E, the training rule for gradient descent is

w⃗ ← w⃗ + Δw⃗, where Δw⃗ = −η ∇E(w⃗)
• Here η is a positive constant called the learning rate, which determines the step
size in the gradient descent search.
• The negative sign is present because we want to move the weight vector in the
direction that decreases E.
To summarize, the gradient descent algorithm for training linear units is as follows (a code sketch follows the list):
• Pick an initial random weight vector.
• Apply the linear unit to all training examples, then compute Δwi for each weight according to Equation (7):

Δwi = η Σd∈D (td − od) xid      (7)

where xid denotes the ith input for training example d.
• Update each weight wi by adding Δwi, then repeat this process.
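A minimal Python sketch of this batch procedure (hypothetical names; training examples are assumed to be (inputs, target) pairs with n real-valued inputs and x0 = 1):

```python
import random

def linear_output(weights, inputs):
    # Linear unit: o = w0 + w1*x1 + ... + wn*xn, with x0 = 1.
    return weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))

def gradient_descent(examples, n, eta=0.05, epochs=100):
    weights = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]
    for _ in range(epochs):
        delta = [0.0] * (n + 1)
        # Accumulate eta * (t - o) * xi over ALL examples first (Equation 7) ...
        for inputs, target in examples:
            error = target - linear_output(weights, inputs)
            delta[0] += eta * error  # x0 = 1
            for i, x in enumerate(inputs, start=1):
                delta[i] += eta * error * x
        # ... then update each weight once per pass through the data.
        for i in range(n + 1):
            weights[i] += delta[i]
    return weights
```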
Gradient descent is an important general paradigm for learning. It is a strategy for searching
through a large or infinite hypothesis space that can be applied whenever
1. The hypothesis space contains continuously parameterized hypotheses
2. The error can be differentiated with respect to these hypothesis parameters
• The gradient descent training rule presented in Equation (7) computes weight updates
after summing over all the training examples in D
• The idea behind stochastic gradient descent is to approximate this gradient descent search by updating weights incrementally, following the calculation of the error for each individual example:

Δwi = η (t − o) xi

• where t, o, and xi are the target value, unit output, and ith input for the training example in question.
One way to view this stochastic gradient descent is to consider a distinct error function Ed(w⃗) for each individual training example d, as follows:

Ed(w⃗) = ½ (td − od)²

where td and od are the target value and the unit output value for training example d.
• Stochastic gradient descent iterates over the training examples d in D, at each iteration altering the weights according to the gradient with respect to Ed(w⃗).
• The sequence of these weight updates, when iterated over all training examples, provides a reasonable approximation to descending the gradient with respect to the original error function E(w⃗).
• By making the value of η sufficiently small, stochastic gradient descent can be made to
approximate true gradient descent arbitrarily closely
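For contrast, a hypothetical sketch of the stochastic version: using the same linear_output helper as above, the weights now change immediately after each example, per Δwi = η (t − o) xi:

```python
def stochastic_gradient_descent(examples, weights, eta=0.05, epochs=100):
    # Stochastic gradient descent: update weights after EACH training example,
    # rather than summing the gradient over the whole training set first.
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - linear_output(weights, inputs)
            weights[0] += eta * error  # x0 = 1
            for i, x in enumerate(inputs, start=1):
                weights[i] += eta * error * x
    return weights
```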
The key differences between standard gradient descent and stochastic gradient descent are
• In standard gradient descent, the error is summed over all examples before updating
weights, whereas in stochastic gradient descent weights are updated upon examining
each training example.
• Summing over multiple examples in standard gradient descent requires more
computation per weight update step. On the other hand, because it uses the true gradient,
standard gradient descent is often used with a larger step size per weight update than
stochastic gradient descent.
• In cases where there are multiple local minima with respect to E(w⃗), stochastic gradient descent can sometimes avoid falling into these local minima, because it uses the various ∇Ed(w⃗) rather than ∇E(w⃗) to guide its search.
• Sigmoid unit: a unit very much like a perceptron, but based on a smoothed, differentiable threshold function.
• The sigmoid unit first computes a linear combination of its inputs, then applies the sigmoid function to the result; its output is a continuous function of its input.
• More precisely, the sigmoid unit computes its output o as

o = σ(w⃗ · x⃗), where σ(y) = 1 / (1 + e^(−y))

σ is often called the sigmoid or logistic function. Its output ranges between 0 and 1, and its derivative is easily expressed in terms of its output: dσ(y)/dy = σ(y) (1 − σ(y)).
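A short Python sketch of a sigmoid unit (hypothetical helper names, reusing the x0 = 1 convention):

```python
import math

def sigmoid(y):
    # The logistic function: output ranges smoothly between 0 and 1.
    return 1.0 / (1.0 + math.exp(-y))

def sigmoid_output(weights, inputs):
    # Linear combination (with x0 = 1) passed through the sigmoid.
    net = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return sigmoid(net)
```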
The BACKPROPAGATION algorithm:
• Create a feed-forward network with n_in inputs, n_hidden hidden units, and n_out output units.
• Initialize all network weights to small random numbers.
• Until the termination condition is met, Do
• For each (x⃗, t⃗) in training examples, Do
1. Input the instance x⃗ to the network and compute the output ou of every unit u in the network.
2. For each network output unit k, calculate its error term δk: δk ← ok (1 − ok) (tk − ok)
3. For each hidden unit h, calculate its error term δh: δh ← oh (1 − oh) Σk∈Downstream(h) wkh δk
4. Update each network weight wji: wji ← wji + Δwji, where Δwji = η δj xji and xji is the ith input to unit j.
Here Downstream(r) denotes the set of units immediately downstream from unit r in the network: that is, all units whose inputs include the output of unit r.

Adding momentum: one common variation is to make the weight update on the nth iteration depend partially on the update that occurred during the (n − 1)th iteration, as follows:

Δwji(n) = η δj xji + α Δwji(n − 1)

where Δwji(n) is the weight update performed during the nth iteration and 0 ≤ α < 1 is a constant called the momentum. A code sketch incorporating this term follows.
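The following compact Python sketch (a hypothetical structure, reusing the sigmoid_output helper above) performs one stochastic BACKPROPAGATION step for a network with one hidden layer, including the momentum term:

```python
def backprop_step(hidden_w, output_w, x, t, eta=0.05, alpha=0.9,
                  prev_hidden_dw=None, prev_output_dw=None):
    # hidden_w[j] and output_w[k] are weight lists whose first entry is the
    # threshold weight (paired with x0 = 1).
    # 1. Forward pass: compute the output of every unit.
    h = [sigmoid_output(w, x) for w in hidden_w]
    o = [sigmoid_output(w, h) for w in output_w]

    # 2. Error terms for output units: delta_k = ok(1 - ok)(tk - ok).
    delta_o = [ok * (1 - ok) * (tk - ok) for ok, tk in zip(o, t)]

    # 3. Error terms for hidden units: delta_h = oh(1 - oh) * sum over Downstream(h).
    delta_h = [hj * (1 - hj) * sum(output_w[k][j + 1] * delta_o[k]
                                   for k in range(len(output_w)))
               for j, hj in enumerate(h)]

    # 4. Weight updates with momentum: dw(n) = eta*delta*x + alpha*dw(n-1).
    def update(weights_list, deltas, inputs, prev):
        new_prev = []
        xs = [1.0] + list(inputs)
        for j, w in enumerate(weights_list):
            dws = []
            for i in range(len(w)):
                dw = eta * deltas[j] * xs[i]
                if prev is not None:
                    dw += alpha * prev[j][i]
                w[i] += dw
                dws.append(dw)
            new_prev.append(dws)
        return new_prev

    prev_hidden_dw = update(hidden_w, delta_h, x, prev_hidden_dw)
    prev_output_dw = update(output_w, delta_o, h, prev_output_dw)
    return prev_hidden_dw, prev_output_dw
```

The returned update lists are passed back in on the next call, so each step's momentum term can reuse the previous step's Δw values.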
• Deriving the stochastic gradient descent rule: Stochastic gradient descent involves
iterating through the training examples one at a time, for each training example d
descending the gradient of the error Ed with respect to this single example
• For each training example d, every weight wji is updated by adding to it Δwji:

Δwji = −η ∂Ed/∂wji

where Ed is the error on training example d, summed over all output units in the network:

Ed(w⃗) = ½ Σk∈outputs (tk − ok)²

Here outputs is the set of output units in the network, tk is the target value of unit k for training example d, and ok is the output of unit k given training example d.
The derivation of the stochastic gradient descent rule is conceptually straightforward, but requires keeping track of a number of subscripts and variables; the central chain-rule step is sketched below.
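As a sketch of that central step, written here in LaTeX (with net_j = Σi wji xji denoting the weighted sum of inputs to unit j):

```latex
\frac{\partial E_d}{\partial w_{ji}}
  = \frac{\partial E_d}{\partial net_j}\,\frac{\partial net_j}{\partial w_{ji}}
  = \frac{\partial E_d}{\partial net_j}\, x_{ji},
\qquad
% for an output unit j, using E_d = \tfrac{1}{2}\sum_k (t_k - o_k)^2
% and the sigmoid derivative o_j (1 - o_j):
\frac{\partial E_d}{\partial net_j} = -(t_j - o_j)\, o_j (1 - o_j)
```

Combining these with Δwji = −η ∂Ed/∂wji recovers the output-unit rule Δwji = η δj xji with δj = (tj − oj) oj (1 − oj), matching step 2 of the algorithm above.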
• The hypothesis space is the n-dimensional Euclidean space of the n network weights, and this hypothesis space is continuous.
• Because it is continuous, and because E is differentiable with respect to the continuous parameters of the hypothesis, there is a well-defined error gradient that provides a very useful structure for organizing the search for the best hypothesis.
• It is difficult to characterize precisely the inductive bias of the BACKPROPAGATION algorithm, because it depends on the interplay between the gradient descent search and the way in which the weight space spans the space of representable functions. However, one can roughly characterize it as smooth interpolation between data points.
BACKPROPAGATION can define new hidden layer features that are not explicit in the input
representation, but which capture properties of the input instances that are most relevant to
learning the target function.
What is an appropriate condition for terminating the weight update loop? One choice is to
continue training until the error E on the training examples falls below some predetermined
threshold.
To see the dangers of minimizing the error over the training data, consider how the error E varies with the number of weight iterations.
Figure: Error E as a function of the number of weight updates, showing training-set error and validation-set error.
• Consider first the top plot in this figure. The lower of the two lines shows the monotonically decreasing error E over the training set as the number of gradient descent iterations grows. The upper line shows the error E measured over a different validation set of examples, distinct from the training examples. This line measures the generalization accuracy of the network, that is, the accuracy with which it fits examples beyond the training data.
• The error measured over the validation examples first decreases, then increases, even as the error over the training examples continues to decrease. How can this occur? It occurs because the weights are being tuned to fit idiosyncrasies of the training examples that are not representative of the general distribution of examples. The large number of weight parameters in ANNs provides many degrees of freedom for fitting such idiosyncrasies.
• Why does overfitting tend to occur during later iterations, but not during earlier iterations? Given enough weight-tuning iterations, BACKPROPAGATION will often be able to create overly complex decision surfaces that fit noise in the training data or unrepresentative characteristics of the particular training sample. One widely used remedy is sketched below.
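That remedy is early stopping: keep a separate validation set, monitor its error during training, and keep the weights that performed best on it. A minimal sketch under hypothetical names (train_step performs one pass of weight updates in place; validation_error returns the error E over the validation set):

```python
import copy

def train_with_early_stopping(train_step, validation_error, weights, max_epochs=1000):
    # Return the weights with the lowest validation error seen so far,
    # rather than the final (possibly overfit) weights.
    best_weights = copy.deepcopy(weights)
    best_error = validation_error(weights)
    for _ in range(max_epochs):
        train_step(weights)              # one pass of weight updates on training data
        err = validation_error(weights)  # generalization estimate on held-out data
        if err < best_error:
            best_error = err
            best_weights = copy.deepcopy(weights)
    return best_weights
```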