
UNIT II:

ARTIFICIAL NEURAL NETWORKS


CONTENTS
• Introduction
• Neural Network Representation
• Appropriate Problems for Neural Network Learning
• Perceptrons
• Multilayer Networks and BACKPROPAGATION Algorithm
INTRODUCTION

Artificial neural networks (ANNs) provide a general, practical method for learning real-valued, discrete-valued, and vector-valued target functions from examples.
Basic work on neural networks was influenced by the following studies:
• Neurobiological studies:
• How do nerves behave when stimulated by different magnitudes of electric current?
• Is there a minimal threshold needed for nerves to be activated?
• How do different nerve cells communicate with each other?

• Psychological studies:
• How do animals learn, forget, recognize and perform various types of tasks?

• Psycho-physical studies:
• Experiments that help us understand how individual neurons and groups of neurons work.

• McCulloch and Pitts introduced the first mathematical model of a single neuron, which has been widely applied in subsequent work.
Biological Motivation
• The study of artificial neural networks (ANNs) has been inspired by the observation that biological learning systems are built of very complex webs of interconnected neurons.

• The human information processing system is built around the brain; its basic building block is the neuron, a cell that communicates information to and from various parts of the body.

• Simplest model of a neuron: a threshold unit, also called a processing element (PE).

• A threshold unit collects its inputs and produces an output if the sum of the inputs exceeds an internal threshold value.
Neuron: (figure) a biological neuron, showing the dendrites, cell body, nucleus, axon, and synapses.
Properties of Neural Networks

• Many neuron-like threshold switching units

• Many weighted interconnections among units

• Highly parallel, distributed processing

• Emphasis on tuning weights automatically

When to consider Neural Networks ?

• Input is a high-dimensional discrete or real-valued (e.g., sensor input)


• Output is discrete or real-valued
• Output is a vector of values
• Possibly noisy data
• Form of target function is unknown
• Human readability of result is unimportant

Examples:
1. Speech recognition
2. Image classification
3. Financial prediction, etc.
Neuron: the structure of a biological neuron is used to represent a node in a neural network (figure: artificial neuron).
NEURAL NETWORK REPRESENTATIONS
• A prototypical example of ANN learning is provided by Pomerleau's (1993)
system ALVINN, which uses a learned ANN to steer an autonomous vehicle
driving at normal speeds on public highways.

• The input to the neural network is a 30x32 grid of pixel intensities obtained from
a forward-pointed camera mounted on the vehicle.

• The network output is the direction in which the vehicle is steered.


• Figure illustrates the neural network representation.
• The network is shown on the left side of the figure, with the input camera image
depicted below it.
• Each node (i.e., circle) in the network diagram corresponds to the output of a
single network unit, and the lines entering the node from below are its inputs.
• There are four units that receive inputs directly from all of the 30 x 32 pixels in
the image. These are called "hidden" units because their output is available only
within the network and is not available as part of the global network output. Each
of these four hidden units computes a single real-valued output based on a
weighted combination of its 960 inputs
• These hidden unit outputs are then used as inputs to a second layer of 30 "output"
units.
• Each output unit corresponds to a particular steering direction, and the output
values of these units determine which steering direction is recommended most
strongly.
• The diagrams on the right side of the figure depict the learned weight values
associated with one of the four hidden units in this ANN.
• The large matrix of black and white boxes on the lower right depicts the weights
from the 30 x 32 pixel inputs into the hidden unit. Here, a white box indicates a
positive weight, a black box a negative weight, and the size of the box indicates
the weight magnitude.
• The smaller rectangular diagram directly above the large matrix shows the
weights from this hidden unit to each of the 30 output units.
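To make the architecture concrete, here is a minimal Python sketch of an ALVINN-style forward pass; the layer sizes come from the description above, while the function names, random initialization, and use of sigmoid units (bias weights omitted) are our own assumptions, not details of Pomerleau's system:

import numpy as np

# Layer sizes from the description above: 30x32 = 960 pixel inputs,
# 4 hidden units, 30 output (steering-direction) units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(scale=0.05, size=(4, 960))   # weights: pixels -> hidden units
W_output = rng.normal(scale=0.05, size=(30, 4))    # weights: hidden -> output units

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def steer(pixels):
    """pixels: vector of 960 intensities from the 30x32 camera image."""
    hidden = sigmoid(W_hidden @ pixels)    # each hidden unit: one real value from 960 inputs
    outputs = sigmoid(W_output @ hidden)   # 30 output-unit activations
    return int(np.argmax(outputs))         # steering direction recommended most strongly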
APPROPRIATE PROBLEMS FOR
NEURAL NETWORK LEARNING

ANNs are appropriate for problems with the following characteristics:

• Instances are represented by many attribute-value pairs.
• The target function output may be discrete-valued, real-valued, or a vector of several real- or discrete-valued attributes.
• The training examples may contain errors.
• Long training times are acceptable.
• Fast evaluation of the learned target function may be required.
• The ability of humans to understand the learned target function is not important.
Architectures of Artificial Neural
Networks
An artificial neural network can be divided into three parts (layers), which are
known as:
• Input layer: This layer is responsible for receiving information (data), signals, features, or measurements from the external environment. These inputs are usually normalized within the limit values produced by the activation functions.
• Hidden, intermediate, or invisible layers: These layers are composed of neurons responsible for extracting the patterns associated with the process or system being analysed. They perform most of the internal processing of the network.
• Output layer: This layer is also composed of neurons, and is responsible for producing and presenting the final network outputs, which result from the processing performed by the neurons in the previous layers.
PERCEPTRONS
• Perceptron is a single layer neural network.
• A perceptron takes a vector of real-valued inputs, calculates a linear combination
of these inputs, then outputs a 1 if the result is greater than some threshold and -1
otherwise
• Given inputs x1 through xn, the output o(x1, . . . , xn) computed by the perceptron is

o(x1, . . . , xn) = 1 if w0 + w1x1 + . . . + wnxn > 0, and −1 otherwise

• where each wi is a real-valued constant, or weight, that determines the contribution of input xi to the perceptron output.
• −w0 is a threshold that the weighted combination of inputs w1x1 + . . . + wnxn must surpass in order for the perceptron to output a 1.
Sometimes, the perceptron function is written as

o(x) = sgn(w · x), where sgn(y) = 1 if y > 0 and −1 otherwise.

Learning a perceptron involves choosing values for the weights w0, . . . , wn. Therefore, the space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.

Why do we need weights and bias?

Weights express the strength of each input connection to a node. A bias value allows you to shift the activation function curve up or down.
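As a concrete illustration (our own sketch, not from the slides), the perceptron decision function can be written in a few lines of Python; the function name and the convention of storing the bias weight w0 in w[0] are assumptions:

import numpy as np

def perceptron_output(w, x):
    # o(x1, ..., xn) = 1 if w0 + w1*x1 + ... + wn*xn > 0, else -1
    net = w[0] + np.dot(w[1:], x)    # w[0] is the bias/threshold weight w0
    return 1 if net > 0 else -1

For example, w = np.array([-0.8, 0.5, 0.5]) makes this perceptron compute the logical AND of two binary inputs, since only x = (1, 1) drives the weighted sum above the threshold.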
Representational Power of Perceptrons

• The perceptron can be viewed as representing a hyperplane decision surface in the n-dimensional space of instances.
• The perceptron outputs a 1 for instances lying on one side of the hyperplane and outputs a −1 for instances lying on the other side.
The Perceptron Training Rule

The learning problem is to determine a weight vector that causes the perceptron to
produce the correct + 1 or - 1 output for each of the given training examples.

To learn an acceptable weight vector


•Begin with random weights, then iteratively apply the perceptron to each training
example, modifying the perceptron weights whenever it misclassifies an example.
•This process is repeated, iterating through the training examples as many times as
needed until the perceptron classifies all training examples correctly.
• Weights are modified at each step according to the perceptron training rule, which revises the weight wi associated with input xi according to the rule

wi ← wi + Δwi, where Δwi = η (t − o) xi

Here t is the target output for the current training example, o is the output generated by the perceptron, and η is the learning rate.
• The role of the learning rate is to moderate the degree to which weights are changed at each step. It is usually set to some small value (e.g., 0.1) and is sometimes made to decay as the number of weight-tuning iterations increases.
Drawback: the perceptron rule finds a successful weight vector when the training examples are linearly separable, but it can fail to converge if the examples are not linearly separable.
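A minimal sketch of this procedure in Python, assuming linearly separable data (otherwise the loop simply stops after max_epochs, reflecting the drawback above); the names and initialization are our own:

import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """X: one training example per row; t: target outputs in {+1, -1}."""
    w = np.random.default_rng(0).uniform(-0.05, 0.05, X.shape[1] + 1)  # small random weights
    for _ in range(max_epochs):
        all_correct = True
        for x, target in zip(X, t):
            o = 1 if w[0] + np.dot(w[1:], x) > 0 else -1   # current perceptron output
            if o != target:                      # misclassified: apply the training rule
                w[0] += eta * (target - o)       # bias update (its input is always 1)
                w[1:] += eta * (target - o) * x  # wi <- wi + eta * (t - o) * xi
                all_correct = False
        if all_correct:    # every training example classified correctly
            return w
    return w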
Gradient Descent and the Delta Rule

• If the training examples are not linearly separable, the delta rule converges toward
a best-fit approximation to the target concept.
• The key idea behind the delta rule is to use gradient descent to search the
hypothesis space of possible weight vectors to find the weights that best fit the
training examples.
To understand the delta training rule, consider the task of training an unthresholded perceptron, that is, a linear unit for which the output o is given by

o(x) = w · x
To derive a weight learning rule for linear units, begin by specifying a measure for the training error of a hypothesis (weight vector), relative to the training examples:

E(w) = ½ Σd∈D (td − od)²

Where,
• D is the set of training examples,
• td is the target output for training example d,
• od is the output of the linear unit for training example d,
• E(w) is simply half the squared difference between the target output td and the linear unit output od, summed over all training examples.
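As a small illustration (our own snippet, not from the slides), E(w) can be computed directly from this definition in Python, assuming each input vector carries a leading 1 so that w0 participates in the dot product:

import numpy as np

def training_error(w, X, t):
    # E(w) = 1/2 * sum_d (t_d - o_d)^2, with o_d = w . x_d for a linear unit
    o = X @ w
    return 0.5 * np.sum((t - o) ** 2)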
Visualizing the Hypothesis Space

• To understand the gradient descent algorithm, it is helpful to visualize the entire hypothesis space of possible weight vectors and their associated E values, as shown in the figure below.
• Here the axes w0 and w1 represent possible values for the two weights of a simple linear unit. The w0, w1 plane therefore represents the entire hypothesis space.
• The vertical axis indicates the error E relative to some fixed set of training
examples.
• The arrow shows the negated gradient at one particular point, indicating the direction in the w0, w1 plane producing steepest descent along the error surface.
• The error surface shown in the figure thus summarizes the desirability of every
weight vector in the hypothesis space
• Given the way in which we chose to define E, for linear units this error surface must always be
parabolic with a single global minimum.
Gradient descent search determines a weight vector that minimizes E by starting with an arbitrary
initial weight vector, then repeatedly modifying it in small steps.
At each step, the weight vector is altered in the direction that produces the steepest descent along the error surface depicted in the figure above. This process continues until the global minimum error is reached.
Derivation of the Gradient Descent Rule

How do we calculate the direction of steepest descent along the error surface?
The direction of steepest descent can be found by computing the derivative of E with respect to each component of the vector w. This vector derivative is called the gradient of E with respect to w, written ∇E(w):

∇E(w) = [∂E/∂w0, ∂E/∂w1, . . . , ∂E/∂wn]
• Since the gradient specifies the direction of steepest increase of E, the training rule for gradient descent is

w ← w + Δw, where Δw = −η ∇E(w)

• Here η is a positive constant called the learning rate, which determines the step size in the gradient descent search.
• The negative sign is present because we want to move the weight vector in the direction that decreases E.
• This training rule can also be written in its component form

wi ← wi + Δwi, where Δwi = −η ∂E/∂wi
The gradient must be calculated at each step. The vector of derivatives ∂E/∂wi that form the gradient can be obtained by differentiating E from Equation (2):

∂E/∂wi = Σd∈D (td − od)(−xid)

which gives the component update rule Δwi = η Σd∈D (td − od) xid . . . Equation (7)
GRADIENT DESCENT algorithm for training a linear unit
To summarize, the gradient descent algorithm for training linear units is as follows:
•Pick an initial random weight vector.
•Apply the linear unit to all training examples, then compute Δwi for each weight
according to Equation (7).
•Update each weight wi by adding Δwi, then repeat this process
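A minimal Python sketch of these three steps, again assuming each input row starts with a constant 1 so that w0 acts as the threshold weight; variable names are our own:

import numpy as np

def gradient_descent(X, t, eta=0.01, epochs=500):
    """Batch gradient descent for a linear unit. X: inputs (first column all 1s), t: targets."""
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.05, 0.05, X.shape[1])   # 1. pick an initial random weight vector
    for _ in range(epochs):
        o = X @ w                              # 2. apply the linear unit to all examples
        delta_w = eta * (X.T @ (t - o))        #    Equation (7): dwi = eta * sum_d (td - od) * xid
        w += delta_w                           # 3. update each weight, then repeat
    return w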
Features of Gradient Descent Algorithm

• Gradient descent is an important general paradigm for learning. It is a strategy for searching through a large or infinite hypothesis space that can be applied whenever
1. The hypothesis space contains continuously parameterized hypotheses
2. The error can be differentiated with respect to these hypothesis parameters
•The key practical difficulties in applying gradient descent are
1.Converging to a local minimum can sometimes be quite slow
2.If there are multiple local minima in the error surface, then
there is no guarantee that the procedure will find the global
minimum
Stochastic Approximation to Gradient Descent
• The gradient descent training rule presented in Equation (7) computes weight
updates after summing over all the training examples in D
• The idea behind stochastic gradient descent is to approximate this gradient descent search by updating weights incrementally, following the calculation of the error for each individual example:

Δwi = η (t − o) xi

where t, o, and xi are the target value, unit output, and ith input for the training example in question.
One way to view this stochastic gradient descent is to consider a distinct error function Ed(w) for each individual training example d, as follows:

Ed(w) = ½ (td − od)²

where td and od are the target value and the unit output value for training example d.
•Stochastic gradient descent iterates over the training examples d in D, at each
iteration altering the weights according to the gradient with respect to Ed( w )
• The sequence of these weight updates, when iterated over all training examples, provides a reasonable approximation to descending the gradient with respect to our original error function E(w).
•By making the value of η sufficiently small, stochastic gradient descent can be
made to approximate true gradient descent arbitrarily closely
The key differences between standard gradient descent and stochastic gradient
descent are
•In standard gradient descent, the error is summed over all examples before
updating weights, whereas in stochastic gradient descent weights are updated upon
examining each training example.
•Summing over multiple examples in standard gradient descent requires more
computation per weight update step. On the other hand, because it uses the true
gradient, standard gradient descent is often used with a larger step size per weight
update than stochastic gradient descent.
• In cases where there are multiple local minima with respect to E(w), stochastic gradient descent can sometimes avoid falling into these local minima, because it uses the various 𝛻Ed(w) rather than 𝛻E(w) to guide its search.
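For contrast with the batch version shown earlier, here is a sketch of the stochastic update loop (same assumptions, our own names):

import numpy as np

def stochastic_gradient_descent(X, t, eta=0.01, epochs=500):
    """Delta rule: weights are updated after examining each individual training example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            o = np.dot(w, x)               # linear unit output for this one example
            w += eta * (target - o) * x    # dwi = eta * (t - o) * xi
    return w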
MULTILAYER NETWORKS AND THE BACKPROPAGATION
ALGORITHM

Multilayer networks learned by the BACKPROPAGATION algorithm are capable of expressing a rich variety of nonlinear decision surfaces.
• Decision regions of a multilayer feedforward network. The network shown here was trained to recognize 1 of
10 vowel sounds occurring in the context "h_d" (e.g., "had," "hid"). The network input consists of two
parameters, F1 and F2, obtained from a spectral analysis of the sound. The 10 network outputs correspond to
the 10 possible vowel sounds. The network prediction is the output whose value is highest.
• The plot on the right illustrates the highly nonlinear decision surface represented by the learned network.
Points shown on the plot are test examples distinct from the examples used to train the network.
A Differentiable Threshold Unit

• Sigmoid unit: a unit very much like a perceptron, but based on a smoothed, differentiable threshold function.
• The sigmoid unit first computes a linear combination of its inputs, then applies a
threshold to the result. In the case of the sigmoid unit, however, the threshold
output is a continuous function of its input.
• More precisely, the sigmoid unit computes its output o as

o = σ(w · x), where σ(y) = 1 / (1 + e^(−y))

σ is the sigmoid function. Its output ranges between 0 and 1, and its derivative is easily expressed in terms of its output, dσ(y)/dy = σ(y)(1 − σ(y)), a property exploited by the BACKPROPAGATION algorithm.

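A short sketch of the sigmoid unit in Python (our own code, following the formulas above):

import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))      # squashes the net input into the range (0, 1)

def sigmoid_unit_output(w, x):
    return sigmoid(np.dot(w, x))         # o = sigma(w . x)

def sigmoid_derivative(y):
    s = sigmoid(y)
    return s * (1.0 - s)                 # dsigma/dy = sigma(y) * (1 - sigma(y))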
The BACKPROPAGATION Algorithm
• The BACKPROPAGATION Algorithm learns the weights for a multilayer
network, given a network with a fixed set of units and interconnections.
• It employs gradient descent to attempt to minimize the squared error between the
network output values and the target values for these outputs.
• In the BACKPROPAGATION algorithm, we consider networks with multiple output units rather than single units as before, so we redefine E to sum the errors over all of the network output units:

E(w) = ½ Σd∈D Σk∈outputs (tkd − okd)²

where,
• outputs is the set of output units in the network,
• tkd and okd are the target and output values associated with the kth output unit,
• d is a training example.
How does Back Propagation work?

• Inputs X arrive through the preconnected path.
• The input is modeled using real weights W. The weights are usually randomly selected.
• Calculate the output for every neuron, from the input layer through the hidden layers to the output layer.
• Calculate the error in the outputs: Error = Actual Output − Desired Output.
• Travel back from the output layer to the hidden layer to adjust the weights such that the error is decreased.
• Keep repeating the process until the desired output is achieved.
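Putting these steps together, below is a compact sketch of backpropagation with one hidden layer of sigmoid units, trained on XOR purely for demonstration; the layer sizes, learning rate, and variable names are our own assumptions, and the two error-term formulas anticipate the derivation in the next section:

import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets
V = rng.uniform(-0.5, 0.5, (3, 3))   # hidden weights: 3 hidden units x (2 inputs + bias)
W = rng.uniform(-0.5, 0.5, (1, 4))   # output weights: 1 output unit x (3 hidden + bias)
eta = 0.5

for _ in range(20000):
    for x, t in zip(X, T):
        xb = np.append(x, 1.0)            # forward pass: input plus constant bias input
        h = sigmoid(V @ xb)               # hidden unit outputs
        hb = np.append(h, 1.0)
        o = sigmoid(W @ hb)               # network output
        delta_o = o * (1 - o) * (t - o)   # error term for the output unit
        delta_h = h * (1 - h) * (W[:, :3].T @ delta_o)  # hidden terms via downstream weights
        W += eta * np.outer(delta_o, hb)  # w_ji <- w_ji + eta * delta_j * x_ji
        V += eta * np.outer(delta_h, xb)

for x in X:
    h = sigmoid(V @ np.append(x, 1.0))
    print(x, sigmoid(W @ np.append(h, 1.0)))  # outputs approach the XOR targets

Whether training converges can depend on the random initial weights; as noted in the gradient descent discussion, the procedure may settle in a local minimum.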
Derivation of the BACKPROPAGATION Rule

• Deriving the stochastic gradient descent rule: Stochastic gradient descent involves
iterating through the training examples one at a time, for each training example d
descending the gradient of the error Ed with respect to this single example
• For each training example d, every weight wji is updated by adding to it Δwji:

Δwji = −η ∂Ed/∂wji

where Ed is the error on training example d, summed over all output units:

Ed(w) = ½ Σk∈outputs (tk − ok)²

Here outputs is the set of output units in the network, tk is the target value of unit k for training example d, and ok is the output of unit k given training example d.
The derivation of the stochastic gradient descent rule is conceptually straightforward, but requires keeping track of a number of subscripts and variables:

• xji = the ith input to unit j
• wji = the weight associated with the ith input to unit j
• netj = Σi wji xji (the weighted sum of inputs for unit j)
• oj = the output computed by unit j
• tj = the target output for unit j
• σ = the sigmoid function
• outputs = the set of units in the final layer of the network
• Downstream(j) = the set of units whose immediate inputs include the output of unit j
Consider two cases in turn: the case where unit j is an output unit for the network, and the case where j is an
internal unit (hidden unit).
Case 1: Training Rule for Output Unit Weights.
• wji can influence the rest of the network only through netj, and netj can influence the network only through oj. Therefore, we can invoke the chain rule to write

∂Ed/∂wji = (∂Ed/∂oj)(∂oj/∂netj)(∂netj/∂wji) = −(tj − oj) · oj(1 − oj) · xji

which gives the weight update rule for output units:

Δwji = η δj xji, with δj = (tj − oj) oj(1 − oj)
Case 2: Training Rule for Hidden Unit Weights.
•In the case where j is an internal, or hidden unit in the network, the derivation of
the training rule for wji must take into account the indirect ways in which wji can
influence the network outputs and hence Ed.
• For this reason, we will find it useful to refer to the set of all units immediately downstream of unit j in the network, denoted Downstream(j).
• netj can influence the network outputs only through the units in Downstream(j). Therefore, we can write

δj = oj (1 − oj) Σk∈Downstream(j) δk wkj, and Δwji = η δj xji

where δk denotes the error term −∂Ed/∂netk of downstream unit k.
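As a final, self-contained illustration (our own code, with hypothetical numbers), the two error-term formulas can be evaluated directly:

import numpy as np

# Case 1: output unit k
o_k, t_k = 0.7, 1.0
delta_k = o_k * (1 - o_k) * (t_k - o_k)        # = 0.7 * 0.3 * 0.3 = 0.063

# Case 2: hidden unit j, with two units downstream of j
o_j = 0.4
w_kj = np.array([0.2, -0.5])                   # weights from unit j into the downstream units
deltas_downstream = np.array([0.063, -0.010])  # error terms delta_k of those units
delta_j = o_j * (1 - o_j) * np.dot(w_kj, deltas_downstream)

# Each incoming weight of unit j is then updated by: w_ji <- w_ji + eta * delta_j * x_ji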
THANK YOU