
Department of Computer Science & Engineering

MODULE-3

CHAPTER-2
ARTIFICIAL NEURAL NETWORKS

CONTENT
• Introduction
• Neural Network Representation
• Appropriate Problems for Neural Network Learning
• Perceptrons
• Multilayer Networks and the BACKPROPAGATION Algorithm
• Remarks on the BACKPROPAGATION Algorithm


INTRODUCTION

Artificial neural networks (ANNs) provide a general, practical method for learning real-valued, discrete-valued, and vector-valued target functions from examples.

Biological Motivation

• The study of artificial neural networks (ANNs) has been inspired by the observation that biological learning systems are built of very complex webs of interconnected neurons.

• The human brain is estimated to contain a densely interconnected network of approximately 10^11 neurons, each connected on average to 10^4 others.

• The human information processing system consists of the brain; the neuron is its basic building block, a cell that communicates information to and from various parts of the body.


[Figure: structure of a biological neuron, showing the dendrites, cell body, nucleus, axon, and synapse]

• An Artificial Neural Network (ANN) uses the processing of the brain as a basis to develop algorithms that can be used to model complex patterns and prediction problems.

• In our brain, there are billions of cells called neurons, which process information in the form of electric signals.

• External information/stimuli is received by the dendrites of a neuron, processed in the cell body, converted to an output, and passed through the axon to the next neuron.

• The next neuron can choose to either accept or reject the signal depending on its strength.
Facts of Human Neurobiology

• Number of neurons ~ 10^11

• Connections per neuron ~ 10^4 to 10^5

• Neuron switching time ~ 0.001 second (10^-3 s)

• Scene recognition time ~ 0.1 second

• 100 inference steps doesn't seem like enough, so the brain must rely on highly parallel computation based on distributed representations


Properties of Neural Networks

• Many neuron-like threshold switching units

• Many weighted interconnections among units

• Highly parallel, distributed processing

• Emphasis on tuning weights automatically

• Input is high-dimensional, discrete or real-valued (e.g., sensor input)

When to Consider Neural Networks?

• Input is high-dimensional, discrete or real-valued (e.g., sensor input)
• Output is discrete or real-valued
• Output is a vector of values
• Possibly noisy data
• Form of target function is unknown
• Human readability of the result is unimportant
Examples:
1. Speech phoneme recognition
2. Image classification

Neuron


The term "Artificial Neural Network" is derived from Biological neural networks that develop the structure of
a human brain. Similar to the human brain that has neurons interconnected to one another, artificial neural
networks also have neurons that are interconnected to one another in various layers of the networks. These
neurons are known as nodes.
The typical Artificial Neural Network looks something like the given figure.

• A prototypical example of ANN learning is provided by Pomerleau's (1993) system ALVINN, which uses a learned ANN to steer an autonomous vehicle driving at normal speeds on public highways.

• The input to the neural network is a 30x32 grid of pixel intensities obtained from a forward-pointing camera mounted on the vehicle.

• The network output is the direction in which the vehicle is steered.

NEURAL NETWORK REPRESENTATIONS


Figure illustrates the neural network representation.

• The network is shown on the left side of the figure, with the input camera image depicted below it.
• Each node (i.e., circle) in the network diagram corresponds to the output of a single network unit, and the lines entering the node from below are its inputs.
• There are four units that receive inputs directly from all of the 30 x 32 pixels in the image. These are called "hidden" units because their output is available only within the network and is not available as part of the global network output. Each of these four hidden units computes a single real-valued output based on a weighted combination of its 960 inputs.
• These hidden unit outputs are then used as inputs to a second layer of 30 "output" units.
• Each output unit corresponds to a particular steering direction, and the output values of these units determine which steering direction is recommended most strongly.

• The diagrams on the right side of the figure depict the learned weight values
associated with one of the four hidden units in this ANN.
• The large matrix of black and white boxes on the lower right depicts the weights
from the 30 x 32 pixel inputs into the hidden unit. Here, a white box indicates a
positive weight, a black box a negative weight, and the size of the box indicates the
weight magnitude.
• The smaller rectangular diagram directly above the large matrix shows the weights
from this hidden unit to each of the 30 output units.


APPROPRIATE PROBLEMS FOR NEURAL NETWORK LEARNING

ANN learning is appropriate for problems with the following characteristics:
• Instances are represented by many attribute-value pairs. These input attributes can be highly correlated or independent of one another, and input values can be real-valued.
• The target function output may be discrete-valued, real-valued, or a vector of several real- or discrete-valued attributes; the value of each output is some real number between 0 and 1.
• The training examples may contain errors. ANN learning is robust to noise in the training data.
• Long training times are acceptable.
• Fast evaluation of the learned target function may be required. ANN training times are long, but evaluating the learned network is very fast.
• The ability of humans to understand the learned target function is not important. The weights learned by neural networks are often difficult for humans to interpret.
Architectures of Artificial Neural Networks

Neural Network Representations


An artificial neural network can be divided into three parts (layers), which are known as:

• Input layer: This layer is responsible for receiving information (data), signals, features, or measurements from the external environment. These inputs are usually normalized within the limit values produced by the activation functions.

• Hidden, intermediate, or invisible layers: These layers are composed of neurons responsible for extracting the patterns associated with the process or system being analyzed. They perform most of the internal processing of the network.

• Output layer: This layer is also composed of neurons, and is responsible for producing and presenting the final network outputs, which result from the processing performed by the neurons in the previous layers.
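To make the three parts concrete, here is a minimal NumPy sketch (an illustration, not taken from the slides) of data flowing from an input layer through one hidden layer to an output layer, assuming a sigmoid activation and arbitrary layer sizes:

```python
import numpy as np

def sigmoid(z):
    # Squashing activation applied by each neuron
    return 1.0 / (1.0 + np.exp(-z))

def init_layer(n_in, n_out, rng):
    # One weight matrix and one bias vector per layer of neurons
    return rng.uniform(-0.05, 0.05, size=(n_out, n_in)), np.zeros(n_out)

def forward(x, layers):
    # The input layer just passes the features in; each later layer transforms them
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [init_layer(4, 3, rng),   # input (4 features) -> hidden layer (3 neurons)
          init_layer(3, 2, rng)]   # hidden layer -> output layer (2 neurons)
print(forward(np.array([0.1, 0.5, 0.2, 0.9]), layers))
```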

Architectures of Artificial Neural Networks
The main architectures of artificial neural networks, considering how the neurons are arranged, how they are interconnected, and how their layers are composed, can be divided as follows:

1. Single-layer feedforward networks
2. Multi-layer feedforward networks
3. Recurrent or feedback networks
4. Mesh networks

Single-Layer Feedforward Architecture
• This artificial neural network has just one input layer and a single neural layer, which is also the output layer.
• Figure illustrates a single-layer feedforward network composed of n inputs and m outputs.
• The information always flows in a single direction (thus, unidirectional), from the input layer to the output layer.

Multi-Layer Feedforward Architecture
• Feedforward networks with multiple layers are composed of one or more hidden neural layers.
• Figure shows a feedforward network with multiple layers, composed of one input layer with n sample signals, two hidden neural layers consisting of n1 and n2 neurons respectively, and, finally, one output neural layer composed of m neurons representing the respective output values of the problem being analyzed.


Recurrent or Feedback Architecture

• In these networks, the outputs of the neurons are used as feedback inputs for other
neurons.
• Figure illustrates an example of a Perceptron network with feedback, where one of its
output signals is fed back to the middle layer.

Mesh Architectures
• The main features of networks with mesh structures reside in considering the spatial
arrangement of neurons for pattern extraction purposes, that is, the spatial localization of the
neurons is directly related to the process of adjusting their synaptic weights and thresholds.
• Figure illustrates an example of the Kohonen network, where its neurons are arranged within a two-dimensional space.

PERCEPTRONS
• Basic unit used to build an ANN. A perceptron is a single-layer neural network.
• A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs a 1 if the result is greater than some threshold and -1 otherwise.
• Given inputs x1 through xn, the output o(x1, . . . , xn) computed by the perceptron is
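Stating the thresholded output described above in LaTeX (the slide's equation image is not reproduced here; this is the standard form):

$$o(x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } w_0 + w_1 x_1 + \cdots + w_n x_n > 0 \\ -1 & \text{otherwise} \end{cases}$$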

• where each wi is a real-valued constant, or weight, that determines the contribution of input xi to the perceptron output.
• The quantity (-w0) is a threshold that the weighted combination of inputs w1x1 + . . . + wnxn must surpass in order for the perceptron to output a 1.


OR
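Equivalently, in vector notation (the slide's equation image is not reproduced here; this is the standard form):

$$o(\vec{x}) = \operatorname{sgn}(\vec{w}\cdot\vec{x}), \qquad \operatorname{sgn}(y) = \begin{cases} 1 & \text{if } y > 0 \\ -1 & \text{otherwise} \end{cases}$$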

Learning a perceptron involves choosing values for the weights w0 , . . . , wn . Therefore,


the space H of candidate hypotheses considered in perceptron learning is the set of all
possible real-valued weight vectors


[Figure: perceptron unit, showing the summation function followed by the activation function]



Representational Power of Perceptrons
• The perceptron can be viewed as representing a hyperplane decision surface in the n-dimensional space of instances.

• The perceptron outputs a 1 for instances lying on one side of the hyperplane and outputs a -1 for instances lying on the other side.
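In symbols (not shown on the slide, but a standard way to state it): the decision surface is the hyperplane $\vec{w}\cdot\vec{x} = 0$; the perceptron outputs $+1$ for instances with $\vec{w}\cdot\vec{x} > 0$ and $-1$ for the rest.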

A single perceptron can be used to represent many Boolean functions.
AND function

• If A=0 & B=0 → 0*0.6 + 0*0.6 = 0.


This is not greater than the threshold of 1, so the output = 0.
• If A=0 & B=1 → 0*0.6 + 1*0.6 = 0.6.
This is not greater than the threshold, so the output = 0.
• If A=1 & B=0 → 1*0.6 + 0*0.6 = 0.6. AND and OR can be viewed
This is not greater than the threshold, so the output = 0. as a special case of m-of-n
• If A=1 & B=1 → 1*0.6 + 1*0.6 = 1.2. functions: where at least m of
This exceeds the threshold, so the output = 1. the n inputs must be true. For
OR m=1 and for AND m=n
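A quick check of this AND unit in Python, using the weights (0.6, 0.6) and threshold 1 from the example above (firing only when the weighted sum strictly exceeds the threshold, as in the slide):

```python
def perceptron(inputs, weights, threshold):
    # Fire (output 1) only when the weighted sum exceeds the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (0.6, 0.6), threshold=1))
# Prints 1 only for a=1, b=1, i.e. the Boolean AND function
```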

The Perceptron Training Rule

The learning problem is to determine a weight vector that causes the perceptron to produce
the correct + 1 or - 1 output for each of the given training examples.

To learn an acceptable weight vector

• Begin with random weights, then iteratively apply the perceptron to each training
example, modifying the perceptron weights whenever it misclassifies an example.

• This process is repeated, iterating through the training examples as many times as needed
until the perceptron classifies all training examples correctly.

• Weights are modified at each step according to the perceptron training rule, which revises the weight wi associated with input xi as shown below.
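The rule referred to above (the equation image is not reproduced here; this is the standard form, with t the target output, o the perceptron output, and η the learning rate):

$$w_i \leftarrow w_i + \Delta w_i, \qquad \Delta w_i = \eta\,(t - o)\,x_i$$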

• The role of the learning rate is to moderate the degree to which weights are changed at each
step. It is usually set to some small value (e.g., 0.1) and is sometimes made to decay as the
number of weight-tuning iterations increases

Drawback: The perceptron rule finds a successful weight vector when the training examples are linearly separable, but it can fail to converge if the examples are not linearly separable. A second training rule, called the delta rule, is designed to overcome this difficulty.

Gradient Descent and the Delta Rule

• If the training examples are not linearly separable, the delta rule converges toward a best-fit
approximation to the target concept.

• The key idea behind the delta rule is to use gradient descent to search the hypothesis space
of possible weight vectors to find the weights that best fit the training examples.

To understand the delta training rule, consider the task of training an unthresholded
perceptron. That is, a linear unit for which the output O is given by
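The linear unit's output, written out (the equation image is not reproduced here; standard form):

$$o(\vec{x}) = \vec{w}\cdot\vec{x}$$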


To derive a weight learning rule for linear units, specify a measure for the training error of a
hypothesis (weight vector), relative to the training examples.
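The measure described in the bullets below (the equation image is not reproduced here; standard sum-of-squared-errors form):

$$E(\vec{w}) \equiv \frac{1}{2}\sum_{d \in D}\left(t_d - o_d\right)^2$$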

Where,
• D is the set of training examples,

• td is the target output for training example d,

• od is the output of the linear unit for training example d

• E(w) is simply half the squared difference between the target output td and the linear unit output od, summed over all training examples.

Visualizing the Hypothesis Space


• To understand the gradient descent algorithm, it is helpful to visualize the entire hypothesis space of possible weight vectors and their associated E values, as shown in the figure.

• Here the axes w0 and w1 represent possible values for the two weights of a simple linear unit. The w0, w1 plane therefore represents the entire hypothesis space.

• The vertical axis indicates the error E relative to some fixed set of training examples.

• The arrow shows the negated gradient at one particular point, indicating the direction in the w0, w1 plane producing steepest descent along the error surface.

• The error surface shown in the figure thus summarizes the desirability of every weight vector in the hypothesis space.

• Given the way in which we chose to define E, for linear units this error surface must always be parabolic with a single global minimum.

• Gradient descent search determines a weight vector that minimizes E by starting with an arbitrary initial weight vector, then repeatedly modifying it in small steps.

• At each step, the weight vector is altered in the direction that produces the steepest descent along the error surface depicted in the figure. This process continues until the global minimum error is reached.
Derivation of the Gradient Descent Rule
How to calculate the direction of steepest descent along the error surface?

The direction of steepest descent can be found by computing the derivative of E with respect to each component
of the vector w . This vector derivative is called the gradient of E with respect to w , written as
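Written out (the equation image is not reproduced here; standard form):

$$\nabla E(\vec{w}) \equiv \left[\frac{\partial E}{\partial w_0}, \frac{\partial E}{\partial w_1}, \cdots, \frac{\partial E}{\partial w_n}\right]$$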


• Since the gradient specifies the direction of steepest increase of E, the training rule for gradient descent moves the weight vector in the opposite direction.
• Here η is a positive constant called the learning rate, which determines the step size in the gradient descent search.
• The negative sign is present because we want to move the weight vector in the direction that decreases E.
• This training rule can also be written in its component form.
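The update rules referred to above (equation images not reproduced here; standard forms), first in vector form and then in the component form just mentioned:

$$\vec{w} \leftarrow \vec{w} + \Delta\vec{w}, \qquad \Delta\vec{w} = -\eta\,\nabla E(\vec{w})$$

$$w_i \leftarrow w_i + \Delta w_i, \qquad \Delta w_i = -\eta\,\frac{\partial E}{\partial w_i}$$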

The gradient is calculated at each step. The vector of derivatives ∂E/∂wi that form the gradient can be obtained by differentiating E from Equation (2), as shown below.
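Carrying out that differentiation gives (the derivation images are not reproduced here; standard result, with $x_{id}$ denoting the ith input component of training example d):

$$\frac{\partial E}{\partial w_i} = \sum_{d \in D}\left(t_d - o_d\right)\left(-x_{id}\right)$$

so the gradient descent weight update, Equation (7), becomes

$$\Delta w_i = \eta \sum_{d \in D}\left(t_d - o_d\right)x_{id}$$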

GRADIENT DESCENT algorithm for training a linear unit


To summarize, the gradient descent algorithm for training linear units is as follows:
• Pick an initial random weight vector.
• Apply the linear unit to all training examples, then compute Δwi for each weight according to Equation (7).
• Update each weight wi by adding Δwi, then repeat this process.
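A minimal NumPy sketch of this loop (an illustration, not the slides' pseudocode), assuming training data given as arrays X (one example per row) and t (targets); the learning rate and iteration count are arbitrary choices:

```python
import numpy as np

def train_linear_unit(X, t, eta=0.05, epochs=100, rng=None):
    """Batch gradient descent for a linear unit o = w . x (bias folded in as x0 = 1)."""
    rng = rng or np.random.default_rng(0)
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend x0 = 1 for the bias weight w0
    w = rng.uniform(-0.05, 0.05, X.shape[1])       # small random initial weights
    for _ in range(epochs):
        o = X @ w                                  # outputs for all training examples
        delta_w = eta * X.T @ (t - o)              # Equation (7): eta * sum_d (t_d - o_d) x_id
        w += delta_w                               # update every weight at once
    return w

# Example: fit o = 2*x + 1 from a few noisy points
X = np.array([[0.0], [1.0], [2.0], [3.0]])
t = np.array([1.0, 3.1, 4.9, 7.2])
print(train_linear_unit(X, t))
```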


Features of the Gradient Descent Algorithm

Gradient descent is an important general paradigm for learning. It is a strategy for searching through a large or infinite hypothesis space that can be applied whenever
1. The hypothesis space contains continuously parameterized hypotheses, and
2. The error can be differentiated with respect to these hypothesis parameters.

The key practical difficulties in applying gradient descent are
1. Converging to a local minimum can sometimes be quite slow, and
2. If there are multiple local minima in the error surface, then there is no guarantee that the procedure will find the global minimum.

Stochastic Approximation to Gradient Descent

• The gradient descent training rule presented in Equation (7) computes weight updates
after summing over all the training examples in D

• The idea behind stochastic gradient descent is to approximate this gradient descent
search by updating weights incrementally, following the calculation of the error for each
individual example
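The incremental update referred to here is the delta rule (the equation image is not reproduced here; standard form):

$$\Delta w_i = \eta\,(t - o)\,x_i$$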

where t, o, and xi are the target value, unit output, and ith input for the training example in
question


One way to view this stochastic gradient descent is to consider a distinct error function
Ed ( w ) for each individual training example d as follows
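That per-example error (the equation image is not reproduced here; standard form):

$$E_d(\vec{w}) \equiv \frac{1}{2}\left(t_d - o_d\right)^2$$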

Where, td and od are the target value and the unit output value for training example d.

• Stochastic gradient descent iterates over the training examples d in D, at each iteration altering the weights according to the gradient with respect to Ed(w).

• The sequence of these weight updates, when iterated over all training examples, provides a reasonable approximation to descending the gradient with respect to our original error function E(w).

• By making the value of η sufficiently small, stochastic gradient descent can be made to approximate true gradient descent arbitrarily closely.

The key differences between standard gradient descent and stochastic gradient
descent are
• In standard gradient descent, the error is summed over all examples before
updating weights, whereas in stochastic gradient descent weights are updated
upon examining each training example.
• Summing over multiple examples in standard gradient descent requires more
computation per weight update step. On the other hand, because it uses the true
gradient, standard gradient descent is often used with a larger step size per weight
update than stochastic gradient descent.
• In cases where there are multiple local minima with respect to E(w), stochastic gradient descent can sometimes avoid falling into these local minima, because it uses the various ∇Ed(w) rather than ∇E(w) to guide its search.
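To contrast with the batch sketch given earlier, here is a stochastic (per-example) version of the same linear-unit training loop; this is an illustration under the same assumed data layout, not code from the slides:

```python
import numpy as np

def train_linear_unit_sgd(X, t, eta=0.05, epochs=100, rng=None):
    """Stochastic gradient descent (delta rule): update the weights after each example."""
    rng = rng or np.random.default_rng(0)
    X = np.hstack([np.ones((X.shape[0], 1)), X])    # bias input x0 = 1
    w = rng.uniform(-0.05, 0.05, X.shape[1])
    for _ in range(epochs):
        for x_d, t_d in zip(X, t):                   # one training example at a time
            o_d = w @ x_d
            w += eta * (t_d - o_d) * x_d             # delta rule: eta * (t - o) * x_i
    return w
```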



MULTILAYER NETWORKS AND THE BACKPROPAGATION ALGORITHM

Multilayer networks learned by the BACKPROPAGATION algorithm are capable of expressing a rich variety of nonlinear decision surfaces.


• Decision regions of a multilayer feedforward network. The network shown here was trained to recognize 1 of 10 vowel
sounds occurring in the context "h_d" (e.g., "had," "hid"). The network input consists of two parameters, F1 and F2,
obtained from a spectral analysis of the sound. The 10 network outputs correspond to the 10 possible vowel sounds. The
network prediction is the output whose value is highest.
• The plot on the right illustrates the highly nonlinear decision surface represented by the learned network. Points shown
on the plot are test examples distinct from the examples used to train the network.
A Differentiable Threshold Unit
• A sigmoid unit is much like a perceptron, but is based on a smoothed, differentiable threshold function.
• The sigmoid unit first computes a linear combination of its inputs, then applies a
threshold to the result. In the case of the sigmoid unit, however, the threshold
output is a continuous function of its input.
• More precisely, the sigmoid unit computes its output O as
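Written out (equation images not reproduced here; standard forms):

$$o = \sigma(\vec{w}\cdot\vec{x}), \qquad \sigma(y) = \frac{1}{1 + e^{-y}}$$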

σ is the sigmoid function


The value of the sigmoid function ranges between 0 and 1. Because it maps a very large input domain to a small range of outputs, it is often referred to as the squashing function of the unit.

The sigmoid function's derivative is easily expressed in terms of its output.
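Specifically (a standard identity, not shown as text on the slide):

$$\frac{d\sigma(y)}{dy} = \sigma(y)\,\bigl(1 - \sigma(y)\bigr)$$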
The BACKPROPAGATION Algorithm

• The BACKPROPAGATION Algorithm learns the weights for a multilayer network, given a network
with a fixed set of units and interconnections. It employs gradient descent to attempt to minimize the
squared error between the network output values and the target values for these outputs.
• In the BACKPROPAGATION algorithm, we consider networks with multiple output units rather than single units as before, so we redefine E to sum the errors over all of the network output units.
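The redefined error (the equation image is not reproduced here; standard form):

$$E(\vec{w}) \equiv \frac{1}{2}\sum_{d \in D}\sum_{k \in outputs}\left(t_{kd} - o_{kd}\right)^2$$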

where,
• outputs - the set of output units in the network
• tkd and okd - the target and output values associated with the kth output unit
• d - the training example


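The algorithm itself appears on the preceding slides as figures that are not reproduced here; as a rough illustration only (not the slides' exact pseudocode), a NumPy sketch of one stochastic gradient descent pass of BACKPROPAGATION for a network with a single hidden layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_epoch(X, T, W_hidden, W_out, eta=0.05):
    """One stochastic-gradient BACKPROPAGATION pass over the training set.

    X: examples (one per row, bias x0 = 1 already included)
    T: target vectors, one row per example
    W_hidden, W_out: weight matrices for the hidden and output layers
    """
    for x, t in zip(X, T):
        # Forward pass: propagate the input through the network
        h = sigmoid(W_hidden @ x)          # hidden-unit outputs
        o = sigmoid(W_out @ h)             # output-unit outputs

        # Backward pass: error terms for output and hidden units
        delta_out = o * (1 - o) * (t - o)                     # delta_k for output units
        delta_hidden = h * (1 - h) * (W_out.T @ delta_out)    # delta_j for hidden units

        # Weight updates: Delta w_ji = eta * delta_j * x_ji
        W_out += eta * np.outer(delta_out, h)
        W_hidden += eta * np.outer(delta_hidden, x)
    return W_hidden, W_out
```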
Derivation of the BACKPROPAGATION Rule
• Deriving the stochastic gradient descent rule: Stochastic gradient descent involves
iterating through the training examples one at a time, for each training example d
descending the gradient of the error Ed with respect to this single example
• For each training example d, every weight wji is updated by adding to it Δwji:
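In symbols (equation images not reproduced here; standard forms):

$$\Delta w_{ji} = -\eta\,\frac{\partial E_d}{\partial w_{ji}}, \qquad E_d(\vec{w}) \equiv \frac{1}{2}\sum_{k \in outputs}\left(t_k - o_k\right)^2$$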

Here outputs is the set of output units in the network, tk is the target value of unit k for
training example d, and ok is the output of unit k given training example d.


The derivation of the stochastic gradient descent rule is conceptually straightforward, but requires keeping track of a number of subscripts and variables:

xji = the ith input to unit j
wji = the weight associated with the ith input to unit j
netj = Σi wji xji (the weighted sum of inputs for unit j)
oj = the output computed by unit j
tj = the target output for unit j
σ = the sigmoid function
outputs = the set of units in the final layer of the network
Downstream(j) = the set of units whose immediate inputs include the output of unit j

Consider two cases in turn: the case where unit j is an output unit for the network, and the case where j is an
internal unit (hidden unit).
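The derivation begins from the chain-rule decomposition (the equation image is not reproduced here; standard form), since wji can influence the rest of the network only through netj:

$$\frac{\partial E_d}{\partial w_{ji}} = \frac{\partial E_d}{\partial net_j}\,\frac{\partial net_j}{\partial w_{ji}} = \frac{\partial E_d}{\partial net_j}\,x_{ji}$$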
Case 1: Training Rule for Output Unit Weights.
• wji can influence the rest of the network only through netj, and netj can influence the network only through oj. Therefore, we can invoke the chain rule again to write
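The result of applying the chain rule (derivation images not reproduced here; standard results), writing $\delta_j$ for $-\partial E_d / \partial net_j$:

$$\frac{\partial E_d}{\partial net_j} = \frac{\partial E_d}{\partial o_j}\,\frac{\partial o_j}{\partial net_j} = -\left(t_j - o_j\right)\,o_j\left(1 - o_j\right)$$

so the weight update for an output unit is

$$\Delta w_{ji} = \eta\,\left(t_j - o_j\right)o_j\left(1 - o_j\right)x_{ji} = \eta\,\delta_j\,x_{ji}$$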


Case 2: Training Rule for Hidden Unit Weights.

• In the case where j is an internal, or hidden, unit in the network, the derivation of the training rule for wji must take into account the indirect ways in which wji can influence the network outputs and hence Ed.
• For this reason, we will find it useful to refer to the set of all units immediately downstream of unit j in the network, and we denote this set of units by Downstream(j).
• netj can influence the network outputs only through the units in Downstream(j). Therefore, we can write
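Carrying this through (derivation images not reproduced here; standard results), the error term and weight update for a hidden unit are:

$$\delta_j = o_j\left(1 - o_j\right)\sum_{k \in Downstream(j)} \delta_k\,w_{kj}, \qquad \Delta w_{ji} = \eta\,\delta_j\,x_{ji}$$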


Forward Pass

Calculating Total Error

Backward Pass

[These slides work through a numeric example of the forward pass, the total-error calculation, and the backward pass; the figures are not reproduced here.]
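Since those worked-example figures are not reproduced, here is an illustrative numeric sketch of the same three steps for a tiny 2-2-2 network; all input, target, and weight values below are arbitrary placeholders, not the numbers used on the slides:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Arbitrary illustrative values (NOT the numbers from the slides' worked example)
x = np.array([1.0, 0.5])                        # two inputs
t = np.array([0.0, 1.0])                        # two target outputs
W_h = np.array([[0.1, 0.2], [0.3, 0.4]])        # hidden-layer weights
W_o = np.array([[0.5, 0.6], [0.7, 0.8]])        # output-layer weights
eta = 0.5                                       # learning rate

# Forward pass: compute hidden and output activations
h = sigmoid(W_h @ x)
o = sigmoid(W_o @ h)

# Total error for this example: E_d = 1/2 * sum_k (t_k - o_k)^2
E = 0.5 * np.sum((t - o) ** 2)

# Backward pass: error terms, then weight updates
delta_o = o * (1 - o) * (t - o)
delta_h = h * (1 - h) * (W_o.T @ delta_o)
W_o += eta * np.outer(delta_o, h)
W_h += eta * np.outer(delta_h, x)

print("outputs:", o, "error:", E)
```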

THANK YOU
1. Define ANN. Explain biological learning systems. 5M
2. List the appropriate problems for neural network learning. 5M
3. Explain the representation of a neural network. 5M
4. Discuss the application of a neural network used to steer an autonomous vehicle. 6M
5. Define perceptron. Explain the representational power of perceptrons. 5M
6. Define perceptron and discuss its training rule. 5M
7. Discuss the perceptron training rule and the delta rule that solve the learning problem of the perceptron. 10M
8. Explain the gradient descent algorithm. Why is a stochastic approximation to gradient descent needed? 10M
9. Draw the perceptron network with its notation. Derive the gradient descent rule to minimize the error. 8M
10. What is a squashing function? Why is it needed? 4M
11. Describe the multilayer neural network. Explain why the backpropagation algorithm is required. 6M
12. Write a remark on the representation of a feedforward network. 4M
13. Describe the characteristics of the backpropagation algorithm. 6M
14. Describe the derivation of the backpropagation rule. 5M
15. Design a perceptron that implements the AND function. Why can a single-layer perceptron not be used to represent the XOR function?
16. Show the derivation of the backpropagation training rule for output unit weights. 10M
17. Write the backpropagation algorithm which uses the stochastic gradient descent method. Comment on the effect of adding momentum to the network. 10M
