Ch1: Fundamentals of Neural Networks
Example of McCulloch-Pitts
• XOR gate: XOR is not linearly separable, so a single McCulloch-Pitts neuron cannot realize it. It is decomposed into two intermediate neurons, z1 = x1 AND NOT x2 and z2 = NOT x1 AND x2, with y_in = z1 + z2.

XOR truth table:
x1  x2  Output
0   0   0
0   1   1
1   0   1
1   1   0
Example of McCulloch-Pitts
• XOR gate: first intermediate neuron, z1 = x1 AND NOT x2

Truth table for z1:
x1  x2  Output
0   0   0
0   1   0
1   0   1
1   1   0

Let w1 = 1, w2 = 1:
For input (0,0): y_in = x1·w1 + x2·w2 = (0×1) + (0×1) = 0
For input (0,1): y_in = (0×1) + (1×1) = 1
For input (1,0): y_in = (1×1) + (0×1) = 1
For input (1,1): y_in = (1×1) + (1×1) = 2
Inputs (0,1) and (1,0) give the same net value but require different outputs, so no threshold satisfies the truth table with these weights.

Let w1 = 1, w2 = -1:
For input (0,0): y_in = (0×1) + (0×(-1)) = 0
For input (0,1): y_in = (0×1) + (1×(-1)) = -1
For input (1,0): y_in = (1×1) + (0×(-1)) = 1
For input (1,1): y_in = (1×1) + (1×(-1)) = 0
With threshold = 1, the neuron satisfies the truth table.
Example of McCulloch-Pitts
• XOR gate: second intermediate neuron, z2 = NOT x1 AND x2

Truth table for z2:
x1  x2  Output
0   0   0
0   1   1
1   0   0
1   1   0

Let w1 = 1, w2 = 1:
For input (0,0): y_in = x1·w1 + x2·w2 = (0×1) + (0×1) = 0
For input (0,1): y_in = (0×1) + (1×1) = 1
For input (1,0): y_in = (1×1) + (0×1) = 1
For input (1,1): y_in = (1×1) + (1×1) = 2
Again, inputs (0,1) and (1,0) give the same net value but require different outputs, so no threshold satisfies the truth table with these weights.

Let w1 = -1, w2 = 1:
For input (0,0): y_in = (0×(-1)) + (0×1) = 0
For input (0,1): y_in = (0×(-1)) + (1×1) = 1
For input (1,0): y_in = (1×(-1)) + (0×1) = -1
For input (1,1): y_in = (1×(-1)) + (1×1) = 0
With threshold = 1, the neuron satisfies the truth table.
Example of McCulloch-Pitts
XOR gate model
[Figure: two-layer McCulloch-Pitts network. Inputs x1 and x2 feed z1 with weights w1 = 1, w2 = -1, and feed z2 with weights w1 = -1, w2 = 1; z1 and z2 fire when their net input ≥ 1. Both z1 and z2 feed y with weight 1, and y fires when z1 + z2 ≥ 1, giving y = XOR(x1, x2).]
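The two-stage model above can be verified directly in code. Below is a minimal Python sketch (function names are illustrative, not from the slides) of McCulloch-Pitts neurons using the weights derived above and a firing threshold of 1:

def mp_neuron(inputs, weights, threshold=1):
    # McCulloch-Pitts neuron: fires (1) when the weighted net input reaches the threshold.
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

def xor(x1, x2):
    z1 = mp_neuron([x1, x2], [1, -1])    # z1 = x1 AND NOT x2
    z2 = mp_neuron([x1, x2], [-1, 1])    # z2 = NOT x1 AND x2
    return mp_neuron([z1, z2], [1, 1])   # y fires when z1 + z2 >= 1

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor(x1, x2))       # reproduces the XOR truth table: 0, 1, 1, 0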
What is a Perceptron?
[Figure: a neural network with inputs x1 … xn fully connected to output neurons Y1 … Ym through weights w11, w12, w13, …, wm1, wm2, wm3. Each output computes Yj = f(net_j), where net_j is the weighted sum of the inputs.]
Feed Forward Network
• Single-layer Perceptron: only input and output layers are present. The network consists of a single layer of weights, with the inputs connected directly to the outputs.
• The sum of the products of the inputs and the weight matrix is calculated. If the value is above the threshold, the output is 1; otherwise it is -1 (a sketch follows below).
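A minimal sketch of this computation, assuming NumPy; the weight values are illustrative, not from the slides:

import numpy as np

def perceptron_output(x, W, threshold=0.0):
    # Sum of products of inputs and weight matrix, one row per output neuron.
    net = W @ x
    # Above threshold -> 1, otherwise -> -1.
    return np.where(net > threshold, 1, -1)

W = np.array([[0.5, -0.2],
              [0.1,  0.4]])        # 2 output neurons x 2 inputs (illustrative)
x = np.array([1.0, 1.0])
print(perceptron_output(x, W))     # [1 1]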
Feed Forward Network
• Multi-layer Perceptron Network: one or more hidden layers sit between the input and output layers.
[Figure: inputs x1, x2, x3 connected through hidden layer 1 and hidden layer 2 to outputs Y1, Y2, …, Ym via weights w11, w12, …, wm1, wm2.]
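A sketch of a multi-layer perceptron forward pass under illustrative assumptions (random weights, sigmoid activations; none of these values come from the slide):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 3-4-4-2 network: two hidden layers, as in the figure.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden layer 1
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)   # hidden layer 1 -> hidden layer 2
W3, b3 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden layer 2 -> output

x = np.array([1.0, 0.5, -0.3])
h1 = sigmoid(W1 @ x + b1)
h2 = sigmoid(W2 @ h1 + b2)
y = sigmoid(W3 @ h2 + b3)
print(y)                                        # output activations Y1, Y2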
Feed Forward Network
• Radial Basis Function network: a single hidden layer is present.
[Figure: inputs x1, x2, x3 connected through one hidden layer to outputs Y1, Y2, …, Ym via weights w11, …, wm1, wm2.]
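A minimal sketch of an RBF forward pass, assuming Gaussian basis functions in the hidden layer; the centers, widths, and weights are illustrative:

import numpy as np

def rbf_forward(x, centers, widths, W):
    # Hidden layer: Gaussian response of each unit to the distance from its center.
    h = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
    # Output layer: linear combination of hidden responses.
    return W @ h

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # hidden-unit centers (illustrative)
widths = np.array([1.0, 1.0])
W = np.array([[0.7, -0.3]])                    # 1 output x 2 hidden units
print(rbf_forward(np.array([0.5, 0.5]), centers, widths, W))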
Feedback/Recurrent Network
• Feedback/Recurrent Network: the output is fed back to the input.
[Figure: inputs x1, x2, x3 with weights w11, …, wm2 and biases into hidden layer 1, producing output Y1, with a feedback connection from the output back to the input.]
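A sketch of one feedback step, assuming a simple recurrent (Elman-style) update in which the previous state is fed back alongside the new input; all names and weights are illustrative:

import numpy as np

def rnn_step(x, h_prev, W_in, W_fb, b):
    # The fed-back previous state h_prev enters the sum together with the new input.
    return np.tanh(W_in @ x + W_fb @ h_prev + b)

rng = np.random.default_rng(1)
W_in, W_fb, b = rng.normal(size=(2, 3)), rng.normal(size=(2, 2)), np.zeros(2)
h = np.zeros(2)                       # feedback state starts at zero
for x in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    h = rnn_step(x, h, W_in, W_fb, b)
    print(h)                          # output depends on the input and the fed-back state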
Feedback/Recurrent Network
• Competitive networks: neurons of the output layer compete among themselves, and the one with the maximum output wins.
• Self-organizing map: each input pattern activates the closest output neuron.
• Hopfield network: each neuron is connected to every other neuron, but not back to itself. It is generally used for auto-association and optimization tasks such as pattern identification (a sketch follows below).
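A minimal sketch of Hopfield auto-association, assuming bipolar states, Hebbian storage of a single pattern, and synchronous updates:

import numpy as np

# Store one bipolar pattern with the Hebbian rule; zero diagonal means
# each neuron connects to every other neuron but not back to itself.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Recall from a corrupted version by repeated updates.
state = np.array([1, 1, 1, -1])      # one bit flipped
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)
print(state)                         # converges back to the stored pattern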
Feedback/Recurrent Network
[Figures: topologies of a competitive network, a self-organizing map, and a Hopfield network.]
Types of Learning
• Supervised learning: the input is applied together with the desired output provided by a supervisor. The difference between the actual response and the desired response is the error, which is used to correct the network parameters.
• Unsupervised learning: no supervisor is present, so there is no target output. The network adjusts its weights based on patterns in the input data.
• Hybrid learning: a combination of supervised and unsupervised learning.
• Competitive learning: neurons of the output layer compete among themselves for the maximum output. The neuron with the maximum response is declared the winner; only the winner's weights are modified, and the rest remain unchanged (see the sketch below).
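A minimal sketch of the winner-take-all update just described; the learning rate and weight values are illustrative assumptions:

import numpy as np

def competitive_update(W, x, lr=0.1):
    winner = np.argmax(W @ x)           # neuron with the maximum response wins
    W[winner] += lr * (x - W[winner])   # only the winner's weights move toward the input
    return winner

rng = np.random.default_rng(2)
W = rng.random((2, 3))                  # 2 output neurons, 3 inputs (illustrative)
x = np.array([1.0, 0.0, 0.0])
print(competitive_update(W, x), W)      # losing neuron's row is unchanged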
• Example: a system has 6 inputs and 2 outputs.
How many neurons are required?
• Answer: 8 neurons (6 input + 2 output).
• What is the size of the weight matrix?
• Size = 2×6 (2 outputs × 6 inputs).
• Which output function should be used?
• Unipolar.
• Compute the output of the following network using the unipolar (sigmoid) activation function.

[Figure: 2-2-1 network with inputs x1 = 0 and x2 = 1. Hidden neuron H1 has weights 4.83 (from x1) and -4.83 (from x2) and bias -2.82; hidden neuron H2 has weights -4.6 (from x1) and 4.6 (from x2) and bias -2.74. Output neuron O has weights 5.73 (from H1) and 5.83 (from H2) and bias -2.86.]

H1(net) = (4.83×0) + (-4.83×1) - 2.82 = -7.65
H1(out) = 1/(1+e^(-H1(net))) ≈ 4.76×10^-4
H2(net) = (-4.6×0) + (4.6×1) - 2.74 = 1.86
H2(out) = 1/(1+e^(-1.86)) ≈ 0.865
O(net) = (4.76×10^-4 × 5.73) + (0.865 × 5.83) - 2.86 ≈ 2.19
O(out) = 1/(1+e^(-O(net))) ≈ 0.899
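The arithmetic above can be checked with a short script; the weights and biases are read off the figure:

import math

def sigmoid(net):                 # unipolar activation
    return 1.0 / (1.0 + math.exp(-net))

x1, x2 = 0.0, 1.0
h1 = sigmoid(4.83 * x1 + (-4.83) * x2 - 2.82)   # net = -7.65, out ~ 4.76e-4
h2 = sigmoid(-4.6 * x1 + 4.6 * x2 - 2.74)       # net =  1.86, out ~ 0.865
o_net = 5.73 * h1 + 5.83 * h2 - 2.86            # ~ 2.19
print(sigmoid(o_net))                           # ~ 0.899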
Components of a Perceptron
Each perceptron comprises the following parts:
• Input Values: a set of values or a dataset used for predicting the output value. They are also described as the features of the dataset.
• Weights: each feature has a real-valued weight, which tells the importance of that feature in predicting the final value.
• Bias: the bias shifts the activation function towards the left or right. You may understand it simply as the y-intercept in the line equation.
• Summation Function: the summation function binds the weights and inputs together and computes their sum.
• Activation Function: it introduces non-linearity into the perceptron model.
Why do we Need Weight and Bias?
• Weights tell the importance of each feature in the weighted sum; the bias shifts the activation function left or right, like the y-intercept of a line.

[Figure: Boltzmann machine with a visible layer and hidden layers.]
• What are Boltzmann Machines?
• A Boltzmann machine is a network of neurons in which all the neurons are connected to each other. There are two layers: a visible (input) layer, denoted v, and a hidden layer, denoted h. There is no output layer. Boltzmann machines are stochastic, generative neural networks capable of learning internal representations, and they are able to represent and (given enough time) solve hard combinatorial problems.
• Restricted Boltzmann Machine
• The term "restricted" means that layers of the same type may not be connected to each other: two neurons of the input layer, or two neurons of the hidden layer, cannot be connected, although the hidden layer and the visible layer can be connected to each other.
• Since the machine has no output layer, the question arises of how to adjust the weights and how to measure whether the prediction is accurate. The Restricted Boltzmann Machine answers these questions: the hidden layer is used to reconstruct the visible layer, and the difference between the input and its reconstruction drives the weight updates.
• The RBM algorithm was proposed by Geoffrey Hinton (2007); it learns a probability distribution over its training inputs. It has seen wide application in different areas of supervised/unsupervised machine learning, such as feature learning, dimensionality reduction, classification, collaborative filtering, and topic modeling.
• Consider the movie-rating example discussed in the recommender-system section. Movies like Avengers, Avatar, and Interstellar have strong associations with a latent fantasy/science-fiction factor. Based on user ratings, the RBM will discover latent factors that can explain the activation of these movie choices. In short, an RBM describes the variability among correlated input variables in terms of a potentially lower number of unobserved variables (see the sketch below).
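A minimal sketch of the visible-to-hidden pass of an RBM, assuming binary units; the weights and the "user ratings" vector are illustrative, not from the text:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
W = rng.normal(scale=0.1, size=(2, 5))   # 2 hidden (latent factors) x 5 visible (movies)
b_h = np.zeros(2)

v = np.array([1, 1, 0, 0, 1])            # a user's binary movie ratings (illustrative)
p_h = sigmoid(W @ v + b_h)               # probability each latent factor turns on
h = (rng.random(2) < p_h).astype(int)    # stochastic binary hidden states
print(p_h, h)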
• Deep Boltzmann Machines (DBMs):
• DBMs are similar to DBNs, except that the connections between layers are also undirected (unlike a DBN, in which the connections between layers are directed). DBMs can extract more complex and sophisticated features and hence can be used for more complex tasks.
• Deep autoencoders: a deep autoencoder is composed of two symmetrical deep-belief networks, each having four to five shallow layers. One network represents the encoding half of the net, and the second makes up the decoding half (a sketch follows below).
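A minimal sketch of the symmetric encoder/decoder structure, assuming sigmoid layers and illustrative layer sizes (the counts here are not from the text):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Symmetric deep autoencoder: decoder layer sizes mirror the encoder's.
sizes = [784, 256, 64, 16, 64, 256, 784]   # bottleneck of 16 units (illustrative)
rng = np.random.default_rng(4)
weights = [rng.normal(scale=0.01, size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.random(784)                         # e.g. a flattened 28x28 image
a = x
for W in weights:
    a = sigmoid(W @ a)                      # encode down to 16 units, then decode back
print(a.shape)                              # (784,): reconstruction of the input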
Thank You