
Artificial Neural Networks and Representation of Neural Networks

INTRODUCTION TO ARTIFICIAL NEURAL NETWORKS

• Human brain: billions of interconnected neurons

• Biological networks: neurons

• Formal (artificial) networks: nodes

• Functions performed naturally by neurons are performed artificially by nodes


[Diagram: inputs feed a node that computes a weighted sum (Σ) and applies an activation function (f) to produce the output]
A neural network is represented with three layers:

1. Input Layer
2. Hidden Layer
3. Output Layer

[Diagram: input layer connected to a hidden layer connected to an output layer]
A node has two parts:

1. SUMMATION
2. ACTIVATION FUNCTION

SUMMATION: calculates the weighted sum of all the inputs:

    x1w1 + x2w2 + … + xnwn = Σ xiwi

Once the weighted sum is calculated, it is sent to the activation function.

ACTIVATION FUNCTION: generates the output based on the given input.
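The two parts of a node described above can be sketched in a few lines of Python (function names and the step-activation threshold are illustrative choices, not from the slides):

```python
def summation(inputs, weights):
    """Weighted sum of all inputs: x1*w1 + x2*w2 + ... + xn*wn."""
    return sum(x * w for x, w in zip(inputs, weights))

def activation(weighted_sum, threshold=0.0):
    """Step activation: output 1 if the weighted sum exceeds the threshold."""
    return 1 if weighted_sum > threshold else 0

def node_output(inputs, weights, threshold=0.0):
    """A node: summation followed by the activation function."""
    return activation(summation(inputs, weights), threshold)
```

For example, with weights (0.7, 0.6) and threshold 1 — the values used later in the AND-gate walkthrough — `node_output([1, 1], [0.7, 0.6], 1.0)` fires and `node_output([1, 0], [0.7, 0.6], 1.0)` does not.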
Appropriate problems for learning neural networks

1. Instances are represented by many attribute-value pairs.
2. The target function may have discrete values, continuous values, or a combination of both.
3. The training examples may contain errors or missing values.
4. Long training times are acceptable.
5. Fast evaluation of the learnt target function may be required.
6. The ability of humans to understand the target function learnt by the machine is not important.
PERCEPTRON

• The basic unit used to build an ANN.

• Takes real-valued inputs, calculates a linear combination of these inputs, and generates an output:

    Output = 1 if result > threshold
            -1 otherwise

Folding the threshold into a bias weight w0, this is written as:

    O(x1, x2, …, xn) = 1 if w0 + w1x1 + w2x2 + … + wnxn > 0
                      -1 otherwise
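The perceptron decision function above can be sketched directly, treating `w[0]` as the bias weight w0 (the bias value −1.0 in the usage example is an illustrative assumption, not from the slides):

```python
def perceptron_output(x, w):
    """Perceptron output: w[0] is the bias w0, w[1:] pair with inputs x1..xn.
    Returns 1 if w0 + w1*x1 + ... + wn*xn > 0, else -1."""
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s > 0 else -1
```

With weights `[-1.0, 0.7, 0.6]` (bias −1.0 playing the role of a threshold of 1), the perceptron outputs 1 only when both inputs are 1.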

O = actual output

t = target output

If the actual output equals the target output, the weights are left unchanged.

If the actual output does not equal the target output, the weights are changed using the update formula.
How to change the weights?

Start with random weights. Then keep applying iterations, checking after each whether O = t:

    Wi ← Wi + ΔWi

    New(Wi) ← Old(Wi) + ΔWi

    ΔWi = η (t − o) xi

η = learning rate
t = target output
o = actual output
xi = input value
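The update rule ΔWi = η(t − o)xi can be sketched as a one-step function (the default η = 0.5 is an assumption chosen to match the AND-gate numbers that follow):

```python
def update_weights(weights, x, t, o, eta=0.5):
    """Apply the perceptron rule once: Wi <- Wi + eta*(t - o)*xi.
    Inputs with xi = 0 leave their weight unchanged."""
    return [w + eta * (t - o) * xi for w, xi in zip(weights, x)]
```

For instance, with weights (1.2, 0.6), input (1, 0), target 0, and actual output 1, the update gives (0.7, 0.6): only W1 moves, because x2 = 0.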
PERCEPTRON RULE ALGORITHM

AND Gate - Perceptron Rule

Starting weights: W1 = 1.2, W2 = 0.6; threshold = 1.

1. A = 0, B = 0 and Target = 0

ΣWi·Xi = 0*1.2 + 0*0.6 = 0

This is not greater than the threshold of 1, so the Output = 0.
2. A = 0, B = 1 and Target = 0

ΣWi·Xi = 0*1.2 + 1*0.6 = 0.6

This is not greater than the threshold of 1, so the Output = 0.


3. A = 1, B = 0 and Target = 0

ΣWi·Xi = 1*1.2 + 0*0.6 = 1.2

This is greater than the threshold of 1, so the Output = 1.

The actual output (1) is not equal to the target output (0), so the weights are updated:

    New(Wi) = Old(Wi) + η(t − o)Xi

With η = 0.5: New(W1) = 1.2 + 0.5(0 − 1)(1) = 0.7, while W2 is unchanged at 0.6 because X2 = 0.

New weights: W1 = 0.7, W2 = 0.6. The four examples are then checked again:
1. A = 0, B = 0 and Target = 0

ΣWi·Xi = 0*0.7 + 0*0.6 = 0

This is not greater than the threshold of 1, so the Output = 0.

2. A = 0, B = 1 and Target = 0

ΣWi·Xi = 0*0.7 + 1*0.6 = 0.6

This is not greater than the threshold of 1, so the Output = 0.

3. A = 1, B = 0 and Target = 0

ΣWi·Xi = 1*0.7 + 0*0.6 = 0.7

This is not greater than the threshold of 1, so the Output = 0.

4. A = 1, B = 1 and Target = 1

ΣWi·Xi = 1*0.7 + 1*0.6 = 1.3

This is greater than the threshold of 1, so the Output = 1.

All actual outputs now match the targets, so training is complete.
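The AND-gate walkthrough above can be reproduced as a small training loop, a sketch assuming the walkthrough's conventions: 0/1 outputs, threshold 1, starting weights (1.2, 0.6), and η = 0.5 (the value implied by the update 1.2 → 0.7):

```python
# Training data for the AND gate: ((A, B), target).
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(data, weights, threshold=1.0, eta=0.5, max_epochs=20):
    """Apply the perceptron rule over the data until no example is misclassified."""
    weights = list(weights)
    for _ in range(max_epochs):
        errors = 0
        for x, t in data:
            # Summation followed by the step activation against the threshold.
            o = 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0
            if o != t:
                # Perceptron rule: Wi <- Wi + eta*(t - o)*xi.
                errors += 1
                weights = [w + eta * (t - o) * xi for w, xi in zip(weights, x)]
        if errors == 0:
            break
    return weights
```

Starting from (1.2, 0.6), the loop makes exactly the one correction shown in step 3 above and converges to (0.7, 0.6).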


Logical OR Gate - Perceptron Training Rule
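The same training rule learns the OR gate; only the targets change. A sketch, where the starting weights (0.6, 0.6) are an illustrative assumption rather than values from the slides:

```python
# Training data for the OR gate: ((A, B), target).
OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def train_or(data, weights, threshold=1.0, eta=0.5, max_epochs=50):
    """Perceptron rule, iterated until every example is classified correctly."""
    weights = list(weights)
    for _ in range(max_epochs):
        changed = False
        for x, t in data:
            o = 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0
            if o != t:
                changed = True
                weights = [w + eta * (t - o) * xi for w, xi in zip(weights, x)]
        if not changed:
            break
    return weights
```

Starting from (0.6, 0.6), the misclassified examples (0, 1) and (1, 0) each pull their weight up by η = 0.5, and the resulting weights classify all four OR examples correctly.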
