Neural Network


Soft Computing

300 (3)
Dr. Lekshmi R. R.
Asst. Prof.
Department of Electrical & Electronics Engineering
Amrita School of Engineering
Artificial Neural Network
Artificial Neural Networks
• An efficient information-processing system
• Resembles the biological neuron in its characteristics
• Machines that are like brains
• Like the human brain, an ANN is characterized by the ability to
– Learn
– Recall
– Generalize training patterns/data
Structure
• Consists of highly interconnected processing elements called nodes/neurons
– These operate in parallel
• Each neuron is connected to the others by connection links
• Each link is associated with a weight, which contains information about the input signal
• This information is used by the neurons to solve a particular problem
Neural Network
[Figure: biological neuron showing nucleus, soma, dendrites, fine strands, axon, synapse]
Artificial Neuron
[Figure: artificial neuron with inputs $X_1 \dots X_m$ and weights $w_1 \dots w_m$ feeding a summation unit, followed by an activation function producing the output $Y$]

$y_{in} = \sum_{i=1}^{m} x_i w_i, \quad y = f(y_{in})$

$f(x) = \begin{cases} 1 & \text{if } \sum x_i w_i \ge \varepsilon \\ 0 & \text{if } \sum x_i w_i < \varepsilon \end{cases}$

Inputs → Weights → Summation → Activation → Output
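As a minimal Python sketch of the neuron above (the function name and example values are illustrative, not from the slides):

# Weighted sum of inputs followed by a binary threshold activation.
def neuron_output(x, w, eps=0.0):
    # Summation: y_in = sum_i x_i * w_i
    y_in = sum(xi * wi for xi, wi in zip(x, w))
    # Binary activation with threshold eps
    return 1 if y_in >= eps else 0

print(neuron_output([0.8, 0.6], [0.5, -0.2]))  # -> 1, since 0.28 >= 0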


Activation Functions
Identity Function · Binary Function · Ramp · Sigmoid

Identity:  $f(x) = x$

Binary:  $f(x) = \begin{cases} 1 & \text{if } x \ge \theta \\ 0 & \text{if } x < \theta \end{cases}$

Ramp:  $f(x) = \begin{cases} 1 & \text{if } x > 1 \\ x & \text{if } 0 \le x \le 1 \\ 0 & \text{if } x < 0 \end{cases}$
Activation Functions
Binary Sigmoid · Bipolar Sigmoid · ReLU · Softmax · Tanh

Binary sigmoid:  $f(x) = \dfrac{1}{1 + e^{-\lambda x}}$

Bipolar sigmoid:  $f(x) = \dfrac{1 - e^{-\lambda x}}{1 + e^{-\lambda x}}$

ReLU:  $f(x) = \max(0, x)$

• During backpropagation, the derivative of the activation is used.
• The derivative of f(x) lies in [0, 0.25] for the sigmoid and in [0, 1] for tanh. The slope is small for very small or very large x, so the weight change is small and convergence takes a long time [4].
• For ReLU, the derivative is 1 for positive inputs and 0 for negative inputs.
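A minimal Python sketch of these activations and the derivatives mentioned above (function names are illustrative; lam stands for the slope parameter λ):

import math

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def bipolar_sigmoid(x, lam=1.0):
    return (1.0 - math.exp(-lam * x)) / (1.0 + math.exp(-lam * x))

def relu(x):
    return max(0.0, x)

# Derivatives used during backpropagation:
def binary_sigmoid_deriv(x, lam=1.0):
    f = binary_sigmoid(x, lam)
    return lam * f * (1.0 - f)      # peaks at 0.25 for lam = 1

def relu_deriv(x):
    return 1.0 if x > 0 else 0.0    # 1 for positive, 0 for negative inputs

print(binary_sigmoid_deriv(0.0))    # 0.25, the maximum sigmoid slope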
Activation Functions
Rectified Linear Unit (ReLU)

[Figure: example inputs for a four-class problem]

C: number of classes = 4 (0, 1, 2, 3), so the number of output-layer neurons = 4

[Figure: network X1 … X4 → Y1 … Y4 → Z with raw output values 0.367, 1, 20.09 and 148.41; the class cannot be inferred from these unnormalized values]
Activation Functions
Softmax

[Figure: example inputs for a four-class problem]

C: number of classes = 4 (0, 1, 2, 3), so the number of output-layer neurons = 4

[Figure: the same network with softmax outputs P(chick|x), P(cat|x), P(dog|x), P(koala|x); the probabilities sum to 1, and the class with the maximum probability can be taken as the answer]
Activation Functions
Softmax
• A NN model uses the softmax activation function to
– Convert real-valued outputs to probabilities

$Y^{[L]} = v_{jk} Y^{[L-1]}, \quad f(Y_j^{[L]}) = \dfrac{e^{Y_j^{[L]}}}{\sum_k e^{Y_k^{[L]}}}$

  z      e^z       softmax
 -1      0.367     0.002
  0      1         0.006
  3      20.09     0.12
  5      148.41    0.87
 Sum     169.87    1
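A small Python check of the table above (the softmax helper is a sketch, not from the slides):

import math

def softmax(z):
    exps = [math.exp(v) for v in z]
    total = sum(exps)                 # ~169.87 for this example
    return [e / total for e in exps]

probs = softmax([-1, 0, 3, 5])
print([round(p, 3) for p in probs])   # [0.002, 0.006, 0.118, 0.874]
print(sum(probs))                     # ~1.0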
Activation Functions
Softmax

[Figure: two 3-class problem examples]


Basic ANN models
Single Layer Feed Forward

[Figure: inputs X1 … Xn connected directly to output neurons Y1 … Ym through weights w11 … wnm]
Multi Layer Feed Forward

[Figure: inputs X1 … Xn connected to hidden neurons Z1 … Zk through weights w, and hidden neurons connected to outputs Y1 … Ym through weights v]
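A hedged sketch of one forward pass through such a multilayer network, assuming NumPy is available and using a binary-sigmoid activation; the layer sizes and random weights are illustrative:

import numpy as np

# x: input vector (n,), W: input-to-hidden weights (n, k),
# V: hidden-to-output weights (k, m)
def forward(x, W, V, f=lambda s: 1.0 / (1.0 + np.exp(-s))):
    z = f(x @ W)      # hidden layer: z_in = sum_i x_i * w_ij
    y = f(z @ V)      # output layer: y_in = sum_j z_j * v_jk
    return y

rng = np.random.default_rng(0)
x = np.array([0.8, 0.6])            # n = 2 inputs
W = rng.normal(size=(2, 3))         # k = 3 hidden neurons
V = rng.normal(size=(3, 1))         # m = 1 output
print(forward(x, W, V))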
Feed Forward Network
• No neuron's output is fed back as an input to
– the same layer
– a preceding layer

Feed Back Network
• Output-layer neurons feed their outputs back as inputs to
– the same layer
– a preceding layer

Lateral Feed Back Network
• Output-layer neurons feed their outputs back as inputs to
– the same layer only
Decision Boundary

[Figures: three candidate decision boundaries, each separating Class 1 from Class 2]
Decision Boundary
Binary Function

$y = \begin{cases} 1 & \text{if } y_{in} \ge \theta \\ 0 & \text{if } y_{in} < \theta \end{cases}, \quad y_{in} = \sum_{i=1}^{m} x_i w_i, \quad y = f(y_{in})$

The decision boundary is the line $y_{in} = x_1 w_1 + x_2 w_2 + \dots + x_n w_n = 0$, a line through the origin (of the form $y = mx = 0$).

With θ = 0:
Class 1:  $y_{in} = x_1 w_1 + x_2 w_2 + \dots + x_n w_n \ge 0$  →  y = 1
Class 2:  $y_{in} = x_1 w_1 + x_2 w_2 + \dots + x_n w_n < 0$  →  y = 0
Decision Boundary
Binary Function with Bias

$y = \begin{cases} 1 & \text{if } y_{in} \ge \theta \\ 0 & \text{if } y_{in} < \theta \end{cases}$

Without a bias, the boundary $y_{in} = x_1 w_1 + x_2 w_2 + \dots + x_n w_n = 0$ always passes through the origin ($y = mx = 0$). Adding a bias b shifts the boundary off the origin:

$y = mx + b = 0$
$y_{in} = x_1 w_1 + x_2 w_2 + \dots + x_n w_n + b = 0$

Bias:  $y_{in} = \sum_{i=1}^{m} x_i w_i + b$

With θ = 0:
Class 1:  $y_{in} \ge 0$  →  y = 1
Class 2:  $y_{in} < 0$  →  y = 0
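A small Python illustration of how the bias shifts the boundary (the weights and the test point are illustrative):

# Boundary: w1*x1 + w2*x2 + b = 0; the bias moves it off the origin.
def classify(x1, x2, w1=1.0, w2=1.0, b=0.0):
    y_in = x1 * w1 + x2 * w2 + b
    return 1 if y_in >= 0 else 0

print(classify(0.2, 0.1))           # 1: on the positive side of the line through the origin
print(classify(0.2, 0.1, b=-1.0))   # 0: the shifted boundary puts the same point in Class 2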
Artificial Neuron with Bias

[Figure: artificial neuron with an extra bias input of 1 (weight b) alongside inputs $X_1 \dots X_m$ with weights $w_1 \dots w_m$]

$y_{in} = b + \sum_{i=1}^{m} x_i w_i, \quad y = f(y_{in})$

$f(x) = \begin{cases} 1 & \text{if } y_{in} \ge \varepsilon \\ 0 & \text{if } y_{in} < \varepsilon \end{cases}$

Inputs → Weights → Summation → Activation → Output
Problem #1
A neuron has bias b = 0.35 and inputs x1 = 0.8 (w1 = 0.2), x2 = 0.6 (w2 = 0.3), x3 = 0.4 (w3 = -0.2). Obtain the output using the following activation functions:
i) Binary with threshold 0
ii) Bipolar with threshold 0
iii) Binary sigmoid (λ = 1)
iv) Bipolar sigmoid (λ = 1)

$y_{in} = 0.35 + 0.8 \times 0.2 + 0.6 \times 0.3 - 0.4 \times 0.2 = 0.61$

i)  $y_{in} > 0$, so y = 1
ii) $y_{in} > 0$, so y = 1
iii) $y = \dfrac{1}{1 + e^{-\lambda y_{in}}} = 0.648$
iv) $y = \dfrac{1 - e^{-\lambda y_{in}}}{1 + e^{-\lambda y_{in}}} = 0.296$
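A quick Python check of Problem #1:

import math

x = [0.8, 0.6, 0.4]
w = [0.2, 0.3, -0.2]
b = 0.35
y_in = b + sum(xi * wi for xi, wi in zip(x, w))
print(y_in)                                            # ~0.61
print(1 if y_in >= 0 else 0)                           # i)  binary  -> 1
print(1 if y_in >= 0 else -1)                          # ii) bipolar -> 1
print(1 / (1 + math.exp(-y_in)))                       # iii) ~0.648
print((1 - math.exp(-y_in)) / (1 + math.exp(-y_in)))   # iv)  ~0.296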
Implement AND Gate with Binary Output

Number of inputs: 2; number of outputs: 1
Weights: w1 = 1, w2 = 1; threshold: 2; activation function: binary

x1   x2   w1   w2   y_in   y
0    0    1    1    0      0
0    1    1    1    1      0
1    0    1    1    1      0
1    1    1    1    2      1
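The AND neuron above in a few lines of Python (the function name is illustrative):

def and_gate(x1, x2, w1=1, w2=1, theta=2):
    y_in = x1 * w1 + x2 * w2
    return 1 if y_in >= theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, and_gate(x1, x2))   # only (1, 1) fires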
Design a NN for the following truth table

x1   x2   y
0    0    0
0    1    0
1    0    1
1    1    0

Number of inputs: 2; number of outputs: 1
Trial weights: w1 = 1, w2 = 1

x1   x2   w1   w2   y_in
0    0    1    1    0
0    1    1    1    1
1    0    1    1    1
1    1    1    1    2

Inputs (0, 1) and (1, 0) give the same y_in = 1 but different targets, so no threshold can separate the classes with these weights.
Design a NN for the following truth table (continued)

x1   x2   y
0    0    0
0    1    0
1    0    1
1    1    0

Number of inputs: 2; number of outputs: 1
Weights: w1 = 1, w2 = -1; threshold: 1; activation function: binary

x1   x2   w1   w2   y_in   y
0    0    1    -1   0      0
0    1    1    -1   -1     0
1    0    1    -1   1      1
1    1    1    -1   0      0
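The working weights above, checked in Python (the function name is illustrative):

def andnot_gate(x1, x2, w1=1, w2=-1, theta=1):
    y_in = x1 * w1 + x2 * w2
    return 1 if y_in >= theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, andnot_gate(x1, x2))   # only (1, 0) fires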
Design a NN for the following truth table

Number of inputs: 2; number of outputs: 1

x1   x2   y
0    0    0
0    1    1
1    0    1
1    1    0

$y = x_1 \bar{x}_2 + \bar{x}_1 x_2 = z_1 + z_2$

First hidden neuron $z_1 = x_1 \bar{x}_2$
Trial weights: w11 = 1, w21 = 1; threshold: 1; activation function: binary

x1   x2   z1 (target)   z1_in
0    0    0             0
0    1    0             1
1    0    1             1
1    1    0             2

Inputs (0, 1) and (1, 0) give the same z1_in but different targets; they cannot be separated with these weights.
Design a NN for the following truth table (continued)

$y = x_1 \bar{x}_2 + \bar{x}_1 x_2 = z_1 + z_2$

First hidden neuron $z_1 = x_1 \bar{x}_2$
Weights: w11 = 1, w21 = -1; threshold: 1; activation function: binary

x1   x2   z1 (target)   z1_in
0    0    0             0
0    1    0             -1
1    0    1             1
1    1    0             0

w11 = 1, w21 = -1 works.
Design a NN for the following truth table (continued)

$y = x_1 \bar{x}_2 + \bar{x}_1 x_2 = z_1 + z_2$

Second hidden neuron $z_2 = \bar{x}_1 x_2$
Trial weights: w12 = 1, w22 = 1; threshold: 1; activation function: binary

x1   x2   z2 (target)   z2_in
0    0    0             0
0    1    1             1
1    0    0             1
1    1    0             2

Again, inputs (0, 1) and (1, 0) give the same z2_in but different targets; they cannot be separated with these weights.
Design a NN for the following truth table (continued)

$y = x_1 \bar{x}_2 + \bar{x}_1 x_2 = z_1 + z_2$

Second hidden neuron $z_2 = \bar{x}_1 x_2$
Weights: w12 = -1, w22 = 1; threshold: 1; activation function: binary

x1   x2   z2 (target)   z2_in
0    0    0             0
0    1    1             1
1    0    0             -1
1    1    0             0

w12 = -1, w22 = 1 works.
Design a NN for the following truth table (continued)

Output neuron: $y = z_1 + z_2$ (OR of the two hidden outputs)
Weights: v11 = 1, v21 = 1; threshold θ = 1; activation function: binary

(rows correspond to inputs (0,0), (0,1), (1,0), (1,1))
z1   z2   v11   v21   y_in   y
0    0    1     1     0      0
0    1    1     1     1      1
1    0    1     1     1      1
0    0    1     1     0      0
Multi Layer Feed Forward

[Figure: the complete XOR network: X1, X2 → Z1 with weights 1 and -1; X1, X2 → Z2 with weights -1 and 1; Z1, Z2 → Y with weights 1 and 1]
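The complete XOR network from these slides, verified in Python (function names are illustrative):

# Two hidden neurons: z1 = x1 AND NOT x2 (w11=1, w21=-1),
# z2 = NOT x1 AND x2 (w12=-1, w22=1), and an output neuron
# y = z1 OR z2 (v11=v21=1); every threshold is 1.
def step(s, theta=1):
    return 1 if s >= theta else 0

def xor_net(x1, x2):
    z1 = step(x1 * 1 + x2 * -1)   # fires only for (1, 0)
    z2 = step(x1 * -1 + x2 * 1)   # fires only for (0, 1)
    return step(z1 * 1 + z2 * 1)  # OR of the hidden outputs

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))   # outputs 0, 1, 1, 0: XOR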
Learning Methods
o Supervised Learning

o Unsupervised Learning

o Reinforcement Learning
Supervised Learning
• Learning under supervision, e.g. teaching a child to recognize letters
• Involves:
– A target
– Comparison of the output with the target

[Figure: Input → Neural Network → actual output y; an error signal generator compares y with the desired output t and produces the error t − y]
Unsupervised Learning
• Learning under NO supervision
• Input vectors of similar type are grouped together
• The network responds when an input is applied
• It indicates which class the input belongs to
• If no matching class is found
– A new class is created

[Figure: Input → Neural Network → actual output y]
Reinforcement Learning
• Similar to supervised learning
• Used in cases where less information is available: the desired output is replaced by a reinforcement signal

[Figure: Input → Neural Network → actual output y; the error signal generator receives the reinforcement signal R instead of the desired output]
1. https://www.youtube.com/watch?v=8ah-qhvaQqU
2. https://www.youtube.com/watch?v=LLux1SW--oM&t=24s
3. https://www.youtube.com/watch?v=Xvg00QnyaIY
4. https://www.youtube.com/watch?v=DDBk3ZFNtJc
5. https://www.youtube.com/watch?v=68BZ5f7P94E
NN in Python
1. https://www.youtube.com/watch?v=lGLto9Xd7bU&list=PLQVvvaa0QuDcjD5BAw2DxE6OF2tius3V3&index=2
2. https://www.youtube.com/watch?v=TEWy9vZcxW4&list=PLQVvvaa0QuDcjD5BAw2DxE6OF2tius3V3&index=4
Thank you
