Lab2 - Perceptron and Adaline Networks
Lab 2: Perceptron and Adaline Networks
Perceptron
The perceptron is a building block of artificial neural networks. It was invented
by Frank Rosenblatt in the mid-20th century (1957) as a model for performing
simple computations on input data. The perceptron is a linear, supervised
learning algorithm for binary classification. It can therefore be viewed as a
single-layer neural network with four main components: input values, weights
and bias, net sum, and an activation function.
Basic Components of Perceptron
The perceptron consists of 4 parts.
1. Input values or One input layer
2. Weights and Bias
3. Net sum
4. Activation Function
Input Nodes or Input Layer:
This is the first component of the perceptron; it accepts the initial data
into the system for further processing. Each input node holds a real
numerical value.
Weights and Bias:
A weight represents the strength of the connection between units. The
larger the weight, the stronger the influence of the associated input
on the output. The bias can be viewed as the intercept term in a linear
equation.
Activation Function:
This is the final component; it determines whether the neuron fires
(is active) or not. Common choices include the threshold and sigmoid
functions.
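As a small sketch, the threshold and sigmoid activations mentioned above can be written as follows (function names and the default θ are illustrative, not from the handout):

```python
import math

def threshold(net, theta=0.0):
    # Fires (returns 1) when the net input exceeds the threshold theta.
    return 1 if net > theta else 0

def sigmoid(net):
    # Smooth activation in (0, 1); often used instead of a hard threshold.
    return 1.0 / (1.0 + math.exp(-net))
```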
Where is the perceptron used?
The perceptron is usually used to classify data into two classes. Therefore,
it is also known as a linear binary classifier.
Figure: linearly separable vs. non-linearly separable problems.
Perceptron Algorithm:
Step 1 − Initialize the following to start the training:
Weights = {w1 = v1, w2 = v2, …, wn = vn}
Bias: b = 1
Learning rate: α = 1
Threshold: θ = 0.2
Inputs = {x1, x2, …, xn}
Targets (desired outputs) = {t1, t2, …, tn}
Step 2 – Calculate the weighted sum over the n inputs:
net = Σᵢ₌₁ⁿ wᵢ·xᵢ + b
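In code, the net input is simply a weighted sum of the inputs plus the bias (a minimal sketch, with an illustrative function name):

```python
def net_input(weights, inputs, bias):
    # net = sum_i w_i * x_i + b
    return sum(w * x for w, x in zip(weights, inputs)) + bias
```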
Step 3 – Apply the threshold function to the net input:
out(net) =  1   if net > θ
            0   if −θ ≤ net ≤ θ
           −1   if net < −θ
This function may appear as F(net) in some references.
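The three-valued threshold above translates directly into code (a sketch using θ = 0.2 from the initialization):

```python
def out(net, theta=0.2):
    # Maps the net input to {1, 0, -1} using the dead band [-theta, theta].
    if net > theta:
        return 1
    if net < -theta:
        return -1
    return 0
```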
Step 4 – Compare out with the target t:
Case 1: if t == out, the weights and bias do not change:
wᵢ(new) = wᵢ(old)
b(new) = b(old)
Case 2: if t ≠ out, update the weights and bias:
wᵢ(new) = wᵢ(old) + Δwᵢ, where Δwᵢ = α·t·xᵢ
b(new) = b(old) + Δb, where Δb = α·t
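Putting the steps together, one perceptron training step for a single sample might look like this (a sketch; α = 1 and θ = 0.2 as in the initialization):

```python
def train_step(weights, bias, x, t, alpha=1.0, theta=0.2):
    # Weighted sum of inputs plus bias
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Three-valued threshold activation
    if net > theta:
        o = 1
    elif net < -theta:
        o = -1
    else:
        o = 0
    # Update only when the output disagrees with the target
    if o != t:
        weights = [w + alpha * t * xi for w, xi in zip(weights, x)]
        bias = bias + alpha * t
    return weights, bias
```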
Worked example – one training epoch (initial weights and bias are zero):
x1  x2  b | net | out | t | Δw1  Δw2  Δb | w1  w2  b
 1   1  1 |  0  |  0  | 1 |  1    1    1 |  1   1  1
 1   0  1 |  2  |  1  | 0 |  0    0    0 |  1   1  1
 0   1  1 |  2  |  1  | 0 |  0    0    0 |  1   1  1
 0   0  1 |  1  |  1  | 0 |  0    0    0 |  1   1  1
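The epoch in the table can be reproduced with a short loop over the four training pairs (a sketch, starting from zero weights and bias as the first row implies):

```python
def epoch(samples, weights, bias, alpha=1.0, theta=0.2):
    # One pass over the training pairs, updating after each sample.
    for x, t in samples:
        net = sum(w * xi for w, xi in zip(weights, x)) + bias
        o = 1 if net > theta else (-1 if net < -theta else 0)
        if o != t:
            weights = [w + alpha * t * xi for w, xi in zip(weights, x)]
            bias += alpha * t
    return weights, bias

samples = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
print(epoch(samples, [0, 0], 0))  # matches the last row: w1=1, w2=1, b=1
```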
The Adaline algorithm follows the same steps as the perceptron, but Adaline
computes the error from the output of the network and the original target t,
and uses that error to update the weights.
Adaline Algorithm:
Step 1 − Initialize the following to start the training:
Weights = {w1 = v1, w2 = v2, …, wn = vn}
Bias: b = 1
Learning rate: α = 1
Threshold: θ = 0.2
Inputs = {x1, x2, …, xn}
Targets (desired outputs) = {t1, t2, …, tn}
Step 2 – Calculate the weighted sum over the n inputs:
net = Σᵢ₌₁ⁿ wᵢ·xᵢ + b
Step 3 – Apply the threshold function to the net input:
out(net) =  1   if net > θ
            0   if −θ ≤ net ≤ θ
           −1   if net < −θ
This function may appear as F(net) in some references.
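The handout stops before Adaline's update step. In the standard Adaline (delta) rule, the weights are adjusted from the error between the target and the raw net input, not the thresholded output; the sketch below illustrates one such step (function name and the smaller α are illustrative choices, not from the handout):

```python
def adaline_step(weights, bias, x, t, alpha=0.1):
    # Adaline learns from the net input itself, not the thresholded output.
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    error = t - net  # the error between the target and the network output
    # Delta rule: w_i(new) = w_i(old) + alpha * error * x_i
    weights = [w + alpha * error * xi for w, xi in zip(weights, x)]
    bias = bias + alpha * error
    return weights, bias
```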