Lab2 - Perceptron and Adaline Networks

This document discusses perceptron and Adaline networks. The perceptron is a single-layer neural network that can be used for binary classification; it consists of input nodes, weights, a net sum, and an activation function. Adaline is an improvement on the perceptron that uses a linear activation function and a continuous cost function rather than a step function. The Adaline algorithm is similar to the perceptron algorithm, but it updates the weights based on the error between the network output and the target rather than only on whether the output matches the target. Examples of using perceptrons and Adalines to model AND gates are provided.

Neural Networks Labs

Lab 2

Perceptron and Adaline Networks
Perceptron
The perceptron is a building block of an artificial neural network. Frank Rosenblatt invented the perceptron in the mid-20th century (1957) as a machine that learns to classify its input data. The perceptron is a linear algorithm: a supervised learning algorithm for binary classification. Hence, we can consider it a single-layer neural network with four main parameters, i.e., input values, weights and bias, a net sum, and an activation function.
Basic Components of Perceptron
The perceptron consists of four parts:
1. Input values or One input layer
2. Weights and Bias
3. Net sum
4. Activation Function
Input Nodes or Input Layer:
This component of the perceptron accepts the initial data into the system for further processing. Each input node holds a real numerical value.
Weights and Bias:
The weight parameter represents the strength of the connection between units and is among the most important parameters of the perceptron. The larger a weight, the stronger the influence of the associated input neuron on the output. The bias can be considered the intercept term of a linear equation.
Activation Function:
This is the final component; it determines whether the neuron will fire or not (active or inactive). Functions such as the sigmoid or a threshold function may be used.
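As a small illustration, the net sum and a threshold activation can be combined into a single forward pass. This is a minimal sketch with assumed example values (the weights, bias, and threshold below are illustrative, not taken from the lab):

```python
# Forward pass of a single perceptron: weighted net sum + threshold activation.
weights = [0.5, -0.3]   # one weight per input (assumed values)
bias = 0.1              # intercept term
theta = 0.2             # firing threshold

def forward(x):
    # net sum: weighted inputs plus bias
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    # threshold activation: fire (output 1) only if net exceeds theta
    return 1 if net > theta else 0

print(forward([1, 1]))  # net = 0.5 - 0.3 + 0.1 ≈ 0.3 > 0.2 -> 1
print(forward([0, 1]))  # net = -0.3 + 0.1 ≈ -0.2 -> 0
```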
Where is the Perceptron used?
The perceptron is usually used to classify data into two classes; therefore, it is also known as a linear binary classifier. It can only solve problems that are linearly separable.
[Figure: linear vs. non-linear problem]

Perceptron Algorithm:
Step 1 − Initialize the following to start the training:
- Weights = {w1 = v1, w2 = v2, …, wn = vn}
- Bias b = 1
- Learning rate α = 1
- Threshold θ = 0.2
- Inputs = {x1, x2, …, xn}
- Targets = {t1, t2, …, tn}
Step 2 – Calculate the net sum over the n inputs:
net = ∑(i=1..n) wi · xi + b
Step 3 – Apply the threshold to the net:
out(net) =  1   if net > θ
            0   if −θ ≤ net ≤ θ
           −1   if net < θ
This function may appear as F(net) in some references.
Step 4 – Compare out with t:
Case 1: if t == out, the weights do not change:
wi(new) = wi(old)
b(new) = b(old)
Case 2: if t ≠ out:
wi(new) = wi(old) + Δwi,   Δwi = α · xi · ti
b(new) = b(old) + Δb,   Δb = α · ti

Step 5 – Test using the final weights W.

Example – AND gate using a perceptron, for epoch = 1, starting from w1 = 0, w2 = 0, b = 0:

x1  x2  b   net  out  t   Δw1  Δw2  Δb  w1  w2  b
1   1   1   0    0    1   1    1    1   1   1   1
1   0   1   2    1    0   0    0    0   1   1   1
0   1   1   2    1    0   0    0    0   1   1   1
0   0   1   1    1    0   0    0    0   1   1   1

Repeat epochs until out = t for every input xi.
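The steps above can be sketched as a short Python loop. This is a minimal sketch reproducing the one-epoch table above (α = 1, θ = 0.2, and zero initial weights and bias); the variable names are illustrative, not taken from the lab notebook:

```python
# Perceptron training for the AND gate, one epoch,
# reproducing the worked table above.
alpha = 1
theta = 0.2
w = [0, 0]   # initial weights w1, w2
b = 0        # initial bias

samples = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]

def out(net):
    # Step 3: tri-valued threshold
    if net > theta:
        return 1
    if net < -theta:
        return -1
    return 0

for (x1, x2), t in samples:          # one epoch over the four inputs
    net = w[0] * x1 + w[1] * x2 + b  # Step 2: net sum
    if out(net) != t:                # Step 4, case 2: update on mismatch
        w[0] += alpha * x1 * t
        w[1] += alpha * x2 * t
        b += alpha * t
    print((x1, x2), "w =", w, "b =", b)
```

After this epoch w1 = w2 = b = 1, matching the last row of the table.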


The code for the perceptron network is provided in the accompanying notebook:
Perceptron Network Lab.ipynb
Adaptive Linear Neuron (Adaline):
An improvement on the original perceptron model is Adaline, which adds a linear activation function that is used to optimise the weights. With this addition, a continuous cost function is used rather than the unit step. Adaline is important because it lays the foundations for much more advanced machine learning models.

The algorithm is the same as the perceptron's, but Adaline calculates the error between the output of the network and the original target t, and uses that error in the weight update.
Adaline Algorithm:
Step 1 − Initialize the following to start the training:
- Weights = {w1 = v1, w2 = v2, …, wn = vn}
- Bias b = 1
- Learning rate α = 1
- Threshold θ = 0.2
- Inputs = {x1, x2, …, xn}
- Targets = {t1, t2, …, tn}
Step 2 – Calculate the net sum over the n inputs:
net = ∑(i=1..n) wi · xi + b
Step 3 – Apply the threshold to the net:
out(net) =  1   if net > θ
            0   if −θ ≤ net ≤ θ
           −1   if net < θ
This function may appear as F(net) in some references.

Step 4 – Compare out with t:
Case 1: if t == out, the weights do not change:
wi(new) = wi(old)
b(new) = b(old)
Case 2: if t ≠ out:
wi(new) = wi(old) + Δwi,   Δwi = α · xi · (ti − out)
b(new) = b(old) + Δb,   Δb = α · (ti − out)

Step 5 – Test using the final weights W.
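The Adaline update rule can likewise be sketched in Python. This is a minimal sketch for the same AND-gate data, with α = 1, θ = 0.2, and zero initial weights (assumed, mirroring the perceptron example); the only change from the perceptron loop is the (t − out) error term:

```python
# Adaline-style updates (delta = alpha * x * (t - out)) for the AND gate,
# one epoch, alpha = 1, theta = 0.2, zero initial weights.
alpha = 1
theta = 0.2
w = [0, 0]
b = 0

samples = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]

def out(net):
    # tri-valued threshold, as in Step 3
    if net > theta:
        return 1
    if net < -theta:
        return -1
    return 0

for (x1, x2), t in samples:
    net = w[0] * x1 + w[1] * x2 + b   # Step 2: net sum
    err = t - out(net)                # error between target and output
    w[0] += alpha * x1 * err          # Step 4, case 2 (err == 0 when t == out)
    w[1] += alpha * x2 * err
    b += alpha * err
    print((x1, x2), "net =", net, "err =", err, "w =", w, "b =", b)
```

Note that with α = 1 the corrections overshoot on this data and the weights return to zero after one epoch; in practice Adaline uses a small learning rate so the error shrinks gradually over many epochs.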


The code for the Adaline network is provided in the accompanying notebook:
Adaline Network.ipynb
