Unit 4
Neural networks are artificial systems inspired by biological neural
networks. These systems learn to perform tasks by being exposed to
datasets and examples, without any task-specific rules. The idea is that the
system extracts identifying characteristics from the data it is given,
rather than relying on a pre-programmed understanding of that data.
Neural networks are based on computational models of threshold logic,
which combines algorithms and mathematics.
Work on neural networks follows two threads: the study of biological processes
in the brain, and the application of neural networks to artificial intelligence.
This work has also led to improvements in finite automata theory.
The components of a typical neural network are neurons, connections
(known as synapses), weights, biases, a propagation function, and a learning
rule. Each neuron receives input from predecessor neurons and has an
activation, a threshold, an activation function f, and an output function.
Connections carry weights and biases, which govern how one neuron's output is
transferred as input to another. The propagation function computes a neuron's
input as the weighted sum of the outputs of its predecessor neurons.
The learning of a neural network refers to the adjustment of its free
parameters, i.e., the weights and biases. The learning process consists of
three events in sequence.
These are:
1. The neural network is stimulated by a new environment.
2. The free parameters of the neural network are changed as a result of this
stimulation.
3. The neural network then responds in a new way to the environment because
of the changes in its free parameters.
The perceptron model is one of the simplest types of artificial neural
networks. It is a supervised learning algorithm for binary classifiers. It can be
considered a single-layer neural network with four main components: input
values, weights and bias, net sum, and an activation function.
Binary classifiers of this kind are linear classifiers: classification algorithms whose
predictions are based on a linear predictor function combining the weight vector with the
feature vector.
The weight parameter represents the strength of the connection between units and is
another important component of the perceptron. A weight is directly
proportional to the influence of the associated input neuron on the output.
The bias can be thought of as the intercept term in a linear equation.
o Activation Function:
This final component determines whether the neuron will fire or not. The
activation function of a perceptron is typically a step-like function, such as:
o Sign function
o Step function, and
o Sigmoid function
The data scientist chooses the activation function based on the problem at hand to
produce the desired outputs. The choice among activation functions (e.g., sign,
step, and sigmoid) in perceptron models is guided by whether the learning process is slow or
suffers from vanishing or exploding gradients.
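The three activation functions named above can be sketched as follows. This is a minimal illustration; the function names are chosen here for clarity and are not from the text.

```python
import math

def sign_fn(z):
    # Sign activation: outputs -1 or +1 depending on the sign of z.
    return 1 if z >= 0 else -1

def step_fn(z):
    # Step (hard-limit) activation: fires (1) only when z is positive.
    return 1 if z > 0 else 0

def sigmoid_fn(z):
    # Sigmoid activation: smooth output in (0, 1), differentiable,
    # so it supports gradient-based learning.
    return 1.0 / (1.0 + math.exp(-z))
```

Sign and step are hard-threshold functions suited to binary outputs, while the sigmoid's smoothness is what later makes backpropagation possible.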
Step-1
In the first step, multiply each input value by its corresponding weight and
add the products to determine the weighted sum. A special term called the bias 'b'
is added to this weighted sum to improve the model's performance.
Mathematically, the weighted sum is:
∑wi*xi + b
Step-2
In the second step, an activation function is applied to the weighted sum
above, which gives an output that is either binary or a continuous value:
Y = f(∑wi*xi + b)
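The two steps above can be sketched directly in code. This is a minimal sketch using the step function as f; the function names are illustrative.

```python
def weighted_sum(inputs, weights, bias):
    # Step 1: compute sum(w_i * x_i) plus the bias term b.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def step_fn(z):
    # Hard-limit activation used by the classic perceptron.
    return 1 if z > 0 else 0

def perceptron_output(inputs, weights, bias):
    # Step 2: apply the activation function to the weighted sum.
    return step_fn(weighted_sum(inputs, weights, bias))
```

For example, with weights [0.5, 0.5] and bias -0.7, the input [1, 1] produces a weighted sum of 0.3 and fires, while [0, 1] gives -0.2 and does not.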
A single-layer perceptron model has no recorded data to start from, so it
begins with randomly initialized weight parameters. It then computes the
weighted sum of all inputs. If this sum exceeds a pre-determined value, the
model activates and shows the output value as +1; otherwise it outputs 0.
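The training loop described above can be sketched with the perceptron learning rule. This is a minimal sketch, not the text's own code: the learning rate, epoch count, and seed are illustrative choices, and the AND gate is used because it is linearly separable.

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1, seed=0):
    # Weights start from small random values, since no prior data is recorded.
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Forward pass: weighted sum, then hard-limit activation.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            # Perceptron learning rule: nudge the free parameters
            # (weights and bias) toward the target output.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND gate: linearly separable, so the learning rule converges.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```

After training, the learned weights and bias classify all four AND inputs correctly.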
The multi-layer perceptron model is also known as the Backpropagation algorithm, which
executes in two stages as follows:
o Forward Stage: Activation functions start from the input layer in the forward stage and
terminate on the output layer.
o Backward Stage: In the backward stage, weight and bias values are modified as per the
model's requirement. The error between the actual and desired output is
propagated backward, starting at the output layer and ending at the input layer.
Hence, a multi-layer perceptron model can be considered as multiple artificial neural
network layers in which the activation function need not be linear, unlike in a single-
layer perceptron model. Instead, activation functions such as sigmoid, TanH, and
ReLU can be used.
A multi-layer perceptron model has greater processing power and can process linear and non-
linear patterns. Further, it can also implement logic gates such as AND, OR, XOR, NAND,
NOT, XNOR, NOR.
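The XOR gate, which a single perceptron cannot compute, illustrates this extra processing power. The sketch below uses two hidden threshold units (one computing OR, one computing AND) and one output unit; the weights are hand-chosen for illustration rather than learned.

```python
def step_fn(z):
    # Hard-limit activation.
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: h1 fires for OR(x1, x2), h2 fires for AND(x1, x2).
    h1 = step_fn(x1 + x2 - 0.5)
    h2 = step_fn(x1 + x2 - 1.5)
    # Output layer: "OR and not AND" is exactly XOR.
    return step_fn(h1 - h2 - 0.5)
```

The hidden layer re-represents the inputs so that the output unit faces a linearly separable problem, which is the essential idea behind the multi-layer perceptron.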
Perceptron Function
The perceptron function f(x) is obtained by multiplying the input x by the learned
weight coefficient w, adding the bias b, and thresholding the result:
f(x) = 1 if w.x + b > 0
otherwise, f(x) = 0
Characteristics of Perceptron
The perceptron model has the following characteristics.
o The output of a perceptron can only be a binary number (0 or 1) due to the hard limit
transfer function.
o A perceptron can only classify linearly separable sets of input vectors. If the input
classes are not linearly separable, a single perceptron cannot classify them correctly.
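The linear separability limitation can be illustrated empirically on XOR. The grid search below is only an illustration of the well-known result (no line separates XOR's classes), not a proof, and the grid bounds are arbitrary choices.

```python
def step_fn(z):
    # Hard-limit activation.
    return 1 if z > 0 else 0

def fits_xor(w1, w2, b):
    # Does this single threshold unit reproduce XOR on all four inputs?
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return all(step_fn(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in data)

# Search a coarse grid of weights and biases from -2.0 to 2.0;
# no combination classifies XOR correctly, since XOR is not linearly separable.
grid = [i / 4 for i in range(-8, 9)]
solutions = [(w1, w2, b) for w1 in grid for w2 in grid for b in grid
             if fits_xor(w1, w2, b)]
```

The search comes back empty for any grid, which motivates the multi-layer perceptron discussed above.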
Future of Perceptron
The future of the perceptron model is bright and significant, as it helps to interpret data by
building intuitive patterns and applying them to new inputs. Machine learning is a rapidly
growing branch of artificial intelligence that is continuously evolving;
perceptron technology will continue to support and facilitate analytical
behavior in machines, which in turn adds to the efficiency of computers.
The perceptron model is continuously becoming more advanced and working efficiently
on complex problems with the help of artificial neurons.
Conclusion:
In this unit, you have learned that perceptron models are the simplest type of artificial
neural network, carrying inputs and their weights, the sum of all weighted inputs, and
an activation function. Perceptron models continue to contribute to artificial
intelligence and machine learning, and these models are becoming more advanced.
The perceptron enables computers to work more efficiently on complex problems using
various machine learning techniques. Perceptrons are the foundation of artificial
neural networks, and in-depth knowledge of perceptron models is essential for
studying deep neural networks.