Unit 4
• Linear SVM:
• The working of the SVM algorithm can be understood with an example. Suppose we have a dataset that has two tags (green and blue), and the dataset has two features, x1 and x2. We want a classifier that can classify the pair (x1, x2) of coordinates as either green or blue. Consider the below image:
• Since this is a 2-D space, we can easily separate these two classes with just a straight line. But there can be multiple lines that separate these classes. Consider the below image:
• Hence, the SVM algorithm helps to find the best line or decision boundary; this best boundary or region is called a hyperplane. The SVM algorithm finds the closest points of the lines from both classes. These points are called support vectors. The distance between the vectors and the hyperplane is called the margin, and the goal of SVM is to maximize this margin. The hyperplane with the maximum margin is called the optimal hyperplane.
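The margin-maximizing idea above can be sketched with a small sub-gradient descent on the regularized hinge loss (a minimal, illustrative linear-SVM sketch, not a production solver; the toy dataset, learning rate, and regularization constant below are made-up values):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=1000):
    """Sub-gradient descent on the regularized hinge loss.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # point inside the margin: hinge loss is active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                       # correct with margin: only shrink w (regularization)
                w -= lr * lam * w
    return w, b

# toy 2-D dataset: two "blue" points (-1) and two "green" points (+1)
X = np.array([[1.0, 1.0], [1.5, 0.5], [3.5, 3.0], [4.0, 3.5]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

The points that end up closest to the learned boundary play the role of the support vectors.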
Non-Linear SVM:
• If the data cannot be separated by a straight line, SVM maps the inputs into a higher-dimensional space (using a kernel) where a linear hyperplane can separate them.
Biological neuron → artificial neuron analogy:
• Dendrites → Inputs
• Synapse → Weights
• Axon → Output
• An Artificial Neural Network, in the field of Artificial Intelligence, attempts to mimic the network of neurons that makes up a human brain, so that computers have an option to understand things and make decisions in a human-like manner. The artificial neural network is designed by programming computers to behave simply like interconnected brain cells.
• There are around 100 billion neurons in the human brain. Each neuron has somewhere in the range of 1,000 to 100,000 association points. In the human brain, data is stored in a distributed manner, and we can extract more than one piece of this data, when necessary, from our memory in parallel. We can say that the human brain is made up of incredibly amazing parallel processors.
• We can understand the artificial neural network with an example. Consider a digital logic gate that takes inputs and gives an output: an "OR" gate, which takes two inputs. If one or both of the inputs are "On," the output is "On." If both inputs are "Off," the output is "Off." Here the output depends only on the input. Our brain does not perform the same task: the output-to-input relationship keeps changing, because the neurons in our brain are "learning."
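The OR-gate behaviour above can be reproduced by a single artificial neuron with hand-picked weights and a threshold (the weight and threshold values below are illustrative, not learned):

```python
def or_neuron(x1, x2):
    """A single artificial neuron computing logical OR.
    Weights and threshold are chosen by hand for illustration."""
    w1, w2, threshold = 1.0, 1.0, 0.5
    weighted_sum = w1 * x1 + w2 * x2        # inputs scaled by "synapse" weights
    return 1 if weighted_sum >= threshold else 0  # the "axon" fires past the threshold

print(or_neuron(0, 0))  # 0
print(or_neuron(1, 0))  # 1
print(or_neuron(1, 1))  # 1
```

Unlike the fixed gate, a learning neuron would adjust w1, w2, and the threshold from examples.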
Basic Building Blocks of ANN
ANN depends upon the following three building blocks −
• Network Topology
• Adjustments of Weights or Learning
• Activation Functions
Network Topology
• A network topology is the arrangement of a network along
with its nodes and connecting lines. According to the
topology, ANN can be classified as the following kinds −
• Feedforward Network
• It is a non-recurrent network with processing units/nodes arranged in layers, where all the nodes in a layer are connected to the nodes of the previous layer. The connections carry different weights. There is no feedback loop, which means the signal can only flow in one direction, from input to output. It may be divided into the following two types −
• Single layer feedforward network
• Multilayer feedforward network
• Single layer feedforward network − A feedforward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.
• Multilayer feedforward network − A feedforward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, these are called hidden layers.
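A multilayer feedforward pass can be sketched in a few lines: the signal moves strictly from the input, through a hidden layer, to the output, with no feedback. The layer sizes and random weights below are arbitrary illustrative choices:

```python
import numpy as np

def forward(x, layers):
    """One forward pass through a feedforward network.
    Each layer is a (weights, biases) pair; signal flows input -> output only."""
    a = x
    for W, b in layers:
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))  # sigmoid activation at every layer
    return a

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # hidden layer: 3 inputs -> 4 units
    (rng.standard_normal((2, 4)), np.zeros(2)),  # output layer: 4 hidden -> 2 outputs
]
out = forward(np.array([0.5, -0.2, 0.1]), layers)
```

Dropping the hidden layer from `layers` would give the single-layer case: input fully connected to output.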
Feedback Network
• As the name suggests, a feedback network has feedback
paths, which means the signal can flow in both directions
using loops. This makes it a non-linear dynamic system,
which changes continuously until it reaches a state of
equilibrium. It may be divided into the following types −
• Recurrent networks
• Fully recurrent network
• Recurrent networks − They are feedback networks with
closed loops. Following are the two types of recurrent
networks.
• Fully recurrent network − It is the simplest neural
network architecture because all nodes are connected to
all other nodes and each node works as both input and
output.
• Jordan network − It is a closed-loop network in which the output goes back to the input again as feedback, as shown in the following diagram.
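A single step of a Jordan-style feedback loop can be sketched as follows: the previous output is fed back alongside the new input (the weight matrices here are random, illustrative values):

```python
import numpy as np

def jordan_step(x, prev_output, W_in, W_fb):
    """One step of a Jordan-style network: the previous output is fed
    back in alongside the new input."""
    return np.tanh(W_in @ x + W_fb @ prev_output)

rng = np.random.default_rng(1)
W_in = rng.standard_normal((2, 3)) * 0.5   # input -> output weights
W_fb = rng.standard_normal((2, 2)) * 0.5   # feedback (output -> input) weights
output = np.zeros(2)                        # no feedback yet at the first step
for x in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    output = jordan_step(x, output, W_in, W_fb)
```

Because each step depends on the previous output, the same input can produce different outputs over time, which is exactly the dynamic behaviour the text describes.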
Adjustments of Weights or Learning
• Learning, in an artificial neural network, is the method of modifying the weights of the connections between the neurons of a specified network. Learning in ANN can be classified into three categories, namely supervised learning, unsupervised learning, and reinforcement learning.
Supervised Learning
• As the name suggests, this type of learning is done under the supervision of a teacher. This learning process is dependent on that supervision.
• During the training of ANN under
supervised learning, the input vector is
presented to the network, which will give
an output vector. This output vector is
compared with the desired output vector.
An error signal is generated if there is a difference between the actual output and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
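This supervised adjust-until-it-matches loop is essentially the perceptron learning rule. A minimal sketch, here teaching a neuron logical AND (the learning rate and epoch count are arbitrary choices):

```python
import numpy as np

def train_supervised(X, targets, lr=0.1, epochs=50):
    """Perceptron-style supervised learning: compare the actual output with
    the desired output and adjust the weights by the error signal."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = 1 if (w @ x + b) >= 0 else 0   # actual output
            error = t - y                       # error signal: desired - actual
            w += lr * error * x                 # adjust weights on the basis of the error
            b += lr * error
    return w, b

# labelled training data: the "teacher" supplies the desired AND outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])
w, b = train_supervised(X, t)
preds = [1 if (w @ x + b) >= 0 else 0 for x in X]
```

Once the error signal is zero on every example, the weights stop changing: the actual output matches the desired output.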
Unsupervised Learning
• As the name suggests, this type of
learning is done without the supervision of
a teacher. This learning process is
independent.
• During the training of ANN under
unsupervised learning, the input vectors
of similar type are combined to form
clusters. When a new input pattern is
applied, then the neural network gives an
output response indicating the class to
which the input pattern belongs.
• There is no feedback from the
environment as to what should be the
desired output and if it is correct or
incorrect. Hence, in this type of learning, the network itself must discover the patterns and features in the input data, and the relationship between the input data and the output.
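One simple instance of this clustering behaviour is competitive learning: each input vector pulls its nearest prototype toward itself, with no teacher involved. A minimal sketch (the cluster count, learning rate, and toy data are illustrative):

```python
import numpy as np

def cluster(inputs, k=2, lr=0.2, epochs=20, seed=0):
    """Competitive (unsupervised) learning: similar inputs are combined into
    clusters; no desired output is ever provided."""
    rng = np.random.default_rng(seed)
    prototypes = inputs[rng.choice(len(inputs), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in inputs:
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # nearest prototype
            prototypes[winner] += lr * (x - prototypes[winner])          # pull it toward x
    return prototypes

# two obvious groups of similar input vectors
inputs = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]])
protos = cluster(inputs)
# a new pattern's "class" is just its nearest prototype
labels = [int(np.argmin(np.linalg.norm(protos - x, axis=1))) for x in inputs]
```

When a new input pattern arrives, the index of its nearest prototype is the network's response: the cluster to which the pattern belongs.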
Reinforcement Learning
• As the name suggests, this type of learning is used to reinforce or strengthen the network using critic information. This learning process is similar to supervised learning, but we may have much less information.
• During the training of network under
reinforcement learning, the network
receives some feedback from the
environment. This makes it somewhat
similar to supervised learning. However, the
feedback obtained here is evaluative not
instructive, which means there is no
teacher as in supervised learning. After receiving the feedback, the network adjusts its weights to get better critic information in the future.
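A minimal sketch of evaluative feedback: the critic returns only a scalar reward, never the correct answer, and the network strengthens its preference for whichever action earned reward. The reward scheme and update rule below are illustrative assumptions, not a standard named algorithm:

```python
import random

def train_with_critic(epochs=200, lr=0.1, seed=42):
    """The critic gives evaluative feedback (a reward), not instructive
    feedback (the right answer); the network adjusts its preference."""
    random.seed(seed)
    preference = 0.0                              # > 0 means the network favours action 1
    for _ in range(epochs):
        noisy = preference + random.gauss(0, 1)   # exploratory, noisy decision
        action = 1 if noisy > 0 else 0
        reward = 1.0 if action == 1 else 0.0      # critic secretly prefers action 1
        preference += lr * reward * (1 if action == 1 else -1)
    return preference
```

Note the network is never told "action 1 is correct"; it only discovers, through rewards, that action 1 pays off.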
Activation Functions
An activation function may be defined as the extra transformation applied over the net input to obtain the exact output. In ANN, we apply activation functions over the input to get the desired output. Following are some activation functions of interest −
Linear Activation Function
• It is also called the identity function as it performs no
input editing. It can be defined as −
F(x)=x
Sigmoid Activation Function
• It is of two types as follows −
Binary sigmoidal function − This activation function performs input editing between 0 and 1. It is positive in nature. It is always bounded, which means its output cannot be less than 0 or more than 1. It is also strictly increasing in nature, which means the higher the input, the higher the output. It can be defined as
F(x) = sigm(x) = 1 / (1 + e^(-x))
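The linear and binary sigmoidal functions can be written directly:

```python
import math

def linear(x):
    """Identity / linear activation: performs no input editing, F(x) = x."""
    return x

def binary_sigmoid(x):
    """Binary sigmoidal function: output bounded in (0, 1), strictly increasing."""
    return 1.0 / (1.0 + math.exp(-x))

print(linear(2.5))          # 2.5
print(binary_sigmoid(0.0))  # 0.5
```

Pushing the input toward large positive or negative values shows the bound: the output approaches 1 or 0 but never reaches them.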