i) Linear Activation Function: It is also called the identity function because it performs no transformation on the input. It can be defined as: F(x) = x
ii) Sigmoid Activation Function:
Binary sigmoidal function: This activation function squashes the input to the range 0 to 1. Its output is always positive. It is always bounded, which means its output cannot be less than 0 or more than 1. It is also strictly increasing in nature, which means the higher the input, the higher the output. It can be defined as
F(x) = 1 / (1 + e^(-x))
Bipolar sigmoidal function: This activation function squashes the input to the range -1 to 1. Its output can be positive or negative. It is always bounded, which means its output cannot be less than -1 or more than 1. Like the binary sigmoid, it is strictly increasing in nature. It can be defined as
F(x) = (1 - e^(-x)) / (1 + e^(-x))
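As a quick sanity check, here is a minimal Python/NumPy sketch of both sigmoid variants (the function names are ours, chosen for illustration); it also verifies the standard identity that the bipolar sigmoid equals tanh(x/2):

import numpy as np

def binary_sigmoid(x):
    # Bounded in (0, 1), strictly increasing
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    # Bounded in (-1, 1); equals 2*binary_sigmoid(x) - 1
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 5)
print(binary_sigmoid(x))    # all values between 0 and 1
print(bipolar_sigmoid(x))   # all values between -1 and 1
print(np.allclose(bipolar_sigmoid(x), np.tanh(x / 2)))  # True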
1. The sigmoid function has a smooth gradient and outputs values between zero and one. For very high or very low input values the gradient becomes nearly zero, so the network can be very slow to learn; this is called the vanishing gradient problem.
2. The TanH function is zero-centred, making it easier to model inputs that are strongly negative, strongly positive, or neutral.
3. The ReLU function is highly computationally efficient but cannot process inputs that are negative or approach zero: it outputs zero for them, so those neurons stop learning.
4. The Leaky ReLU function has a small positive slope in its negative region, enabling it to process zero or negative values.
5. The Parametric ReLU function allows the negative slope to be learned, using backpropagation to learn the most effective slope for zero and negative input values.
6. Softmax is a special activation function used for output neurons. It normalizes the outputs for each class to values between 0 and 1 and returns the probability that the input belongs to a specific class.
7. Swish is a newer activation function proposed by Google researchers. It performs better than ReLU in many settings, with a similar level of computational efficiency.
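For reference, the sketch below gives minimal NumPy versions of the other functions in this list; the function names and test values are illustrative choices, not a standard API:

import numpy as np

def relu(x):
    # Identity for positive inputs, zero otherwise
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # Small fixed positive slope in the negative region
    return np.where(x > 0, x, slope * x)

def parametric_relu(x, alpha):
    # Like leaky ReLU, but alpha would be learned by backpropagation
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Normalizes outputs to (0, 1) so they sum to 1 (class probabilities)
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

def swish(x):
    # x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(z), leaky_relu(z), swish(z))
print(softmax(z), softmax(z).sum())  # probabilities summing to 1.0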
Learning in Neural Networks and Learning Rules
In this article we will discuss learning paradigms and the basic learning rules of neural networks.
A factor to be considered is the manner in which a neural network (learning machine), made up of a set of interconnected neurons, reacts to its environment. In this context we speak of a learning paradigm, which refers to a model of the environment in which the neural network operates.
The five learning rules:
1. Error-correction learning,
2. Memory-based learning,
3. Hebbian learning,
4. Competitive learning and
5. Boltzmann learning are basic to the design of neural networks.
Some of these algorithms require the use of a teacher and some do not; these are called supervised and unsupervised learning, respectively.
In supervised learning, a teacher provides exact corrections to the network outputs when an error occurs. Such a method is not possible in biological organisms, which have neither the exact reciprocal nervous connections needed for the back propagation of error corrections nor the nervous means for the imposition of behaviour from outside.
Nevertheless, supervised learning has established itself as a powerful paradigm for the design of artificial neural networks. In contrast, self-organised (unsupervised) learning is motivated by neurobiological considerations.
Learning Rules of Neurons in Neural Networks:
Five basic learning rules of neurons are:
1. Error-correction learning,
2. Memory-based learning,
3. Hebbian learning,
4. Competitive learning and
5. Boltzmann learning.
Error-correction learning is rooted in optimum filtering. Memory-based learning and competitive learning are both inspired by neurobiological considerations. Boltzmann learning is different and is based on ideas borrowed from statistical mechanics. Two learning paradigms, learning with a teacher and learning without a teacher, together with the credit-assignment problem that is so basic to the learning process, are also discussed.
1. Error-Correction Learning:
Backpropagation compares each computed output Yk with the target value Tk to determine the associated error for that unit. Based on this error, the factor δk (k = 1, ..., m) is computed and used to distribute the error at the output unit Yk back to all units in the previous layer. Similarly, the factor δj (j = 1, ..., p) is computed for each hidden unit Zj.
These factors are then used to update the weights and biases.
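To make the update concrete, here is a minimal NumPy sketch of one backpropagation step for a tiny one-hidden-layer sigmoid network. The variable names follow the notation above (Zj hidden units, Yk outputs, Tk targets, and the delta factors); the layer sizes, learning rate, and random data are arbitrary illustrative choices, and bias terms are omitted for brevity:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)               # one input sample (3 features)
T = np.array([1.0, 0.0])             # target values Tk (m = 2 outputs)

V = rng.normal(size=(3, 4))          # input -> hidden weights (p = 4)
W = rng.normal(size=(4, 2))          # hidden -> output weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Forward pass
Z = sigmoid(x @ V)                   # hidden activations Zj
Y = sigmoid(Z @ W)                   # output activations Yk

# Backward pass: error factors (sigmoid derivative is f * (1 - f))
delta_k = (T - Y) * Y * (1 - Y)          # output factor, k = 1..m
delta_j = (delta_k @ W.T) * Z * (1 - Z)  # hidden factor, j = 1..p

# Gradient-descent weight updates
lr = 0.5
W += lr * np.outer(Z, delta_k)
V += lr * np.outer(x, delta_j)

# Error should decrease after the update
print(np.sum((T - sigmoid(sigmoid(x @ V) @ W)) ** 2))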
Types of Backpropagation
There are two types:
Static Backpropagation
Static backpropagation produces a mapping of a static input to a static output. It is used to resolve static classification problems like optical character recognition.
Recurrent Backpropagation
Recurrent backpropagation is carried out until a specific determined value or threshold value is reached. After that value is reached, the error is evaluated and propagated backward.
Supervised and Unsupervised learning
Supervised learning: Supervised learning, as the name indicates, has the presence of a supervisor acting as a teacher. Basically, supervised learning is when we teach or train the machine using data that is well-labelled, which means some data is already tagged with the correct answer. After that, the machine is provided with a new set of examples (data) so that the supervised learning algorithm analyses the training data (the set of training examples) and produces a correct outcome from the labelled data.
For instance, suppose you are given a basket filled with different kinds of fruits. Now the
first step is to train the machine with all the different fruits one by one like this:
If the shape of the object is rounded, has a depression at the top, and is red in colour, then it will be labelled as Apple.
If the shape of the object is a long curving cylinder with a green-yellow colour, then it will be labelled as Banana.
Now suppose that after training, the machine is given a new, separate fruit from the basket, say a banana, and is asked to identify it.
Since the machine has already learned from the previous data, this time it has to use that knowledge wisely. It will first classify the fruit by its shape and colour, confirm the fruit name as BANANA, and put it in the Banana category. Thus the machine learns from training data (the basket containing fruits) and then applies that knowledge to test data (the new fruit).
Supervised learning is classified into two categories of algorithms:
Classification: A classification problem is when the output variable is a category, such as "red" or "blue", or "disease" and "no disease".
Regression: A regression problem is when the output variable is a real value, such as "dollars" or "weight".
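A minimal scikit-learn sketch of the fruit example follows; the feature encoding (roundness, depression at the top, colour code) and the tiny dataset are assumptions made purely for illustration:

from sklearn.tree import DecisionTreeClassifier

# Toy labelled training data: [is_rounded, has_depression, colour_code]
# colour_code: 0 = red, 1 = green-yellow (an illustrative encoding)
X_train = [
    [1, 1, 0],  # rounded, depression at top, red -> Apple
    [1, 1, 0],
    [0, 0, 1],  # long curving cylinder, green-yellow -> Banana
    [0, 0, 1],
]
y_train = ["Apple", "Apple", "Banana", "Banana"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # training phase with labelled data

# A new, unseen fruit from the basket: long, no depression, green-yellow
print(model.predict([[0, 0, 1]]))  # -> ['Banana']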
Unsupervised learning
Unsupervised learning is the training of a machine using information that is neither
classified nor labelled and allowing the algorithm to act on that information without
guidance. Here the task of the machine is to group unsorted information according to
similarities, patterns, and differences without any prior training of data.
Unlike supervised learning, no teacher is provided, which means no training will be given to the machine. Therefore, the machine is restricted to finding the hidden structure in unlabelled data by itself.
For instance, suppose the machine is given an image containing both dogs and cats, which it has never seen before.
Thus, the machine has no idea about the features of dogs and cats, so it cannot categorize the image as "dogs and cats". But it can categorize the animals according to their similarities, patterns, and differences, i.e., we can easily categorize the above picture into two parts. The first part may contain all pics having dogs in them, and the second part may contain all pics having cats in them.
It allows the model to work on its own to discover patterns and information that was
previously undetected. It mainly deals with unlabelled data.
Unsupervised learning is classified into two categories of algorithms:
Clustering: A clustering problem is where you want to discover the inherent groupings
in the data, such as grouping customers by purchasing behaviour.
Association: An association rule learning problem is where you want to discover rules
that describe large portions of your data, such as people that buy X also tend to buy Y.
Types of Unsupervised Learning:
Clustering
1. Exclusive (partitioning)
2. Agglomerative
3. Overlapping
4. Probabilistic
Clustering Types:
1. Hierarchical clustering
2. K-means clustering
3. Principal Component Analysis
4. Singular Value Decomposition
5. Independent Component Analysis
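As a minimal illustration of clustering, the sketch below runs scikit-learn's K-means on synthetic unlabelled points drawn around two centres; the data, number of clusters, and parameters are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans

# Unlabelled points forming two natural groups (no class labels supplied)
rng = np.random.default_rng(42)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# K-means discovers the two groupings purely from similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)  # approximately (0, 0) and (5, 5)
print(kmeans.labels_[:5], kmeans.labels_[-5:])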
Supervised vs. Unsupervised Machine Learning:
Input Data: Supervised algorithms are trained using labelled data; unsupervised algorithms are used against data that is not labelled (unlabelled data).
Computational Complexity: Supervised learning is the simpler method; unsupervised learning is computationally complex.
Training data: Supervised learning uses training data to infer a model; unsupervised learning uses no training data.
Complex model: With supervised learning it is not possible to learn models as large and complex as with unsupervised learning; unsupervised learning can learn larger and more complex models.