Activation Function Learning
BY:MS.ARUNA BAJPAI
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
ITM GOI
Terminology
• Biological NN → Artificial Neural Network
1. Soma → Node
2. Dendrites → Input
3. Synapse → Weights or interconnections
4. Axon → Output
Model of ANN
• Net input: y_in = x1·w1 + x2·w2 + x3·w3 + … + xm·wm
• Output: Y = F(y_in)
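As a minimal sketch (in Python, not part of the original slides), the model above can be written directly in code; the names net_input and F below are illustrative, and the binary sigmoid is just one possible choice for the activation F.

import math

def net_input(x, w):
    # y_in = x1*w1 + x2*w2 + ... + xm*wm
    return sum(xi * wi for xi, wi in zip(x, w))

def F(y_in):
    # Example activation: binary sigmoid, F(y_in) = 1 / (1 + exp(-y_in))
    return 1.0 / (1.0 + math.exp(-y_in))

x = [0.5, 0.2, 0.1]   # inputs x1..xm (assumed example values)
w = [0.4, 0.3, 0.9]   # weights w1..wm (assumed example values)
y_in = net_input(x, w)
Y = F(y_in)           # output Y = F(y_in)
print(y_in, Y)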
Difference Between ANN & BNN
Criteria | BNN | ANN
Processing | Massively parallel; slow but superior to ANN | Massively parallel; fast but inferior to BNN
Size | 10^11 neurons and 10^15 interconnections | 10^2 to 10^4 nodes
Fault tolerance | Performance degrades with even partial damage | Capable of robust performance, hence has the potential to be fault tolerant
Storage capacity | Stores the information in the synapses | Stores the information in contiguous memory locations
Activation Functions
• An activation function may be defined as the extra force or effort applied over the input to obtain an exact output.
• In an ANN, the activation function is applied over the net input of a neuron to obtain its output.
• It is also known as a transfer function.
Types
1. Binary Step Function
• A binary step function is a threshold-based activation function.
• If the input value is above a certain threshold, the neuron is activated and sends exactly the same signal to the next layer; if it is below the threshold, the neuron is not activated.
Linear Activation Function
• It takes the inputs, multiplied by the weights for each neuron, and creates an output signal proportional to that input: f(x) = c·x for some constant c (often simply f(x) = x).
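A minimal Python sketch of a linear activation (illustrative, not from the slides); the proportionality constant c is an assumed parameter.

def linear(x, c=1.0):
    # Output is proportional to the (weighted) input: f(x) = c * x
    return c * x

print(linear(-2.0), linear(0.0), linear(3.5))   # -2.0 0.0 3.5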
Sigmoid Activation Function
• It is of two types, as follows:
• Binary sigmoidal function: This activation function squashes the input into the range between 0 and 1. It is positive in nature. It is always bounded, which means its output cannot be less than 0 or more than 1. It is also strictly increasing in nature, which means the larger the input, the higher the output. It can be defined as
• F(x) = sigm(x) = 1 / (1 + exp(-x))
• Bipolar sigmoidal function: This activation function squashes the input into the range between -1 and 1. It can be positive or negative in nature. It is always bounded, which means its output cannot be less than -1 or more than 1. It is also strictly increasing in nature, like the binary sigmoid function. It can be defined as
• F(x) = sigm(x) = (1 - exp(-x)) / (1 + exp(-x))
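Both sigmoidal forms can be written down directly from the formulas above; this is an illustrative Python sketch (the function names are assumptions, not standard library calls).

import math

def binary_sigmoid(x):
    # 1 / (1 + exp(-x)); bounded in (0, 1), strictly increasing
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    # (1 - exp(-x)) / (1 + exp(-x)); bounded in (-1, 1), strictly increasing
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

for x in (-4.0, 0.0, 4.0):
    print(x, binary_sigmoid(x), bipolar_sigmoid(x))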
Step Function:
• The step function is one of the simplest kinds of activation functions. In this, we consider a threshold value, and if the value of the net input, say y, is greater than or equal to the threshold, then the neuron is activated.
• Mathematically (taking the threshold as 0),
• f(x) = 1, if x >= 0
• f(x) = 0, if x < 0
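The same definition in code (an illustrative Python sketch; the threshold defaults to 0 as in the formula above).

def step(x, threshold=0.0):
    # f(x) = 1 if x >= threshold, else 0
    return 1 if x >= threshold else 0

print(step(-1.5), step(0.0), step(2.3))   # 0 1 1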
ReLU:
The ReLU function is the Rectified linear unit. It is
the most widely used activation function. It is defined
as:
f(x) = max(0, x)
• The main advantage of using the ReLU function over other activation functions is that it does not activate all the neurons at the same time. If the input is negative, it is converted to zero and the neuron does not get activated.
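A minimal Python sketch of ReLU (illustrative, not from the slides): negative inputs are clipped to zero, positive inputs pass through unchanged.

def relu(x):
    # f(x) = max(0, x): negative inputs become 0, so the neuron stays inactive
    return max(0.0, x)

print(relu(-3.0), relu(0.0), relu(2.5))   # 0.0 0.0 2.5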
Tanh Function
The Tanh function is a modified or scaled version of the sigmoid function. The limitation of the sigmoid was that the value of f(x) is bounded between 0 and 1; in the case of Tanh, however, the values are bounded between -1 and 1.
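A short illustrative Python sketch showing tanh and its relation to the binary sigmoid, tanh(x) = 2·sigm(2x) - 1, which is why it can be described as a scaled version of the sigmoid.

import math

def tanh(x):
    # tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)), bounded in (-1, 1)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def binary_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = 1.2
print(tanh(x))                          # same value as math.tanh(x)
print(2 * binary_sigmoid(2 * x) - 1)    # scaled and shifted sigmoid gives tanh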
Learning
• A learning rule is a method or a mathematical logic that helps a neural network learn from existing conditions and improve its performance.
• Learning, in an artificial neural network, is the method of modifying the weights of the connections between the neurons of a specified network.
• Learning in ANN can be classified into three
categories namely supervised learning,
unsupervised learning, and reinforcement
learning.
Supervised Learning
• As the name suggests, this type of learning is done
under the supervision of a teacher. This learning
process is dependent.
• During the training of ANN under supervised
learning, the input vector is presented to the
network, which will give an output vector.
• This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual output and the desired output vector.
• On the basis of this error signal, the weights
are adjusted until the actual output is matched
with the desired output.
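As an illustration of this idea (a sketch only, not necessarily the specific rule intended in these slides), a perceptron-style update in Python adjusts each weight in proportion to the error between the desired and actual output; the learning rate lr, the initial weights, and the training pair below are assumed values.

def step(x):
    return 1 if x >= 0 else 0

# One training example: input vector and desired (target) output
x = [1.0, 0.5, -0.2]
target = 1
w = [0.1, -0.3, 0.2]     # initial weights (assumed)
b = 0.0                  # bias
lr = 0.1                 # learning rate (assumed)

for epoch in range(10):
    y = step(sum(xi * wi for xi, wi in zip(x, w)) + b)   # actual output
    error = target - y                                    # error signal
    if error == 0:
        break                                             # actual output matches desired output
    # Adjust the weights on the basis of the error signal
    w = [wi + lr * error * xi for xi, wi in zip(x, w)]
    b = b + lr * error

print(w, b)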
Unsupervised Learning
• As the name suggests, this type of learning is done
without the supervision of a teacher. This learning
process is independent.