SCexp2 C071
Soft Computing
LAB Manual
PRACTICAL 2
PART A
A.1 Aim:
Write MATLAB/Python code to generate and plot the following activation functions used in
neural networks:
1. Identity
2. Binary Step
3. Sigmoidal:
a. Binary Sigmoidal
b. Bipolar Sigmoidal
A.2 Prerequisite:
Basic knowledge of the four activation functions listed above.
A.3 Outcome:
After this experiment, students will be able to explain what the basic activation functions
are and how they are implemented.
A.4 Theory:
Activation functions are mathematical equations that determine the output of a neural
network. The function is attached to each neuron in the network and determines whether it
should be activated (“fired”) or not, based on whether each neuron’s input is relevant for the
model’s prediction. Activation functions also help normalize the output of each neuron to a
range between 0 and 1 or between -1 and 1. It can be as simple as a step function that turns
the neuron output on and off, depending on a rule or threshold. Or it can be a transformation
that maps the input signals into output signals that are needed for the neural network to
function.
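For concreteness, the four functions from the aim can be written directly in NumPy. This is a minimal reference sketch; the function names are our own, not from any library:
import numpy as np
def identity(x):
    return x                                     # f(x) = x
def binary_step(x):
    return np.where(x < 0, 0, 1)                 # 0 below the threshold, 1 otherwise
def binary_sigmoid(x):
    return 1 / (1 + np.exp(-x))                  # output in (0, 1)
def bipolar_sigmoid(x):
    return (1 - np.exp(-x)) / (1 + np.exp(-x))   # output in (-1, 1)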
Identity Function: A linear activation function takes the inputs, multiplies them by the
weights for each neuron, and creates an output signal proportional to the input: f(x) = x. In
one sense, a linear function is better than a step function because it allows multiple output
values, not just yes and no.
Binary Step Function: A threshold-based activation function. If the input is below the
threshold (here 0), the neuron outputs 0 and stays off; otherwise it outputs 1 and fires. It
supports only binary, on/off outputs.
Sigmoidal Functions:
1. Binary Sigmoidal (Logistic):
Advantages
Smooth gradient, preventing sudden jumps in output values.
Output values bound between 0 and 1, normalizing the output of each neuron.
Disadvantages
Vanishing gradient: for very high or very low values of X, there is almost no change
to the prediction, causing a vanishing gradient problem. This can result in the network
refusing to learn further, or being too slow to reach an accurate prediction.
Outputs not zero centered.
Computationally expensive.
2. Bipolar Sigmoidal (Hyperbolic):
Advantages
Zero centered, making it easier to model inputs that have strongly negative, neutral,
and strongly positive values.
Otherwise like the Binary Sigmoid function.
Disadvantages
Like the Binary Sigmoid function.
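The vanishing gradient issue mentioned above is easy to verify numerically. The sketch below (plain NumPy; the helper names are ours) evaluates the derivative of the binary sigmoid, f'(x) = f(x)(1 - f(x)), at the center and at a large input:
import numpy as np
def binary_sigmoid(x):
    return 1 / (1 + np.exp(-x))
def binary_sigmoid_grad(x):
    s = binary_sigmoid(x)
    return s * (1 - s)             # derivative of the logistic function
print(binary_sigmoid_grad(0.0))    # 0.25, the steepest slope
print(binary_sigmoid_grad(10.0))   # about 4.5e-05, almost no gradient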
PART B
IDENTITY FUNCTION
Code:
x = -10:0.1:10;    % input range
disp(x);           % display the input values
y = x;             % identity: f(x) = x
plot(x, y);
xlabel("input");
ylabel("output");
title("Identity Function");
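Since the remaining programs in this practical use Python, an equivalent version of the identity plot with NumPy and Matplotlib (a sketch, assuming the same input range) is:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-10, 10.1, 0.1)   # same input range as the MATLAB code
y = x                           # identity: f(x) = x
plt.plot(x, y)
plt.xlabel("input")
plt.ylabel("output")
plt.title("Identity Function")
plt.show()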
Graph:
BIPOLAR SIGMOID
Code:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
a = np.arange(-7, 7, 0.1)
# tanh is the bipolar (zero-centered) sigmoid, with output in (-1, 1)
b = tf.keras.activations.tanh(a)
plt.plot(a, b)
plt.show()
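Keras has no dedicated bipolar sigmoid; tanh is used above as its standard zero-centered form. The exact bipolar sigmoid f(x) = (1 - e^(-x)) / (1 + e^(-x)), which equals tanh(x/2), can also be plotted with plain NumPy (a minimal sketch):
import matplotlib.pyplot as plt
import numpy as np
a = np.arange(-7, 7, 0.1)
b = (1 - np.exp(-a)) / (1 + np.exp(-a))  # bipolar sigmoid = tanh(a / 2)
plt.plot(a, b)
plt.show()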
Graph:
BINARY SIGMOID
Code:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
a = np.arange(-7, 7, 0.1)
# logistic (binary) sigmoid, with output in (0, 1)
b = tf.keras.activations.sigmoid(a)
plt.plot(a, b)
plt.show()
Graph:
UNIT STEP
Code:
import matplotlib.pyplot as plt
import numpy as np
a = np.arange(-7, 7, 0.1)
b = []
for x in a:
    if x < 0:        # below the threshold: neuron off
        b.append(0)
    else:            # at or above the threshold: neuron on
        b.append(1)
print(b)
plt.plot(a, b)
plt.show()
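The loop above can also be replaced by a single vectorized NumPy call (an equivalent sketch):
b = np.where(a < 0, 0, 1)  # 0 below the threshold, 1 at or above it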
Graph:
B.3 Conclusion:
After this experiment I understood what the basic activation functions are and how they are
implemented and plotted.