
SVKM’S NMIMS

MUKESH PATEL SCHOOL OF TECHNOLOGY MANAGEMENT & ENGINEERING

Academic Year: 2020-2021

Soft Computing

LAB Manual

PRACTICAL 2

PART A

A.1 Aim:
Write MATLAB/Python code to generate a few activation functions used in neural
networks:
1. Identity
2. Binary Step
3. Sigmoidal:
a. Binary Sigmoidal
b. Bipolar Sigmoidal

A.2 Prerequisite:
Basic knowledge of the four activation functions listed above.

A.3 Outcome:
After this experiment, students will be able to understand what basic activation functions
are and how they are implemented.

A.4 Theory:
Activation functions are mathematical equations that determine the output of a neural
network. The function is attached to each neuron in the network and determines whether it
should be activated (“fired”) or not, based on whether each neuron’s input is relevant for the
model’s prediction. Activation functions also help normalize the output of each neuron to a
range between 0 and 1 or between -1 and 1. It can be as simple as a step function that turns
the neuron output on and off, depending on a rule or threshold. Or it can be a transformation
that maps the input signals into output signals that are needed for the neural network to
function.
1. Identity Function: A linear activation function takes the inputs, multiplied by the weights
for each neuron, and creates an output signal proportional to the input. In one sense, a linear
function is better than a step function because it allows multiple outputs, not just yes and no.
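In symbols, the identity function is simply f(x) = x. A minimal Python sketch (the function
name is illustrative):

def identity(x):
    # Identity activation: the output is exactly the input, f(x) = x
    return x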

2. Binary Step Function: A binary step function is a threshold-based activation
function. If the input value is above a certain threshold, the neuron is activated and
sends exactly the same signal to the next layer; otherwise it stays inactive. The
problem with a step function is that it does not allow multi-value outputs; for
example, it cannot support classifying the inputs into one of several categories.
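For a threshold θ (commonly 0), the step function can be written as f(x) = 1 if x ≥ θ,
else f(x) = 0. A minimal NumPy sketch, assuming a threshold of 0 by default:

import numpy as np

def binary_step(x, theta=0.0):
    # 1 where the input reaches the threshold, 0 otherwise
    return np.where(x >= theta, 1, 0)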

3. Binary Sigmoidal (Logistic):

Advantages

• Smooth gradient, preventing “jumps” in output values.
• Output values bound between 0 and 1, normalizing the output of each neuron.
• Clear predictions: for X above 2 or below -2, the function tends to bring the Y value
(the prediction) to the edge of the curve, very close to 1 or 0.

Disadvantages

• Vanishing gradient: for very high or very low values of X, there is almost no change
to the prediction, causing a vanishing gradient problem. This can result in the network
refusing to learn further, or being too slow to reach an accurate prediction.
• Outputs not zero centered.
• Computationally expensive.
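With steepness parameter σ, the binary sigmoid is f(x) = 1 / (1 + e^(-σx)), which maps any
real input into the open interval (0, 1). A minimal NumPy sketch, assuming σ = 1 by default:

import numpy as np

def binary_sigmoid(x, sigma=1.0):
    # Logistic curve: squashes input into (0, 1)
    return 1.0 / (1.0 + np.exp(-sigma * x))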
4. Bipolar Sigmoidal (Hyperbolic):

Advantages
• Zero centered: making it easier to model inputs that have strongly negative, neutral,
and strongly positive values.
• Otherwise like the Binary Sigmoid function.
Disadvantages
• Like the Binary Sigmoid function.
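With the same steepness parameter, the bipolar sigmoid is
f(x) = (1 - e^(-σx)) / (1 + e^(-σx)), which is algebraically equal to tanh(σx/2) and maps
input into (-1, 1). A minimal NumPy sketch, assuming σ = 1 by default:

import numpy as np

def bipolar_sigmoid(x, sigma=1.0):
    # Squashes input into (-1, 1); equal to tanh(sigma * x / 2)
    return (1.0 - np.exp(-sigma * x)) / (1.0 + np.exp(-sigma * x))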
PART B

Roll No. C071 Name: DIYA SHAH


Class : A Batch : A2
Date of Experiment: 11/7/20 Date of Submission: 12/7/20
Grade :

IDENTITY FUNCTION
Code:
% Identity activation: output equals input
x = -10:0.1:10;   % input values from -10 to 10 in steps of 0.1
disp(x);          % display the input vector
y = x;            % f(x) = x
plot(x,y);
xlabel("input");
ylabel("output");
title("Identity Function");

Graph:
BIPOLAR SIGMOID
Code:
import tensorflow as tf
from tensorflow.keras import activations
import matplotlib.pyplot as plt
import numpy as np

a = np.arange(-7, 7, 0.1)
# Bipolar (hyperbolic) sigmoid: tanh maps input into (-1, 1)
b = activations.tanh(a)
plt.plot(a, b)
plt.show()
Graph:
BINARY SIGMOID
Code:
import tensorflow as tf
from tensorflow.keras import activations
import matplotlib.pyplot as plt
import numpy as np

a = np.arange(-7, 7, 0.1)
# Binary (logistic) sigmoid: 1 / (1 + e^(-x)), output in (0, 1)
b = activations.sigmoid(a)
plt.plot(a, b)
plt.show()

Graph:
UNIT STEP
Code:
import matplotlib.pyplot as plt
import numpy as np

a = np.arange(-7, 7, 0.1)
b = []
for x in a:
    # Unit step: 0 for negative inputs, 1 otherwise
    if x < 0:
        b.append(0)
    else:
        b.append(1)

print(b)
plt.plot(a, b)
plt.show()
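The loop above can also be written as a single vectorized line; this sketch is equivalent
and assumes the same array a:

b = np.where(a < 0, 0, 1)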

Graph:

B.2 Observations and learning:

I have understood the working of the activation functions used in neural networks.

B.3 Conclusion:

After this experiment, I understood what basic activation functions are and how they are
implemented.

B.4 Questions of curiosity:

1. What is the use of hardlim() transfer function?

Hardlim is a neural transfer function used to calculate a layer’s output from its
net input: it returns 1 where the net input is greater than or equal to 0, and 0
otherwise.
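For comparison, a NumPy sketch with the same behaviour (the function name mirrors
MATLAB's, but this is an illustrative re-implementation, not the toolbox function):

import numpy as np

def hardlim(n):
    # Hard-limit transfer function: 1 where the net input n >= 0, else 0
    return np.where(n >= 0, 1, 0)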
2. Explore sigmoid() and change the steepness parameter. What is the observation?
In the sigmoid activation function 1/(1 + e^(-σx)), σ is the steepness parameter. As
σ increases, the slope becomes steeper, eventually resembling a unit step graph. If
the steepness parameter is negative, the whole curve is inverted.
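A short sketch to observe this, assuming the parameterized form 1 / (1 + e^(-σx)) (the σ
values chosen here are illustrative):

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-7, 7, 0.1)
for sigma in [0.5, 1, 5, -1]:
    # Larger sigma gives a steeper curve; a negative sigma flips it
    plt.plot(x, 1.0 / (1.0 + np.exp(-sigma * x)), label="sigma = " + str(sigma))
plt.legend()
plt.show()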
