
Activation Function and Training a Perceptron
By Vanshita Joshi
Elements of a Neural Network

Input Layer: This layer accepts the input features and provides information from the outside world to the network. No computation is performed at this layer; its nodes simply pass the information (features) on to the hidden layer.

Hidden Layer: The nodes of this layer are not exposed to the outer world; they are part of the abstraction provided by any neural network. The hidden layer performs all sorts of computation on the features entered through the input layer and transfers the result to the output layer.

Output Layer: This layer brings the information learned by the network to the outer world.
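To make this flow concrete, here is a minimal sketch of a forward pass through such a network. The 2-3-1 layout, the sigmoid activation, and all weight values are illustrative assumptions, not details taken from the slides.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(features, hidden_weights, output_weights):
    # Input layer: no computation, the features are passed on as-is.
    activations = features
    # Hidden layer: each node computes a weighted sum of the inputs
    # and applies an activation function.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, activations)))
              for ws in hidden_weights]
    # Output layer: exposes the learned information to the outside world.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# Example: 2 input features, 3 hidden nodes, 1 output node (made-up weights).
hidden_weights = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]
output_weights = [[0.5, -0.6, 0.9]]
print(forward([1.0, 0.5], hidden_weights, output_weights))
```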
A Perceptron

A perceptron, the basic unit of a neural network, comprises essential components that collaborate in information processing.

• Input Features: The perceptron takes multiple input features; each input feature represents a characteristic or attribute of the input data.

• Weights: Each input feature is associated with a weight that determines how strongly that feature influences the perceptron's output. During training, these weights are adjusted to learn their optimal values.

• Summation Function: The perceptron calculates the weighted sum of its inputs using the summation function, which combines the inputs with their respective weights to produce a single weighted sum.

• Activation Function: The weighted sum is then passed through an activation function. The perceptron uses the Heaviside step function, which takes the weighted sum as input, compares it with a threshold, and outputs 0 or 1.

• Output: The final output of the perceptron is determined by the activation function's result. For example, in binary classification problems, the output might represent a predicted class (0 or 1).

• Bias: A bias term is often included in the perceptron model. The bias allows the model to make adjustments that are independent of the inputs, effectively shifting the decision boundary.
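As a rough sketch of how these components fit together, the snippet below wires up the summation function, the bias, and the Heaviside step activation. The specific weights, bias, and threshold are made-up illustrative values, not values from the slides.

```python
def heaviside(z, threshold=0.0):
    # Heaviside step function: 1 if the weighted sum reaches the
    # threshold, otherwise 0.
    return 1 if z >= threshold else 0

def perceptron_output(features, weights, bias):
    # Summation function: weighted sum of the input features plus the bias.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    # Activation function: the step function turns the sum into 0 or 1.
    return heaviside(z)

# Example: two input features for a binary classification problem.
print(perceptron_output([1.0, 0.0], weights=[0.6, -0.4], bias=-0.5))
```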
Activation Function

• The activation function of a node in an artificial neural network calculates the output of the node from its individual inputs and their weights.

• The activation function decides whether a neuron should be activated or not by calculating the weighted sum of the inputs and adding a bias to it.

• The purpose of the activation function is to introduce non-linearity into the output of a neuron.

• The activation function applies a non-linear transformation to the input, making the network capable of learning and performing more complex tasks.
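In symbols (a standard way of writing the relation described above, not a formula taken from the slides), a neuron with inputs x₁…xₙ, weights w₁…wₙ, bias b, and activation function f produces

    y = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b)

so the non-linearity enters only through f.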

Variants of Activation Function

• Linear Function
• Sigmoid Function
• Tanh Function
• ReLU Function
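For reference, here is one way these four variants can be written as plain Python functions. The definitions themselves are standard; the snippet is only an illustrative sketch.

```python
import math

def linear(z):
    return z                            # identity: output proportional to input

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))   # squashes z into (0, 1)

def tanh(z):
    return math.tanh(z)                 # squashes z into (-1, 1), zero-centred

def relu(z):
    return max(0.0, z)                  # passes positive values, zeroes out the rest

for f in (linear, sigmoid, tanh, relu):
    print(f.__name__, f(-2.0), f(2.0))
```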
Training a Perceptron

1. Let the perceptron accept two parameters:
   i) the number of inputs (no)
   ii) the learning rate (learningRate).

2. Set the default learning rate to 0.00001. Then create random weights between -1 and 1 for each input.

3. Add a bias term to the z function (the weighted sum).

4. Multiply each input with the perceptron's weights.

5. Sum the results.

6. Compute the outcome by passing the sum through the activation function (see the sketch after this list).
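The sketch below puts the six steps into code. The class layout, the Heaviside threshold of 0, and the weight-update rule in train() are assumptions added for illustration; the steps above only describe the set-up and the forward computation.

```python
import random

class Perceptron:
    def __init__(self, no, learning_rate=0.00001):
        # Steps 1-2: number of inputs, default learning rate, and random
        # weights between -1 and 1 for each input.
        self.learning_rate = learning_rate
        self.weights = [random.uniform(-1, 1) for _ in range(no)]
        # Step 3: a bias term that is added to the z function.
        self.bias = random.uniform(-1, 1)

    def z(self, inputs):
        # Steps 4-5: multiply each input with its weight and sum the results.
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

    def predict(self, inputs):
        # Step 6: compute the outcome with the Heaviside step function.
        return 1 if self.z(inputs) >= 0 else 0

    def train(self, inputs, target):
        # Classic perceptron update (an assumption, not stated on the slide):
        # nudge each weight in proportion to the error and the input.
        error = target - self.predict(inputs)
        self.weights = [w + self.learning_rate * error * x
                        for w, x in zip(self.weights, inputs)]
        self.bias += self.learning_rate * error

# Example: a perceptron with two inputs.
p = Perceptron(no=2)
p.train([1.0, 0.0], target=1)
print(p.predict([1.0, 0.0]))
```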


Thank you
