Lecture 5-Introduction to neural network (1)
Lecture 6
Dr. Lamees Nasser
E-mail: [email protected]
Third Year– Biomedical Engineering Department
Academic Year 2023- 2024
Artificial Neural Networks
The analogy between a biological neuron and an artificial neuron:
• Dendrites → Inputs
• Soma (cell body) → Node (neuron)
• Synapse → Weights or interconnections
• Axon → Output
Perceptron
• Input nodes or input layer: take the initial data into the model for
further processing.
$x_1, x_2, x_3, \ldots, x_m$
• Weights and Bias:
• Weight: reflects the effectiveness of a particular input. The greater the
weight of an input, the greater its impact on the network.
$w_1, w_2, w_3, \ldots, w_m$ and $b$
Perceptron (cont’d)
• Net sum: computes the weighted sum of the inputs plus the bias:
$\sum_{i=1}^{m} x_i w_i + b$
• Activation function: decides whether a neuron should be activated or not,
based on the weighted sum of the inputs plus the bias. Its purpose is to
introduce non-linearity into the output of a neuron (a short code sketch
follows below).
$y = f\left(\sum_{i=1}^{m} x_i w_i + b\right)$
• Two types of perceptron:
a) Single Layer Perceptron
b) Multi-layer perceptron
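Putting the pieces together, here is a minimal sketch of a single-neuron (perceptron) forward pass in Python with a step activation; the weights, bias, and inputs are assumed values for illustration, not taken from the slides.

```python
import numpy as np

def step(z):
    """Step activation: 1 if z > 0, else 0."""
    return np.where(z > 0, 1, 0)

def perceptron_forward(x, w, b):
    """Net sum (weighted inputs plus bias) followed by the activation."""
    z = np.dot(w, x) + b      # net sum: sum_i x_i * w_i + b
    return step(z)            # activation decides whether the neuron fires

# Assumed example values: two inputs, two weights, one bias
x = np.array([1.0, 0.5])
w = np.array([0.4, -0.2])
b = 0.1
print(perceptron_forward(x, w, b))  # -> 1, because 0.4 - 0.1 + 0.1 = 0.4 > 0
```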
Effect of Bias in Neural Network
$y = ax + b$
• Bias is an additional parameter in the neural network that is used to adjust
the output together with the weighted sum of the inputs to the neuron:
$y = \sum_{i=1}^{m} x_i w_i + b$
• Therefore, the bias is a constant that helps the model fit the given data as
well as possible.
• The bias shifts the decision boundary away from the origin.
• Recall the equation of a straight line, $y = ax + b$, where $b$ is the
y-intercept: a negative $b$ shifts the line downward and a positive $b$ shifts
it upward, just as the bias shifts a neuron's decision boundary.
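A small illustrative sketch of this point (the single weight and the input grid are assumptions): changing only the bias moves the input value at which the neuron's output flips, i.e. it shifts the decision boundary away from the origin.

```python
import numpy as np

w = 1.0                       # single input weight (assumed)
xs = np.linspace(-3, 3, 7)    # inputs -3, -2, ..., 3

for b in (-1.0, 0.0, 1.0):    # negative, zero, and positive bias
    outputs = [1 if w * x + b > 0 else 0 for x in xs]
    # The boundary w*x + b = 0 sits at x = -b/w, so it moves as b changes.
    print(f"b = {b:+.1f}: boundary at x = {-b / w:+.1f}, outputs = {outputs}")
```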
Bias of a Neuron
$\sum_{i=1}^{m} x_i w_i + \text{bias}$
Common activation functions
• An activation function determines the output of a neuron, for example 0 or 1.
It maps the resulting values into a range such as 0 to 1 or -1 to 1.
• Step function
• ReLU function
• Sigmoid function
Activation functions
• Step function
▪ Produces a binary output of 0 or 1. Mainly used in binary
classification to give a discrete value.
$\text{output} = \begin{cases} 0 & \text{if } w \cdot x + b \le 0 \\ 1 & \text{if } w \cdot x + b > 0 \end{cases}$
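A direct translation of the step rule above into code (a sketch; the example weights, bias, and input are assumed values):

```python
import numpy as np

def step_output(x, w, b):
    """Returns 0 if w·x + b <= 0, and 1 otherwise."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Assumed example values
x = np.array([2.0, -1.0])
w = np.array([0.5, 0.5])
print(step_output(x, w, b=-0.2))  # -> 1, since 0.5*2 + 0.5*(-1) - 0.2 = 0.3 > 0
```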
Activation functions (cont'd)
• Sigmoid/logistic function
▪ Squashes all values into a
probability between 0 and 1, which
reduces the effect of extreme values or
outliers in the data. It is usually used
to classify two classes.
$\sigma(z) = \dfrac{1}{1 + e^{-z}}$
• SoftMax function
▪ A generalization of the sigmoid
function. Used to obtain
classification probabilities when we
have more than two classes.
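A sketch of the sigmoid and softmax functions described above, written with NumPy; the test inputs are arbitrary assumed values.

```python
import numpy as np

def sigmoid(z):
    """Maps any real value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Generalizes the sigmoid: turns a vector of scores into
    a probability distribution over more than two classes."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

print(sigmoid(0.0))                        # 0.5
print(softmax(np.array([2.0, 1.0, 0.1])))  # three probabilities that sum to 1
```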
Activation functions (cont'd)
• Leaky ReLU
▪ Instead of having the function be zero
when x < 0, leaky ReLU introduces a
small negative slope (around 0.01)
when x is negative.
▪ It usually works better than the ReLU
function, although it’s not used as
much in practice.
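A sketch comparing ReLU with leaky ReLU; the 0.01 slope follows the value mentioned above, and the sample inputs are assumed.

```python
import numpy as np

def relu(z):
    """Standard ReLU: outputs zero for all negative inputs."""
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    """Leaky ReLU: a small negative slope instead of a hard zero when z < 0."""
    return np.where(z > 0, z, slope * z)

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(z))        # [0.  0.  0.  1.5]
print(leaky_relu(z))  # [-0.02  -0.005  0.  1.5]
```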
Activation functions (cont'd)
[Figure: example of non-linearly separable data]
Step 2
• The output neuron merges the two lines generated previously so that there is
only one output from the network, as sketched below.
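A sketch of this idea on the classic XOR problem (the weights below are hand-picked illustrative values, not taken from the slides): each hidden neuron draws one decision line, and the output neuron merges the two lines into a single output.

```python
import numpy as np

def step(z):
    return np.where(z > 0, 1, 0)

# Hidden layer: each row of W1 is one neuron, i.e. one decision line.
W1 = np.array([[ 1.0,  1.0],    # line 1: fires when x1 + x2 > 0.5
               [-1.0, -1.0]])   # line 2: fires when x1 + x2 < 1.5
b1 = np.array([-0.5, 1.5])

# Output neuron: fires only when both hidden neurons fire,
# merging the two lines into one decision region.
W2 = np.array([1.0, 1.0])
b2 = -1.5

def forward(x):
    h = step(W1 @ x + b1)            # two hidden activations, one per line
    return int(step(W2 @ h + b2))    # single output of the network

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", forward(np.array(x)))   # prints the XOR pattern 0, 1, 1, 0
```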
Network architecture
• Once the number of hidden layers and the number of neurons in each layer are
known, the network architecture is complete.
Example 2
• There are two classes, and each sample has two inputs and one output.