Artificial Neural Networks


Introduction to Artificial Neural Networks

Raphael Cóbe
[email protected]
Introduction

Neural Networks were widely used in the 1980s and 1990s aiming to mimic the functioning of
the human brain. Their popularity declined in the late 1990s but came back into the spotlight
with new approaches based on deep learning. But how do they work? Let’s take a look first at
the structure of a neuron.

2023 Introduction to Artificial Neural Networks 2


Neural Networks

• Neurons as structural constituents of the brain [Ramón y Cajál, 1911];


• Neural events are five to six orders of magnitude slower than silicon logic gates;
• Events in a silicon chip happen in the nanosecond range, versus the millisecond range for neural events;
• A truly staggering number of neurons (nerve cells) with massive interconnections between them;

2023 Introduction to Artificial Neural Networks 3


Neural Networks

• Each neuron receives input from other units and decides whether or not to fire;
• Approximately 10 billion neurons in the human cortex, and 60 trillion synapses or
connections [Shepherd and Koch, 1990];
• The energy efficiency of the brain is approximately 10⁻¹⁶ joules per operation per second, against ≈ 10⁻⁸ in a computer;

2023 Introduction to Artificial Neural Networks 4


Neurons

How do they work?


• Synapses control the influence of one neuron on another:
• Excitatory when the weight is positive; or
• Inhibitory when the weight is negative;
• The nucleus is responsible for summing the incoming signals;
• If the sum is above some threshold, the neuron fires!

2023 Introduction to Artificial Neural Networks 5


Mathematically speaking, we can represent a neuron as follows (McCulloch-Pitts model):
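A minimal Python sketch of this threshold unit may help make it concrete (the weights, inputs, and threshold below are made-up illustrative values):

import numpy as np

def mcculloch_pitts(x, w, theta):
    """Fire (output 1) if the weighted sum of the inputs reaches the threshold."""
    return 1 if np.dot(w, x) >= theta else 0

# Illustrative values: two excitatory inputs (positive weights), one inhibitory input (negative weight)
x = np.array([1, 1, 0])
w = np.array([1.0, 1.0, -1.0])
print(mcculloch_pitts(x, w, theta=2.0))   # 1: the weighted sum (2.0) reaches the threshold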

2023 Introduction to Artificial Neural Networks 6


Artificial Neural Networks

• Model each part of the neuron and interactions;


• Signals interact multiplicatively (e.g., w0·x0) with the dendrites of the other neuron, based on the synaptic strength at that synapse (e.g., w0);
• Learn the synaptic strengths;

2023 Introduction to Artificial Neural Networks 7


Artificial Neural Networks

Function Approximation Machines


• Datasets as composite functions: y = f*(x);
• Maps an input x to a category (or a value) y;
• Learn the synaptic weights and approximate y with ŷ:
• ŷ = f(x; w);
• Learn the w parameters;

2023 Introduction to Artificial Neural Networks 8


Artificial Neural Networks

• Can be seen as a directed graph with units (or neurons) situated at the vertices;
• Some are input units;
• Receive signal from the outside world;
• The remaining are named computation units;
• Each unit produces an output
• Transmitted to other units along the arcs of the directed graph;

2023 Introduction to Artificial Neural Networks 9


Artificial Neural Networks

• Input, Output, and Hidden layers;


• Hidden as in "not defined by the output";

2023 Introduction to Artificial Neural Networks 10


Artificial Neural Networks
Motivation Example (taken from Jay Alammar blog post)

• Imagine that you want to forecast the price of houses in your neighborhood;
• After some research you found that 3 people sold houses for the following values:

Area (sq ft) (x)    Price (y)

2,104               $399,900
1,600               $329,900
2,400               $369,000

2023 Introduction to Artificial Neural Networks 11


Artificial Neural Networks
Motivation Example (taken from Jay Alammar blog post)

• If you want to sell a 2,000 sq ft house, how much should you ask for it?
• How about finding the average price per square foot?
• $180 per sq ft.
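As a sanity check, one way to arrive at that figure is to divide the total price by the total area:

($399,900 + $329,900 + $369,000) / (2,104 + 1,600 + 2,400) sq ft = $1,098,800 / 6,104 sq ft ≈ $180 per sq ft.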

2023 Introduction to Artificial Neural Networks 12


Artificial Neural Networks
Motivation Example (taken from Jay Alammar blog post)

• Our very first neural network looks like this:

2023 Introduction to Artificial Neural Networks 13


Artificial Neural Networks
Motivation Example (taken from Jay Alammar blog post)

• Multiplying 2,000 sq ft by $180 gives us $360,000.


• Calculating the prediction is a simple multiplication.
• What we needed to think about was the weight we multiply by.
• That is what training means!

Area (sq ft) (x)    Price (y)    Estimated Price (ŷ)

2,104               $399,900     $378,720
1,600               $329,900     $288,000
2,400               $369,000     $432,000

2023 Introduction to Artificial Neural Networks 14


Artificial Neural Networks
Motivation Example (taken from Jay Alammar blog post)

• How bad is our model?


• Calculate the Error;
• A better model is one that has less error;

• Mean Squared Error: 2,058 (with prices measured in thousands of dollars);

Area (sq ft) (x)    Price (y)    Estimated Price (ŷ)    y − ŷ (in $1,000s)    (y − ŷ)²

2,104               $399,900     $378,720                21                    449
1,600               $329,900     $288,000                42                    1756
2,400               $369,000     $432,000                −63                   3969
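A minimal numpy sketch of this error computation, with prices expressed in thousands of dollars to match the table:

import numpy as np

area = np.array([2104, 1600, 2400])
price = np.array([399.9, 329.9, 369.0])   # prices in thousands of dollars

W = 0.180                                  # $180 per sq ft = 0.18 thousand dollars per sq ft
y_hat = W * area                           # [378.72, 288.0, 432.0]
errors = price - y_hat                     # [ 21.18,  41.9, -63.0]
mse = np.mean(errors ** 2)
print(round(mse))                          # ~2058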

2023 Introduction to Artificial Neural Networks 15


Artificial Neural Networks

• Fitting the line to our data:

The fit follows the equation: ŷ = W · x

2023 Introduction to Artificial Neural Networks 16


Artificial Neural Networks
The Bias

How about adding the Intercept?

• ŷ = W x + b

2023 Introduction to Artificial Neural Networks 17


Artificial Neural Networks
The Bias

2023 Introduction to Artificial Neural Networks 18


The ad-hoc training

2023 Introduction to Artificial Neural Networks 19


Artificial Neural Networks
How to discover the correct weights?

• Gradient Descent:
• Finding the minimum of a function;
• Look for the best weights values, minimizing the error;
• Takes steps proportional to the negative of the gradient of the function at the current point.
• The gradient is a vector that points in the direction of greatest increase of the function.

2023 Introduction to Artificial Neural Networks 20


Artificial Neural Networks
Gradient Descent

• In mathematics, the gradient is defined as the vector of partial derivatives with respect to every input variable of a function;
• The negative gradient is a vector pointing in the direction of greatest decrease of a function;
• Minimize a function by iteratively moving a little bit in the direction of the negative gradient;
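A minimal gradient descent sketch for the housing example above, assuming the model ŷ = W·x + b and a mean squared error cost; the learning rate, iteration count, and unit scaling are arbitrary choices made for the sketch:

import numpy as np

x = np.array([2.104, 1.600, 2.400])   # area in thousands of sq ft (scaled for numerical stability)
y = np.array([399.9, 329.9, 369.0])   # price in thousands of dollars

W, b = 0.0, 0.0
alpha = 0.1                            # learning rate
for _ in range(5000):
    y_hat = W * x + b
    error = y_hat - y
    grad_W = 2 * np.mean(error * x)    # dJ/dW for J = mean((Wx + b - y)^2)
    grad_b = 2 * np.mean(error)        # dJ/db
    W -= alpha * grad_W                # step in the direction of the negative gradient
    b -= alpha * grad_b

print(W, b)                            # roughly W ≈ 58.5, b ≈ 247 (in these units)
print(W * 2.0 + b)                     # ≈ 364, i.e. about $364,000 for a 2,000 sq ft house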

2023 Introduction to Artificial Neural Networks 21


Artificial Neural Networks
Gradient Descent

2023 Introduction to Artificial Neural Networks 22


Artificial Neural Networks
Gradient Descent

2023 Introduction to Artificial Neural Networks 23


Perceptron

The Perceptron was formally proposed by Rosenblatt in the late 1950s, building on the neuron model of McCulloch and Pitts from the 1940s, with the purpose of mathematically modeling the human neuron. Although it served as the basis for many algorithms, its discriminative power is limited, as it can only learn hyperplanes as decision functions.

Problem definition: let X = {(x1, y1), (x2, y2), . . . , (xz, yz)} be a dataset where xi ∈ Rⁿ⁺¹ corresponds to the input data, and yi ∈ {−1, +1} denotes its respective output value. Also, X can be partitioned as follows: X = X¹ ∪ X², where X¹ and X² denote the training and test data sets, respectively. Our goal is, given the training set, to learn a function h : Rⁿ⁺¹ → {−1, +1} that can correctly assign a class to a given sample.

2023 Introduction to Artificial Neural Networks 24


Now, let's adapt the threshold activation function so that we can create our hypothesis function:

hw(x) = +1 if wᵀx ≥ θ, and −1 otherwise.   (1)

To simplify the notation, it is usual to bring θ to the left side of the inequality and assign w0 = −θ. Again, we'll consider x0 = 1. Thus, we have the updated hypothesis function as follows:

hw(x) = +1 if wᵀx ≥ 0, and −1 otherwise.   (2)

2023 Introduction to Artificial Neural Networks 25


Let's revisit some concepts of Analytical Geometry. Suppose we have two vectors u = [u1 u2] and v = [v1 v2]. Geometrically, we can represent them as follows:

Recall that ∥u∥ = √(u1² + u2²) denotes the length (magnitude) of u.

2023 Introduction to Artificial Neural Networks 26


We also have the definition of the scalar projection between two vectors, given as follows:

So, we can write the inner product between two vectors as wᵀx = ∥w∥∥x∥ cos ϕ, where ϕ is the angle between w and x. Remember that we have the following situations:
• cos ϕ > 0 when 0 < ϕ < 90°;
• cos ϕ < 0 when 90° < ϕ < 270°;
• cos ϕ = 0 when ϕ = 90° or ϕ = 270°.

2023 Introduction to Artificial Neural Networks 27


Going back to our initial problem, we have that the equation wᵀx = 0 defines a hyperplane orthogonal to the weight vector w, shifted by −θ (assuming w0 = −θ and x0 = 1). Let's assume θ = 0, meaning the hyperplane passes through the origin.

2023 Introduction to Artificial Neural Networks 28


How do we learn the weight set w? The intuitive idea is to adjust the weight vector in a way that the samples are correctly positioned in the feature space. In the example below, we have a dataset X¹ = {(x1, +1), (x2, +1), (x3, −1), (x4, −1)}, where the hyperplane wᵀx = 0 is already correctly positioned.

The weight update rule is given by the following formula:

w(t+1) = w(t) + α(yi − hw(t)(xi))xi,   (3)

for each i = 1, 2, . . . , m.

2023 Introduction to Artificial Neural Networks 29


But how does it work in practice? Suppose a sample x ∈ X such that y = +1 and hw (x) = −1.
For a value of α = 0.5, we have:

w(t+1) = w(t) + α(1 − (−1))x = w(t) + 0.5(2)x = w(t) + x.   (4)

Thus, the weight vector w will be rotated toward x so that wᵀx becomes positive.

Figure: Prediction, Update, and Learned Hyperplane.


2023 Introduction to Artificial Neural Networks 30
Similarly, if the label of x is negative, i.e., y = −1, but hw(x) = +1, we have:

w(t+1) = w(t) + α(−1 − (+1))x = w(t) + 0.5(−2)x = w(t) − x.   (5)

Thus, the weight vector will be rotated (through projections) to the other side. For a dataset
that is linearly separable, it has been mathematically proven that the Perceptron algorithm
has guaranteed convergence.

2023 Introduction to Artificial Neural Networks 31


How does its algorithm work? Let's see (a code sketch follows the steps below):
1 Assign random weights to w.
2 Initialize α.
3 t = 0.
4 For each sample xi ∈ X¹, do:
1 w(t+1) = w(t) + α(yi − hw(t)(xi))xi
5 Repeat step 4 until some convergence criterion is met.
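A minimal sketch of these steps on a toy, linearly separable dataset (labels in {−1, +1}, with x0 = 1 prepended as the bias input; the data and learning rate are made up for illustration):

import numpy as np

def h(w, x):
    return 1 if np.dot(w, x) >= 0 else -1

# Toy training set X1: x0 = 1 is the bias input, labels are -1/+1
X1 = [(np.array([1.0,  2.0,  1.0]), +1),
      (np.array([1.0,  1.5,  2.0]), +1),
      (np.array([1.0, -1.0, -1.5]), -1),
      (np.array([1.0, -2.0, -1.0]), -1)]

w = np.random.randn(3)              # step 1: random weights
alpha = 0.5                          # step 2: initialize the learning rate
for epoch in range(100):             # step 5: repeat until convergence (here: no mistakes in an epoch)
    mistakes = 0
    for x, y in X1:                  # step 4: w(t+1) = w(t) + alpha*(y - h(x))*x
        pred = h(w, x)
        if pred != y:
            w = w + alpha * (y - pred) * x
            mistakes += 1
    if mistakes == 0:
        break
print(w)                             # a separating hyperplane w^T x = 0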

2023 Introduction to Artificial Neural Networks 32


Let's see some examples of the Perceptron's functioning. Consider solving the logical equation y = x1 AND x2. For two inputs, we have 2² = 4 possible samples, i.e., our dataset is composed of the following elements: X = {([0 0], 0), ([0 1], 0), ([1 0], 0), ([1 1], 1)}. Can we find the separating hyperplane?
Our hypothesis function is given by
hw (x) = g(−30 + 20x1 + 20x2 ). Using
g as a logistic function, we have:

x1 x2 hw (x)
0 0 g(−30) ≈ 0
0 1 g(−10) ≈ 0
1 0 g(−10) ≈ 0
1 1 g(10) ≈ 1
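Evaluating this hypothesis in Python reproduces the table above (the weights −30, 20, 20 are the ones suggested for the AND gate):

import math

def g(a):                                  # logistic (sigmoid) function
    return 1.0 / (1.0 + math.exp(-a))

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a = -30 + 20 * x1 + 20 * x2
    print(x1, x2, round(g(a)))             # outputs 0, 0, 0, 1  ->  x1 AND x2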

2023 Introduction to Artificial Neural Networks 33


Consider, now, the problem of solving the following logical equation: y = x1 OR x2. For two inputs, we have 2² = 4 possible samples, i.e., our dataset is composed of the following elements: X = {([0 0], 0), ([0 1], 1), ([1 0], 1), ([1 1], 1)}. Can we find the separating hyperplane?
Our hypothesis function is given by
hw (x) = g(−10 + 20x1 + 20x2 ). Using
g as a logistic function, we have:

x1 x2 hw (x)
0 0 g(−10) ≈ 0
0 1 g(10) ≈ 1
1 0 g(10) ≈ 1
1 1 g(30) ≈ 1

2023 Introduction to Artificial Neural Networks 34


Consider, now, the problem of solving the following logical equation: y = NOT x1 . Now, we
have only one input, that is, x1 . Thus, our dataset is composed of the following elements:
X = {(0, 1), (1, 0)}. Can we find the separating hyperplane?

Our hypothesis function is given by


hw (x) = g(10 − 20x1 ). Using g as
a logistic function, we have:

x1 hw (x)
0 g(10) ≈ 1
1 g(−10) ≈ 0

2023 Introduction to Artificial Neural Networks 35


Perceptron - What it can’t do!

• The XOR function:

2023 Introduction to Artificial Neural Networks 36


Activation Functions
• Multiply the input by its weights, add the bias, and apply activation;
• Sigmoid, Hyperbolic Tangent, Rectified Linear Unit;
• Use a differentiable function instead of the step function;

2023 Introduction to Artificial Neural Networks 37


Adding power to the ANN
Some examples of activation functions:
• Logistic function (sigmoid): g(a) = 1 / (1 + e^(−a)), such that g(a) ∈ [0, 1].
• Threshold function (step): gθ(a) = 1 if wᵀx ≥ θ, and 0 otherwise; such that gθ(a) ∈ {0, 1}.
• Hyperbolic tangent function: g(a) = 2σ(2a) − 1, such that g(a) ∈ [−1, 1] and σ(·) corresponds to the logistic function.
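A minimal numpy sketch of these three activations; the last line checks the identity 2σ(2a) − 1 = tanh(a):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))        # g(a) in [0, 1]

def step(a, theta=0.0):
    return np.where(a >= theta, 1, 0)      # g(a) in {0, 1}

def tanh_act(a):
    return 2 * sigmoid(2 * a) - 1          # g(a) in [-1, 1]

a = np.linspace(-3, 3, 7)
print(np.allclose(tanh_act(a), np.tanh(a)))   # True: 2*sigma(2a) - 1 equals tanh(a)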

2023 Introduction to Artificial Neural Networks 38


It’s desirable for an activation function to be differentiable. How to choose it? It depends on
its application.

Suppose g(a) = 1 / (1 + e^(−a)). We have g′(a) = g(a)(1 − g(a)). Note that g′(a) saturates when a > 5 or a < −5. Furthermore, g′(a) ≤ 1/4 for all a (so it is always smaller than 1). This means that for networks with many layers, the gradient tends to vanish during training.

2023 Introduction to Artificial Neural Networks 39


Suppose g(a) = 2σ(2a) − 1. We have g′(a) = 1 − g²(a). Although saturation still occurs, g′(a) reaches higher values, attaining its maximum of 1 when a = 0.

2023 Introduction to Artificial Neural Networks 40


Perceptron - Solving the XOR problem
• 3D example of the solution of learning the OR function:
• Using Sigmoid function;

2023 Introduction to Artificial Neural Networks 41


Perceptron - Solving the XOR problem

• Maybe there is a combination of functions that could create hyperplanes that separate
the XOR classes:
• By increasing the number of layers we increase the complexity of the function represented by
the ANN:

2023 Introduction to Artificial Neural Networks 42


Perceptron - Solving the XOR problem

• The combination of the layers:

2023 Introduction to Artificial Neural Networks 43


Multilayer Perceptron
So, what would be a Multilayer Perceptron (MLP) Neural Network? Basically, it’s a group of
neurons that, when combined, allow learning a greater number of decision functions.

In the illustration above, a(j)i denotes neuron i from layer j, and W(l) is the weight matrix connecting layers l and l + 1. This architecture is generally represented as n:3:1.

2023 Introduction to Artificial Neural Networks 45


Considering the previous neural network, let's analyze a more specific situation. The activation of the first hidden neuron is

a(2)1 = g(w(1)01 x0 + w(1)11 x1 + w(1)21 x2 + . . . + w(1)n1 xn),  where  Σ(i=0..n) w(1)i1 xi = b(2)1.

The final decision function of the neural network is given by the following formulation:

hw(x) = a(3)1 = g(w(2)01 a(2)0 + w(2)11 a(2)1 + w(2)21 a(2)2).   (6)

2023 Introduction to Artificial Neural Networks 46


And in the case of problems with multiple classes? In this case, for a problem with c classes,
our output layer needs c neurons.

We can use the one-hot encoding methodology to represent each output neuron, where hw(x) ∈ R³. For c = 3 classes:

hw(x) ≈ [1 0 0]ᵀ for Class 1,   hw(x) ≈ [0 1 0]ᵀ for Class 2,   hw(x) ≈ [0 0 1]ᵀ for Class 3.

The same happens with the label y of each sample, which now becomes a vector y ∈ R3 .
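A small PyTorch sketch of this encoding (F.one_hot assumes the labels are integer class indices):

import torch
import torch.nn.functional as F

y = torch.tensor([0, 1, 2, 1])             # class indices for 4 samples, c = 3 classes
y_onehot = F.one_hot(y, num_classes=3)     # shape (4, 3): one column per class
print(y_onehot)
# tensor([[1, 0, 0],
#         [0, 1, 0],
#         [0, 0, 1],
#         [0, 1, 0]])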

2023 Introduction to Artificial Neural Networks 47


There are several training algorithms for MLP Neural Networks, where the most well-known is
called backpropagation. It has two steps: (i) forward propagation and (ii) backward propaga-
tion. Before studying its operation, let’s take a look at the cost function:
J(w) = (1/m) Σ(i=1..m) Σ(k=1..c) [ −y(k)i log h(k)w(xi) − (1 − y(k)i) log(1 − h(k)w(xi)) ].   (7)

By analogy, the above formulation encompasses a neural network with c logistic regressors if
we have a logistic activation function in the output layer.
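A minimal numpy sketch of the cost in Equation (7), assuming the labels are one-hot encoded and the network outputs probabilities strictly between 0 and 1 (a small epsilon guards the logarithms):

import numpy as np

def cross_entropy_cost(Y, Y_hat, eps=1e-12):
    """Y, Y_hat: arrays of shape (m, c) with one-hot labels and predicted probabilities."""
    m = Y.shape[0]
    Y_hat = np.clip(Y_hat, eps, 1 - eps)     # avoid log(0)
    return -np.sum(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat)) / m

Y     = np.array([[1, 0, 0], [0, 1, 0]])
Y_hat = np.array([[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]])
print(cross_entropy_cost(Y, Y_hat))          # ~0.45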

2023 Introduction to Artificial Neural Networks 48


We can use, once again, gradient descent to optimize the cost function J(w). However, note
that the problem becomes more complex because we need to calculate the partial derivatives
with respect to all the weights of the network, i.e.,

∂J(w) / ∂w(l)ij,   (8)

where l = 1, 2, . . . , L − 1 and L denotes the number of layers in the neural network.

2023 Introduction to Artificial Neural Networks 49


Let’s consider a network of type 3:4:4:2 illustrated below. Suppose we have only one sample in
the training set. In this case, the forward step is given by the following stages:
1 a(1) ← x
2 b(2) ← (W(1))ᵀ a(1)
3 a(2) ← g(b(2))
4 b(3) ← (W(2))ᵀ a(2)
5 a(3) ← g(b(3))
6 b(4) ← (W(3))ᵀ a(3)
7 hw(x) = a(4) ← g(b(4))
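A minimal numpy sketch of this forward step for a 3:4:4:2 network, assuming a logistic activation g and random weight matrices W(l) of shape (units in layer l, units in layer l + 1); bias terms are omitted to keep the sketch short:

import numpy as np

def g(b):
    return 1.0 / (1.0 + np.exp(-b))            # logistic activation

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 2]                            # a 3:4:4:2 network
W = [rng.normal(size=(sizes[l], sizes[l + 1])) for l in range(3)]   # W(1), W(2), W(3)

x = np.array([0.5, -1.0, 2.0])                  # a single training sample
a = x                                           # stage 1: a(1) <- x
for Wl in W:                                    # stages 2-7: b(l+1) <- W(l)^T a(l), a(l+1) <- g(b(l+1))
    b = Wl.T @ a
    a = g(b)
print(a)                                        # h_w(x) = a(4), two outputs in (0, 1)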

2023 Introduction to Artificial Neural Networks 50


For the backpropagation step, we define a new variable δ(l)j, which denotes the partial error accumulated in neuron j of layer l. This error must be calculated differently for the output neurons and the hidden layers, as follows:

• Output layer (l = 4):

δ(4)j = a(4)j − yj = h(j)w(x) − yj.   (9)

In vector notation, we have δ(4) = a(4) − y = hw(x) − y.

2023 Introduction to Artificial Neural Networks 51


• Hidden layers l = {2, 3}:
• δ(3) = (W(3) δ(4)) .∗ g′(b(3))
• δ(2) = (W(2) δ(3)) .∗ g′(b(2))
In practice, we have the following formulation for error propagation in the intermediate layers:

δ(l) = (W(l) δ(l+1)) .∗ g′(b(l)).   (10)

2023 Introduction to Artificial Neural Networks 52


The name backpropagation comes from the fact that the algorithm "propagates backward" the estimated error in each layer. The partial derivatives can be calculated as follows:

∂J(w) / ∂w(l)ij = a(l)i δ(l+1)j.   (11)
Note, then, that the partial derivatives are calculated with respect to all the weights of the
neural network.

2023 Introduction to Artificial Neural Networks 53


Here is the backpropagation algorithm for training an MLP neural network (a code sketch follows the list).
1 Assign random weights to w(l)ij for all l, i, j.
2 Execute the steps below until the stopping criterion is met (epoch loop):
1 ∆(l)ij = 0 for all l, i, j (variable used to accumulate ∂J(w)/∂w(l)ij).
2 For each sample xi ∈ X¹, do:
1 Execute the forward propagation step to calculate a(l), l = 2, 3, . . . , L.
2 Execute the backward propagation step to calculate δ(l), l = L, L − 1, . . . , 2 (error in each neuron).
3 ∆(l)ij = ∆(l)ij + a(l)i δ(l+1)j (partial derivatives are accumulated).
3 D(l)ij = (1/m) ∆(l)ij for all l, i, j (calculate the average gradient).
4 w(l)ij = w(l)ij − α D(l)ij for all l, i, j (update weights with gradient descent).
5 Evaluate the cost function J(w).
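A compact numpy sketch of this algorithm for the same 3:4:4:2 architecture, assuming logistic activations in every layer, made-up training data, and bias terms omitted for brevity (the learning rate and epoch count are arbitrary; W[0] plays the role of W(1), and so on):

import numpy as np

def g(b):
    return 1.0 / (1.0 + np.exp(-b))                  # logistic activation

def g_prime(b):
    s = g(b)
    return s * (1 - s)                               # g'(b) = g(b)(1 - g(b))

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 2]
W = [rng.normal(scale=0.5, size=(sizes[l], sizes[l + 1])) for l in range(3)]   # step 1: random weights

X = rng.normal(size=(10, 3))                         # 10 made-up samples
Y = np.eye(2)[rng.integers(0, 2, size=10)]           # one-hot labels for c = 2 classes
alpha, m = 0.5, X.shape[0]

for epoch in range(100):                             # step 2: epoch loop
    Delta = [np.zeros_like(Wl) for Wl in W]          # step 2.1: gradient accumulators
    for x, y in zip(X, Y):
        # forward propagation: a(1) <- x, b(l+1) <- W(l)^T a(l), a(l+1) <- g(b(l+1))
        a_list, b_list, a = [x], [], x
        for Wl in W:
            b = Wl.T @ a
            a = g(b)
            b_list.append(b)
            a_list.append(a)
        # backward propagation: delta(L) = a(L) - y, then delta(l) = (W(l) delta(l+1)) .* g'(b(l))
        delta = a_list[-1] - y
        for l in range(len(W) - 1, -1, -1):
            Delta[l] += np.outer(a_list[l], delta)   # step 2.2.3: accumulate a(l)_i * delta(l+1)_j
            if l > 0:
                delta = (W[l] @ delta) * g_prime(b_list[l - 1])
    for l in range(len(W)):
        W[l] -= alpha * (Delta[l] / m)               # steps 2.3-2.4: average gradient, then update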

2023 Introduction to Artificial Neural Networks 54


Summarizing: forward step.

2023 Introduction to Artificial Neural Networks 55


Summarizing: backward step.

2023 Introduction to Artificial Neural Networks 56


Predicting probabilities
• Imagine that we have more than 2 classes to output;
• One of the most popular uses for ANNs;

2023 Introduction to Artificial Neural Networks 57


Predicting probabilities

• The Softmax function;


• Takes an array and outputs a probability distribution, i.e., the probability of the input example belonging to each of the classes in our problem;
• One of the activation functions available in PyTorch:

return F.log_softmax(output, dim=1)

Note
Softmax - function that takes as input a vector of K real numbers, and normalizes it into a
probability distribution
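A small numpy sketch of the function itself (subtracting the maximum is a standard numerical-stability trick):

import numpy as np

def softmax(z):
    z = z - np.max(z)           # shift so the largest entry is 0 (avoids overflow in exp)
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # ~[0.66, 0.24, 0.10], sums to 1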

2023 Introduction to Artificial Neural Networks 58


Loss functions

• For regression problems


• Mean squared error is not always the best choice;
• What if we have a three-class problem?
• Alternatives: mean_absolute_error, mean_squared_logarithmic_error;

Note
The logarithm changes the scale, which helps when the error can grow very fast;

2023 Introduction to Artificial Neural Networks 59


Loss functions

• Cross Entropy loss:


• Default loss function to use for binary classification problems.
• Measures the performance of a model whose output is a probability value between 0 and
1;
• Loss increases as the predicted probability diverges from the actual label;
• A perfect model would have a log loss of 0;

Note
As the predicted probability of the correct class decreases, the log loss increases rapidly: for instance, when the model should answer 1 but assigns it a very low probability;
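A small PyTorch sketch of this behavior (the numbers are made up; nn.BCELoss expects predicted probabilities between 0 and 1):

import torch
import torch.nn as nn

loss = nn.BCELoss()
target = torch.tensor([1.0])                   # the correct answer is 1
print(loss(torch.tensor([0.9]), target))       # small loss (~0.11): confident and correct
print(loss(torch.tensor([0.1]), target))       # large loss (~2.30): the model should answer 1 but gives a low probability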

2023 Introduction to Artificial Neural Networks 60


Even with relatively simple models, we can have a good fit on the training
set

2023 Introduction to Artificial Neural Networks 61


Would that be true?

• Models tend to become too specialized in the training set.


• Models adhering perfectly to the shown examples risk overfitting.
• Remember: The training set is not a perfect representation of the real world.
• The main goal is to model a phenomenon through examples; merely classifying the
training set is worthless.

2023 Introduction to Artificial Neural Networks 62


Figure: (Left): Underfit, (Center): Fit, (Right): Overfit

2023 Introduction to Artificial Neural Networks 63


How to know if the model is overfitting?

• Always evaluate models on samples the model has never seen.


• Divide data into training and test sets, possibly using cross-validation.

2023 Introduction to Artificial Neural Networks 64


Portland Study: Tree

• Training set: R² = 1.0

2023 Introduction to Artificial Neural Networks 65


Portland Study: Tree

• Test set: R² = 0.43

2023 Introduction to Artificial Neural Networks 66


Interpreting the test set

• Does a perfect metric on the test set mean the model is perfect?


• Not necessarily: In no non-trivial problem will you have access to a completely
representative database of the problem.
• Evaluating with a test set helps, but doesn’t solve the problem.
• There will never be enough examples to perfectly model the phenomenon.

2023 Introduction to Artificial Neural Networks 67


Model error analysis

• Prediction error can be divided into three parts:


• Irreducible Error: cannot be eliminated, regardless of the algorithm used.
• Introduced from the chosen framing of the problem.
• Caused by unknown factors.
• Bias Error: assumptions made by a model to make the target function easier to learn.
• Variance Error: the amount the estimate of the target function will change if different
training data is used.

2023 Introduction to Artificial Neural Networks 68


Bias Error

• Difference between the expected (or average) prediction of our model and the correct
value we are trying to predict.
• Imagine repeating the entire model-building process more than once:
• Each time you gather new data and run a new analysis, you create a new model.
• Due to randomness in the underlying data sets, the resulting models will have a variety of
predictions.
• Measures how far, on average, the predictions of these models are from the correct
value.
• Our model has bias if it systematically predicts below or above the target variable.

2023 Introduction to Artificial Neural Networks 69


Variance Error

• In a sense, it captures the model’s generalization capability.


• How much our prediction would change if we trained it with different data.
• Ideally, it shouldn’t change much from one training data set to the next.
• Algorithms with high variance are strongly influenced by the specifications of the
training data.
• Generally, nonlinear machine learning algorithms that have a lot of flexibility have high
variance.
• e.g., Polynomial Regression with high-degree polynomials!

2023 Introduction to Artificial Neural Networks 70


Dilemma: Variance x Bias

• Low bias: suggests fewer assumptions about the shape of the target function.
• Regression Trees, KNN Regression.
• High bias: suggests more assumptions about the shape of the target function.
• Linear Regression, Logistic Regression.
• Low variance: suggests small changes in the estimated target function with changes in
the training data.
• Linear Regression, Logistic Regression.
• High variance: suggests large changes in the estimated target function with changes in
the training data.
• Regression Trees, KNN Regression.

2023 Introduction to Artificial Neural Networks 71


Dilemma: Variance x Bias

• Increasing bias will decrease variance.


• Increasing variance will decrease bias.

2023 Introduction to Artificial Neural Networks 72


Dilemma: Variance x Bias
• The tradeoff

2023 Introduction to Artificial Neural Networks 73


• A very simple model with few parameters has high bias and low variance.
• A complex model with a large number of parameters will have high variance and low bias.
• Seek balance: a model that fits well without overfitting or underfitting the data.

2023 Introduction to Artificial Neural Networks 74


• Models should try to generalize beyond what is observed in the training set.
• Regularization plays a role in controlling classifiers’ overfitting.

2023 Introduction to Artificial Neural Networks 75


Artificial Neural Networks
Dealing with overfitting

• Dropout layers:
• Randomly disable some of the neurons during the training passes.

2023 Introduction to Artificial Neural Networks 76


Artificial Neural Networks
Dealing with overfitting

• Dropout layers:
# Drop half of the neurons outputs from the previous layer
self.fc1_drop = nn.Dropout(0.5)

• Note:
• "Drops out" a random set of activations in that layer by setting them to zero.
• forces the network to be redundant.
• the net should be able to provide the right classification for a specific example even if some
of the activations are dropped out.
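For context, a minimal (hypothetical) PyTorch module wiring such a dropout layer between two fully connected layers; the layer sizes are arbitrary and chosen for 28×28 inputs:

import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        # Drop half of the neurons outputs from the previous layer
        self.fc1_drop = nn.Dropout(0.5)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 784)                    # flatten 28x28 images
        x = F.relu(self.fc1(x))
        x = self.fc1_drop(x)                   # active in model.train() mode, disabled in model.eval()
        return F.log_softmax(self.fc2(x), dim=1)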

2023 Introduction to Artificial Neural Networks 77


Artificial Neural Networks
Larger Example

• The MNIST dataset: database of handwritten digits.


• The dataset is available through torchvision, the computer-vision package that accompanies PyTorch.
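A small sketch of loading it (the path and batch size below are arbitrary):

import torch
from torchvision import datasets, transforms

train_set = datasets.MNIST('./data', train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)    # torch.Size([64, 1, 28, 28])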

2023 Introduction to Artificial Neural Networks 78


Artificial Neural Networks
The MNIST MLP

• Try to improve the classification results using this notebook.


• Things to try:
• Increase the number of neurons at the first layer.
• Change the optimizer and the loss function.
• Try with other optimizers.
• Try adding some extra layers.

2023 Introduction to Artificial Neural Networks 79


Artificial Neural Networks
The MNIST MLP

• Try to improve the classification results using this notebook.


• Things to try:
• Try adding Dropout layers.
• Increase the number of epochs.
• Try to normalize the data.
• What is the best accuracy?
• My solution: Solution notebook.

2023 Introduction to Artificial Neural Networks 80
