Neural Networks
Jorge Andreu
Abstract— This document presents information related to the laboratory practices on Neural Networks carried out in the course EI1028-IR2128-MT1028 – Intelligent Systems (2024-2025) at Jaume I University.

I. INTRODUCTION
A neural network is an information technology system, based on the structure of the human brain, that enables computers to use artificial intelligence.

Neural networks consist of at least a two-layer model (an input layer and an output layer) and may also include additional intermediate layers known as hidden layers.

Information reaches the neurons in the input layer as a signal and is processed there; each connection is assigned a numerical weight that represents its level of importance. Using the activation function and the threshold value, the neuron's output value is calculated and weighted and, based on the result, passed on to the linked neurons.

Through these connections and weights, the algorithm is designed to generate a result for each input; training is used to optimize the weights so that the algorithm achieves increasingly accurate results based on historical data. We are mainly going to talk about classifying messages, images, signals, etc., which is what we mostly deal with in the lab practices.

II. PERCEPTRONS

A perceptron is an algorithm designed for supervised learning of binary classifiers (functions that determine whether an input, a vector of numbers, belongs to a specific class). It serves as a linear classifier: a classification algorithm that makes its predictions based on a linear predictor function combining the input feature vector with a set of weights.

The perceptron algorithm was developed in the late 1950s, and its first custom hardware implementation became one of the first artificial neural networks to be created.

The perceptron was the first neural network capable of learning, and it is composed only of input and output neurons. The input neurons have two states, ON and OFF, and the output neurons use a simple threshold activation function to produce a decision. In its simplest form, this type of algorithm can only solve linearly separable problems.
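As an illustration of this kind of threshold unit, the following is a generic textbook sketch in Python (not code taken from the lab) of a perceptron learning the logical AND function, a linearly separable problem, with the classic update rule:

import numpy as np

def step(z):
    # Threshold activation: ON (1) if the weighted sum reaches the threshold, else OFF (0)
    return 1 if z >= 0 else 0

# Logical AND: a linearly separable problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights
b = 0.0          # bias (threshold)
lr = 0.1         # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        out = step(w @ xi + b)
        # Perceptron rule: nudge the weights toward the target on mistakes
        w += lr * (target - out) * xi
        b += lr * (target - out)

print([step(w @ xi + b) for xi in X])  # [0, 0, 0, 1]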
network. For this, in the code we have used the 'sgd' solver, one hidden layer with 5 neurons, and a maximum of 4000 iterations. We obtained the graph after 2578 iterations, below the maximum allowed, and the score is 100%: as you can see in Fig. 7, the points are divided correctly.
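A minimal sketch of this configuration with scikit-learn's MLPClassifier (the two-blob dataset below is only a stand-in; the actual points are those of Fig. 7, which we do not reproduce here):

from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier

# Stand-in 2D dataset; the lab's actual points are those of Fig. 7
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# 'sgd' solver, one hidden layer with 5 neurons, at most 4000 iterations
clf = MLPClassifier(solver='sgd', hidden_layer_sizes=(5,),
                    max_iter=4000, random_state=0)
clf.fit(X, y)

# n_iter_ is the number of iterations actually run (2578 in our case,
# below the maximum); score() is the fraction classified correctly
print(clf.n_iter_, clf.score(X, y))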
B. Classification
The goal of this practice is to apply the multilayer perceptron, which extends the linear perceptron, to a classification task. Specifically, we will load the data to be classified and divide it into two sets: the training set, which will be used to get the neurons to learn to do their work by themselves, and the test set, which will determine whether we have succeeded in getting the neurons to do their job. In addition, we will have to select a predictive model, which will be the multilayer perceptron. Finally, after executing the code and seeing that it works, we will be able to evaluate it.

For the generation of samples in this practice, we can choose between two datasets; we will choose the moons dataset. In order to avoid the very common mistake of testing the program on the same data it was trained on, we will split the samples randomly.
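A minimal sketch of this step with scikit-learn (the noise level and the test-set proportion are illustrative choices, not values taken from the practice):

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Generate the moons dataset; n_samples matches one of the runs below
X, y = make_moons(n_samples=2500, noise=0.2, random_state=0)

# Random split, so the model is never tested on the data it was trained on
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)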
When creating the multilayer perceptron model that separates the blue points from the red points with two planes, we have used a hidden layer with 5 neurons and a maximum of 4000 iterations, like the previous exercise. The fitting function iterates automatically until convergence is achieved or until the maximum number of iterations is reached. In our case we did reach convergence, after 276 iterations, obtaining a score of 89. This score is the percentage of the test data that is classified correctly.
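A sketch of this fitting and scoring step, reusing the X_train/X_test split from the previous sketch (the solver and the random seed are illustrative assumptions):

from sklearn.neural_network import MLPClassifier

# One hidden layer with 5 neurons, at most 4000 iterations
clf = MLPClassifier(solver='sgd', hidden_layer_sizes=(5,),
                    max_iter=4000, random_state=0)

# fit() iterates until convergence or until max_iter is reached
clf.fit(X_train, y_train)

# Iterations actually run (276 in the run described above)
print(clf.n_iter_)

# Fraction of the test data classified correctly
print(clf.score(X_test, y_test))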
With 2500 samples we get a score of 92, going up to 142 iterations; there is an improvement in the score but also in the number of iterations.

Fig. 9 2500 data plot

In the case of 25000 samples, we get a score of 91.70, similar to the 2500-sample run but with fewer iterations, only 86.

Fig. 10 25000 data plot

With these checks we can conclude that the larger the number of samples, the fewer iterations we need to obtain a similar percentage of correct classification.
satisfactory result, as we can see in Figure 11, and we can also observe the main classification metrics.
“The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The recall is intuitively the ability of the classifier to find all the positive samples. The f1-score is a weighted average of the precision and recall. The support is the number of occurrences of each class.” [1]
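These are exactly the fields printed by scikit-learn's classification_report; a one-line sketch, reusing the fitted clf and the test split from the sketches above:

from sklearn.metrics import classification_report

# Prints precision, recall, f1-score and support for each class
print(classification_report(y_test, clf.predict(X_test)))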
Optimizer   Epochs   Accuracy   Acc val   Loss val   Acc test
nadam       20       0.976      0.928     0.426      0.926

Fig. 12 Image Classifications
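The table summarizes a run with the nadam optimizer over 20 epochs. As a minimal sketch of such a setup in Keras (the dataset and the network architecture below are hypothetical; only the optimizer and the number of epochs come from the table):

import tensorflow as tf

# Hypothetical image dataset; the practice's actual images are not reproduced here
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Hypothetical architecture; only optimizer='nadam' and epochs=20
# correspond to the table above
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='nadam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for 20 epochs, holding out part of the training data for validation
model.fit(x_train, y_train, epochs=20, validation_split=0.1)

# Accuracy on the held-out test set
model.evaluate(x_test, y_test)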