Rneuronales2.ipynb - Colaboratory

This notebook loads aircraft flight and sensor-reading data, preprocesses it, builds a neural network model to predict sensor values, trains the model for 100 epochs, and evaluates its performance on validation data.


Team members
Allccahuaman Quichua, Paul

Silvera Marquez, Nestor

Ordoñez Atuncar, Felipe

# load the libraries we will use
 
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
 
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('fast')
 
from keras.models import Sequential
from keras.layers import Dense,Activation,Flatten
from sklearn.preprocessing import MinMaxScaler 
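
Note: on newer TensorFlow 2.x / Colab runtimes the standalone keras imports above may fail or mismatch the installed TensorFlow; the tf.keras equivalents (an environment assumption, not part of the original notebook) would be:

# equivalent imports under TensorFlow 2.x
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten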
 

# load the data
df = pd.read_csv('https://docs.google.com/spreadsheets/d/e/2PACX-1vTQSnzVBCPj-s0wRGIJrkO-cwY4
df.head()

activo_cod ciclos s1 s2 s3 s4 s5 s6 s7 s8 …

0 1 1 518.67 641.82 1589.70 1400.60 14.62 21.61 554.36 2388.06 9046

1 1 2 518.67 642.15 1591.82 1403.14 14.62 21.61 553.75 2388.04 9044

2 1 3 518.67 642.35 1587.99 1404.20 14.62 21.61 554.26 2388.08 9052

3 1 4 518.67 642.35 1582.79 1401.87 14.62 21.61 554.45 2388.11 9049

4 1 5 518.67 642.37 1582.85 1406.22 14.62 21.61 554.00 2388.06 9055

avion2 = df[df['activo_cod'] == 2]
avion2


activo_cod ciclos s1 s2 s3 s4 s5 s6 s7 s8 …

192 2 1 518.67 641.89 1583.84 1391.28 14.62 21.60 554.53 2388.01 90

193 2 2 518.67 641.82 1587.05 1393.13 14.62 21.61 554.77 2387.98 90

194 2 3 518.67 641.55 1588.32 1398.96 14.62 21.60 555.14 2388.04 90

195 2 4 518.67 641.68 1584.15 1396.08 14.62 21.61 554.25 2387.98 90

196 2 5 518.67 641.73 1579.03 1402.52 14.62 21.60 555.12 2388.03 90

... ... ... ... ... ... ... ... ... ... ...

474 2 283 518.67 643.78 1602.03 1429.67 14.62 21.61 551.46 2388.16 90

475 2 284 518.67 643.91 1601.35 1430.04 14.62 21.61 551.96 2388.22 90

476 2 285 518.67 643.67 1596.84 1431.17 14.62 21.61 550.85 2388.20 90

477 2 286 518.67 643.44 1603.63 1429.57 14.62 21.61 551.61 2388.18 9…

478 2 287 518.67 643.85 1608.50 1430.84 14.62 21.61 551.66 2388.20 9…

287 rows × 23 columns

avion2 = avion2.loc[:, ['ciclos','s2']]
avion2

ciclos s2

192 1 641.89

193 2 641.82

194 3 641.55

195 4 641.68

196 5 641.73

... ... ...

474 283 643.78

475 284 643.91

476 285 643.67

477 286 643.44

478 287 643.85

287 rows × 2 columns

avion2.set_index('ciclos', inplace=True)  # index by cycle number
 

avion2


s2

ciclos

1 641.89

2 641.82

3 641.55

4 641.68

5 641.73

... ...

283 643.78

284 643.91

285 643.67

286 643.44

287 643.85

287 rows × 1 columns


avion2.dtypes
 

s2 float64

dtype: object

avion2.plot()

[Output: line plot of sensor s2 over ciclos]

WE CREATE THE NEURAL NETWORK MODEL

## initialize the NN

## PASOS is the number of predictors, from x1 through x12

PASOS = 12
 
# convert series to supervised learning
def series_to_supervised(data, n_in = 1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    avion1 = pd.DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(avion1.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(avion1.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
 
# load dataset
values = avion2.values
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(-1, 1))
values = values.reshape(-1, 1)  # reshape because the series has a single feature
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, PASOS, 1)
reframed.head()

var1(t-12) var1(t-11) var1(t-10) var1(t-9) var1(t-8) var1(t-7) var1(t-6) var1(t-5) …
12 -0.535583 -0.588013 -0.790283 -0.692902 -0.655457 -0.977539 -0.430695 -0.041229 -0

13 -0.588013 -0.790283 -0.692902 -0.655457 -0.977539 -0.430695 -0.041229 -0.468201 -0

14 -0.790283 -0.692902 -0.655457 -0.977539 -0.430695 -0.041229 -0.468201 -0.460693 -0

15 -0.692902 -0.655457 -0.977539 -0.430695 -0.041229 -0.468201 -0.460693 -0.258423 -0

16 -0.655457 -0.977539 -0.430695 -0.041229 -0.468201 -0.460693 -0.258423 -0.722839 -0
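
To make the sliding-window transform concrete, here is a minimal sketch on a hypothetical toy series (the integers 0 through 5 with 2 lags; not part of the original notebook):

# toy series: each row becomes [value(t-2), value(t-1), value(t)]
toy = np.arange(6, dtype='float32').reshape(-1, 1)
print(series_to_supervised(toy, n_in=2, n_out=1))
#    var1(t-2)  var1(t-1)  var1(t)
# 2        0.0        1.0      2.0
# 3        1.0        2.0      3.0
# 4        2.0        3.0      4.0
# 5        3.0        4.0      5.0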


# split into train and test sets
 
values = reframed.values
n_train_days = 287 - (80 + PASOS)  # hold out the last 80 rows for validation
train = values[:n_train_days, :]
test = values[n_train_days:, :]
# split into input and outputs
x_train, y_train = train[:, :-1], train[:, -1]
x_val, y_val = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
x_train = x_train.reshape((x_train.shape[0], 1, x_train.shape[1]))
x_val = x_val.reshape((x_val.shape[0], 1, x_val.shape[1]))
print(x_train.shape, y_train.shape, x_val.shape, y_val.shape)

(195, 1, 12) (195,) (80, 1, 12) (80,)
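
The reshape packs the 12 lags into a single timestep, so each sample is one (1, 12) window. A quick sanity check (a sketch, not in the original notebook):

# the first validation sample: a (1, 12) window of scaled s2 values
print(x_val[0])   # the 12 lags feeding the first validation prediction
print(y_val[0])   # the scaled s2 value those lags should predict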

def crear_modeloFF():
    model = Sequential() 
    model.add(Dense(PASOS, input_shape=(1,PASOS), activation='tanh'))
    model.add(Flatten())
    model.add(Dense(1, activation='tanh'))
    model.compile(loss='mean_absolute_error',optimizer='Adam',metrics=["mse"]) 
    model.summary()
    return model
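
The model.summary() output was not captured in this scrape, but the parameter count follows from the layer sizes: the first Dense layer has 12·12 weights plus 12 biases (156), and the output Dense has 12 weights plus 1 bias (13), for 169 in total. A quick check (a sketch):

m = crear_modeloFF()
# 12*12 + 12 = 156 for the hidden layer, 12 + 1 = 13 for the output layer
assert m.count_params() == (PASOS * PASOS + PASOS) + (PASOS + 1)  # 169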

TRAINING THE NEURAL NETWORK

EPOCHS = 100
 
model = crear_modeloFF()
 
history = model.fit(x_train, y_train, epochs=EPOCHS, validation_data=(x_val, y_val), batch_size=PASOS)  # the batch size was truncated in the capture; PASOS (12) reproduces the 17 steps/epoch shown below
17/17 [==============================] - 0s 2ms/step - loss: 0.1937 - mse: 0.0592 - …
Epoch 73/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1732 - mse: 0.0489 - …
Epoch 74/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1835 - mse: 0.0547 - …
Epoch 75/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1843 - mse: 0.0518 - …
Epoch 76/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1934 - mse: 0.0576 - …
Epoch 77/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1690 - mse: 0.0444 - …
Epoch 78/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1717 - mse: 0.0449 - …
Epoch 79/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1675 - mse: 0.0439 - …
Epoch 80/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1874 - mse: 0.0565 - …
Epoch 81/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1817 - mse: 0.0509 - …
Epoch 82/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1742 - mse: 0.0491 - …
Epoch 83/100
17/17 [==============================] - 0s 4ms/step - loss: 0.1848 - mse: 0.0557 - …
Epoch 84/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1744 - mse: 0.0497 - …
Epoch 85/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1876 - mse: 0.0574 - …
Epoch 86/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1732 - mse: 0.0502 - …
Epoch 87/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1706 - mse: 0.0460 - …
Epoch 88/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1909 - mse: 0.0578 - …
Epoch 89/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1675 - mse: 0.0437 - …
Epoch 90/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1775 - mse: 0.0523 - …
Epoch 91/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1887 - mse: 0.0588 - …
Epoch 92/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1926 - mse: 0.0562 - …
Epoch 93/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1741 - mse: 0.0478 - …
Epoch 94/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1786 - mse: 0.0516 - …
Epoch 95/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1856 - mse: 0.0559 - …
Epoch 96/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1838 - mse: 0.0511 - …
Epoch 97/100
17/17 [==============================] - 0s 2ms/step - loss: 0.1918 - mse: 0.0586 - …
Epoch 98/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1971 - mse: 0.0577 - …
Epoch 99/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1591 - mse: 0.0434 - …
Epoch 100/100
17/17 [==============================] - 0s 3ms/step - loss: 0.1820 - mse: 0.0571 - …

# predict on the validation data
 
results = model.predict(x_val)
plt.figure(figsize=(15,12))
plt.plot(range(len(y_val)),y_val,c='g',label='real')
plt.plot(range(len(results)),results,c='r',label='modelo',ls ='--')
plt.title('validate')
plt.legend(loc='best')
 


[Output: plot comparing real validation values (green) with the model's predictions (red, dashed)]

len(y_val)

80
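
Beyond the visual comparison, the validation fit can be quantified; a minimal sketch using sklearn metrics (an addition, not in the original notebook), with errors expressed in the scaled (-1, 1) units used for training:

from sklearn.metrics import mean_absolute_error, mean_squared_error
print('validation MAE:', mean_absolute_error(y_val, results))
print('validation MSE:', mean_squared_error(y_val, results))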

FORECASTING THE NEXT 12 CYCLES

ultimos_periodos = avion2[263:287]  # the last 24 rows of s2
ultimos_periodos


s2

ciclos

264 643.47

265 643.03

266 643.14

267 642.96

268 643.10

269 643.11

270 643.12

271 643.87

272 643.27

273 643.71

274 643.18

275 643.67

276 643.82

277 643.91

278 643.44

279 643.64

280 643.63

281 643.60

282 643.94

283 643.78

284 643.91

285 643.67

286 643.44

287 643.85

ultimos_periodos.shape

(24, 1)
# drop column 12 because it is the value we want to predict

values = ultimos_periodos.values
values = values.astype('float32')
# normalize features
values = values.reshape(-1, 1)  # reshape because the series has a single feature
scaled = values
reframed = series_to_supervised(scaled, PASOS, 1)
reframed.drop(reframed.columns[[12]], axis=1, inplace=True)  # drop the 'y' column, since that is what we will predict
reframed  # adjust according to the number of predictors

var1(t-12) var1(t-11) var1(t-10) var1(t-9) var1(t-8) var1(t-7) var1(t-6) …

12 643.469971 643.030029 643.140015 642.960022 643.099976 643.109985 643.119995 643

13 643.030029 643.140015 642.960022 643.099976 643.109985 643.119995 643.869995 643

14 643.140015 642.960022 643.099976 643.109985 643.119995 643.869995 643.270020 643

15 642.960022 643.099976 643.109985 643.119995 643.869995 643.270020 643.710022 643

16 643.099976 643.109985 643.119995 643.869995 643.270020 643.710022 643.179993 643

17 643.109985 643.119995 643.869995 643.270020 643.710022 643.179993 643.669983 643

18 643.119995 643.869995 643.270020 643.710022 643.179993 643.669983 643.820007 643

19 643.869995 643.270020 643.710022 643.179993 643.669983 643.820007 643.909973 643

20 643.270020 643.710022 643.179993 643.669983 643.820007 643.909973 643.440002 643

21 643.710022 643.179993 643.669983 643.820007 643.909973 643.440002 643.640015 643

22 643.179993 643.669983 643.820007 643.909973 643.440002 643.640015 643.630005 643

23 643.669983 643.820007 643.909973 643.440002 643.640015 643.630005 643.599976 643

# scale the values (note: this refits the scaler on the last 24 points)

values = ultimos_periodos.values
values = values.astype('float32')
# normalize features
values = values.reshape(-1, 1)  # reshape because the series has a single feature
scaled = scaler.fit_transform(values)
reframed = series_to_supervised(scaled, PASOS, 1)
reframed.drop(reframed.columns[[12]], axis=1, inplace=True)
reframed.head()  # adjust according to the number of predictors

var1(t-12) var1(t-11) var1(t-10) var1(t-9) var1(t-8) var1(t-7) var1(t-6) var1(t-5) …

12 0.040771 -0.857056 -0.632568 -1.000000 -0.714355 -0.693848 -0.673462 0.857178 -0

13 -0.857056 -0.632568 -1.000000 -0.714355 -0.693848 -0.673462 0.857178 -0.367310 0

14 -0.632568 -1.000000 -0.714355 -0.693848 -0.673462 0.857178 -0.367310 0.530640 -0

15 -1.000000 -0.714355 -0.693848 -0.673462 0.857178 -0.367310 0.530640 -0.551025 0

16 -0.714355 -0.693848 -0.673462 0.857178 -0.367310 0.530640 -0.551025 0.448975 0

len(reframed)  # 24 rows minus 12 lags leaves 12 supervised samples

12


# x_test is the last row of reframed
# it will be used to make the future predictions

values = reframed.values
x_test = values[-1:, :]
x_test = x_test.reshape((x_test.shape[0], 1, x_test.shape[1]))
x_test

array([[[ 0.4489746 ,  0.75512695,  0.9387207 , -0.02038574,
          0.38781738,  0.36743164,  0.30615234,  1.        ,
          0.673584  ,  0.9387207 ,  0.4489746 , -0.02038574]]],
      dtype=float32)

# slide the input window: drop the oldest lag and append each new prediction so the forecast can continue

periodos = 12

def agregarNuevoValor(x_test, nuevoValor):
    # shift every lag one position to the left
    for i in range(x_test.shape[2]-1):
        x_test[0][0][i] = x_test[0][0][i+1]
    # put the new prediction in the last slot
    x_test[0][0][x_test.shape[2]-1] = nuevoValor
    return x_test
 
results=[]
for i in range(periodos):
    parcial=model.predict(x_test)
    results.append(parcial[0])
    print(x_test)
    x_test=agregarNuevoValor(x_test,parcial[0])

[[[ 0.4489746 0.75512695 0.9387207 -0.02038574 0.38781738 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574]]]
[[[ 0.75512695 0.9387207 -0.02038574 0.38781738 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687]]]
[[[ 0.9387207 -0.02038574 0.38781738 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 ]]]
[[[-0.02038574 0.38781738 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 ]]]
[[[ 0.38781738 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 ]]]
[[[ 0.36743164 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273]]]
[[[ 0.30615234 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776]]]
[[[ 1. 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776 0.33490598]]]
[[[ 0.673584 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776 0.33490598 0.3814663 ]]]
[[[ 0.9387207 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776 0.33490598 0.3814663 0.3661718 ]]]
[[[ 0.4489746 -0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776 0.33490598 0.3814663 0.3661718 0.36071187]]]
[[[-0.02038574 0.29713687 0.4101143 0.3364859 0.1957471 0.36165273 0.29956776 0.33490598 0.3814663 0.3661718 0.36071187 0.3389988 ]]]
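
The element-by-element shift in agregarNuevoValor can also be written more compactly with np.roll; an equivalent sketch (assuming the same (1, 1, 12) x_test layout):

def agregar_nuevo_valor(x_test, nuevo_valor):
    # rotate the 12-lag window one step left, then overwrite the last slot
    x_test[0, 0] = np.roll(x_test[0, 0], -1)
    x_test[0, 0, -1] = nuevo_valor
    return x_test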

len(results)

12

# invert the scaling so the forecasts are back in the original s2 units
 
adimen = [x for x in results]    
inverted = scaler.inverse_transform(adimen)
inverted

array([[643.59557807],
       [643.6509359 ],
       [643.6148587 ],
       [643.54589808],
       [643.62719021],
       [643.59676918],
       [643.61408456],
       [643.63689866],
       [643.62940451],
       [643.6267292 ],
       [643.61609   ],
       [643.56431157]])

len(inverted)

12

 
y_pred = pd.DataFrame(inverted, columns=['Forecast'], index=[288,289,290,291,292,293,294,295,296,297,298,299])
len(y_pred)

12
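
An equivalent, less error-prone way to build the forecast index (assuming the last observed cycle is 287, as above):

y_pred = pd.DataFrame(inverted, columns=['Forecast'], index=range(288, 288 + periodos))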

y_pred.tail()
 


Forecast

295 643.636899

296 643.629405

297 643.626729

298 643.616090

299 643.564312

df2 = pd.concat([avion2, y_pred])
df2

s2 Forecast

1 641.89 NaN

2 641.82 NaN

3 641.55 NaN

4 641.68 NaN

5 641.73 NaN

... ... ...

295 NaN 643.636899

296 NaN 643.629405

297 NaN 643.626729

298 NaN 643.616090

299 NaN 643.564312

299 rows × 2 columns


df2.plot(figsize=(12,8))


[Output: plot of the observed s2 series with the 12-cycle forecast appended]


