Brain Tumor Detection Using Deep Learning
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization
16.0/16.0 MB 593.0 kB/s eta 0:00:00
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 1.20.3
Uninstalling numpy-1.20.3:
Successfully uninstalled numpy-1.20.3
ERROR: pip's dependency resolver does not currently take into account all
the packages that are installed. This behaviour is the source of the following
dependency conflicts.
rfpimp 1.3.2 requires sklearn, which is not installed.
wrf-python 1.3.4.1 requires basemap, which is not installed.
altair 5.2.0 requires typing-extensions>=4.0.1; python_version < "3.11", but you
have typing-extensions 3.7.4.3 which is incompatible.
bokeh 2.4.3 requires typing-extensions>=3.10.0, but you have typing-extensions
3.7.4.3 which is incompatible.
pandas 1.5.3 requires numpy>=1.20.3, but you have numpy 1.20.0 which is
incompatible.
pingouin 0.5.2 requires scikit-learn<1.1.0, but you have scikit-learn 1.1.3
which is incompatible.
pyportfolioopt 1.5.5 requires numpy<2.0.0,>=1.22.4, but you have numpy 1.20.0
which is incompatible.
sktime 0.14.0 requires numpy<1.23,>=1.21.0, but you have numpy 1.20.0 which is
incompatible.
tensorflow 2.4.1 requires numpy~=1.19.2, but you have numpy 1.20.0 which is
incompatible.
Successfully installed numpy-1.20.0
1 Data pre-processing
[22]: path = '/Users/kipkemoivincent/Desktop/Covid/Data2'
                                   horizontal_flip=True, vertical_flip=True, zoom_range=0.3,
train_generator = train_datagen.flow_from_directory(path,
                                                    target_size=(IMG_WIDTH, IMG_HEIGHT),
                                                    batch_size=BATCH_SIZE,
                                                    class_mode='categorical',
                                                    shuffle=True,
                                                    subset='training')
validation_generator = train_datagen.flow_from_directory(path,
                                                         target_size=(IMG_WIDTH, IMG_HEIGHT),
                                                         batch_size=743,
                                                         class_mode='categorical',
                                                         shuffle=True,
                                                         subset='validation')
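The ImageDataGenerator definition that the augmentation arguments above belong to is cut off in this export. A minimal sketch of what it presumably looked like: the size and batch-size constants and the split fraction are assumptions (only the flip and zoom arguments come from the surviving line, and a validation_split of 0.1 is consistent with the 6,320/703 image counts seen later).

from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_WIDTH, IMG_HEIGHT = 100, 100        # consistent with the (100, 100, 3) input shape used later
BATCH_SIZE = 6320                       # assumed: next(train_generator) later yields 6,320 images

train_datagen = ImageDataGenerator(validation_split=0.1,    # assumed split fraction
                                   horizontal_flip=True,
                                   vertical_flip=True,
                                   zoom_range=0.3)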
Label Mappings for classes present in the training and validation datasets
0 : glioma
1 : meningioma
2 : notumor
3 : pituitary
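The cell that printed this mapping is not visible; it presumably inverted the generator's class_indices dictionary, roughly as follows (the variable name labels is also used by the plotting cell below):

labels = {v: k for k, v in train_generator.class_indices.items()}
print("Label Mappings for classes present in the training and validation datasets\n")
for key, value in labels.items():
    print(f"{key} : {value}")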
[31]: import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(3, 3)
idx = 0
for i in range(3):
    for j in range(3):
        label = labels[np.argmax(train_generator[0][1][idx])]
        ax[i, j].set_title(f"{label}")
        ax[i, j].imshow(train_generator[0][0][idx][:, :, :])
        ax[i, j].axis("off")
        idx += 1
plt.tight_layout()
#plt.suptitle("Sample Training Images", fontsize=21)
plt.show()
[41]: X, y = next(train_generator)
X=(X-X.mean())/X.std()
#X_test, y_test = next(validation_generator)
[44]: ((5056, 100, 100, 3), (1264, 100, 100, 3), (703, 100, 100, 3))
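The cell that produced this shape triple is missing. The numbers are consistent with the 6,320-image batch above being split 80/20 into training and validation arrays, with the 703-image batch from validation_generator held out as a test set. A sketch under that assumption (the use of train_test_split and the random seed are assumptions):

from sklearn.model_selection import train_test_split

X_test, y_test = next(validation_generator)          # as in the commented-out line above
X_test = (X_test - X_test.mean()) / X_test.std()     # assumed: same standardization as X

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_val.shape, X_test.shape)      # (5056, 100, 100, 3) (1264, 100, 100, 3) (703, 100, 100, 3)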
3 A. CustomCNN
[52]: from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam
initializer = tf.keras.initializers.HeNormal()
values = initializer(shape=(2, 2))
#Convolution
x = Conv2D(32, (3, 3), activation="relu")(input_data)
#Pooling
x = MaxPooling2D(pool_size = (4, 4), strides=(4, 4))(x)
#Dropout
x = Dropout(0.25)(x)
# 2nd Convolution
x = Conv2D(32, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size = (2, 2))(x)
#Dropout
x = Dropout(0.3)(x)
#3rd Convolution
x = Conv2D(32, (3, 3), activation='relu')(x)
#Dropout
x = Dropout(0.3)(x)
metrics = ['accuracy'])
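The input layer, the third pooling layer, the classifier head, and most of the compile call are missing from the cell above. A sketch of how the complete cell may have read, consistent with the summary printed below (the hidden-layer activation, the softmax output, and the optimizer settings are assumptions):

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

input_data = Input(shape=(100, 100, 3))
x = Conv2D(32, (3, 3), activation="relu")(input_data)   # -> 98x98x32
x = MaxPooling2D(pool_size=(4, 4), strides=(4, 4))(x)   # -> 24x24x32
x = Dropout(0.25)(x)
x = Conv2D(32, (3, 3), activation="relu")(x)            # -> 22x22x32
x = MaxPooling2D(pool_size=(2, 2))(x)                   # -> 11x11x32
x = Dropout(0.3)(x)
x = Conv2D(32, (3, 3), activation="relu")(x)            # -> 9x9x32
x = MaxPooling2D(pool_size=(2, 2))(x)                   # -> 4x4x32 (implied by the summary)
x = Dropout(0.3)(x)
x = Flatten()(x)                                        # -> 512
x = Dense(128, activation="relu")(x)                    # activation assumed
output = Dense(4, activation="softmax")(x)              # 4 tumour classes
cnn = Model(input_data, output)
cnn.compile(optimizer=Adam(),                           # default learning rate assumed
            loss='categorical_crossentropy',
            metrics=['accuracy'])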
[55]: cnn.summary()
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 100, 100, 3)] 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 98, 98, 32) 896
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 24, 24, 32) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 24, 24, 32) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 22, 22, 32) 9248
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 11, 11, 32) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 11, 11, 32) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 9, 9, 32) 9248
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 4, 4, 32) 0
_________________________________________________________________
dropout_5 (Dropout) (None, 4, 4, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 65664
_________________________________________________________________
dense_3 (Dense) (None, 4) 516
=================================================================
Total params: 85,572
Trainable params: 85,572
Non-trainable params: 0
_________________________________________________________________
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
checkpoint1 = ModelCheckpoint('weights.best_custom_cnn2.hdf5', monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
es = EarlyStopping(monitor='val_accuracy', patience=20)
rlrop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=10)
callbacks_list = [checkpoint1,es,rlrop]
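The training call itself is not visible in this export. With 5,056 training images and 1,264 steps per epoch in the log below, the batch size was presumably 4; a sketch under that assumption (the choice of validation data is also an assumption):

history1 = cnn.fit(X_train, y_train,
                   batch_size=4,                        # 5056 / 4 = 1264 steps per epoch
                   epochs=100,
                   validation_data=(X_val, y_val),      # assumed
                   callbacks=callbacks_list)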
Epoch 1/100
1264/1264 [==============================] - 24s 19ms/step - loss: 1.2916 -
accuracy: 0.4375 - val_loss: 0.8828 - val_accuracy: 0.6714
1264/1264 [==============================] - 24s 19ms/step - loss: 0.6538 -
accuracy: 0.7470 - val_loss: 0.5994 - val_accuracy: 0.7411
Epoch 00012: val_accuracy did not improve from 0.89189
Epoch 13/100
1264/1264 [==============================] - 28s 22ms/step - loss: 0.3606 -
accuracy: 0.8654 - val_loss: 0.3292 - val_accuracy: 0.8848
Epoch 00021: val_accuracy did not improve from 0.91607
Epoch 22/100
1264/1264 [==============================] - 27s 21ms/step - loss: 0.2643 -
accuracy: 0.9075 - val_loss: 0.2284 - val_accuracy: 0.9203
1264/1264 [==============================] - 26s 21ms/step - loss: 0.1907 -
accuracy: 0.9313 - val_loss: 0.1899 - val_accuracy: 0.9331
accuracy: 0.9402 - val_loss: 0.1590 - val_accuracy: 0.9445
Epoch 00048: val_accuracy improved from 0.95306 to 0.95590, saving model to
weights.best_custom_cnn2.hdf5
Epoch 49/100
1264/1264 [==============================] - 25s 20ms/step - loss: 0.1266 -
accuracy: 0.9547 - val_loss: 0.1216 - val_accuracy: 0.9545
Epoch 00057: val_accuracy did not improve from 0.96302
Epoch 58/100
1264/1264 [==============================] - 37s 29ms/step - loss: 0.1033 -
accuracy: 0.9603 - val_loss: 0.1273 - val_accuracy: 0.9545
Epoch 67/100
1264/1264 [==============================] - 29s 23ms/step - loss: 0.0809 -
accuracy: 0.9713 - val_loss: 0.1311 - val_accuracy: 0.9474
Epoch 00076: val_accuracy did not improve from 0.96871
Epoch 77/100
1264/1264 [==============================] - 36s 29ms/step - loss: 0.0592 -
accuracy: 0.9783 - val_loss: 0.1137 - val_accuracy: 0.9659
plt.plot(history1.history['accuracy'])
plt.plot(history1.history['val_accuracy'])
plt.legend(['train', 'test'], loc='upper left')
plt.show()
[101]: # load the saved model
from keras.models import load_model
cnn=load_model('weights.best_custom_cnn2.hdf5')
[106]: pred1=cnn.predict(X_test)
[103]: from keras import models
from numpy import loadtxt
from tensorflow.keras.models import save_model
save_model(cnn, "customCNN1.h5")
# load and evaluate a saved model
loaded_model = models.load_model('customCNN1.h5')
# summarize model.
model=loaded_model
train_pred_p=model.predict(X_train)
train_pred = np.argmax(train_pred_p, axis=1)
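The cell that scores these predictions is not shown. A small sketch of how the saved probabilities can be turned into labels and an accuracy figure (the use of scikit-learn's accuracy_score is an assumption about the metric actually reported):

import numpy as np
from sklearn.metrics import accuracy_score

test_pred = np.argmax(pred1, axis=1)      # predicted class per test image
y_true = np.argmax(y_test, axis=1)        # y_test is one-hot encoded
print(accuracy_score(y_true, test_pred))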
4 B. MobileNetV2
[64]: from tensorflow.keras.layers.experimental.preprocessing import RandomFlip, RandomRotation
input_shape = image_size
base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False,   # Do not include the dense prediction layer
                                               weights="imagenet")  # Load ImageNet parameters
outputs = prediction_layer(x)
return model
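Only fragments of make_mobilenet_model survive above. A sketch of the full builder, assuming it mirrors the DenseNet169 head shown later (global average pooling, dropout, flatten, a 128-unit dense layer, softmax output) with the base network frozen; the dropout rate and activations are assumptions, and the RandomFlip/RandomRotation layers imported above were presumably applied before the base model but are omitted here:

import tensorflow as tf
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dropout, Flatten, Dense
from tensorflow.keras.models import Model

def make_mobilenet_model(image_size, num_classes):
    input_shape = image_size
    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False,
                                                   weights="imagenet")
    base_model.trainable = False                      # keep the ImageNet weights frozen

    inputs = Input(shape=input_shape)
    x = base_model(inputs, training=False)
    x = GlobalAveragePooling2D()(x)
    x = Dropout(0.2)(x)                               # rate assumed
    x = Flatten()(x)
    x = Dense(128, activation="relu")(x)              # activation assumed
    prediction_layer = Dense(num_classes, activation="softmax")
    outputs = prediction_layer(x)
    model = Model(inputs, outputs)
    return model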
[65]: filepath21="weights.best_mobile_net2.hdf5"
checkpoint2 = ModelCheckpoint(filepath21, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
[66]: # Define a model using the make_model function
image_size = (100,100,3)
mobilenet_model = make_mobilenet_model(image_size, num_classes = 2)
metrics = ['accuracy'])
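The training call below passes class_weight=class_weights, but the cell that computes class_weights is not visible in this export. One plausible derivation, assuming sklearn's balanced heuristic over the one-hot training labels:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train_labels = np.argmax(y_train, axis=1)
weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(y_train_labels),
                               y=y_train_labels)
class_weights = dict(enumerate(weights))    # {0: w_glioma, 1: w_meningioma, 2: w_notumor, 3: w_pituitary}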
callbacks=callbacks_list2, class_weight=class_weights)
Epoch 1/100
1264/1264 [==============================] - 45s 34ms/step - loss: 1.0274 -
accuracy: 0.6334 - val_loss: 0.4062 - val_accuracy: 0.8465
Epoch 00008: val_accuracy did not improve from 0.90665
Epoch 9/100
1264/1264 [==============================] - 53s 42ms/step - loss: 0.2349 -
accuracy: 0.9140 - val_loss: 0.2349 - val_accuracy: 0.9169
accuracy: 0.9520 - val_loss: 0.2212 - val_accuracy: 0.9217
accuracy: 0.9711 - val_loss: 0.1555 - val_accuracy: 0.9422
accuracy: 0.9748 - val_loss: 0.1577 - val_accuracy: 0.9486
Epoch 00044: val_accuracy did not improve from 0.95016
Epoch 45/100
1264/1264 [==============================] - 48s 38ms/step - loss: 0.0428 -
accuracy: 0.9862 - val_loss: 0.1810 - val_accuracy: 0.9438
Epoch 00053: val_accuracy did not improve from 0.95174
Epoch 54/100
1264/1264 [==============================] - 62s 49ms/step - loss: 0.0234 -
accuracy: 0.9930 - val_loss: 0.1723 - val_accuracy: 0.9454
1264/1264 [==============================] - 49s 39ms/step - loss: 0.0229 -
accuracy: 0.9941 - val_loss: 0.1645 - val_accuracy: 0.9517
Epoch 00072: val_accuracy did not improve from 0.95332
Epoch 73/100
1264/1264 [==============================] - 46s 37ms/step - loss: 0.0220 -
accuracy: 0.9929 - val_loss: 0.1659 - val_accuracy: 0.9494
1264/1264 [==============================] - 55s 43ms/step - loss: 0.0099 -
accuracy: 0.9969 - val_loss: 0.1697 - val_accuracy: 0.9533
[104]: mobilenet_model=load_model('weights.best_mobile_net2.hdf5')
[107]: pred2=mobilenet_model.predict(X_test)
[72]: from keras import models
from numpy import loadtxt
from tensorflow.keras.models import save_model
save_model(cnn, "Mobilenetv2.h5")
# load and evaluate a saved model
loaded_model = models.load_model('Mobilenetv2.h5')
# summarize model.
model=loaded_model
train_pred_p=model.predict(X_train)
train_pred = np.argmax(train_pred_p, axis=1)
5 C. DenseNet169
[73]: import ssl
ssl._create_default_https_context = ssl._create_unverified_context
[78]: def make_densenet_model(image_size, num_classes):
    input_shape = image_size
    base_model = tf.keras.applications.DenseNet169(input_shape=input_shape,
                                                   include_top=False,   # Do not include the dense prediction layer
                                                   weights="imagenet")  # Load ImageNet parameters
    x = base_model(x, training=False)
    outputs = prediction_layer(x)
    return model
Model: "model_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_8 (InputLayer) [(None, 100, 100, 3)] 0
_________________________________________________________________
densenet169 (Functional) (None, 3, 3, 1664) 12642880
_________________________________________________________________
global_average_pooling2d_2 ( (None, 1664) 0
_________________________________________________________________
dropout_8 (Dropout) (None, 1664) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 1664) 0
_________________________________________________________________
dense_8 (Dense) (None, 128) 213120
_________________________________________________________________
dense_9 (Dense) (None, 4) 516
=================================================================
Total params: 12,856,516
Trainable params: 213,636
Non-trainable params: 12,642,880
_________________________________________________________________
[80]: filepath31="weights.best_densenet1692.hdf5"
checkpoint3 = ModelCheckpoint(filepath31, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
metrics = ['accuracy'])
,callbacks=callbacks_list3,class_weight=class_weights)
Epoch 1/100
2528/2528 [==============================] - 229s 90ms/step - loss: 0.4918 -
accuracy: 0.8218 - val_loss: 0.3327 - val_accuracy: 0.8774
2528/2528 [==============================] - 245s 97ms/step - loss: 0.3589 -
accuracy: 0.8645 - val_loss: 0.2915 - val_accuracy: 0.8837
Epoch 00011: val_accuracy improved from 0.92089 to 0.93196, saving model to
weights.best_densenet1692.hdf5
Epoch 12/100
2528/2528 [==============================] - 236s 94ms/step - loss: 0.1774 -
accuracy: 0.9330 - val_loss: 0.1971 - val_accuracy: 0.9248
accuracy: 0.9583 - val_loss: 0.1710 - val_accuracy: 0.9391
Epoch 00029: val_accuracy did not improve from 0.95253
Epoch 30/100
2528/2528 [==============================] - 266s 105ms/step - loss: 0.0787 -
accuracy: 0.9713 - val_loss: 0.1647 - val_accuracy: 0.9422
Epoch 39/100
2528/2528 [==============================] - 235s 93ms/step - loss: 0.0585 -
accuracy: 0.9796 - val_loss: 0.1716 - val_accuracy: 0.9462
accuracy: 0.9899 - val_loss: 0.1484 - val_accuracy: 0.9581
Epoch 00057: val_accuracy did not improve from 0.96123
Epoch 58/100
2528/2528 [==============================] - 261s 103ms/step - loss: 0.0336 -
accuracy: 0.9879 - val_loss: 0.1460 - val_accuracy: 0.9573
accuracy: 0.9903 - val_loss: 0.1477 - val_accuracy: 0.9581
[108]: model=load_model('weights.best_densenet1692.hdf5')
[117]: pred3=model.predict(X_test)
[88]: from keras import models
from numpy import loadtxt
from tensorflow.keras.models import save_model
save_model(cnn, "DenseNet1692.h5")
# load and evaluate a saved model
loaded_model = models.load_model('DenseNet1692.h5')
# summarize model.
model=loaded_model
train_pred_p=model.predict(X_train)
train_pred = np.argmax(train_pred_p, axis=1)
6 D. ResNet50
[89]: def make_resnet_model(image_size, num_classes):
    input_shape = image_size
    base_model = tf.keras.applications.ResNet50(input_shape=input_shape,
                                                include_top=False,   # Do not include the dense prediction layer
                                                weights="imagenet")  # Load ImageNet parameters
    outputs = prediction_layer(x)
    return model
Model: "model_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) [(None, 100, 100, 3)] 0
_________________________________________________________________
resnet50 (Functional) (None, 4, 4, 2048) 23587712
_________________________________________________________________
global_average_pooling2d_3 ( (None, 2048) 0
_________________________________________________________________
dropout_9 (Dropout) (None, 2048) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 2048) 0
_________________________________________________________________
dense_10 (Dense) (None, 128) 262272
_________________________________________________________________
dense_11 (Dense) (None, 4) 516
=================================================================
Total params: 23,850,500
Trainable params: 262,788
Non-trainable params: 23,587,712
_________________________________________________________________
[91]: filepath51="weights.best_ResNet502.hdf5"
checkpoint4 = ModelCheckpoint(filepath51, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
,callbacks=callbacks_list4,class_weight=class_weights)
Epoch 1/100
2528/2528 [==============================] - 283s 110ms/step - loss: 1.0110 -
accuracy: 0.5737 - val_loss: 0.5416 - val_accuracy: 0.8062
Epoch 00002: val_accuracy improved from 0.80617 to 0.82278, saving model to
weights.best_ResNet502.hdf5
Epoch 3/100
2528/2528 [==============================] - 334s 132ms/step - loss: 0.5222 -
accuracy: 0.7965 - val_loss: 0.4245 - val_accuracy: 0.8347
2528/2528 [==============================] - 215s 85ms/step - loss: 0.3708 -
accuracy: 0.8583 - val_loss: 0.3265 - val_accuracy: 0.8813
Epoch 00020: val_accuracy improved from 0.89082 to 0.89399, saving model to
weights.best_ResNet502.hdf5
Epoch 21/100
2528/2528 [==============================] - 242s 96ms/step - loss: 0.2757 -
accuracy: 0.8933 - val_loss: 0.2837 - val_accuracy: 0.8892
2528/2528 [==============================] - 250s 99ms/step - loss: 0.2408 -
accuracy: 0.9091 - val_loss: 0.2735 - val_accuracy: 0.9074
accuracy: 0.9228 - val_loss: 0.2565 - val_accuracy: 0.9106
weights.best_ResNet502.hdf5
Epoch 48/100
2528/2528 [==============================] - 311s 123ms/step - loss: 0.1458 -
accuracy: 0.9497 - val_loss: 0.2222 - val_accuracy: 0.9248
accuracy: 0.9502 - val_loss: 0.2165 - val_accuracy: 0.9264
Epoch 00066: val_accuracy did not improve from 0.93038
Epoch 67/100
2528/2528 [==============================] - 250s 99ms/step - loss: 0.1613 -
accuracy: 0.9400 - val_loss: 0.2166 - val_accuracy: 0.9288
accuracy: 0.9436 - val_loss: 0.2165 - val_accuracy: 0.9280
[110]: model=load_model('weights.best_ResNet502.hdf5')
[111]: pred4=model.predict(X_test)
[98]: from keras import models
from numpy import loadtxt
from tensorflow.keras.models import save_model
save_model(cnn, "ResNet502.h5")
# load and evaluate a saved model
loaded_model = models.load_model('ResNet502.h5')
# summarize model.
model=loaded_model
train_pred_p=model.predict(X_train)
train_pred = np.argmax(train_pred_p, axis=1)
plt.plot(history1.history['accuracy'])
plt.plot(history2.history['accuracy'])
plt.plot(history3.history['accuracy'])
plt.plot(history4.history['accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['CustomCNN', 'MobileNetV2', 'DenseNet169', 'ResNet50'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history1.history['loss'])
plt.plot(history2.history['loss'])
plt.plot(history3.history['loss'])
plt.plot(history4.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['CustomCNN', 'MobileNetV2', 'DenseNet169', 'ResNet50'], loc='upper right')
plt.show()
7 ENSEMBLE
To create an ensemble of the four models, we will stack their predictions and use Microsoft FLAML
AutoML to find an optimal combiner.
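Concretely, the stacking step presumably looked something like the sketch below. The feature matrix is just the four networks' class-probability outputs concatenated; the FLAML settings (time budget, metric) and the p_train1 … p_train4 names for the training-set probabilities are assumptions:

import numpy as np
from flaml import AutoML

# Meta-features: stack the four base models' predicted class probabilities.
meta_train = np.hstack([p_train1, p_train2, p_train3, p_train4])   # hypothetical names
meta_test  = np.hstack([pred1, pred2, pred3, pred4])

y_train_labels = np.argmax(y_train, axis=1)
y_test_labels  = np.argmax(y_test, axis=1)

automl = AutoML()
automl.fit(X_train=meta_train, y_train=y_train_labels,
           task="classification", metric="accuracy", time_budget=60)   # budget assumed

ensemble_pred = automl.predict(meta_test)
print((ensemble_pred == y_test_labels).mean())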
[120]: 0.968705547652916
[119]: y_test
[119]: array([[1., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 1., 0.],
…,
[0., 0., 1., 0.],
[0., 0., 1., 0.],
[0., 1., 0., 0.]], dtype=float32)
lgbm
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.4s, estimator lgbm's
best error=0.0342, best estimator xgboost's best error=0.0342
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 6, current learner
lgbm
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.5s, estimator lgbm's
best error=0.0299, best estimator lgbm's best error=0.0299
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 7, current learner
lgbm
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.5s, estimator lgbm's
best error=0.0299, best estimator lgbm's best error=0.0299
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 8, current learner
lgbm
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.6s, estimator lgbm's
best error=0.0299, best estimator lgbm's best error=0.0299
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 9, current learner
lgbm
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.7s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 10, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.8s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 11, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 0.9s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 12, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:00] {2391} INFO - at 1.0s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:00] {2218} INFO - iteration 13, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.1s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 14, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.3s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 15, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.3s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 16, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.5s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 17, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.5s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 18, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.7s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 19, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.8s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 20, current
learner rf
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 1.9s, estimator rf's
best error=0.0455, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 21, current
learner rf
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 2.0s, estimator rf's
best error=0.0455, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 22, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:01] {2391} INFO - at 2.1s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:01] {2218} INFO - iteration 23, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.2s, estimator lgbm's
best error=0.0256, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 24, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.2s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 25, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.4s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 26, current
learner rf
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.6s, estimator rf's
best error=0.0370, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 27, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.7s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 28, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 2.8s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 29, current
learner extra_tree
[flaml.automl.logger: 04-25 17:07:02] {2391} INFO - at 3.0s, estimator
extra_tree's best error=0.0313, best estimator lgbm's best error=0.0256
[flaml.automl.logger: 04-25 17:07:02] {2218} INFO - iteration 30, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.1s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 31, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.2s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 32, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.4s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 33, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.5s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 34, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.8s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 35, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 3.9s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 36, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 4.0s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 37, current
learner rf
[flaml.automl.logger: 04-25 17:07:03] {2391} INFO - at 4.1s, estimator rf's
best error=0.0370, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:03] {2218} INFO - iteration 38, current
learner rf
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.3s, estimator rf's
best error=0.0370, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 39, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.4s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 40, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.5s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 41, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.6s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 42, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.8s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 43, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 4.9s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 44, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:04] {2391} INFO - at 5.0s, estimator
xgboost's best error=0.0284, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:04] {2218} INFO - iteration 45, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.1s, estimator
xgboost's best error=0.0256, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 46, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.3s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 47, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.4s, estimator
xgboost's best error=0.0256, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 48, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.5s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 49, current
learner rf
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.6s, estimator rf's
best error=0.0370, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 50, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 5.9s, estimator
xgboost's best error=0.0256, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 51, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:05] {2391} INFO - at 6.0s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:05] {2218} INFO - iteration 52, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:06] {2391} INFO - at 6.1s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:06] {2218} INFO - iteration 53, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:06] {2391} INFO - at 6.2s, estimator lgbm's
best error=0.0242, best estimator lgbm's best error=0.0242
[flaml.automl.logger: 04-25 17:07:06] {2218} INFO - iteration 54, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:06] {2391} INFO - at 6.4s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:06] {2218} INFO - iteration 55, current
learner rf
[flaml.automl.logger: 04-25 17:07:06] {2391} INFO - at 6.7s, estimator rf's
best error=0.0370, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:06] {2218} INFO - iteration 56, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:07] {2391} INFO - at 7.2s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:07] {2218} INFO - iteration 57, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:07] {2391} INFO - at 7.4s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:07] {2218} INFO - iteration 58, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:07] {2391} INFO - at 7.5s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:07] {2218} INFO - iteration 59, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:07] {2391} INFO - at 7.9s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:07] {2218} INFO - iteration 60, current
learner rf
[flaml.automl.logger: 04-25 17:07:08] {2391} INFO - at 8.1s, estimator rf's
best error=0.0327, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:08] {2218} INFO - iteration 61, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:08] {2391} INFO - at 8.3s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:08] {2218} INFO - iteration 62, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:08] {2391} INFO - at 8.5s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:08] {2218} INFO - iteration 63, current
learner rf
[flaml.automl.logger: 04-25 17:07:08] {2391} INFO - at 8.7s, estimator rf's
best error=0.0327, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:08] {2218} INFO - iteration 64, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:08] {2391} INFO - at 8.8s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:08] {2218} INFO - iteration 65, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:09] {2391} INFO - at 9.4s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:09] {2218} INFO - iteration 66, current
learner rf
[flaml.automl.logger: 04-25 17:07:09] {2391} INFO - at 9.6s, estimator rf's
best error=0.0327, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:09] {2218} INFO - iteration 67, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:09] {2391} INFO - at 10.0s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:09] {2218} INFO - iteration 68, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:10] {2391} INFO - at 10.2s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:10] {2218} INFO - iteration 69, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:10] {2391} INFO - at 10.5s, estimator lgbm's
best error=0.0213, best estimator lgbm's best error=0.0213
[flaml.automl.logger: 04-25 17:07:10] {2218} INFO - iteration 70, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:10] {2391} INFO - at 10.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:10] {2218} INFO - iteration 71, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:10] {2391} INFO - at 10.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:10] {2218} INFO - iteration 72, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:10] {2391} INFO - at 11.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:10] {2218} INFO - iteration 73, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:11] {2391} INFO - at 11.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:11] {2218} INFO - iteration 74, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:11] {2391} INFO - at 11.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:11] {2218} INFO - iteration 75, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:11] {2391} INFO - at 11.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:11] {2218} INFO - iteration 76, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:11] {2391} INFO - at 11.7s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:11] {2218} INFO - iteration 77, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:11] {2391} INFO - at 12.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:11] {2218} INFO - iteration 78, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:12] {2391} INFO - at 12.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:12] {2218} INFO - iteration 79, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:12] {2391} INFO - at 12.6s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:12] {2218} INFO - iteration 80, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:12] {2391} INFO - at 12.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:12] {2218} INFO - iteration 81, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:12] {2391} INFO - at 13.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:12] {2218} INFO - iteration 82, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:13] {2391} INFO - at 13.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:13] {2218} INFO - iteration 83, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:13] {2391} INFO - at 13.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:13] {2218} INFO - iteration 84, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:13] {2391} INFO - at 13.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:13] {2218} INFO - iteration 85, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:13] {2391} INFO - at 14.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:13] {2218} INFO - iteration 86, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:14] {2391} INFO - at 14.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:14] {2218} INFO - iteration 87, current
learner rf
[flaml.automl.logger: 04-25 17:07:14] {2391} INFO - at 14.4s, estimator rf's
best error=0.0327, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:14] {2218} INFO - iteration 88, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:14] {2391} INFO - at 14.5s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:14] {2218} INFO - iteration 89, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:14] {2391} INFO - at 14.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:14] {2218} INFO - iteration 90, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:14] {2391} INFO - at 15.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:14] {2218} INFO - iteration 91, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:15] {2391} INFO - at 15.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:15] {2218} INFO - iteration 92, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:15] {2391} INFO - at 15.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:15] {2218} INFO - iteration 93, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:15] {2391} INFO - at 15.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:15] {2218} INFO - iteration 94, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:15] {2391} INFO - at 15.9s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:15] {2218} INFO - iteration 95, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:16] {2391} INFO - at 16.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:16] {2218} INFO - iteration 96, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:16] {2391} INFO - at 16.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:16] {2218} INFO - iteration 97, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:16] {2391} INFO - at 16.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:16] {2218} INFO - iteration 98, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:16] {2391} INFO - at 17.0s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:16] {2218} INFO - iteration 99, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:17] {2391} INFO - at 17.7s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:17] {2218} INFO - iteration 100, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:17] {2391} INFO - at 17.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:17] {2218} INFO - iteration 101, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:17] {2391} INFO - at 18.1s, estimator
xgb_limitdepth's best error=0.0327, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:17] {2218} INFO - iteration 102, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 18.2s, estimator
xgb_limitdepth's best error=0.0299, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 103, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 18.4s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 104, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 18.6s, estimator
xgb_limitdepth's best error=0.0299, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 105, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 18.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 106, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 18.8s, estimator
xgb_limitdepth's best error=0.0299, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 107, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:18] {2391} INFO - at 19.1s, estimator
xgb_limitdepth's best error=0.0284, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:18] {2218} INFO - iteration 108, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:19] {2391} INFO - at 19.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:19] {2218} INFO - iteration 109, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:19] {2391} INFO - at 19.6s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:19] {2218} INFO - iteration 110, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:19] {2391} INFO - at 19.9s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:19] {2218} INFO - iteration 111, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:20] {2391} INFO - at 20.3s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:20] {2218} INFO - iteration 112, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:20] {2391} INFO - at 20.6s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:20] {2218} INFO - iteration 113, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:20] {2391} INFO - at 20.8s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:20] {2218} INFO - iteration 114, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:21] {2391} INFO - at 21.1s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:21] {2218} INFO - iteration 115, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:21] {2391} INFO - at 21.3s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:21] {2218} INFO - iteration 116, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:21] {2391} INFO - at 21.7s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:21] {2218} INFO - iteration 117, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:21] {2391} INFO - at 21.9s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:21] {2218} INFO - iteration 118, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:22] {2391} INFO - at 22.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:22] {2218} INFO - iteration 119, current
learner lgbm
[flaml.automl.logger: 04-25 17:07:22] {2391} INFO - at 22.5s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:22] {2218} INFO - iteration 120, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:22] {2391} INFO - at 22.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:22] {2218} INFO - iteration 121, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:22] {2391} INFO - at 22.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:22] {2218} INFO - iteration 122, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:23] {2391} INFO - at 23.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:23] {2218} INFO - iteration 123, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:23] {2391} INFO - at 23.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:23] {2218} INFO - iteration 124, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:23] {2391} INFO - at 23.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:23] {2218} INFO - iteration 125, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:23] {2391} INFO - at 23.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:23] {2218} INFO - iteration 126, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:23] {2391} INFO - at 24.1s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:23] {2218} INFO - iteration 127, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:24] {2391} INFO - at 24.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:24] {2218} INFO - iteration 128, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:24] {2391} INFO - at 24.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:24] {2218} INFO - iteration 129, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:24] {2391} INFO - at 24.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:24] {2218} INFO - iteration 130, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:25] {2391} INFO - at 25.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:25] {2218} INFO - iteration 131, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:25] {2391} INFO - at 25.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:25] {2218} INFO - iteration 132, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:25] {2391} INFO - at 25.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:25] {2218} INFO - iteration 133, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:25] {2391} INFO - at 25.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:25] {2218} INFO - iteration 134, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:25] {2391} INFO - at 26.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:25] {2218} INFO - iteration 135, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:26] {2391} INFO - at 26.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:26] {2218} INFO - iteration 136, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:26] {2391} INFO - at 26.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:26] {2218} INFO - iteration 137, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:26] {2391} INFO - at 26.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:26] {2218} INFO - iteration 138, current
learner rf
[flaml.automl.logger: 04-25 17:07:26] {2391} INFO - at 27.0s, estimator rf's
best error=0.0284, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:26] {2218} INFO - iteration 139, current
learner rf
[flaml.automl.logger: 04-25 17:07:27] {2391} INFO - at 27.2s, estimator rf's
best error=0.0284, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:27] {2218} INFO - iteration 140, current
learner rf
[flaml.automl.logger: 04-25 17:07:27] {2391} INFO - at 27.4s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:27] {2218} INFO - iteration 141, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:27] {2391} INFO - at 27.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:27] {2218} INFO - iteration 142, current
learner rf
[flaml.automl.logger: 04-25 17:07:27] {2391} INFO - at 27.9s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:27] {2218} INFO - iteration 143, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:27] {2391} INFO - at 28.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:27] {2218} INFO - iteration 144, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:28] {2391} INFO - at 28.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:28] {2218} INFO - iteration 145, current
learner rf
[flaml.automl.logger: 04-25 17:07:28] {2391} INFO - at 28.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:28] {2218} INFO - iteration 146, current
learner rf
[flaml.automl.logger: 04-25 17:07:28] {2391} INFO - at 28.7s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:28] {2218} INFO - iteration 147, current
learner rf
[flaml.automl.logger: 04-25 17:07:28] {2391} INFO - at 28.9s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:28] {2218} INFO - iteration 148, current
learner rf
[flaml.automl.logger: 04-25 17:07:28] {2391} INFO - at 29.0s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:28] {2218} INFO - iteration 149, current
learner rf
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 29.2s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 150, current
learner rf
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 29.4s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 151, current
learner rf
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 29.6s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 152, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 29.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 153, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 29.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 154, current
learner rf
[flaml.automl.logger: 04-25 17:07:29] {2391} INFO - at 30.1s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:29] {2218} INFO - iteration 155, current
learner rf
[flaml.automl.logger: 04-25 17:07:30] {2391} INFO - at 30.3s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:30] {2218} INFO - iteration 156, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:30] {2391} INFO - at 30.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:30] {2218} INFO - iteration 157, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:30] {2391} INFO - at 30.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:30] {2218} INFO - iteration 158, current
learner rf
[flaml.automl.logger: 04-25 17:07:30] {2391} INFO - at 30.9s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:30] {2218} INFO - iteration 159, current
learner rf
[flaml.automl.logger: 04-25 17:07:30] {2391} INFO - at 31.1s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:30] {2218} INFO - iteration 160, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:31] {2391} INFO - at 31.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:31] {2218} INFO - iteration 161, current
learner rf
[flaml.automl.logger: 04-25 17:07:31] {2391} INFO - at 31.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:31] {2218} INFO - iteration 162, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:31] {2391} INFO - at 31.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:31] {2218} INFO - iteration 163, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:32] {2391} INFO - at 32.1s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:32] {2218} INFO - iteration 164, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:32] {2391} INFO - at 32.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:32] {2218} INFO - iteration 165, current
learner rf
[flaml.automl.logger: 04-25 17:07:32] {2391} INFO - at 32.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:32] {2218} INFO - iteration 166, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:32] {2391} INFO - at 32.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:32] {2218} INFO - iteration 167, current
learner lrl1
[flaml.automl.logger: 04-25 17:07:32] {2391} INFO - at 33.0s, estimator lrl1's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:32] {2218} INFO - iteration 168, current
learner lrl1
/opt/anaconda3/envs/Project/lib/python3.8/site-
packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was
reached which means the coef_ did not converge
warnings.warn(
/opt/anaconda3/envs/Project/lib/python3.8/site-
packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was
reached which means the coef_ did not converge
warnings.warn(
/opt/anaconda3/envs/Project/lib/python3.8/site-
packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was
reached which means the coef_ did not converge
warnings.warn(
/opt/anaconda3/envs/Project/lib/python3.8/site-
packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was
reached which means the coef_ did not converge
warnings.warn(
[flaml.automl.logger: 04-25 17:07:33] {2391} INFO - at 33.2s, estimator lrl1's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:33] {2218} INFO - iteration 169, current
learner lrl1
[flaml.automl.logger: 04-25 17:07:33] {2391} INFO - at 33.3s, estimator lrl1's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:33] {2218} INFO - iteration 170, current
learner rf
[flaml.automl.logger: 04-25 17:07:33] {2391} INFO - at 33.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:33] {2218} INFO - iteration 171, current
learner rf
[flaml.automl.logger: 04-25 17:07:33] {2391} INFO - at 33.6s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:33] {2218} INFO - iteration 172, current
learner rf
[flaml.automl.logger: 04-25 17:07:33] {2391} INFO - at 33.9s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:33] {2218} INFO - iteration 173, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:34] {2391} INFO - at 34.1s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:34] {2218} INFO - iteration 174, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:34] {2391} INFO - at 34.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:34] {2218} INFO - iteration 175, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:34] {2391} INFO - at 34.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:34] {2218} INFO - iteration 176, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:34] {2391} INFO - at 34.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:34] {2218} INFO - iteration 177, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:07:34] {2391} INFO - at 34.9s, estimator
xgb_limitdepth's best error=0.0242, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:34] {2218} INFO - iteration 178, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:35] {2391} INFO - at 35.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:35] {2218} INFO - iteration 179, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:35] {2391} INFO - at 35.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:35] {2218} INFO - iteration 180, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:36] {2391} INFO - at 36.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:36] {2218} INFO - iteration 181, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:36] {2391} INFO - at 36.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:36] {2218} INFO - iteration 182, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:36] {2391} INFO - at 36.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:36] {2218} INFO - iteration 183, current
learner rf
[flaml.automl.logger: 04-25 17:07:36] {2391} INFO - at 37.0s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:36] {2218} INFO - iteration 184, current
learner rf
[flaml.automl.logger: 04-25 17:07:37] {2391} INFO - at 37.2s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:37] {2218} INFO - iteration 185, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:37] {2391} INFO - at 37.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:37] {2218} INFO - iteration 186, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:37] {2391} INFO - at 37.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:37] {2218} INFO - iteration 187, current
learner rf
[flaml.automl.logger: 04-25 17:07:37] {2391} INFO - at 38.0s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:37] {2218} INFO - iteration 188, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:38] {2391} INFO - at 38.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:38] {2218} INFO - iteration 189, current
learner rf
[flaml.automl.logger: 04-25 17:07:38] {2391} INFO - at 38.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:38] {2218} INFO - iteration 190, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:38] {2391} INFO - at 38.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:38] {2218} INFO - iteration 191, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:39] {2391} INFO - at 39.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:39] {2218} INFO - iteration 192, current
learner rf
[flaml.automl.logger: 04-25 17:07:39] {2391} INFO - at 39.5s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:39] {2218} INFO - iteration 193, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:39] {2391} INFO - at 39.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:39] {2218} INFO - iteration 194, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:40] {2391} INFO - at 40.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:40] {2218} INFO - iteration 195, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:40] {2391} INFO - at 40.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:40] {2218} INFO - iteration 196, current
learner rf
[flaml.automl.logger: 04-25 17:07:40] {2391} INFO - at 40.7s, estimator rf's
best error=0.0270, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:40] {2218} INFO - iteration 197, current
learner rf
[flaml.automl.logger: 04-25 17:07:40] {2391} INFO - at 41.0s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:40] {2218} INFO - iteration 198, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:41] {2391} INFO - at 41.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:41] {2218} INFO - iteration 199, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:41] {2391} INFO - at 41.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:41] {2218} INFO - iteration 200, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:41] {2391} INFO - at 41.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:41] {2218} INFO - iteration 201, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:42] {2391} INFO - at 42.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:42] {2218} INFO - iteration 202, current
learner rf
[flaml.automl.logger: 04-25 17:07:42] {2391} INFO - at 42.7s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:42] {2218} INFO - iteration 203, current
learner rf
[flaml.automl.logger: 04-25 17:07:42] {2391} INFO - at 43.1s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:42] {2218} INFO - iteration 204, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:43] {2391} INFO - at 43.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:43] {2218} INFO - iteration 205, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:43] {2391} INFO - at 43.9s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:43] {2218} INFO - iteration 206, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:44] {2391} INFO - at 44.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:44] {2218} INFO - iteration 207, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:44] {2391} INFO - at 44.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:44] {2218} INFO - iteration 208, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:44] {2391} INFO - at 45.0s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:44] {2218} INFO - iteration 209, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:45] {2391} INFO - at 45.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:45] {2218} INFO - iteration 210, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:45] {2391} INFO - at 45.5s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:45] {2218} INFO - iteration 211, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:46] {2391} INFO - at 46.2s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:46] {2218} INFO - iteration 212, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:46] {2391} INFO - at 46.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:46] {2218} INFO - iteration 213, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:46] {2391} INFO - at 46.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:46] {2218} INFO - iteration 214, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:46] {2391} INFO - at 46.8s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:46] {2218} INFO - iteration 215, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:47] {2391} INFO - at 47.1s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:47] {2218} INFO - iteration 216, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:47] {2391} INFO - at 47.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:47] {2218} INFO - iteration 217, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:47] {2391} INFO - at 47.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:47] {2218} INFO - iteration 218, current
learner rf
[flaml.automl.logger: 04-25 17:07:47] {2391} INFO - at 47.8s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:47] {2218} INFO - iteration 219, current
learner rf
[flaml.automl.logger: 04-25 17:07:47] {2391} INFO - at 48.1s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:47] {2218} INFO - iteration 220, current
learner rf
[flaml.automl.logger: 04-25 17:07:48] {2391} INFO - at 48.3s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:48] {2218} INFO - iteration 221, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:48] {2391} INFO - at 48.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:48] {2218} INFO - iteration 222, current
learner rf
[flaml.automl.logger: 04-25 17:07:48] {2391} INFO - at 49.0s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:48] {2218} INFO - iteration 223, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:49] {2391} INFO - at 49.3s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:49] {2218} INFO - iteration 224, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:49] {2391} INFO - at 49.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:49] {2218} INFO - iteration 225, current
learner rf
[flaml.automl.logger: 04-25 17:07:49] {2391} INFO - at 49.9s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:49] {2218} INFO - iteration 226, current
learner rf
[flaml.automl.logger: 04-25 17:07:50] {2391} INFO - at 50.1s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:50] {2218} INFO - iteration 227, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:50] {2391} INFO - at 50.6s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:50] {2218} INFO - iteration 228, current
learner rf
[flaml.automl.logger: 04-25 17:07:50] {2391} INFO - at 50.9s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:50] {2218} INFO - iteration 229, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:51] {2391} INFO - at 51.4s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:51] {2218} INFO - iteration 230, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:51] {2391} INFO - at 51.7s, estimator
xgboost's best error=0.0199, best estimator xgboost's best error=0.0199
[flaml.automl.logger: 04-25 17:07:51] {2218} INFO - iteration 231, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:52] {2391} INFO - at 52.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:52] {2218} INFO - iteration 232, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:52] {2391} INFO - at 52.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:52] {2218} INFO - iteration 233, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:52] {2391} INFO - at 52.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:52] {2218} INFO - iteration 234, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:53] {2391} INFO - at 53.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:53] {2218} INFO - iteration 235, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:53] {2391} INFO - at 53.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:53] {2218} INFO - iteration 236, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:54] {2391} INFO - at 54.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:54] {2218} INFO - iteration 237, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:54] {2391} INFO - at 54.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:54] {2218} INFO - iteration 238, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:55] {2391} INFO - at 56.0s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:55] {2218} INFO - iteration 239, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:56] {2391} INFO - at 56.4s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:56] {2218} INFO - iteration 240, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:56] {2391} INFO - at 56.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:56] {2218} INFO - iteration 241, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:57] {2391} INFO - at 57.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:57] {2218} INFO - iteration 242, current
learner rf
[flaml.automl.logger: 04-25 17:07:57] {2391} INFO - at 57.4s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:57] {2218} INFO - iteration 243, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:57] {2391} INFO - at 57.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:57] {2218} INFO - iteration 244, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:58] {2391} INFO - at 58.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:58] {2218} INFO - iteration 245, current
learner xgboost
[flaml.automl.logger: 04-25 17:07:58] {2391} INFO - at 59.0s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:07:58] {2218} INFO - iteration 246, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:00] {2391} INFO - at 60.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:00] {2218} INFO - iteration 247, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:00] {2391} INFO - at 60.4s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:00] {2218} INFO - iteration 248, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:00] {2391} INFO - at 60.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:00] {2218} INFO - iteration 249, current
learner rf
[flaml.automl.logger: 04-25 17:08:01] {2391} INFO - at 61.2s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:01] {2218} INFO - iteration 250, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:01] {2391} INFO - at 61.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:01] {2218} INFO - iteration 251, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:01] {2391} INFO - at 62.1s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:01] {2218} INFO - iteration 252, current
learner rf
[flaml.automl.logger: 04-25 17:08:02] {2391} INFO - at 62.4s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:02] {2218} INFO - iteration 253, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:02] {2391} INFO - at 62.7s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:02] {2218} INFO - iteration 254, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:03] {2391} INFO - at 63.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:03] {2218} INFO - iteration 255, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:03] {2391} INFO - at 63.7s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:03] {2218} INFO - iteration 256, current
learner lgbm
[flaml.automl.logger: 04-25 17:08:03] {2391} INFO - at 63.8s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:03] {2218} INFO - iteration 257, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:04] {2391} INFO - at 64.5s, estimator
xgb_limitdepth's best error=0.0228, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:04] {2218} INFO - iteration 258, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:04] {2391} INFO - at 64.8s, estimator
xgb_limitdepth's best error=0.0228, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:04] {2218} INFO - iteration 259, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:05] {2391} INFO - at 65.8s, estimator
xgb_limitdepth's best error=0.0228, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:05] {2218} INFO - iteration 260, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:06] {2391} INFO - at 66.5s, estimator
xgb_limitdepth's best error=0.0228, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:06] {2218} INFO - iteration 261, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:07] {2391} INFO - at 67.1s, estimator
xgb_limitdepth's best error=0.0228, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:07] {2218} INFO - iteration 262, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:07] {2391} INFO - at 67.8s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:07] {2218} INFO - iteration 263, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:08] {2391} INFO - at 68.4s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:08] {2218} INFO - iteration 264, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:08] {2391} INFO - at 69.0s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:08] {2218} INFO - iteration 265, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:09] {2391} INFO - at 69.4s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:09] {2218} INFO - iteration 266, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:09] {2391} INFO - at 69.8s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:09] {2218} INFO - iteration 267, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:10] {2391} INFO - at 70.2s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:10] {2218} INFO - iteration 268, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:10] {2391} INFO - at 70.5s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:10] {2218} INFO - iteration 269, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:10] {2391} INFO - at 70.8s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:10] {2218} INFO - iteration 270, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:11] {2391} INFO - at 71.5s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:11] {2218} INFO - iteration 271, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:11] {2391} INFO - at 71.7s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:11] {2218} INFO - iteration 272, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:12] {2391} INFO - at 72.7s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:12] {2218} INFO - iteration 273, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:12] {2391} INFO - at 72.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:12] {2218} INFO - iteration 274, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:13] {2391} INFO - at 73.8s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:13] {2218} INFO - iteration 275, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:14] {2391} INFO - at 74.8s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:14] {2218} INFO - iteration 276, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:14] {2391} INFO - at 75.1s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:14] {2218} INFO - iteration 277, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:15] {2391} INFO - at 75.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:15] {2218} INFO - iteration 278, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:15] {2391} INFO - at 75.7s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:15] {2218} INFO - iteration 279, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:16] {2391} INFO - at 76.8s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:16] {2218} INFO - iteration 280, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:17] {2391} INFO - at 77.5s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:17] {2218} INFO - iteration 281, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:17] {2391} INFO - at 77.9s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:17] {2218} INFO - iteration 282, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:18] {2391} INFO - at 78.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:18] {2218} INFO - iteration 283, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:18] {2391} INFO - at 79.1s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:18] {2218} INFO - iteration 284, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:19] {2391} INFO - at 79.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:19] {2218} INFO - iteration 285, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:19] {2391} INFO - at 79.8s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:19] {2218} INFO - iteration 286, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:20] {2391} INFO - at 80.2s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:20] {2218} INFO - iteration 287, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:20] {2391} INFO - at 80.6s, estimator
xgb_limitdepth's best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:20] {2218} INFO - iteration 288, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:21] {2391} INFO - at 81.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:21] {2218} INFO - iteration 289, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:21] {2391} INFO - at 81.8s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:21] {2218} INFO - iteration 290, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:22] {2391} INFO - at 82.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:22] {2218} INFO - iteration 291, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:22] {2391} INFO - at 82.7s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:22] {2218} INFO - iteration 292, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:23] {2391} INFO - at 83.4s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:23] {2218} INFO - iteration 293, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:23] {2391} INFO - at 83.7s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:23] {2218} INFO - iteration 294, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:24] {2391} INFO - at 84.2s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:24] {2218} INFO - iteration 295, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:24] {2391} INFO - at 84.8s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:24] {2218} INFO - iteration 296, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:25] {2391} INFO - at 85.8s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:25] {2218} INFO - iteration 297, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:26] {2391} INFO - at 86.3s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:26] {2218} INFO - iteration 298, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:26] {2391} INFO - at 86.7s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:26] {2218} INFO - iteration 299, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:27] {2391} INFO - at 87.2s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:27] {2218} INFO - iteration 300, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:27] {2391} INFO - at 87.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:27] {2218} INFO - iteration 301, current
learner rf
[flaml.automl.logger: 04-25 17:08:27] {2391} INFO - at 88.0s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:27] {2218} INFO - iteration 302, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:28] {2391} INFO - at 88.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:28] {2218} INFO - iteration 303, current
learner rf
[flaml.automl.logger: 04-25 17:08:28] {2391} INFO - at 88.6s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:28] {2218} INFO - iteration 304, current
learner xgb_limitdepth
[flaml.automl.logger: 04-25 17:08:29] {2391} INFO - at 89.4s, estimator
xgb_limitdepth's best error=0.0199, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:29] {2218} INFO - iteration 305, current
learner rf
[flaml.automl.logger: 04-25 17:08:29] {2391} INFO - at 89.7s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:29] {2218} INFO - iteration 306, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:30] {2391} INFO - at 90.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:30] {2218} INFO - iteration 307, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:30] {2391} INFO - at 90.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:30] {2218} INFO - iteration 308, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:30] {2391} INFO - at 91.1s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:30] {2218} INFO - iteration 309, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:31] {2391} INFO - at 91.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:31] {2218} INFO - iteration 310, current
learner rf
[flaml.automl.logger: 04-25 17:08:31] {2391} INFO - at 91.9s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:31] {2218} INFO - iteration 311, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:32] {2391} INFO - at 92.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:32] {2218} INFO - iteration 312, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:32] {2391} INFO - at 92.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:32] {2218} INFO - iteration 313, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:33] {2391} INFO - at 93.4s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:33] {2218} INFO - iteration 314, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:33] {2391} INFO - at 93.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:33] {2218} INFO - iteration 315, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:34] {2391} INFO - at 94.3s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:34] {2218} INFO - iteration 316, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:34] {2391} INFO - at 94.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:34] {2218} INFO - iteration 317, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:34] {2391} INFO - at 95.0s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:34] {2218} INFO - iteration 318, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:35] {2391} INFO - at 95.4s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:35] {2218} INFO - iteration 319, current
learner rf
[flaml.automl.logger: 04-25 17:08:35] {2391} INFO - at 95.6s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:35] {2218} INFO - iteration 320, current
learner rf
[flaml.automl.logger: 04-25 17:08:35] {2391} INFO - at 95.9s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:35] {2218} INFO - iteration 321, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:36] {2391} INFO - at 96.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:36] {2218} INFO - iteration 322, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:36] {2391} INFO - at 96.9s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:36] {2218} INFO - iteration 323, current
learner rf
[flaml.automl.logger: 04-25 17:08:37] {2391} INFO - at 97.2s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:37] {2218} INFO - iteration 324, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:37] {2391} INFO - at 97.6s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:37] {2218} INFO - iteration 325, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:38] {2391} INFO - at 98.4s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:38] {2218} INFO - iteration 326, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:38] {2391} INFO - at 98.7s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:38] {2218} INFO - iteration 327, current
learner xgboost
[flaml.automl.logger: 04-25 17:08:39] {2391} INFO - at 99.5s, estimator
xgboost's best error=0.0185, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:39] {2218} INFO - iteration 328, current
learner rf
[flaml.automl.logger: 04-25 17:08:39] {2391} INFO - at 99.9s, estimator rf's
best error=0.0256, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:39] {2218} INFO - iteration 329, current
learner lgbm
[flaml.automl.logger: 04-25 17:08:39] {2391} INFO - at 100.0s, estimator lgbm's
best error=0.0213, best estimator xgboost's best error=0.0185
[flaml.automl.logger: 04-25 17:08:39] {2627} INFO - retrain xgboost for 0.1s
[flaml.automl.logger: 04-25 17:08:40] {2630} INFO - retrained model:
XGBClassifier(base_score=0.5, booster='gbtree', callbacks=[],
colsample_bylevel=0.7222010785416154, colsample_bynode=1,
colsample_bytree=0.8600840124935673, early_stopping_rounds=None,
enable_categorical=False, eval_metric=None, feature_types=None,
gamma=0, gpu_id=-1, grow_policy='lossguide', importance_type=None,
interaction_constraints='', learning_rate=0.055437673600423176,
max_bin=256, max_cat_threshold=64, max_cat_to_onehot=4,
max_delta_step=0, max_depth=0, max_leaves=14,
min_child_weight=0.05664523153071309, missing=nan,
monotone_constraints='()', n_estimators=18, n_jobs=-1,
num_parallel_tree=1, objective='multi:softprob', predictor='auto',
…)
[flaml.automl.logger: 04-25 17:08:40] {1930} INFO - fit succeeded
[flaml.automl.logger: 04-25 17:08:40] {1931} INFO - Time taken to find the best
model: 52.18394494056702
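The run logged above is a FLAML AutoML search in which lgbm, rf, xgboost, xgb_limitdepth and lrl1 learners are tried; it settles on an XGBoost model with a best validation error of 0.0185, found after about 52 s of a roughly 100 s budget. The call that launched the search is not part of this excerpt, so the sketch below is only an assumed reconstruction: the variable names X_flat and y_flat, the 100-second budget and the default metric are assumptions, not taken from the notebook.

from flaml import AutoML

automl = AutoML()
# Minimal sketch: X_flat / y_flat stand in for the tabular features and labels
# used by the notebook (hypothetical names).
automl.fit(
    X_train=X_flat,
    y_train=y_flat,
    task="classification",
    time_budget=100,  # seconds; the log above stops near the 100 s mark
    estimator_list=["lgbm", "rf", "xgboost", "xgb_limitdepth", "lrl1"],
)
print(automl.best_estimator)  # 'xgboost' in the run logged above
print(automl.best_loss)       # 0.0185
print(automl.best_config)     # the tuned hyperparameters echoed by the retrain step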
[114]: pred5 = automl.predict(X_data)
[124]: (0.968705547652916,
0.9445234708392604,
0.9274537695590327,
0.9274537695590327,
0.9985775248933144)
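Cell [124] prints a tuple of five evaluation scores for the FLAML predictions in pred5, but the excerpt does not label them. The sketch below shows one way a comparable set of multi-class scores could be computed with scikit-learn; the specific metrics, the macro averaging and the name y_true are assumptions rather than the notebook's actual code.

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hedged sketch: y_true is assumed to hold the integer labels paired with X_data.
proba = automl.predict_proba(X_data)
scores = (accuracy_score(y_true, pred5),
          precision_score(y_true, pred5, average="macro"),
          recall_score(y_true, pred5, average="macro"),
          f1_score(y_true, pred5, average="macro"),
          roc_auc_score(y_true, proba, multi_class="ovr"))
print(scores)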
[129]: import numpy as np
import plotly.graph_objects as go
from functools import reduce
from itertools import product
from IPython.display import Image

# Translation tables for subscript/superscript digits (kept from the original
# cell; the mapped characters were garbled in the export).
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

# One-row table holding the rounded accuracy of each of the five models.
z = [[np.round(acc1, 3), np.round(acc2, 3), np.round(acc3, 3),
      np.round(acc4, 3), np.round(acc5, 3)]]
y = ['<b>Accuracy</b>']
# x (the list of five column labels, one per model) is expected to come from an
# earlier cell; its definition did not survive the export.

def get_anno_text(z_value):
    # Build one centred text annotation per heatmap cell. The loop body was cut
    # off in the export and is reconstructed here.
    annotations = []
    a, b = len(z_value), len(z_value[0])
    flat_z = reduce(lambda u, v: u + v, z_value)  # z_value.flat if you deal with numpy
    for n, (i, j) in enumerate(product(range(a), range(b))):
        annotations.append(dict(x=x[j], y=y[i], text=f"<b>{flat_z[n]}</b>",
                                showarrow=False, font=dict(size=22)))
    return annotations

fig = go.Figure(data=go.Heatmap(
    z=z,
    x=x,
    y=y,
    hoverongaps=True, colorscale='turbid',
    opacity=0.6, colorbar=dict(tickfont=dict(size=20))))
fig.update_layout(title={'text': "",
                         'y': 0.8,
                         'x': 0.5,
                         'xanchor': 'center',
                         'yanchor': 'top'},
                  plot_bgcolor='rgba(0,0,0,0)',
                  annotations=get_anno_text(z),
                  width=1000,
                  height=400, xaxis={'side': 'top'},
                  margin=dict(l=20, r=20, t=20, b=20))
# The start of this update_xaxes call fell on a page break; it is reconstructed
# to mirror the y-axis styling below.
fig.update_xaxes(tickfont=dict(size=24), linewidth=0.1, linecolor='black',
                 mirror=True)
fig.update_yaxes(tickfont=dict(size=24), linewidth=0.1, linecolor='black',
                 mirror=True)
fig.write_image("table2b.png", engine="kaleido")  # static export requires kaleido
#plt.savefig("table2a.pdf", format="pdf", bbox_inches="tight")
fig.show()
Image('table2b.png')
[129]:
[ ]: