Face - Emotion Recog - Implementation
from tensorflow.keras.preprocessing.image import ImageDataGenerator

num_classes = 5
img_rows, img_cols = 48, 48
batch_size = 8

train_data_dir = 'C:/Users/HP/Desktop/projectEssai/train'
validation_data_dir = 'C:/Users/HP/Desktop/projectEssai/validation'

# Augment the training images to reduce overfitting
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=30,
    shear_range=0.3,
    zoom_range=0.3,
    width_shift_range=0.4,
    height_shift_range=0.4,
    horizontal_flip=True,
    fill_mode='nearest')

# The validation images are only rescaled, never augmented
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    color_mode='grayscale',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    color_mode='grayscale',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)
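The rescale=1./255 factor maps the 8-bit pixel intensities into [0, 1] before they reach the network. A minimal NumPy sketch of this normalization (the pixel values below are made up for illustration):

```python
import numpy as np

# A fake row of 8-bit grayscale pixels (0-255), as read from disk
img = np.array([[0, 128, 255]], dtype=np.uint8)

# The same per-pixel scaling that ImageDataGenerator(rescale=1./255) applies
scaled = img.astype(np.float32) * (1.0 / 255.0)

print(scaled)  # all values now lie in [0.0, 1.0]
```

Normalizing inputs to a small fixed range keeps the early-layer activations well scaled and speeds up gradient-based training.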
VGG16 architecture :
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, Activation, BatchNormalization,
                                     MaxPooling2D, Dropout, Flatten, Dense)

model = Sequential()

# Block-1
model.add(Conv2D(32, (3, 3), padding='same', kernel_initializer='he_normal',
                 input_shape=(img_rows, img_cols, 1)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-2
model.add(Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-3
model.add(Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-4
model.add(Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-5
model.add(Flatten())
model.add(Dense(64, kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Block-6
model.add(Dense(64, kernel_initializer='he_normal'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Block-7
model.add(Dense(num_classes, kernel_initializer='he_normal'))
model.add(Activation('softmax'))

print(model.summary())
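The parameter counts reported by model.summary() can be checked by hand: a Conv2D layer with k×k kernels, c_in input channels and f filters holds (k*k*c_in + 1)*f trainable weights, the +1 being one bias per filter. A small sketch of this arithmetic, applied to the two convolutions of Block-1 (the helper function is ours, not a Keras API):

```python
def conv2d_params(kernel, in_channels, filters):
    """Trainable weights of a Conv2D layer: k_h*k_w*c_in per kernel, plus one bias per filter."""
    k_h, k_w = kernel
    return (k_h * k_w * in_channels + 1) * filters

# Block-1: first Conv2D(32, (3,3)) sees the 48x48x1 grayscale input
print(conv2d_params((3, 3), 1, 32))   # 320
# The second Conv2D(32, (3,3)) is fed the 32 feature maps of the first
print(conv2d_params((3, 3), 32, 32))  # 9248
```

These two numbers should match the "Param #" column of model.summary() for the corresponding layers.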
This architecture consists of eleven layers in all: eight convolutional layers and three fully connected ones. Each convolutional layer is followed by a ReLU (rectified linear unit) activation to eliminate negative values. At the end, the feature maps are flattened and passed through the fully connected layers, and the Softmax function computes the probability of each of the 5 expression classes:
model.add(Dense(5, activation='softmax'))
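The Softmax function turns the five raw network outputs (logits) into probabilities that sum to 1; the predicted expression is the class with the highest probability. A numerically stable NumPy sketch (the logit values are made up for illustration):

```python
import numpy as np

def softmax(logits):
    # Subtracting the max keeps exp() from overflowing without changing the result
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5])  # hypothetical outputs for the 5 classes
probs = softmax(logits)
print(probs)        # five probabilities that sum to 1
print(probs.argmax())  # index of the predicted class
```

Keras applies exactly this transformation when the last layer uses activation='softmax'.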
3.6 Optimization :
When we train this model, we are essentially solving an optimization problem: starting from arbitrarily initialized weights, we search for the weights that most accurately map the input images to the correct output class. During training these weights are updated and the best ones are saved to the .h5 file. The weights are optimized with a gradient-descent-style algorithm; SGD (stochastic gradient descent) and RMSprop are imported below, but the model is ultimately compiled with the Adam optimizer.
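The core of stochastic gradient descent is the update w ← w − η·∇L(w), applied on mini-batches of the data. A toy NumPy sketch on a one-parameter least-squares problem (the values and names are illustrative, not the author's code):

```python
import numpy as np

# Toy problem: fit y = w*x by minimizing L(w) = mean((w*x - y)**2)
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x          # the true weight is 2.0

w = 0.0              # "arbitrary" initial weight
lr = 0.05            # learning rate (eta)

for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # dL/dw on this batch
    w -= lr * grad                       # gradient descent update step

print(round(w, 3))  # converges close to 2.0
```

Adam follows the same principle but additionally keeps running estimates of the gradient's first and second moments to adapt the step size per weight.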
from tensorflow.keras.optimizers import RMSprop, SGD, Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

# Save the weights that achieve the lowest validation loss
checkpoint = ModelCheckpoint('Emotion_little_vgg.h5',
                             monitor='val_loss',
                             mode='min',
                             save_best_only=True,
                             verbose=1)

#earlystop = EarlyStopping(monitor='val_loss',
#                          min_delta=0,
#                          patience=3,
#                          verbose=1,
#                          restore_best_weights=True)

# Divide the learning rate by 5 when val_loss stops improving for 3 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.2,
                              patience=3,
                              verbose=1,
                              min_delta=0.0001)

# PlotLossesCallback comes from the external livelossplot package
callbacks = [PlotLossesCallback(), checkpoint, reduce_lr]

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['accuracy'])
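Each time val_loss plateaus for patience epochs, ReduceLROnPlateau multiplies the current learning rate by factor. With factor=0.2 and Adam's initial rate of 0.001, the schedule would step down as follows (a quick sketch of the arithmetic, not of Keras internals):

```python
lr = 0.001      # initial Adam learning rate
factor = 0.2    # ReduceLROnPlateau factor

schedule = []
for _ in range(3):   # three hypothetical plateaus
    lr *= factor
    schedule.append(lr)

print([round(v, 7) for v in schedule])  # 0.0002, then 4e-05, then 8e-06
```

Shrinking the step size late in training lets the optimizer settle into a minimum instead of oscillating around it.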
3.7 Learning :
The next step is to train the network on the prepared data. For this we use the « fit_generator() » function.
nb_train_samples = 24176
nb_validation_samples = 3006
epochs = 15

history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    callbacks=callbacks,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
An epoch refers to a single pass of the entire data set through the network during training.
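Within one epoch, the generator is drawn nb_train_samples // batch_size times, so the batch size directly controls how many weight updates a single pass performs. A quick check of the values used above:

```python
nb_train_samples = 24176
nb_validation_samples = 3006
batch_size = 8

steps_per_epoch = nb_train_samples // batch_size
validation_steps = nb_validation_samples // batch_size

print(steps_per_epoch)   # 3022 weight updates per epoch
print(validation_steps)  # 375 validation batches per epoch
```

With epochs=15, training therefore performs about 45,000 gradient steps in total.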
4. Results :
5. Measures of performance :
Fig1. Learning and validation accuracy and loss in VGG16

accuracy:
    training (min: 0.255, max: 0.464, cur: 0.464)
    validation (min: 0.294, max: 0.586, cur: 0.586)
loss:
    training (min: 1.295, max: 1.725, cur: 1.295)
    validation (min: 1.043, max: 1.551, cur: 1.043)
Accuracy = 58%

accuracy:
    training (min: 0.377, max: 0.671, cur: 0.671)
    validation (min: 0.410, max: 0.690, cur: 0.690)
loss:
    training (min: 0.855, max: 1.499, cur: 0.855)
    validation (min: 0.829, max: 1.384, cur: 0.829)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
from tensorflow.keras.models import load_model
%matplotlib inline

test_dir = validation_data_dir
test_datagen = ImageDataGenerator(rescale=1./255)

# shuffle must be False so that test_generator.classes stays aligned
# with the order in which predictions are produced
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(img_rows, img_cols),
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)

FLOW1_model = load_model('C:/Users/HP/Desktop/projectEssai/Emotion_little_vgg(3).h5')

# Confusion matrix and classification report
Y_pred = FLOW1_model.predict_generator(test_generator, steps=len(test_generator))
y_pred = np.argmax(Y_pred, axis=1)

print('Confusion Matrix')
print(confusion_matrix(test_generator.classes, y_pred))
print('Classification Report')
target_names = ['happy', 'sad', 'neutral', 'angry', 'surprise']
print(classification_report(test_generator.classes, y_pred, target_names=target_names))

# Evaluating using Keras model.evaluate:
x, y = zip(*(test_generator[i] for i in range(len(test_generator))))
x_test, y_test = np.vstack(x), np.vstack(y)
loss, acc = FLOW1_model.evaluate(x_test, y_test, batch_size=batch_size)
print("Accuracy: ", acc)
print("Loss: ", loss)
Confusion Matrix
[[ 17 346 253 168 176]
[ 26 684 496 291 327]
[ 15 468 321 204 208]
[ 15 441 300 194 189]
[ 9 312 213 125 138]]
Classification Report
precision recall f1-score support
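The overall accuracy implied by the confusion matrix above is the sum of its diagonal divided by the total count. Computing it gives a value barely above the 20% chance level for five classes, the classic symptom of building the matrix while the generator shuffles: test_generator.classes no longer lines up with the prediction order. A sketch that reproduces the figure from the matrix:

```python
import numpy as np

# The confusion matrix printed above
cm = np.array([[ 17, 346, 253, 168, 176],
               [ 26, 684, 496, 291, 327],
               [ 15, 468, 321, 204, 208],
               [ 15, 441, 300, 194, 189],
               [  9, 312, 213, 125, 138]])

accuracy = np.trace(cm) / cm.sum()       # correct predictions / all predictions
recall = np.diag(cm) / cm.sum(axis=1)    # per-class recall (row-wise)

print(round(accuracy, 3))  # 0.228, barely above the 0.2 chance level
print(recall)
```

Passing shuffle=False to flow_from_directory before predicting restores the alignment and yields a matrix consistent with the ~58-69% validation accuracy reported earlier.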
6. Discussions :