
Final Project Report: Emotion Recognition from Speech

Using LSTM and Real-Time Detection


Nandini Chaudhary
Enrollment ID: 202001090
B. Tech. Project (BTP) Report
BTP Mode: Off Campus
Mentor: Mukesh Sharma
Dhirubhai Ambani Institute of ICT (DA-IICT)
Gandhinagar, India
[email protected]
May 2, 2024

Abstract—This document presents the final project report detailing the design, development, and evaluation of a real-time emotion detection system with two main components. The first part involves emotion speech recognition from a dataset, where we build and evaluate a model using Long Short-Term Memory (LSTM) networks to classify emotions from speech data. The second part focuses on real-time emotion recognition, where we implement a system that detects emotions in real time and changes the LED colour correspondingly to reflect the detected emotion. The report covers the methodology, findings, and contributions towards enhancing emotion detection technology, along with reflections on the project's outcomes.

I. EMOTION RECOGNITION IN VARIOUS DOMAINS

A. Emotion Recognition in Healthcare

Emotion recognition systems have potential applications in healthcare for mental health assessment, patient monitoring, and therapeutic interventions. They can contribute to improved patient care and treatment outcomes.

B. Emotion Recognition in Education

Emotion recognition systems can aid in student engagement assessment, personalized learning, and emotional support in educational settings. They offer insights into educational interventions and student well-being.

C. Emotion Recognition in Customer Service

Call Centre Optimization: Emotion recognition systems can analyze customers' voice tones to assess their emotions during calls, enabling call centre agents to provide personalized and empathetic support.


Part I: Emotion Speech Recognition from Dataset

II. FROM DATASET

A. Learning Outcomes

The project provided an invaluable opportunity for me to acquire and refine a diverse set of skills spanning technical and interpersonal domains. I developed proficiency in machine learning model development by implementing LSTM networks for emotion classification from speech data. Additionally, working solo on the project enhanced my ability to manage tasks independently and efficiently while refining my problem-solving skills in the context of project development.

B. LSTM Model Definition

Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture designed to overcome the vanishing gradient problem. It is particularly effective for processing and classifying data sequences, making it suitable for speech recognition and sentiment analysis tasks. An LSTM network consists of memory cells that can maintain information over long periods, allowing them to capture dependencies and patterns in sequential data.

C. Data Preparation

The preparation of the dataset formed the project's foundational phase, laying the groundwork for subsequent analysis and model development. This section delineates the meticulous steps in dataset preparation, encompassing data loading, preprocessing, labelling, and organization into a structured format.

D. Data Loading and Preprocessing

To prepare the data for training the LSTM model, I followed a series of steps to load and preprocess the audio files using Python. Here is a detailed breakdown of the process.

1) Loading the Dataset: I began by loading the dataset using Python. This involved traversing the directory structure to locate audio files and their corresponding labels. I stored the file paths and labels in separate lists for further processing.

import numpy as np
import pandas as pd
import os
import librosa

paths = []
labels = []
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        paths.append(os.path.join(dirname, filename))
        label = filename.split('_')[-1]
        label = label.split('.')[0]
        labels.append(label.lower())
    if len(paths) == 2800:
        break
print('Dataset is Loaded')
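The later snippets refer to a DataFrame df with 'speech' and 'label' columns, which the report does not show being created. A minimal sketch of how it could be assembled from the collected lists is given below; the exact construction is an assumption, with the names df, 'speech', and 'label' taken from the code that follows.

# Hypothetical sketch: build the DataFrame used by the later snippets
df = pd.DataFrame()
df['speech'] = paths   # full paths to the audio files
df['label'] = labels   # emotion label parsed from each filename
print(df.head())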
2) Feature Extraction: Next, feature extraction was performed on the audio files to prepare them for input to the model. Here is the Python code used for extracting Mel-Frequency Cepstral Coefficients (MFCCs) from the audio files:

def extract_mfcc(filename):
    y, sr = librosa.load(filename, duration=3, offset=0.5)
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)
    return mfcc

This function takes a filename as input, loads the corresponding audio file using librosa, and extracts MFCC features. The extracted features are then returned as an array.

Next, I applied this function to all the audio files in our dataset:

X_mfcc = df['speech'].apply(lambda x: extract_mfcc(x))

The variable X_mfcc now contains the extracted MFCC features for all audio files in the dataset.

Finally, I converted the extracted features into a numpy array and expanded the dimensions to prepare them for input to our model:
X = [x for x in X_mfcc]
X = np.array(X)
X = np.expand_dims(X, -1)

The shape of the resulting array X is (2800, 40, 1), where 2800 is the number of samples, 40 is the number of MFCC coefficients, and 1 indicates the number of channels.

3) Data Splitting: The dataset was then split into training and validation sets to assess the model's performance. This step ensures the model generalizes well to unseen data and helps prevent overfitting.
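The report does not show how the string labels were encoded or how the split was carried out. The sketch below is one plausible way to produce the X_train, X_val, y_train, and y_val arrays used by the training code later; the use of pd.get_dummies, train_test_split, and the 80/20 ratio are assumptions rather than the report's actual procedure.

from sklearn.model_selection import train_test_split

# One-hot encode the seven emotion labels (column order is alphabetical)
y = pd.get_dummies(df['label']).values.astype('float32')

# Hold out 20% of the samples for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# The evaluation snippet later refers to X_test and y_test; these could be
# obtained from a further split of the held-out data.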
III. EXPLORATORY DATA ANALYSIS

After this, I performed Exploratory Data Analysis (EDA) to gain insights into the distribution of different emotions within the dataset. Here is what I did:

A. Count Plots

I created count plots to visualize the distribution of emotion labels in the dataset. This allows me to understand the class imbalance and adjust the modelling strategy accordingly.

import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(8, 6))
sns.countplot(x='label', data=df)
plt.xlabel('Emotion', size=12)
plt.ylabel('Count', size=12)
plt.title('Distribution of Emotion Labels', size=14)
plt.show()

B. Individual Emotion Analysis

I then analyzed individual emotions by visualizing audio waveforms and spectrograms for each emotion category. This helps to understand the characteristics of different emotions in the audio data.

def waveplot(data, sampling_rate, emotion):
    plt.figure(figsize=(10, 4))
    plt.title(emotion, size=20)
    librosa.display.waveshow(data, sr=sampling_rate)
    plt.show()

def spectrogram(data, sr, emotion):
    x = librosa.stft(data)
    xdb = librosa.amplitude_to_db(abs(x))
    plt.figure(figsize=(11, 4))
    plt.title(emotion, size=20)
    librosa.display.specshow(xdb, sr=sr, x_axis='time', y_axis='hz')
    plt.colorbar()
    plt.show()

These functions visualize waveforms and spectrograms for each emotion. (Figures: waveform and spectrogram plots for the angry, disgust, fear, happy, neutral, sad, and surprised categories.)

C. LSTM Model Development

The LSTM model served as the core component of the emotion detection system, facilitating the classification of audio samples into distinct emotion categories. This section delineates the architecture, training process, and evaluation outcomes of the LSTM model.

1) Model Architecture: I constructed an LSTM model using the Keras library for emotion classification from speech data. The model architecture consists of an input layer, followed by LSTM and dense layers with dropout regularization to prevent overfitting.
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout

model = Sequential([
    LSTM(256, return_sequences=False, input_shape=(40, 1)),
    Dropout(0.2),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(64, activation='relu'),
    Dropout(0.2),
    Dense(7, activation='softmax')
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

2) Model Training: The LSTM model was trained for 50 epochs; in the call shown below, a batch size of 32 is used. During training, both training and validation loss and accuracy were monitored. The training process involved optimizing the categorical cross-entropy loss function using the Adam optimizer.

# Train the model
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=32)
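Since the text notes that both training and validation loss and accuracy were monitored, a short sketch of how the returned history object could be visualized is given below; this plotting step is an illustration and is not part of the original report.

# Sketch: plot the monitored training/validation accuracy and loss curves
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()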

3) Model Evaluation: The evaluation outcomes of the trained LSTM model are discussed, encompassing performance metrics such as accuracy, precision, recall, and F1 score.

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Loss: {loss}, Test Accuracy: {accuracy}')
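The paragraph above mentions precision, recall, and F1 score, but the snippet only reports loss and accuracy. One way these per-class metrics could be computed with scikit-learn is sketched below; it assumes one-hot encoded test labels as described earlier and is not taken from the original report.

# Sketch: per-class precision, recall, and F1 score for the seven emotions
from sklearn.metrics import classification_report

y_pred = np.argmax(model.predict(X_test), axis=1)  # predicted class indices
y_true = np.argmax(y_test, axis=1)                 # true class indices from one-hot labels
print(classification_report(y_true, y_pred))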
IV. RESULTS

The trained LSTM model achieved an accuracy of approximately 99.82% on the training data and 37.50% on the validation data. While the model performed well on the training data, it exhibited overfitting as indicated by the lower validation accuracy. Further optimization and regularization techniques may be applied to improve generalization performance.

V. CONCLUSION

In this project, I developed an LSTM-based model for emotion recognition from speech data. Despite achieving high accuracy on the training data, the model's performance on unseen data could be improved through better regularization and hyperparameter tuning.

VI. ADDITIONAL COMMENTS

The best validation accuracy achieved was 72.32%. I used checkpointing to save the model with the best validation accuracy and adjusted the learning rate for slow convergence.

# best val accuracy: 72.32
# use checkpoint to save the best val accuracy model
# adjust learning rate for slow convergence
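The checkpointing and learning-rate adjustment are only described in the comments above. A minimal sketch of how they could be implemented with Keras callbacks follows; the filename, patience, and factor values are assumptions and not taken from the report.

# Sketch: save the best-validation-accuracy model and reduce the learning rate
# when the validation loss plateaus (to address slow convergence)
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

checkpoint = ModelCheckpoint('best_model.keras', monitor='val_accuracy',
                             save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5)

history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=50, batch_size=32,
                    callbacks=[checkpoint, reduce_lr])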
Part II: Real-Time Emotion Recognition with LED Feedback

VII. INTRODUCTION

The real-time emotion detection system with LED feedback aims to provide a visually intuitive representation of detected emotions using LEDs. Emotion recognition is an important aspect of human-computer interaction, and by integrating LED feedback, the system enhances user experience and accessibility. The project utilizes an Arduino microcontroller, a sound sensor module, and a WS2812B addressable LED strip to detect sound levels and display corresponding LED patterns.

VIII. COMPONENTS USED

The following components were used in the project:
• Arduino UNO microcontroller
• Sound sensor module
• WS2812B addressable LED strip
• Jumper wires

IX. METHODOLOGY

The methodology involved reading sound sensor values using the Arduino analog input, mapping these values to LED brightness, and displaying corresponding LED patterns based on the detected emotion. The Arduino code implemented a series of functions to control LED behavior according to sound levels.

X. WHY ARDUINO UNO?

The Arduino Uno was chosen for this project due to its simplicity, availability, and ease of use. While the Arduino Nano is smaller and more compact, the Uno offers more flexibility in terms of pin count and ease of prototyping. Since the project involves connecting multiple components and requires both digital and analog pins, the Arduino Uno's larger form factor makes it more suitable for this purpose.

XI. COMPONENT CONNECTIONS

Refer to the circuit diagram in Section XVI.

A. Arduino Uno to Sound Sensor (KY-038)

• 5V to VCC (Positive): The 5V pin of the Arduino Uno is connected to the VCC (positive) pin of the sound sensor. This connection provides the necessary power supply to the sound sensor for its operation. The sound sensor requires a stable voltage source to function correctly.

• GND (Ground) to GND (Ground): The GND pin of the Arduino Uno is connected to the GND (ground) pin of the sound sensor. This connection establishes a common ground reference between the Arduino and the sound sensor.

• A0 (Analog Input) to OUT (Analog Output): The A0 pin of the Arduino Uno is connected to the OUT (analog output) pin of the sound sensor. This connection enables the Arduino to read the analog output signal generated by the sound sensor.

B. Arduino Uno to LED Strip

• D5 (Digital Output) to Data Input: The D5 pin of the Arduino Uno is connected to the data input pin of the LED strip. This connection enables the Arduino to control the LED strip and change its color and brightness based on input signals.

• 3.3V to 3.3V (Positive): The 3.3V pin of the Arduino Uno is connected to the positive voltage input of the LED strip. This connection provides power to the LED strip, allowing it to illuminate.

XII. PROJECT DETAILS

A. Arduino Uno

The Arduino reads the analog output signal from the sound sensor and maps it to a range suitable for LED colors. Higher analog values indicate louder sounds, while lower values indicate quieter sounds.

B. Sound Sensor Operation

The sound sensor listens for sounds in its surroundings and outputs a signal when it detects sound. The Arduino Uno reads this signal from its analog input and processes it to determine the detected emotion.

C. Emotion Classification

The Arduino Uno uses a simple algorithm to classify the detected emotion based on the sound level. For example, a loud sound might indicate happiness or excitement, while a quiet sound might indicate sadness or calmness. The classification algorithm is designed to be simple and intuitive, providing basic feedback to the user.

D. RGB LED Feedback

Once the emotion is classified, the Arduino Uno adjusts the intensity of the RGB LEDs to represent the detected emotion. For example, a bright, vibrant color might indicate happiness, while a dim, muted color might indicate sadness. By adjusting the intensity of each colour channel (red, green, and blue), a wide range of colors can be displayed to represent different emotions.

XIII. ARDUINO CODE DESCRIPTION

A. Setup

The setup() function initializes the Arduino environment. It configures the LED strip and sets up serial communication for debugging purposes.

void setup() {
  FastLED.addLeds<WS2812, LED_PIN, GRB>(leds, NUM_LEDS);
  Serial.begin(9600); // Initialize serial communication for debugging
}

B. Main Loop

The loop() function continuously reads the value from the sound sensor and adjusts the LED color and pattern based on the sound level.

void loop() {
  // Read the value from the sound sensor
  int soundValue = analogRead(SOUND_SENSOR_PIN);
  Serial.println(soundValue); // Print the sound sensor reading

  // Map the sound value to a range suitable for LED brightness
  int brightness = map(soundValue, 0, 1023, 0, 255);

  // Adjust LED color and pattern based on sound level
  if (brightness > 200) {
    // If sound level is very high, display pulsating red color
    pulsatingRed(brightness);
  } else if (brightness > 100) {
    // If sound level is moderate, display fast blinking yellow color
    fastBlinkingYellow(brightness);
  } else if (brightness > 50) {
    // If sound level is low, display slow blinking blue color
    slowBlinkingBlue(brightness);
  } else {
    // If sound level is very low, display static green color
    fill_solid(leds, NUM_LEDS, CRGB::Green);
    FastLED.setBrightness(100); // Lower brightness for low sound levels
  }

  FastLED.show();
  delay(100); // Adjust delay for responsiveness
}

C. LED Color Functions

The code includes functions to display different colors and patterns on the LED strip based on the sound level.

// Function to display pulsating red color
void pulsatingRed(int brightness) {
  for (int b = 0; b < brightness; b++) {
    fill_solid(leds, NUM_LEDS, CRGB(b, 0, 0));
    FastLED.show();
    delay(10);
  }
  for (int b = brightness; b > 0; b--) {
    fill_solid(leds, NUM_LEDS, CRGB(b, 0, 0));
    FastLED.show();
    delay(10);
  }
}

// Function to display fast blinking yellow color
void fastBlinkingYellow(int brightness) {
  fill_solid(leds, NUM_LEDS, CRGB(brightness, brightness, 0));
  FastLED.show();
  delay(100);
  fill_solid(leds, NUM_LEDS, CRGB::Black);
  FastLED.show();
  delay(100);
}

// Function to display slow blinking blue color
void slowBlinkingBlue(int brightness) {
  fill_solid(leds, NUM_LEDS, CRGB(0, 0, brightness));
  FastLED.show();
  delay(500);
  fill_solid(leds, NUM_LEDS, CRGB::Black);
  FastLED.show();
  delay(500);
}

XIV. RESULTS

The results of the experiments demonstrated the effectiveness of the real-time emotion detection system in providing LED feedback corresponding to detected emotions. The system accurately reflected changes in ambient sound levels and displayed appropriate LED patterns.

XV. CONCLUSION

The real-time emotion detection system with LED feedback successfully achieved its objectives of enhancing user experience and providing intuitive visual representations of detected emotions. Future improvements could include refining the LED patterns and integrating additional sensors for more comprehensive emotion detection.

DEFINITIONS

1. Mel-Frequency Cepstral Coefficients (MFCC): These are features commonly used in speech and audio processing. MFCCs represent the short-term power spectrum of a sound and are derived from the Mel-frequency scale, which approximates the human auditory system's response to different frequencies. (Referenced in Section "From Dataset" under "Feature Extraction")

2. Long Short-Term Memory (LSTM): LSTM is a type of recurrent neural network (RNN) architecture designed to overcome the vanishing gradient problem. It is particularly effective for processing and classifying sequences of data, such as speech and time series data, due to its ability to capture long-term dependencies. (Referenced in Section "From Dataset" under "LSTM Model Definition")

3. Exploratory Data Analysis (EDA): EDA is the process of analyzing and visualizing data to gain insights and identify patterns or trends. It involves techniques such as data visualization, summary statistics, and hypothesis testing to understand the structure and characteristics of the dataset. (Referenced in Section "Results" under "Data Analysis")

4. Overfitting: Overfitting occurs when a machine learning model learns noise or irrelevant patterns from the training data and performs poorly on unseen data. It typically happens when the model is too complex or has been trained for too many epochs, leading to high performance on the training set but poor generalization to new data. Regularization techniques such as dropout and early stopping are commonly used to mitigate overfitting. (Referenced in Section "Results" under "Model Evaluation")

5. Recurrent Neural Network (RNN): A type of neural network designed to handle sequential data by maintaining a state or memory of previous inputs. RNNs are suitable for tasks such as time series prediction, natural language processing, and speech recognition. LSTM is a specialized variant of RNNs designed to address the limitations of traditional RNNs in capturing long-term dependencies. (Referenced in Section "From Dataset" under "LSTM Model Definition")

6. Vanishing Gradient Problem: A challenge encountered during the training of deep neural networks, where gradients become increasingly small as they are backpropagated through layers, leading to slow or ineffective learning. This problem is particularly pronounced in traditional RNNs, limiting their ability to capture long-range dependencies in sequential data. LSTM networks were specifically developed to mitigate the vanishing gradient problem and enable more effective training of deep recurrent architectures. (Referenced in Section "From Dataset" under "LSTM Model Definition")

7. Regularization: A set of techniques used to prevent overfitting in machine learning models. Regularization methods introduce constraints on the model parameters to reduce complexity and encourage simpler solutions that generalize well to unseen data. Common regularization techniques include L1 and L2 regularization, dropout, and early stopping. (Referenced in Section "Results" under "Model Evaluation")

8. Serial Communication: The process of sending data one bit at a time over a communication channel. In the context of Arduino programming, serial communication is commonly used for debugging purposes to send data from the microcontroller to a computer for monitoring and analysis. (Referenced in Section "Project Details" under "Setup")

9. Adam Optimizer: Adam is an optimization algorithm commonly used for training deep neural networks. It combines techniques such as momentum and adaptive learning rates to achieve efficient and effective optimization of model parameters. The Adam optimizer adapts the learning rate for each parameter based on estimates of the first and second moments of the gradients, allowing it to converge quickly and handle noisy or sparse gradients effectively. (Referenced in Section "From Dataset" under "Model Training")

10. Analog Input/Output (A0): Analog pins on microcontrollers like the Arduino Uno are used to read analog signals from sensors or output analog signals to devices. Analog inputs are capable of measuring continuous voltage levels, while analog outputs can generate variable voltage levels to control analog devices. In the context of the Arduino Uno, analog pins are often used to interface with sensors such as potentiometers, temperature sensors, and sound sensors. (Referenced in Section "Arduino Uno to Sound Sensor (KY-038)" under "Component Connections")

11. FastLED Library: FastLED is a popular Arduino library used for controlling addressable LED strips and matrices. It offers high-performance functionality for controlling a large number of LEDs with various color effects and animations. FastLED supports a wide range of LED chipsets and provides optimized code for smooth and efficient LED animations. (Referenced in Section "Arduino Code Description" under "Setup")

12. Pulsating Red Color: Pulsating red color refers to a visual effect where the intensity of red LEDs increases and decreases rhythmically, creating a pulsating or breathing effect. This effect is often used to represent strong emotions or alert states in LED-based displays and visualizations. In the context of the real-time emotion detection system, pulsating red color may indicate high levels of detected sound, corresponding to intense emotions. (Referenced in Section "Arduino Code Description" under "LED Color Functions")

13. Blinking Yellow Color: Blinking yellow color refers to a visual effect where yellow LEDs turn on and off rapidly, creating a blinking or flashing pattern. This effect is commonly used to draw attention or convey a warning in LED-based displays. In the real-time emotion detection system, blinking yellow color may indicate moderate levels of detected sound, suggesting a state of caution or heightened awareness. (Referenced in Section "Arduino Code Description" under "LED Color Functions")

14. Blinking Blue Color: Blinking blue color refers to a visual effect where blue LEDs alternate between on and off states at a slower pace compared to blinking yellow. This effect can convey a sense of calmness or tranquility in LED-based displays. In the context of the real-time emotion detection system, blinking blue color may indicate low levels of detected sound, corresponding to relaxed or peaceful emotions. (Referenced in Section "Arduino Code Description" under "LED Color Functions")

15. Analog Input: Analog input refers to a type of signal or data representation that varies continuously over time within a certain range of values. In the context of Arduino programming, analog input typically involves reading data from analog sensors such as potentiometers, light sensors, or sound sensors. The Arduino Uno's analog input pins allow it to measure analog voltages and convert them into digital values for processing. (Referenced in Section "Arduino Uno to Sound Sensor (KY-038)")

16. Digital Output: Digital output refers to a type of signal or data representation that has only two possible states: high (1) or low (0). In Arduino programming, digital output pins can be used to control digital devices such as LEDs, motors, or relays by switching them on or off. The Arduino Uno's digital output pins provide a means of sending digital signals to external components for various applications. (Referenced in Section "Arduino Uno to LED Strip")

17. Serial Communication: Serial communication is a method of transmitting data between electronic devices one bit at a time over a communication channel or wire. In Arduino programming, serial communication is commonly used for debugging, data logging, or interfacing with other devices such as computers or sensors. The Arduino Uno's built-in serial interface allows it to communicate with a computer via USB or with other Arduino boards via serial ports. (Referenced in Section "Setup")

18. CRGB: CRGB is a data type used in the FastLED library for representing RGB color values. It stands for "Color RGB" and is used to specify the intensity of red, green, and blue components in an RGB color model. CRGB values range from 0 to 255 for each color channel, allowing for a wide range of colors to be represented. In the context of the real-time emotion detection system, CRGB values are used to control the color of LEDs in the LED strip. (Referenced in Section "Arduino Code Description" under "LED Color Functions")

19. NUM_LEDS: NUM_LEDS is a constant or variable representing the total number of LEDs in the LED strip. In the context of the Arduino code for the real-time emotion detection system, NUM_LEDS is used to specify the size of the LED array and determine the number of LEDs that will be controlled by the Arduino Uno. By defining this value, the code can ensure that the correct number of LEDs are addressed and manipulated according to the detected emotion. (Referenced in Section "Arduino Uno to LED Strip")

20. Delay Function: In Arduino programming, the delay() function is used to pause the execution of the program for a specified period of time. It takes a single argument, which is the duration of the delay in milliseconds. During the delay period, the Arduino does not perform any other actions, allowing for precise timing in tasks such as LED blinking, sensor polling, or serial communication. The delay function is commonly used to create time intervals or control the timing of events in Arduino sketches. (Referenced in Section "Main Loop Function")

21. RGB Color Model: The RGB color model is a color representation system in which colors are defined by specifying the intensities of three primary colors: red, green, and blue. By varying the intensity of each primary color component, a wide range of colors can be produced. In the context of the real-time emotion detection system, the RGB color model is used to control the color of LEDs in the LED strip. By adjusting the intensity of red, green, and blue light emitted by each LED, different colors corresponding to detected emotions can be displayed. (Referenced in Section "Project Details" under "RGB LED Feedback")

22. Sound Sensor Module: A sound sensor module is an electronic component that detects sound waves in its surroundings and converts them into electrical signals. It typically consists of a microphone or sound sensor element, an amplifier, and output pins for transmitting the detected sound levels to a microcontroller or other electronic devices. In the real-time emotion detection system, the sound sensor module is used to capture ambient sound levels, which are then processed by the Arduino Uno to classify emotions and control the LED feedback. (Referenced in Section "Components Used")

XVI. CIRCUIT DIAGRAM

(Circuit diagram figure: connections between the Arduino Uno, the sound sensor, and the LED strip.)
