DEMO PPT
Emotion Recognition From German Speech Using Feature Selection Based Decision Support System
Conference logo
Presented By:
Sanjay Sharma
Name of Conference
LIST OF CONTENTS
Introduction (in paper)
Literature Review (last 4 papers of references)
Problem Statement and Objectives (from conclusion)
Feature Extraction (in paper)
Feature Selection (in paper)
Application of Machine Learning (Web Development)
Support Vector Machine (HTML)
Process Flow Diagram (you have to create)
Result (in paper)
Conclusion (in paper)
Future Scope (in paper)
References (first 7 papers of references)
Introduction
• Bennasar et al. (2015) introduce two new nonlinear feature selection methods, namely Joint Mutual Information Maximisation (JMIM) and Normalised Joint Mutual Information Maximisation (NJMIM); both methods use mutual information and the 'maximum of the minimum' criterion, which alleviates the problem of overestimating feature significance, as demonstrated both theoretically and experimentally. The proposed methods are compared with five competing methods using eleven publicly available datasets. The results demonstrate that JMIM outperforms the other methods on most of the tested public datasets, reducing the relative average classification error by almost 6% in comparison with the next best performing method. The statistical significance of the results is confirmed by an ANOVA test. Moreover, this method produces the best trade-off between accuracy and stability.
Problem Statement and Objective
Problem statement:
It is difficult to identify and recognize human emotions and thereby protect people from stress, suicide, and similar risks. The signs and symptoms of such pressure include a state of high excitement, elevated heart rate, excessive adrenaline production, failure of coping mechanisms, a sense of tension and exhaustion, and an inability to focus fully. To minimize these risk factors and protect people at an early stage of tension and stress, we study "Emotion Recognition From German Speech Using Feature Selection Based Decision Support System" in detail.
Objective:
• Develop a system that recognizes emotion using feature selection and a support vector machine.
• Calculate the accuracy of the system by providing test signals to the model.
• Compare the results obtained with this method against the available literature.
• Detect the emotions.
Feature Extraction
• Time frame: These features capture the temporal properties of voiced and unvoiced segments. They operate entirely on the time-domain signal.
• Energy: The energy of the signal x within a particular window of N samples is given by:
$E = \sum_{n=1}^{N} x(n)\, x^{*}(n)$
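To make the energy formula concrete, here is a minimal Python sketch of a short-time energy computation; the sampling rate, frame length, and test signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def short_time_energy(frame: np.ndarray) -> float:
    """Energy of one frame: E = sum over n of x(n) * conj(x(n))."""
    return float(np.sum(frame * np.conj(frame)).real)

# Assumed example: a 16 kHz sine tone split into 400-sample frames.
fs = 16000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 220 * t)
frames = signal[: len(signal) // 400 * 400].reshape(-1, 400)
print([short_time_energy(f) for f in frames[:3]])
```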
• Zero Traversing Rate (ZTR): The zero-crossings feature counts the number of times the sign of the speech signal amplitude changes in the time domain within a frame. For single-voiced speech signals, zero crossings give a rough estimate of the fundamental frequency. For more complex signals, the measure is a straightforward indicator of noisiness. The Zero Traversing Rate counts how often the speech signal changes its sign:

$ZTR = \frac{1}{2} \sum_{n=1}^{N} \left| \operatorname{sgn}(x_n) - \operatorname{sgn}(x_{n-1}) \right|$
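A small Python sketch of the zero traversing rate defined above; the tone and noise frames are hypothetical and only show that noisier frames change sign more often.

```python
import numpy as np

def zero_traversing_rate(frame: np.ndarray) -> float:
    """ZTR = 1/2 * sum over n of |sgn(x_n) - sgn(x_{n-1})|."""
    signs = np.sign(frame)
    return 0.5 * float(np.sum(np.abs(np.diff(signs))))

# A noisy frame crosses zero far more often than a low-frequency tone.
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 100 * np.arange(400) / 16000)
noise = rng.standard_normal(400)
print(zero_traversing_rate(tone), zero_traversing_rate(noise))
```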
• Pitch: The signal that passes through the vocal tract originates in the larynx, where the vocal cords are located, and ends at the mouth. The shape of the vocal tract and the vibration of the vocal cords are controlled by nerves from the brain. The sounds we produce can be classified into voiced and unvoiced sounds. During the generation of unvoiced sounds the vocal cords do not vibrate and remain open, whereas during voiced sounds they vibrate and generate what is known as the glottal pulse. A glottal pulse is a sum of sinusoidal waves at a fundamental frequency and its harmonics (amplitude decreases as frequency increases). The fundamental frequency of the glottal pulse is called the pitch.
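The slides do not name a specific pitch estimator, so the sketch below uses a simple autocorrelation peak search, one common way to approximate the fundamental frequency of a voiced frame; the sampling rate, search range, and synthetic frame are assumptions.

```python
import numpy as np

def pitch_autocorr(frame: np.ndarray, fs: int, fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate pitch (Hz) as the lag of the strongest autocorrelation peak."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return fs / best_lag

fs = 16000
t = np.arange(int(0.03 * fs)) / fs            # a 30 ms "voiced" frame
frame = np.sin(2 * np.pi * 150 * t)           # synthetic 150 Hz tone
print(round(pitch_autocorr(frame, fs)))       # roughly 150
```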
• Centroid: The spectral centroid is defined as the centre of gravity of the spectrum. The centroid is a measure of spectral shape; higher centroid values correspond to brighter textures with more high-frequency content.
$C_r = \frac{\sum_{k=1}^{N/2} f[k]\, X_r[k]}{\sum_{k=1}^{N/2} X_r[k]}$
where f[k] is the frequency at bin k. The centroid captures sound sharpness; sharpness corresponds to the high-frequency content of the spectrum, and higher centroid values correspond to spectra dominated by higher frequencies. Because it describes spectral shape effectively, centroid measures are used in audio classification tasks.
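A minimal Python version of the centroid formula above, computed on the magnitude spectrum; the FFT length and test tones are assumptions used only to show that high-frequency content raises the centroid.

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, fs: int) -> float:
    """C = sum_k f[k] * |X[k]| / sum_k |X[k]| over the positive-frequency bins."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

fs = 16000
t = np.arange(1024) / fs
bright = np.sin(2 * np.pi * 3000 * t)   # high-frequency tone -> high centroid
dull = np.sin(2 * np.pi * 200 * t)      # low-frequency tone -> low centroid
print(spectral_centroid(bright, fs), spectral_centroid(dull, fs))
```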
Feature Extraction
• Flatness: This feature is computed as a vector of flatness values per frame. The flatness of a band is defined as the ratio of the geometric mean to the arithmetic mean of the power spectrum coefficients within that band. Each vector is reduced to a scalar by computing the mean value over the bands for each frame, thereby obtaining a scalar feature that describes the overall flatness.
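A sketch of the flatness computation described above, taking each band's flatness as the ratio of the geometric mean to the arithmetic mean of the power-spectrum coefficients and averaging over the bands; the number of bands and the test signals are arbitrary assumptions.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray, n_bands: int = 4) -> float:
    """Mean over bands of (geometric mean / arithmetic mean) of the power spectrum."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12     # small offset avoids log(0)
    bands = np.array_split(power, n_bands)
    flatness = [np.exp(np.mean(np.log(b))) / np.mean(b) for b in bands]
    return float(np.mean(flatness))

rng = np.random.default_rng(1)
noise = rng.standard_normal(1024)                            # flat spectrum -> near 1
tone = np.sin(2 * np.pi * 440 * np.arange(1024) / 16000)     # peaky spectrum -> near 0
print(spectral_flatness(noise), spectral_flatness(tone))
```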
Feature Selection
From the set of extracted features, only a few are significant. This means we have to identify and exclude irrelevant features from the retrieved feature set. For this purpose, in this work we use information theory to rank features on the basis of their Joint Mutual Information (JMI) with respect to the remaining features as well as the associated class label. Let A and B be two discrete random variables representing elements of the feature set; the mutual information between these two features is defined as:

$MI(A, B) = \sum_{A} \sum_{B} P(A, B) \log_2 \frac{P(A, B)}{P(A)\, P(B)}$

where P is a probability density function and MI(A, B) is a measure of the dependency between the density of variable A and the density of the target variable B.
Let the features of a sample instance be associated with some class label C; then the joint mutual information between the features A, B and the class label C can be represented as:

$I(A; B \mid C) = I(A \mid B) - I(A \mid B, C)$

Using the joint mutual information principle, features are ranked according to their information with respect to the next-ranked features and the associated class label for that instance.
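As a rough illustration of the ranking idea, the sketch below estimates the mutual information between each discretized feature and the class label from histogram counts and sorts the features by that score; it is a simplified stand-in for the JMIM procedure, and the random data is purely illustrative.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray) -> float:
    """MI(A, B) = sum_{a,b} P(a,b) * log2( P(a,b) / (P(a) P(b)) ) for discrete a, b."""
    joint = np.histogram2d(a, b, bins=(len(np.unique(a)), len(np.unique(b))))[0]
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

# Illustrative data: feature 0 is tied to the label, feature 1 is random noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
noisy_copy = labels ^ (rng.random(500) < 0.1)        # label with ~10% of bits flipped
features = np.column_stack([noisy_copy, rng.integers(0, 2, 500)])
scores = [mutual_information(features[:, j], labels) for j in range(features.shape[1])]
print(np.argsort(scores)[::-1])                      # feature indices ranked by relevance
```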
Web Development
• Machine learning is a field of study in which computers learn by themselves without being explicitly programmed. It makes computers similar to humans in the task of decision making.
• It involves computers learning from the data provided so that they can carry out certain tasks.
• Machine learning is a subfield of Artificial Intelligence.
• Machine learning approaches are traditionally divided into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system:
Supervised machine learning
Unsupervised machine learning
Reinforcement learning
Fig: Architecture of Machine Learning
Fig: Emotion Recognition Using Machine Learning
Applications of Machine Learning
• Self Driving Cars
• Machine Learning in Healthcare
• Image Recognition
• Traffic prediction
• Speech Recognition
• Virtual Personal Assistant
• Financial Fraud Detection
HTML
• In machine learning, the Support Vector Machine (SVM) is one of the most popular supervised learning algorithms.
• The goal of the SVM algorithm is to create the best line or decision boundary that can segregate an n-dimensional space into classes so that new data points can easily be placed in the correct category in the future. This best decision boundary is called a hyperplane.
• SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme cases are called support vectors, and hence the algorithm is termed a Support Vector Machine.
Classifier: $f(x) = \operatorname{sgn}(w^{*} \cdot x + b^{*})$; if the sign is '+' the class is positive, and if the sign is '−' the class is negative.

$w^{*} \cdot x_{+} + b^{*} \ge 1$
$w^{*} \cdot x_{-} + b^{*} \le -1$
$y_i (w^{*} \cdot x_{+} + b^{*}) \ge 1$
$y_i (w^{*} \cdot x_{-} + b^{*}) \ge 1$
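A small numpy sketch of the decision rule and margin constraints listed above; the weight vector w*, bias b*, and sample points are made-up values used only to show how the inequalities are checked.

```python
import numpy as np

# Hypothetical trained parameters of a linear SVM (not from the paper).
w_star = np.array([2.0, -1.0])
b_star = -0.5

def classify(x: np.ndarray) -> int:
    """f(x) = sgn(w* . x + b*): +1 means positive class, -1 means negative class."""
    return 1 if np.dot(w_star, x) + b_star >= 0 else -1

# Margin check: y_i * (w* . x_i + b*) >= 1 should hold for correctly separated points.
x_plus, x_minus = np.array([1.5, 0.5]), np.array([-0.5, 1.0])
for x, y in [(x_plus, 1), (x_minus, -1)]:
    print(classify(x), y * (np.dot(w_star, x) + b_star) >= 1)
```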
Radial basis kernel
The Radial Basis Function (RBF) kernel is a kernel function used in machine learning to obtain a non-linear classifier.
A kernel function transforms an n-dimensional input into an m-dimensional space, where m is much higher than n.
The main idea behind using a kernel is that a linear classifier in the higher-dimensional space corresponds to a non-linear classifier in the original lower-dimensional space.
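For concreteness, here is a minimal scikit-learn sketch that trains an SVM with an RBF kernel on synthetic two-class data that is not linearly separable; the dataset and the hyperparameters (C, gamma) are assumptions standing in for the extracted speech features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for feature vectors: an inner cluster surrounded by a ring.
rng = np.random.default_rng(0)
inner = rng.normal(scale=0.5, size=(200, 2))
ring = rng.normal(size=(200, 2))
ring = ring / np.linalg.norm(ring, axis=1, keepdims=True) * 3.0
X = np.vstack([inner, ring])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF kernel gives a non-linear boundary
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```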
Result
Objective 1
A system that provides recognition of emotion.
Fig: GUI of the training model graph and the testing graph of the training model
Objective 2
Calculate the accuracy of the system by providing test signals to the model.
• The graphical user interface showing the training model graph and the testing graph applied to the training model has been completed.
References
• Ghai, M., Lal, S., Duggal, S., Manik, S., "Emotion recognition on speech signals using machine learning", International Conference on Big Data Analytics and Computational Intelligence (ICBDAC), pp. 34-39, 2017.
• Panda, B., Padhi, D., Dash, K., "Use of SVM Classifier & MFCC in Speech Emotion Recognition System", IJARCSSE, Vol. 2, No. 3, 2012.
• Harshini, D., Pranjali, B., Ranjitha, M., Rushali, J., Manikandan, J., "Design and Evaluation of Speech based Emotion Recognition System using Support Vector Machines", IEEE India Council International Conference (INDICON), Vol. 16, 2019.
• Guo, L., Wang, L., Dang, J., Liu, Z., Guan, H., "Exploration of Complementary Features for Speech Emotion Recognition Based on Kernel Extreme Learning Machine", IEEE Access, Vol. 7, pp. 75798-75809, 2019.
• Aouani, H., Ben Ayed, Y., "Emotion recognition in speech using MFCC with SVM, DSVM and auto-encoder", International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Vol. 4, 2018.
Thank You