Multimodal Neural Network for Sentiment Analysis in Embedded Systems

Authors: Quentin Portes 1 ; José Mendès Carvalho 1 ; Julien Pinquier 2 and Frédéric Lerasle 3

Affiliations: 1 Renault Software Lab, Toulouse, France ; 2 IRIT, Paul Sabatier University, CNRS, Toulouse, France ; 3 LAAS-CNRS, Paul Sabatier University, Toulouse, France

Keyword(s): Sentiment Analysis, Deep Learning, Multimodal, Fusion, Embedded System, Cockpit Monitoring.

Abstract: Multimodal neural networks for sentiment analysis use video, text and audio. Processing these three modalities tends to produce computationally heavy models. In an embedded context, all resources, and computational resources in particular, are restricted. In this paper, we design models that deal with these two antagonistic constraints. We focus our work on reducing the number of model input features and the size of the different neural network architectures. The major contribution of this paper is the design of a specific 3D Residual Network instead of a basic 3D convolution. Our experiments are conducted on the well-known MOSI dataset (Multimodal Corpus of Sentiment Intensity). The objective is to achieve results similar to the state of the art. Our best multimodal approach achieves an F1 score of 80%, with the number of parameters reduced by a factor of 2.2 and the memory load reduced by a factor of 13.8 compared to the state of the art. We designed five models: one for each modality (i.e., video, audio and text) and one for each fusion technique. The two high-level multimodal fusions presented in this paper are based on evidence theory and on a neural network approach.
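The abstract's central architectural choice, replacing a plain 3D convolution on the video stream with a compact 3D residual block, can be illustrated with a minimal sketch. The snippet below is a generic PyTorch implementation under assumed layer sizes, not the authors' actual network: the class name ResidualBlock3D, the channel counts and the clip shape are all hypothetical.

    # Hypothetical sketch of a lightweight 3D residual block (not the authors' code):
    # a residual unit over video clips in place of a single 3D convolution.
    import torch
    import torch.nn as nn

    class ResidualBlock3D(nn.Module):
        """Two 3x3x3 convolutions with a skip connection over (C, T, H, W) clips."""
        def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
            super().__init__()
            self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3,
                                   stride=stride, padding=1, bias=False)
            self.bn1 = nn.BatchNorm3d(out_channels)
            self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3,
                                   stride=1, padding=1, bias=False)
            self.bn2 = nn.BatchNorm3d(out_channels)
            self.relu = nn.ReLU(inplace=True)
            # 1x1x1 projection so the skip path matches shape when stride or width changes.
            self.downsample = None
            if stride != 1 or in_channels != out_channels:
                self.downsample = nn.Sequential(
                    nn.Conv3d(in_channels, out_channels, kernel_size=1,
                              stride=stride, bias=False),
                    nn.BatchNorm3d(out_channels),
                )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            identity = x if self.downsample is None else self.downsample(x)
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + identity)

    # Usage example with made-up sizes: 2 clips, 3 channels, 8 frames, 32x32 pixels.
    clips = torch.randn(2, 3, 8, 32, 32)
    block = ResidualBlock3D(3, 16, stride=2)
    print(block(clips).shape)  # torch.Size([2, 16, 4, 16, 16])

Compared with a single wide 3D convolution, stacking a few such blocks keeps the parameter count and memory load low while preserving representational depth, which is the trade-off the abstract targets for the embedded setting.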

CC BY-NC-ND 4.0


Paper citation in several formats:
Portes, Q.; Carvalho, J.; Pinquier, J. and Lerasle, F. (2021). Multimodal Neural Network for Sentiment Analysis in Embedded Systems. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP; ISBN 978-989-758-488-6; ISSN 2184-4321, SciTePress, pages 387-398. DOI: 10.5220/0010224703870398

@conference{visapp21,
author={Quentin Portes and José Mendès Carvalho and Julien Pinquier and Frédéric Lerasle},
title={Multimodal Neural Network for Sentiment Analysis in Embedded Systems},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP},
year={2021},
pages={387-398},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010224703870398},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP
TI - Multimodal Neural Network for Sentiment Analysis in Embedded Systems
SN - 978-989-758-488-6
IS - 2184-4321
AU - Portes, Q.
AU - Carvalho, J.
AU - Pinquier, J.
AU - Lerasle, F.
PY - 2021
SP - 387
EP - 398
DO - 10.5220/0010224703870398
PB - SciTePress
ER -