Data fusion methods in multimodal human computer dialog

YANG Ming-Hao, TAO Jian-Hua - Virtual Reality & Intelligent Hardware, 2019 - Elsevier
Abstract
In multimodal human-computer dialog, non-verbal channels such as facial expression, posture, and gesture, combined with spoken information, are also important in the dialog process. Nowadays, despite the high performance of single-channel behavior computing, it remains a great challenge to understand users' intentions accurately from their multimodal behaviors. One reason for this challenge is that multimodal information fusion still needs improvement in theory, methodology, and practical systems. This paper presents a review of data fusion methods in multimodal human-computer dialog. We first introduce the cognitive assumptions behind single-channel processing and then discuss its implementation methods in human-computer dialog. For the task of multimodal information fusion, several computing models are presented after we introduce the general principles of multimodal data fusion. Finally, some practical examples of multimodal information fusion methods are introduced, and possible and important breakthroughs of data fusion methods in future multimodal human-computer interaction applications are discussed.
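To make the two canonical fusion strategies that surveys in this area typically contrast more concrete, the sketch below implements feature-level (early) fusion, which concatenates per-modality feature vectors before classification, and decision-level (late) fusion, which combines per-modality intent posteriors. This is a minimal illustration, not the authors' models: the modality names, dimensions, classifier, and weights are all hypothetical.

```python
import numpy as np

def feature_level_fusion(features, W, b):
    """Early fusion: concatenate per-modality feature vectors,
    then score dialog intents with one linear classifier (hypothetical)."""
    x = np.concatenate(features)        # (d_speech + d_face + d_gesture,)
    logits = W @ x + b                  # (n_intents,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                  # softmax over candidate intents

def decision_level_fusion(posteriors, weights):
    """Late fusion: confidence-weighted average of the intent
    posteriors produced by independent per-modality classifiers."""
    p = sum(w * post for w, post in zip(weights, posteriors))
    return p / p.sum()                  # renormalize to a distribution

# Toy example: 3 modality channels, 4 candidate dialog intents.
rng = np.random.default_rng(0)
speech, face, gesture = rng.random(8), rng.random(4), rng.random(4)
W, b = rng.standard_normal((4, 16)), np.zeros(4)
print(feature_level_fusion([speech, face, gesture], W, b))

per_modality = [np.array([0.7, 0.1, 0.1, 0.1]),   # speech classifier
                np.array([0.4, 0.3, 0.2, 0.1]),   # facial expression
                np.array([0.5, 0.2, 0.2, 0.1])]   # gesture
print(decision_level_fusion(per_modality, weights=[0.5, 0.25, 0.25]))
```

In practice, early fusion can capture cross-channel correlations but requires aligned features, while late fusion tolerates asynchronous channels at the cost of losing inter-modality interactions; the models reviewed in the paper trade off along this axis.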