Are we there yet? Research in commercial spoken dialog systems

…, D Suendermann, K Dayanidhi, J Liscombe - … Conference on Text …, 2009 - Springer
In this paper we discuss the recent evolution of spoken dialog systems in commercial deployments.
Yet based on a simple finite state machine design paradigm, dialog systems reached …

Classifying subject ratings of emotional speech using acoustic features

J Liscombe, J Venditti, JB Hirschberg - 2003 - academiccommons.columbia.edu
This paper presents results from a study examining emotional speech using acoustic
features and their use in automatic machine learning classification. In addition, we propose a …

Using context to improve emotion detection in spoken dialog systems

J Liscombe, G Riccardi, D Hakkani-Tur - 2005 - academiccommons.columbia.edu
Most research that explores the emotional state of users of spoken dialog systems does not
fully utilize the contextual nature that the dialog structure provides. This paper reports results …

[PDF][PDF] Experiments in emotional speech

J Hirschberg, J Liscombe, J Venditti - ISCA & IEEE Workshop on …, 2003 - cs.columbia.edu
Speech is a rich source of information, not only about what a speaker says, but also about
what the speaker’s attitude is toward the listener and toward the topic under discussion—as …

Much more than the malady: The promise of a web-based digital platform incorporating self-report for research and clinical care in mild cognitive impairment

A McGarry, O Roesler, J Liscombe, M Neumann… - Mayo Clinic …, 2025 - Elsevier
Traditional clinical trials in neurodegenerative disorders have utilized combinations of
examination-based outcomes, global assessments by investigators and participants, and scales …

Detecting certainness in spoken tutorial dialogues

J Liscombe, JB Hirschberg, JJ Venditti - 2005 - academiccommons.columbia.edu
What role does affect play in spoken tutorial systems and is it automatically detectable? We
investigated the classification of student certainness in a corpus collected for ITSPOKE, a …

Investigating the utility of multimodal conversational technology and audiovisual analytic measures for the assessment and monitoring of amyotrophic lateral sclerosis …

M Neumann, O Roesler, J Liscombe, H Kothare… - arXiv preprint arXiv …, 2021 - arxiv.org
We propose a cloud-based multimodal dialog platform for the remote assessment and
monitoring of Amyotrophic Lateral Sclerosis (ALS) at scale. This paper presents our vision, …

Vocal and facial behavior during affect production in autism spectrum disorder

…, M Neumann, J Liscombe… - Journal of Speech …, 2025 - pubs.asha.org
Purpose: We investigate the extent to which automated audiovisual metrics extracted during
an affect production task show statistically significant differences between a cohort of …

When words speak just as loudly as actions: Virtual agent based remote health assessment integrating what patients say with what they do

…, M Neumann, H Kothare, O Roesler, J Liscombe… - Proc …, 2023 - vikramr.com
We present a unified multimodal dialog platform for the remote assessment and monitoring
of patients’ neurological and mental health. Tina, a virtual agent, guides participants through …

Prosody and speaker state: paralinguistics, pragmatics, and proficiency

JJ Liscombe - 2007 - search.proquest.com
Prosody—suprasegmental characteristics of speech such as pitch, rhythm, and loudness—is
a rich source of information in spoken language and can tell a listener much about the …