NO.188 Intelligent Interaction with Autonomous Assistants in the Wild
May 27 - 30, 2024 (Check-in: May 26, 2024)
Organizers
- Yutaka Arakawa
- Kyushu University, Japan
- Wolfgang Minker
- Ulm University, Germany
- Elisabeth André
- Augsburg University, Germany
- Leo Wanner
- Pompeu Fabra University, Spain
Overview
The schedule has been updated from the originally planned dates of November 15-18, 2021 (check-in: November 14, 2021).
The early 21st century has seen a variety of software assistants finding their way into our daily lives. Most prominent has been the emergence of speech assistants such as Amazon Alexa and Google Assistant. Such assistant applications are often labeled as “intelligent” assistants. By human standards, however, they are mostly used for rather simple tasks, such as playing music, retrieving weather information, or switching lights on and off, and the interaction with them remains rather unsophisticated and command-based. Given recent progress in research on autonomous agents that are capable of cooperating with users on demanding tasks, such as problem solving or decision making, it appears fundamental to equip these agents with equally intelligent interaction skills so that they are adequately accepted and trusted. However, it remains unclear what intelligence means in the context of interaction and which skills are required. These questions will be the main focus of the proposed meeting.
In computer science, intelligence is often associated with artificial intelligence (AI). AI manifests itself as the ability of machines to perceive their environment via sensor measurements and to act accordingly. To this end, machines or computers try to imitate functions of the human brain, i.e., they emulate cognitive capabilities. These capabilities comprise the ability to learn, plan, reason, and adapt, as well as to process natural language. Furthermore, intelligent systems need to rely on affective computing to take human emotion into account, and they must be able to act on the user's behalf, i.e., autonomously. With the advances in computational processing power and machine learning, the development of autonomous assistants and robots that help users in nearly all application areas has become a hot topic in current research and has already yielded promising results.
One of the bottlenecks in getting this kind of technology out of the laboratories and into the wild is rendering the interaction with such applications equally intelligent. This task requires collaboration between multiple research disciplines, such as computer science, psychology, and ethics. For computer scientists, the main research obstacles lie in developing technological models, methods, and strategies for communicating complex assistant functionalities. To this end, the interaction needs to evolve from command-based exchanges to reliable human-computer dialogues that can be initiated by either the human or the machine and may span multiple interaction participants. In this context, the use of multimodal sensory information, e.g., visual and physiological features, is essential. To foster the understanding and evaluation of intelligent interactions, human factors and psychological models need to be considered. Only by including the user can a sound level of acceptance, trust, and usability be achieved. Finally, since intelligent interaction with autonomous assistants relies on (private) data and can have a social impact, the expertise of ethicists is valuable.
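To make the contrast between command-based interaction and mixed-initiative, multimodal dialogue concrete, the following is a minimal, purely illustrative Python sketch. It is not a description of any particular system discussed at the meeting; the signal names, thresholds, and rule-based decisions are hypothetical, and real assistants would rely on trained dialogue and affect models rather than hand-written rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalContext:
    """Hypothetical fused sensor readings used to decide on dialogue initiative."""
    stress_level: float                  # 0.0 (calm) .. 1.0 (highly stressed), e.g. from physiological signals
    gaze_on_device: bool                 # whether the user is looking at the assistant (visual channel)
    last_user_utterance: Optional[str]   # most recent speech input, if any

def choose_next_move(ctx: MultimodalContext) -> str:
    """Mixed-initiative turn selection: a turn can be driven by the user or by the machine."""
    if ctx.last_user_utterance:
        # User-initiated turn: respond to the spoken request (the command-based case).
        return f"respond_to: {ctx.last_user_utterance}"
    if ctx.stress_level > 0.7 and ctx.gaze_on_device:
        # Machine-initiated turn: the assistant proactively offers help based on
        # affective and visual context rather than an explicit command.
        return "offer_help: 'You seem busy -- shall I postpone your reminders?'"
    return "stay_silent"

# Example: no speech input, but physiological and visual cues trigger system initiative.
print(choose_next_move(MultimodalContext(stress_level=0.8,
                                         gaze_on_device=True,
                                         last_user_utterance=None)))
```

The point of the sketch is only that system initiative requires fusing several input channels and deciding when to speak at all, which is exactly where human factors, trust, and ethical questions enter.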
Active cooperation between computer science, psychology, and ethics will create a platform for exchanging ideas and benefiting from complementary work. We aim to create a unique venue for discussion and collaboration between experts from these disciplines. The planned Shonan Meeting will help to explore the relevant challenges and to jointly develop a research agenda for the main directions. To this end, we will invite keynote speakers from the respective research fields, whose contributions will serve as a basis for breakout sessions. In these sessions, participants will work actively on specific research objectives in small groups, which will foster interdisciplinary understanding and cooperation. The results of the breakout sessions will then be discussed with the whole plenum.
The outcome of the workshop will be published as free open-access CEUR proceedings (http://ceur-ws.org/). This is expected to encourage publications at top conferences and journals in computer science, jointly authored by psychologists, ethicists, and AI researchers. Furthermore, it will support joint funding applications for long-term research collaborations that continue this research beyond the meeting.
Research challenges:
- Challenges of Intelligent Interaction with Autonomous Assistants (IIAA) in the wild.
- Role of multimodal input and output (speech, gestures, and emotions).
- Dialogue modeling and appropriate response generation for expressive and adaptive autonomous assistants.
- Single and multiple user identification, modeling, and tracking in IIAA.
- Context and activity recognition for IIAA.
- Context awareness (automatic detection of stress, user status, and environmental context).
- Personalization and user-centered development.
- Evolution of the technology and ideas for future research and application scenarios.
- Role of grounding and embodiment in language and dialogue.
- Psychological issues and learning effects in IIAA (assistants vs. users).
- Role of personality in adaptive human-assistant systems.
- Trust and reliability in IIAA.
- Theory of mind for IIAA.
- Nudging in IIAA.
Use cases, user groups and industrial applications:
- Appropriate user groups for IIAA (e.g., the elderly, youngsters, and school children).
- Specific application domains for IIAA:
- Public spaces (e.g., interactive digital signage, guidance systems).
- Assistive environments (e.g., elderly care, hospitals).
- Success stories, functional systems and industrial challenges.
Development, testing and evaluation:
- Experimental design, user studies and evaluation of IIAA.
- Investigation of long-term vs. short-term relationships in IIAA.
Ethics and societal impact:
- Legal issues.
- Social responsibility.
- Data protection and privacy by design and default.
- Social design and development of naturally interacting human-assistant systems.