2nd DialDoc@ACL 2022: Dublin, Ireland
- Song Feng, Hui Wan, Caixia Yuan, Han Yu (eds.): Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, DialDoc@ACL 2022, Dublin, Ireland, May 26, 2022. Association for Computational Linguistics 2022, ISBN 978-1-955917-33-9
- Xiachong Feng, Xiaocheng Feng, Bing Qin: MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization. 1-12
- Xinyan Zhao, Bin He, Yasheng Wang, Yitong Li, Fei Mi, Yajiao Liu, Xin Jiang, Qun Liu, Huanhuan Chen: UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues. 13-22
- Greyson Gerhard-Young, Raviteja Anantha, Srinivas Chappidi, Björn Hoffmeister: Low-Resource Adaptation of Open-Domain Generative Chatbots. 23-30
- Yuya Nakano, Seiya Kawano, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura: Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System. 31-40
- Vaishali Pal, Evangelos Kanoulas, Maarten de Rijke: Parameter-Efficient Abstractive Question Answering over Tables or Text. 41-53
- Tianda Li, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu: Conversation- and Tree-Structure Losses for Dialogue Disentanglement. 54-64
- Yosi Mass, Doron Cohen, Asaf Yehudai, David Konopnicki: Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval. 65-71
- Zhaodong Wang, Kazunori Komatani: Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model. 72-82
- Takashi Kodama, Ribeka Tanaka, Sadao Kurohashi: Construction of Hierarchical Structured Knowledge-based Recommendation Dialogue Dataset and Dialogue System. 83-92
- Yan Xu, Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan Su, Pascale Fung: Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters. 93-107
- Shi-Wei Zhang, Yiyang Du, Guanzhong Liu, Zhao Yan, Yunbo Cao: G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents. 108-114
- Yiwei Jiang, Amir Hadifar, Johannes Deleu, Thomas Demeester, Chris Develder: UGent-T2K at the 2nd DialDoc Shared Task: A Retrieval-Focused Dialog System Grounded in Multiple Documents. 115-122
- Kun Li, Tianhua Zhang, Liping Tang, Junan Li, Hongyuan Lu, Xixin Wu, Helen Meng: Grounded Dialogue Generation with Cross-encoding Re-ranker, Grounding Span Prediction, and Passage Dropout. 123-129
- Minjun Zhu, Bin Li, Yixuan Weng, Fei Xia: A Knowledge Storage and Semantic Space Alignment Method for Multi-documents Dialogue Generation. 130-135
- Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, Kyomin Jung: Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model. 136-141
- Sayed Hesam Alavian, Ali Satvaty, Sadra Sabouri, Ehsaneddin Asgari, Hossein Sameti: Docalog: Multi-document Dialogue System using Transformer-based Span Retrieval. 142-147
- Srijan Bansal, Suraj Tripathi, Sumit Agarwal, Sireesh Gururaja, Aditya Srikanth Veerubhotla, Ritam Dutt, Teruko Mitamura, Eric Nyberg: R3: Refined Retriever-Reader Pipeline for MultiDoc2Dial. 148-154
- Song Feng, Siva Sankalp Patel, Hui Wan: DialDoc 2022 Shared Task: Open-Book Document-grounded Dialogue Modeling. 155-160
- Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansky, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias: TRUE: Re-evaluating Factual Consistency Evaluation. 161-175
- Elnaz Nouri, Carlos Toxtli: Handling Comments in Documents through Interactions. 176-186
- Carl Strathearn, Dimitra Gkatzia: Task2Dial: A Novel Task and Dataset for Commonsense-enhanced Task-based Dialogue Grounded in Documents. 187-196