Transactions of the Association for Computational Linguistics, Volume 13 (2025)
- Chaitanya Malaviya, Priyanka Agrawal, Kuzman Ganchev, Pranesh Srinivasan, Fantine Huot, Jonathan Berant, Mark Yatskar, Dipanjan Das, Mirella Lapata, Chris Alberti: Dolomites: Domain-Specific Long-Form Methodical Tasks. 1-29
- Sara Rosenthal, Avirup Sil, Radu Florian, Salim Roukos: CLAPnq: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems. 53-72
- Jianhui Pang, Fanghua Ye, Derek Fai Wong, Dian Yu, Shuming Shi, Zhaopeng Tu, Longyue Wang: Salute the Classic: Revisiting Challenges of Machine Translation in the Age of Large Language Models. 73-95
- Ionut Constantinescu, Tiago Pimentel, Ryan Cotterell, Alex Warstadt: Investigating Critical Period Effects in Language Acquisition through Neural Language Models. 96-120
- Kabir Ahuja, Vidhisha Balachandran, Madhur Panwar, Tianxing He, Noah A. Smith, Navin Goyal, Yulia Tsvetkov: Learning Syntax Without Planting Trees: Understanding Hierarchical Generalization in Transformers. 121-141
- Carel van Niekerk, Christian Geishauser, Michael Heck, Shutong Feng, Hsien-Chin Lin, Nurul Lubis, Benjamin Matthias Ruppik, Renato Vukovic, Milica Gasic: A Confidence-based Acquisition Model for Self-supervised Active Learning and Label Correction. 167-187
- Jikai Wang, Yi Su, Juntao Li, Qingrong Xia, Zi Ye, Xinyu Duan, Zhefeng Wang, Min Zhang: OPT-Tree: Speculative Decoding with Adaptive Draft Tree Structure. 188-199
- Lena Strobl, Dana Angluin, David Chiang, Jonathan Rawski, Ashish Sabharwal: Transformers as Transducers. 200-219
- Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Lyudmila Rvanova, Daniil Vasilev, Akim Tsvigun, Sergey Petrakov, Rui Xing, Abdelrahman Boda Sadallah, Kirill Grishchenkov, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov, Artem Shelmanov: Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph. 220-248
- Ruihao Chen, Hegang Chen, Yuyin Lu, Yanghui Rao, Chunjiang Zhu: Supervised Neural Topic Modeling with Label Alignment. 249-263
- Josip Jukic, Jan Snajder: From Robustness to Improved Generalization and Calibration in Pre-trained Language Models. 264-280
- Sara Papi, Peter Polák, Dominik Machácek, Ondrej Bojar: How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System? 281-313
- Jing Yang, Max Glockner, Anderson Rocha, Iryna Gurevych: Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks. 314-342
- Xiao Pu, Hao Wu, Xiuli Bi, Yu Wu, Xinbo Gao: DEAR: Disentangled Event-Agnostic Representation Learning for Early Fake News Detection. 343-356
- Xiaohao Yang, He Zhao, Dinh Q. Phung, Wray L. Buntine, Lan Du: LLM Reading Tea Leaves: Automatically Evaluating Topic Models with Large Language Models. 357-375
- Panyut Sriwirote, Wei Qi Leong, Charin Polpanumas, Santhawat Thanyawong, William-Chandra Tjhi, Wirote Aroonmanakun, Attapol T. Rutherford: The Thai Universal Dependency Treebank. 376-391
- Tianshu Yu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, Yongbin Li: Diverse AI Feedback For Large Language Model Alignment. 392-407
- David A. Haslett, Zhenguang G. Cai: How Much Semantic Information is Available in Large Language Model Tokens? 408-423
- Xiaoxi Luo, Weiwei Sun: Phonetic Reconstruction of the Consonant System of Middle Chinese via Mixed Integer Optimization. 424-441
- Mor Ventura, Eyal Ben-David, Anna Korhonen, Roi Reichart: Navigating Cultural Chasms: Exploring and Unlocking the Cultural POV of Text-To-Image Models. 142-166
- Karel D'Oosterlinck, Winnie Xu, Chris Develder, Thomas Demeester, Amanpreet Singh, Christopher Potts, Douwe Kiela, Shikib Mehri: Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment. 442-460
- Mubashara Akhtar, Chenxi Pang, Andreea Marzoca, Yasemin Altun, Julian Martin Eisenschlos: TANQ: An Open Domain Dataset of Table Answered Questions. 461-480
- Fan Jiang, Tom Drummond, Trevor Cohn: Few-Shot Multilingual Open-Domain QA from Five Examples. 481-504
- Anubhav Jangra, Jamshid Mozafari, Adam Jatowt, Smaranda Muresan: Navigating the Landscape of Hint Generation Research: From the Past to the Future. 505-528
- Bingbing Wen, Jihan Yao, Shangbin Feng, Chenjun Xu, Yulia Tsvetkov, Bill Howe, Lucy Lu Wang: Know Your Limits: A Survey of Abstention in Large Language Models. 529-556
- Hongyuan Xu, Yuhang Niu, Ciyi Liu, Yanlong Wen, Xiaojie Yuan: TaxoPro: A Plug-In LoRA-based Cross-Domain Method for Low-Resource Taxonomy Completion. 557-576
- Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Ruixuan Li: Exploring Practical Gaps in Using Cross Entropy to Implement Maximum Mutual Information Criterion for Rationalization. 577-594
- Farhan Samir, Emily P. Ahn, Shreya Prakash, Márton Sóskuthy, Vered Shwartz, Jian Zhu: A Comparative Approach for Auditing Multilingual Phonetic Transcript Archives. 595-612
- Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua B. Tenenbaum: Understanding Epistemic Language with a Language-augmented Bayesian Theory of Mind. 613-637
- Chen Cecilia Liu, Iryna Gurevych, Anna Korhonen: Culturally Aware and Adapted NLP: A Taxonomy and a Survey of the State of the Art. 652-689
- Pierluigi Cassotti, Nina Tahmasebi: Sense-specific Historical Word Usage Generation. 690-708
- Heather C. Lent, Erick Galinkin, Yiyi Chen, Jens Myrup Pedersen, Leon Derczynski, Johannes Bjerva: NLP Security and Ethics, in the Wild. 709-743
- Yao Zhu, Yunjian Zhang, Zizhe Wang, Xiu Yan, Peng Sun, Xiangyang Ji: Patchwise Cooperative Game-based Interpretability Method for Large Vision-language Models. 744-759
- Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung: REAL Sampling: Boosting Factuality and Diversity of Open-ended Generation by Extrapolating the Entropy of an Infinitely Large LM. 760-783