Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics

Daniel Deutsch, Rotem Dror, Dan Roth


Abstract
System-level correlations quantify how reliably an automatic summarization evaluation metric replicates human judgments of summary quality. We identify two ways in which the definition of the system-level correlation is inconsistent with how metrics are used to evaluate systems in practice and propose changes to rectify this disconnect. First, we calculate the system score for an automatic metric using the full test set instead of the subset of summaries judged by humans, which is currently standard practice. We demonstrate how this small change leads to more precise estimates of system-level correlations. Second, we propose to calculate correlations only on pairs of systems separated by the small differences in automatic scores that are commonly observed in practice. This allows us to demonstrate that our best estimate of the correlation of ROUGE to human judgments is near 0 in realistic scenarios. The results of these analyses point to the need to collect more high-quality human judgments and to improve automatic metrics when differences in system scores are small.
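
For readers unfamiliar with the setup, the sketch below illustrates the general idea of a system-level correlation computation and the two changes described in the abstract. It is only a minimal illustration, not the authors' exact protocol: the score arrays, the number of systems, and the 0.02 threshold are hypothetical placeholders.

# Minimal sketch of a system-level correlation computation.
# All data and thresholds below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
num_systems = 10

# Hypothetical per-summary scores: the automatic metric is available on the
# full test set, while human judgments cover only a small annotated subset.
metric_scores_full = rng.random((num_systems, 1000))   # e.g., ROUGE on every test input
human_scores_subset = rng.random((num_systems, 100))   # human judgments on 100 inputs

# System scores are averages of per-summary scores.
# Change 1: compute the automatic metric's system score on the full test set
# rather than only on the human-annotated subset.
metric_sys = metric_scores_full.mean(axis=1)
human_sys = human_scores_subset.mean(axis=1)

# Standard system-level correlation: rank correlation over all systems.
tau_all, _ = kendalltau(metric_sys, human_sys)

# Change 2: look only at system pairs whose automatic scores differ by a small
# amount, and measure how often the metric and the humans rank them the same way.
threshold = 0.02  # hypothetical notion of a "small" score difference
agree, total = 0, 0
for i in range(num_systems):
    for j in range(i + 1, num_systems):
        if abs(metric_sys[i] - metric_sys[j]) < threshold:
            total += 1
            agree += int(np.sign(metric_sys[i] - metric_sys[j])
                         == np.sign(human_sys[i] - human_sys[j]))

print(f"Kendall's tau over all system pairs: {tau_all:.3f}")
if total:
    print(f"Agreement on close pairs (delta < {threshold}): {agree / total:.3f}")

The second quantity is the one the abstract argues is most informative in practice, since deployed systems are typically separated by small automatic-score differences.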
Anthology ID:
2022.naacl-main.442
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
6038–6052
URL:
https://aclanthology.org/2022.naacl-main.442
DOI:
10.18653/v1/2022.naacl-main.442
Cite (ACL):
Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 6038–6052, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics (Deutsch et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.442.pdf
Video:
https://aclanthology.org/2022.naacl-main.442.mp4
Data
SummEval