MLQA: Evaluating cross-lingual extractive question answering

P Lewis, B Oğuz, R Rinott, S Riedel et al. - arXiv preprint arXiv:1910.07475, 2019 - arxiv.org
Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making training QA systems in other languages challenging. An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language. In order to develop such systems, it is crucial to invest in high-quality multilingual evaluation benchmarks to measure progress. We present MLQA, a multi-way aligned extractive QA evaluation benchmark intended to spur research in this area. MLQA contains QA instances in 7 languages, namely English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. It consists of over 12K QA instances in English and 5K in each of the other languages, with each QA instance being parallel between 4 languages on average. MLQA is built using a novel alignment context strategy on Wikipedia articles, and serves as a cross-lingual extension to existing extractive QA datasets. We evaluate current state-of-the-art cross-lingual representations on MLQA, and also provide machine-translation-based baselines. In all cases, transfer results lag significantly behind training-language performance.
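As context for how an extractive QA benchmark like MLQA is typically scored, below is a minimal Python sketch of SQuAD-style exact-match (EM) and token-level F1 over a toy SQuAD-format instance. The instance contents and the English-only answer normalization are illustrative assumptions, not taken from the paper; MLQA's own released evaluation adapts the normalization for each language, which this sketch does not attempt.

```python
import collections
import re
import string


def normalize_answer(s):
    """Lowercase, strip punctuation and English articles, collapse whitespace.

    English-only simplification; MLQA's official evaluation uses
    language-specific normalization instead of this.
    """
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction, gold):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction, gold):
    # Token-overlap F1 between predicted and gold answer strings.
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Toy SQuAD-format instance (fields are illustrative): the answer is a span
# of the context, identified by its text and character start offset.
instance = {
    "context": "MLQA contains QA instances in seven languages.",
    "question": "How many languages does MLQA cover?",
    "answers": [{"text": "seven languages", "answer_start": 30}],
}

prediction = "seven languages"
gold = instance["answers"][0]["text"]
print("EM:", exact_match(prediction, gold), "F1:", round(f1_score(prediction, gold), 3))
```

Because MLQA instances are multi-way parallel, the same question can be scored this way against contexts in several languages, which is what makes the cross-lingual transfer comparison in the paper possible.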