Anna Rumshisky


2024

Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
Shabnam Tafreshi | Arjun Akula | João Sedoc | Aleksandr Drozd | Anna Rogers | Anna Rumshisky
Proceedings of the Fifth Workshop on Insights from Negative Results in NLP

Prompt Perturbation Consistency Learning for Robust Language Models
Yao Qiang | Subhrangshu Nandi | Ninareh Mehrabi | Greg Ver Steeg | Anoop Kumar | Anna Rumshisky | Aram Galstyan
Findings of the Association for Computational Linguistics: EACL 2024

Large language models (LLMs) have demonstrated impressive performance on a number of natural language processing tasks, such as question answering and text summarization. However, their performance on sequence labeling tasks such as intent classification and slot filling (IC-SF), which is a central component in personal assistant systems, lags significantly behind discriminative models. Furthermore, there is a lack of substantive research on the robustness of LLMs to various perturbations in the input prompts. The contributions of this paper are three-fold. First, we show that fine-tuning sufficiently large LLMs can produce IC-SF performance comparable to discriminative models. Next, we systematically analyze the performance deterioration of those fine-tuned models due to three distinct yet relevant types of input perturbations: oronyms, synonyms, and paraphrasing. Finally, we propose an efficient mitigation approach, Prompt Perturbation Consistency Learning (PPCL), which works by regularizing the divergence between losses from clean and perturbed samples. Our experiments show that PPCL can recover, on average, 59% and 69% of the performance drop for the IC and SF tasks, respectively. Furthermore, PPCL beats the data augmentation approach while using ten times fewer augmented samples.
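
The consistency-regularization idea named in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' released implementation: the function and argument names are hypothetical, and a symmetric KL between output distributions stands in for whichever divergence the paper regularizes.

```python
import torch.nn.functional as F

def ppcl_loss(logits_clean, logits_pert, labels, alpha=1.0):
    """Task loss on clean inputs plus a consistency term penalizing
    divergence between clean and perturbed output distributions.
    `alpha` (assumed hyperparameter) weights the consistency term."""
    task_loss = F.cross_entropy(logits_clean, labels)
    log_p_clean = F.log_softmax(logits_clean, dim=-1)
    log_p_pert = F.log_softmax(logits_pert, dim=-1)
    # Symmetric KL between the two predictive distributions.
    consistency = 0.5 * (
        F.kl_div(log_p_pert, log_p_clean, reduction="batchmean", log_target=True)
        + F.kl_div(log_p_clean, log_p_pert, reduction="batchmean", log_target=True)
    )
    return task_loss + alpha * consistency
```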

Emergent Abilities in Reduced-Scale Generative Language Models
Sherin Muckatira | Vijeta Deshpande | Vladislav Lialin | Anna Rumshisky
Findings of the Association for Computational Linguistics: NAACL 2024

Large language models can solve new tasks without task-specific fine-tuning. This ability, also known as in-context learning (ICL), is considered an emergent ability and is primarily seen in large language models with billions of parameters. This study investigates if such emergent properties are strictly tied to model size or can be demonstrated by smaller models trained on reduced-scale data. To explore this, we simplify pre-training data and pre-train 36 causal language models with parameter counts ranging from 1 million to 165 million. We show that models trained on this simplified pre-training data demonstrate enhanced zero-shot capabilities across various tasks in simplified language, achieving performance comparable to that of pre-trained models six times larger on unrestricted language. This suggests that downscaling the language allows zero-shot learning capabilities to emerge in models with limited size. Additionally, we find that these smaller models pre-trained on simplified data demonstrate a power law relationship between the evaluation loss and the three scaling factors: compute, dataset size, and model size.
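
A generic form of the power law mentioned at the end of the abstract, written out for clarity (the symbols below are generic placeholders fit per scaling factor, not coefficients taken from the paper):

```latex
% Evaluation loss L as a power law in a scaling factor x
% (compute, dataset size, or model size); a, alpha, and the
% irreducible offset L_inf are fitted constants.
L(x) = a \, x^{-\alpha} + L_{\infty}
```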

Deconstructing In-Context Learning: Understanding Prompts via Corruption
Namrata Shivagunde | Vladislav Lialin | Sherin Muckatira | Anna Rumshisky
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

The ability of large language models (LLMs) to “learn in context” based on the provided prompt has led to an explosive growth in their use, culminating in the proliferation of AI assistants such as ChatGPT, Claude, and Bard. These AI assistants are known to be robust to minor prompt modifications, mostly due to alignment techniques that use human feedback. In contrast, the underlying pre-trained LLMs they use as a backbone are known to be brittle in this respect. Building high-quality backbone models remains a core challenge, and a common approach to assessing their quality is to conduct few-shot evaluation. Such evaluation is notorious for being highly sensitive to minor prompt modifications, as well as the choice of specific in-context examples. Prior work has examined how modifying different elements of the prompt can affect model performance. However, these earlier studies tended to concentrate on a limited number of specific prompt attributes and often produced contradictory results. Additionally, previous research either focused on models with fewer than 15 billion parameters or exclusively examined black-box models like GPT-3 or PaLM, making replication challenging. In the present study, we decompose the entire prompt into four components: task description, demonstration inputs, labels, and inline instructions provided for each demonstration. We investigate the effects of structural and semantic corruptions of these elements on model performance. We study models ranging from 1.5B to 70B in size, using ten datasets covering classification and generation tasks. We find that repeating text within the prompt boosts model performance, and bigger models (≥30B) are more sensitive to the semantics of the prompt. Finally, we observe that adding task and inline instructions to the demonstrations enhances model performance even when the instructions are semantically corrupted. The code is available at this URL.
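
As a concrete illustration of the four prompt components studied here, the sketch below assembles a prompt from a task description, demonstration inputs, labels, and an inline instruction, with label shuffling as one example of a semantic corruption. Everything in it (function names, the sentiment task, the instruction wording) is invented for illustration and is not the paper's code.

```python
import random

def build_prompt(task_desc, demos, query, corrupt_labels=False, seed=0):
    """Assemble an in-context prompt from four components: task
    description, demonstration inputs, labels, and inline instructions.
    With corrupt_labels=True, gold labels are shuffled across
    demonstrations (one kind of semantic corruption)."""
    rng = random.Random(seed)
    labels = [y for _, y in demos]
    if corrupt_labels:
        rng.shuffle(labels)
    lines = [task_desc]
    for (x, _), y in zip(demos, labels):
        lines.append(f"Input: {x}")
        lines.append("Answer with the sentiment label.")  # inline instruction
        lines.append(f"Label: {y}")
    lines += [f"Input: {query}", "Answer with the sentiment label.", "Label:"]
    return "\n".join(lines)

demos = [("great movie", "positive"), ("dull plot", "negative")]
print(build_prompt("Classify the sentiment.", demos, "loved it",
                   corrupt_labels=True))
```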

NarrativeTime: Dense Temporal Annotation on a Timeline
Anna Rogers | Marzena Karpinska | Ankita Gupta | Vladislav Lialin | Gregory Smelkov | Anna Rumshisky
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

For the past decade, temporal annotation has been sparse: only a small portion of event pairs in a text was annotated. We present NarrativeTime, the first timeline-based annotation framework that achieves full coverage of all possible TLINKs. To compare with the previous SOTA in dense temporal annotation, we perform a full re-annotation of the classic TimeBankDense corpus (American English), which shows comparable agreement with a significant increase in density. We contribute the TimeBankNT corpus (with each text fully annotated by two expert annotators), extensive annotation guidelines, open-source tools for annotation and conversion to TimeML format, and baseline results.

2023

Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale
Vijeta Deshpande | Dan Pechi | Shree Thatte | Vladislav Lialin | Anna Rumshisky
Findings of the Association for Computational Linguistics: ACL 2023

In recent years, language models have drastically grown in size, and the abilities of these models have been shown to improve with scale. Most recent scaling-law studies have focused on high-compute, high-parameter-count settings, leaving the question of when these abilities begin to emerge largely unanswered. In this paper, we investigate whether the effects of pre-training can be observed when the problem size is reduced, modeling a smaller, reduced-vocabulary language. We show the benefits of pre-training with the masked language modeling (MLM) objective in models as small as 1.25M parameters, and establish a strong correlation between pre-training perplexity and downstream performance (GLUE benchmark). We examine downscaling effects, extending scaling laws to models as small as ~1M parameters. At this scale, we observe a break in the power law for compute-optimal models and show that the MLM loss does not scale smoothly with compute cost (FLOPs) below 2.2 × 10^15 FLOPs. We also find that adding layers does not always benefit downstream performance. Our filtered pre-training data, reduced English vocabulary, and code are available at https://github.com/text-machine-lab/mini_bert

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models
Saleh Soltan | Andy Rosenbaum | Tobias Falke | Qin Lu | Anna Rumshisky | Wael Hamza
Findings of the Association for Computational Linguistics: ACL 2023

Pre-trained encoder-only and sequence-to-sequence (seq2seq) models each have advantages; however, training both model types from scratch is computationally expensive. We explore recipes to improve pre-training efficiency by initializing one model from the other. (1) Extracting the encoder from a seq2seq model, we show it under-performs a Masked Language Modeling (MLM) encoder, particularly on sequence labeling tasks. Variations of masking during seq2seq training, reducing the decoder size, and continuing with a small amount of MLM training do not close the gap. (2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match the task performance of a from-scratch seq2seq model. Overall, this two-stage approach is an efficient recipe to obtain both a multilingual encoder and a seq2seq model, matching the performance of training each model from scratch while reducing the total compute cost by 27%.
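
The two-stage warm-start recipe from point (2) can be sketched as follows; this is a toy illustration under assumed names (ToySeq2Seq, unfreeze_step) with a dummy objective, not the paper's training code.

```python
import torch
from torch import nn

class ToySeq2Seq(nn.Module):
    """Stand-in encoder-decoder; the encoder plays the role of the
    pre-trained MLM encoder used for warm-starting."""
    def __init__(self, dim=16):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)
        self.decoder = nn.Linear(dim, dim)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

def set_encoder_trainable(model, trainable):
    for p in model.encoder.parameters():
        p.requires_grad = trainable

model = ToySeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
unfreeze_step = 50  # assumed schedule; the paper unfreezes "partway through"

set_encoder_trainable(model, False)   # stage 1: update the decoder only
for step in range(100):
    if step == unfreeze_step:         # stage 2: unfreeze the encoder
        set_encoder_trainable(model, True)
    x = torch.randn(8, 16)
    loss = ((model(x) - x) ** 2).mean()  # dummy objective for the sketch
    loss.backward()
    opt.step()
    opt.zero_grad()
```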

Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
Namrata Shivagunde | Vladislav Lialin | Anna Rumshisky
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Language model probing is often used to test specific capabilities of models. However, conclusions from such studies may be limited when the probing benchmarks are small and lack statistical power. In this work, we introduce new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500) inspired by psycholinguistic studies. We dramatically extend the existing NEG-136 and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44 sentence pairs to 750 each. We also create another version of the extended negation dataset (NEG-1500-SIMP-TEMP) using template-based generation; it consists of 770 sentence pairs. We evaluate 22 models on the extended datasets, seeing model performance dip 20-57% compared to the original smaller benchmarks. We observe high levels of negation sensitivity in models like BERT and ALBERT, demonstrating that previous findings might have been skewed due to smaller test sets. Finally, we observe that while GPT3 generated all the examples in ROLE-1500, it is only able to solve 24.6% of them during probing. The datasets and code are available on Github.

Proceedings of the 5th Clinical Natural Language Processing Workshop
Tristan Naumann | Asma Ben Abacha | Steven Bethard | Kirk Roberts | Anna Rumshisky
Proceedings of the 5th Clinical Natural Language Processing Workshop

Proceedings of the Fourth Workshop on Insights from Negative Results in NLP
Shabnam Tafreshi | Arjun Akula | João Sedoc | Aleksandr Drozd | Anna Rogers | Anna Rumshisky
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP

Reasoning Circuits: Few-shot Multi-hop Question Generation with Structured Rationales
Saurabh Kulshreshtha | Anna Rumshisky
Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)

Multi-hop Question Generation is the task of generating questions which require the reader to reason over and combine information spread across multiple passages, employing several reasoning steps. Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks and make model predictions more interpretable. However, few-shot performance gains from including rationales have been largely observed only in language models with over 100B parameters, and otherwise require large-scale manual rationale annotation. In this paper, we introduce a new framework for applying chain-of-thought-inspired structured rationale generation to multi-hop question generation under a very low supervision regime (8- to 128-shot). We propose to annotate a small number of examples following our proposed multi-step rationale schema, treating each reasoning step as a separate task to be performed by a generative language model. We show that our framework leads to improved control over the difficulty of the generated questions and better performance compared to baselines trained without rationales, both on automatic evaluation metrics and in human evaluation. Importantly, we show that this is achievable with a modest model size.

2022

Federated Learning with Noisy User Feedback
Rahul Sharma | Anil Ramakrishna | Ansel MacLaughlin | Anna Rumshisky | Jimit Majmudar | Clement Chung | Salman Avestimehr | Rahul Gupta
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Machine Learning (ML) systems are getting increasingly popular, and drive more and more applications and services in our daily life. This has led to growing concerns over user privacy, since human interaction data typically needs to be transmitted to the cloud in order to train and improve such systems. Federated learning (FL) has recently emerged as a method for training ML models on edge devices using sensitive user data, and is seen as a way to mitigate concerns over data privacy. However, since ML models are most commonly trained with label supervision, we need a way to extract labels on the edge to make FL viable. In this work, we propose a strategy for training FL models using positive and negative user feedback. We also design a novel framework to study different noise patterns in user feedback, and explore how well standard noise-robust objectives can help mitigate this noise when training models in a federated setting. We evaluate our proposed training setup through detailed experiments on two text classification datasets and analyze the effects of varying levels of user reliability and feedback noise on model performance. We show that our method improves substantially over a self-training baseline, achieving performance closer to models trained with full supervision.
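
A compact way to see the setup: federated averaging where each client's labels come from simulated noisy feedback. The sketch below (logistic regression, uniform label flipping) is one of the simplest noise patterns one could study; the names and the noise model are assumptions for illustration, not the paper's framework.

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, flip_prob=0.2, rng=None):
    """One FedAvg round: each client trains locally on labels derived
    from user feedback, flipped with probability `flip_prob` to simulate
    noise; the server averages the client models."""
    rng = rng or np.random.default_rng(0)
    updates = []
    for X, y in client_data:
        noisy_y = np.where(rng.random(len(y)) < flip_prob, 1 - y, y)
        w = global_w.copy()
        for _ in range(5):  # a few local SGD steps
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - noisy_y) / len(noisy_y)
        updates.append(w)
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20)) for _ in range(4)]
w = fedavg_round(np.zeros(3), clients, rng=rng)
```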

Controlled Data Generation via Insertion Operations for NLU
Manoj Kumar | Yuval Merhav | Haidar Khan | Rahul Gupta | Anna Rumshisky | Wael Hamza
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

Use of synthetic data is rapidly emerging as a realistic alternative to manually annotating live traffic for industry-scale model building. Manual data annotation is slow and expensive, and sits poorly with customer privacy expectations. Further, commercial natural language applications are required to support continuously evolving features as well as newly added experiences. To address these requirements, we propose a targeted synthetic data generation technique that inserts tokens into a given semantic signature. The generated data are used as additional training samples in the tasks of intent classification and named entity recognition. We evaluate on a real-world voice assistant dataset, and using only 33% of the available training set, we achieve the same accuracy as training with all available data. Further, we analyze the effects of data generation across varied real-world applications and propose heuristics that improve the task performance further.

Down and Across: Introducing Crossword-Solving as a New NLP Benchmark
Saurabh Kulshreshtha | Olga Kovaleva | Namrata Shivagunde | Anna Rumshisky
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Solving crossword puzzles requires diverse reasoning capabilities, access to a vast amount of knowledge about language and the world, and the ability to satisfy the constraints imposed by the structure of the puzzle. In this work, we introduce solving crossword puzzles as a new natural language understanding task. We release a corpus of crossword puzzles collected from the New York Times daily crossword spanning 25 years and comprising around nine thousand puzzles in total. These puzzles include a diverse set of clues: historic, factual, word meaning, synonyms/antonyms, fill-in-the-blank, abbreviations, prefixes/suffixes, wordplay, and cross-lingual, as well as clues that depend on the answers to other clues. We separately release the clue-answer pairs from these puzzles as an open-domain question answering dataset containing over half a million unique clue-answer pairs. For the question answering task, our baselines include several sequence-to-sequence and retrieval-based generative models. We also introduce a non-parametric constraint satisfaction baseline for solving the entire crossword puzzle. Finally, we propose an evaluation framework which consists of several complementary performance metrics.

Life after BERT: What do Other Muppets Understand about Language?
Vladislav Lialin | Kevin Zhao | Namrata Shivagunde | Anna Rumshisky
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model’s linguistic capabilities.

Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
Jack FitzGerald | Kay Rottmann | Julia Hirschberg | Mohit Bansal | Anna Rumshisky | Charith Peris | Christopher Hench
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)

Chasing the Tail with Domain Generalization: A Case Study on Frequency-Enriched Datasets
Manoj Kumar | Anna Rumshisky | Rahul Gupta
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Natural language understanding (NLU) tasks are typically defined by creating an annotated dataset in which each utterance is encountered once. Such data does not resemble real-world natural language interactions, in which certain utterances are encountered frequently and others rarely. For deployed NLU systems this is a vital problem, since the underlying machine learning (ML) models are often fine-tuned on typical NLU data and then applied to real-world data with a very different distribution. Such systems need to maintain interpretation consistency for both high-frequency and low-frequency utterances. We propose an alternative strategy that explicitly uses utterance frequency in training data to learn models that are more robust to unknown distributions. We present a methodology to simulate utterance usage in two public NLU corpora and create new corpora with head, body and tail segments. We evaluate several methods for joint intent classification and named entity recognition (IC-NER), and use two domain generalization approaches that we adapt to NER. The proposed approaches demonstrate up to 7.02% relative improvement in semantic accuracy over baselines on the tail data. We provide insights as to why the proposed approaches work and show that the reasons for observed improvements do not align with those reported in previous work.

Proceedings of the Third Workshop on Insights from Negative Results in NLP
Shabnam Tafreshi | João Sedoc | Anna Rogers | Aleksandr Drozd | Anna Rumshisky | Arjun Akula
Proceedings of the Third Workshop on Insights from Negative Results in NLP

Proceedings of the 4th Clinical Natural Language Processing Workshop
Tristan Naumann | Steven Bethard | Kirk Roberts | Anna Rumshisky
Proceedings of the 4th Clinical Natural Language Processing Workshop

Learning to Ask Like a Physician
Eric Lehman | Vladislav Lialin | Katelyn Edelwina Legaspi | Anne Janelle Sy | Patricia Therese Pile | Nicole Rose Alberto | Richard Raymund Ragasa | Corinna Victoria Puyat | Marianne Katharina Taliño | Isabelle Rose Alberto | Pia Gabrielle Alfonso | Dana Moukheiber | Byron Wallace | Anna Rumshisky | Jennifer Liang | Preethi Raghavan | Leo Anthony Celi | Peter Szolovits
Proceedings of the 4th Clinical Natural Language Processing Workshop

Existing question answering (QA) datasets derived from electronic health records (EHR) are artificially generated and consequently fail to capture realistic physician information needs. We present Discharge Summary Clinical Questions (DiSCQ), a newly curated question dataset composed of 2,000+ questions paired with the snippets of text (triggers) that prompted each question. The questions are generated by medical experts from 100+ MIMIC-III discharge summaries. We analyze this dataset to characterize the types of information sought by medical experts. We also train baseline models for trigger detection and question generation (QG), paired with unsupervised answer retrieval over EHRs. Our baseline model is able to generate high quality questions in over 62% of cases when prompted with human selected triggers. We release this dataset (and all code to reproduce baseline model results) to facilitate further research into realistic clinical QA and QG: https://github.com/elehman16/discq.

2021

Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Kristina Toutanova | Anna Rumshisky | Luke Zettlemoyer | Dilek Hakkani-Tur | Iz Beltagy | Steven Bethard | Ryan Cotterell | Tanmoy Chakraborty | Yichao Zhou
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Proceedings of the Second Workshop on Insights from Negative Results in NLP
João Sedoc | Anna Rogers | Anna Rumshisky | Shabnam Tafreshi
Proceedings of the Second Workshop on Insights from Negative Results in NLP

BERT Busters: Outlier Dimensions that Disrupt Transformers
Olga Kovaleva | Saurabh Kulshreshtha | Anna Rogers | Anna Rumshisky
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Proceedings of the 3rd Clinical Natural Language Processing Workshop
Anna Rumshisky | Kirk Roberts | Steven Bethard | Tristan Naumann
Proceedings of the 3rd Clinical Natural Language Processing Workshop

A guide to the dataset explosion in QA, NLI, and commonsense reasoning
Anna Rogers | Anna Rumshisky
Proceedings of the 28th International Conference on Computational Linguistics: Tutorial Abstracts

Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work. For question answering alone, we already have over 100 datasets, with over 40 published after 2018. However, most new datasets get “solved” soon after publication, and this is largely due not to the verbal reasoning capabilities of our models, but to annotation artifacts and shallow cues in the data that they can exploit. This tutorial aims to (1) provide an up-to-date guide to the recent datasets, (2) survey the old and new methodological issues with dataset construction, and (3) outline the existing proposals for overcoming them. The target audience is NLP practitioners who are lost in the dozens of recent datasets, and would like to know what these datasets are actually measuring. Our overview of the problems with the current datasets and the latest tips and tricks for overcoming them will also be useful to researchers working on future benchmarks.

Proceedings of the First Workshop on Insights from Negative Results in NLP
Anna Rogers | João Sedoc | Anna Rumshisky
Proceedings of the First Workshop on Insights from Negative Results in NLP

Towards Visual Dialog for Radiology
Olga Kovaleva | Chaitanya Shivade | Satyananda Kashyap | Karina Kanjaria | Joy Wu | Deddeh Ballah | Adam Coy | Alexandros Karargyris | Yufan Guo | David Beymer | Anna Rumshisky | Vandana Mukherjee
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

Current research in machine learning for radiology is focused mostly on images. There exists limited work on investigating intelligent interactive systems for radiology. To address this limitation, we introduce a realistic and information-rich task of Visual Dialog in radiology, specific to chest X-ray images. Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-the-art models. We show that incorporating the medical history of the patient leads to better performance in answering questions than a conventional visual question answering model that looks only at the image. While our experiments show promising results, they indicate that the task is extremely challenging, with significant scope for improvement. We make both the datasets (synthetic and gold standard) and the associated code publicly available to the research community.

A Primer in BERTology: What We Know About How BERT Works
Anna Rogers | Olga Kovaleva | Anna Rumshisky
Transactions of the Association for Computational Linguistics, Volume 8

Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.

When BERT Plays the Lottery, All Tickets Are Winning
Sai Prasanna | Anna Rogers | Anna Rumshisky
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Large Transformer-based models were shown to be reducible to a smaller number of self-attention heads and layers. We consider this phenomenon from the perspective of the lottery ticket hypothesis, using both structured and magnitude pruning. For fine-tuned BERT, we show that (a) it is possible to find subnetworks achieving performance that is comparable with that of the full model, and (b) similarly-sized subnetworks sampled from the rest of the model perform worse. Strikingly, with structured pruning even the worst possible subnetworks remain highly trainable, indicating that most pre-trained BERT weights are potentially useful. We also study the “good” subnetworks to see if their success can be attributed to superior linguistic knowledge, but find them unstable, and not explained by meaningful self-attention patterns.
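
For reference, the magnitude-pruning half of this setup works as below; a generic sketch over any weight tensor (with an assumed helper name), not the paper's BERT-specific pipeline.

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a weight tensor and
    return the pruned tensor plus the binary mask. Assumes `sparsity`
    yields at least one pruned element."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask

w = torch.randn(4, 4)
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(mask.sum().item())  # roughly half of the 16 weights survive
```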

2019

Adversarial Decomposition of Text Representation
Alexey Romanov | Anna Rumshisky | Anna Rogers | David Donahue
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.

What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes
Alexey Romanov | Maria De-Arteaga | Hanna Wallach | Jennifer Chayes | Christian Borgs | Alexandra Chouldechova | Sahin Geyik | Krishnaram Kenthapadi | Anna Rumshisky | Adam Kalai
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

There is a growing body of work that proposes methods for mitigating bias in machine learning systems. These methods typically rely on access to protected attributes such as race, gender, or age. However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections. In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual’s true occupation and a word embedding of their name. This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes. Crucially, it only requires access to individuals’ names at training time and not at deployment time. We evaluate two variations of our proposed method using a large-scale dataset of online biographies. We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier’s overall true positive rate.
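
The decorrelation idea can be written down directly: penalize the covariance between the predicted probability of the true occupation and each dimension of the name embedding. The function below is a sketch of that covariance-style penalty under assumed tensor shapes, not the authors' exact objective.

```python
import torch

def covariance_penalty(true_class_prob, name_emb):
    """true_class_prob: (batch,) predicted probability of the true
    occupation; name_emb: (batch, d) word embedding of the name.
    Returns the squared per-dimension covariance, summed."""
    p = true_class_prob - true_class_prob.mean()
    e = name_emb - name_emb.mean(dim=0)
    cov = (p.unsqueeze(1) * e).mean(dim=0)  # covariance per dimension
    return cov.pow(2).sum()
```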

Revealing the Dark Secrets of BERT
Olga Kovaleva | Alexey Romanov | Anna Rogers | Anna Rumshisky
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by BERT’s individual heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating overall model overparameterization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
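
Head-disabling experiments of the kind described here can be run with the head_mask argument that Hugging Face BERT models expose; the choice of which head to silence below is an arbitrary example, not one of the paper's findings.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
head_mask = torch.ones(12, 12)  # (layers, heads); 1 keeps a head active
head_mask[3, 7] = 0.0           # disable head 7 in layer 3

with torch.no_grad():
    outputs = model(**inputs, head_mask=head_mask)
```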

Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential
Anna Rogers | Olga Kovaleva | Anna Rumshisky
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda

Calls to action on social media are known to be effective means of mobilization in social movements, and a frequent target of censorship. We investigate the possibility of their automatic detection and their potential for predicting real-world protest events, on historical data of Bolotnaya protests in Russia (2011-2013). We find that political calls to action can be annotated and detected with relatively high accuracy, and that in our sample their volume has a moderate positive correlation with rally attendance.

Proceedings of the 2nd Clinical Natural Language Processing Workshop
Anna Rumshisky | Kirk Roberts | Steven Bethard | Tristan Naumann
Proceedings of the 2nd Clinical Natural Language Processing Workshop

Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP
Anna Rogers | Aleksandr Drozd | Anna Rumshisky | Yoav Goldberg
Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP

2018

Context-Aware Neural Model for Temporal Information Extraction
Yuanliang Meng | Anna Rumshisky
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose a context-aware neural network model for temporal information extraction. This model has a uniform architecture for event-event, event-timex and timex-timex pairs. A Global Context Layer (GCL), inspired by the Neural Turing Machine (NTM), stores processed temporal relations in narrative order, and retrieves them for use when relevant entities come in. Relations are then classified in context. The GCL model has long-term memory and attention mechanisms to resolve irregular long-distance dependencies that regular RNNs such as LSTMs cannot recognize. It does not require any new input features, while outperforming the existing models in the literature. To our knowledge, it is also the first model to use an NTM-like architecture to process information from the global context in discourse-scale natural text processing. We are going to release the source code in the future.

Triad-based Neural Network for Coreference Resolution
Yuanliang Meng | Anna Rumshisky
Proceedings of the 27th International Conference on Computational Linguistics

We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking the mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-the-art results with the MUC and B³ metrics on the English portion of the CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.

RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian
Anna Rogers | Alexey Romanov | Anna Rumshisky | Svitlana Volkova | Mikhail Gronas | Alex Gribov
Proceedings of the 27th International Conference on Computational Linguistics

This paper presents RuSentiment, a new dataset for sentiment analysis of social media posts in Russian, and a new set of comprehensive annotation guidelines that are extensible to other languages. RuSentiment is currently the largest in its class for Russian, with 31,185 posts annotated with Fleiss’ kappa of 0.58 (3 annotations per post). To diversify the dataset, 6,950 posts were pre-selected with an active learning-style strategy. We report baseline classification results, and we also release the best-performing embeddings trained on 3.2B tokens of Russian VKontakte posts.

What’s in Your Embedding, And How It Predicts Task Performance
Anna Rogers | Shashwath Hosur Ananthakrishna | Anna Rumshisky
Proceedings of the 27th International Conference on Computational Linguistics

Attempts to find a single technique for general-purpose intrinsic evaluation of word embeddings have so far not been successful. We present a new approach based on scaled-up qualitative analysis of word vector neighborhoods that quantifies interpretable characteristics of a given model (e.g. its preference for synonyms or shared morphological forms as nearest neighbors). We analyze 21 such factors and show how they correlate with performance on 14 extrinsic and intrinsic task datasets (and also explain the lack of correlation between some of them). Our approach enables multi-faceted evaluation, parameter search, and generally – a more principled, hypothesis-driven approach to development of distributional semantic representations.

Automatic Labeling of Problem-Solving Dialogues for Computational Microgenetic Learning Analytics
Yuanliang Meng | Anna Rumshisky | Florence Sullivan
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting
Peter Potash | Alexey Romanov | Anna Rumshisky
Proceedings of the Second Workshop on Stylistic Variation

Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse.

Similarity-Based Reconstruction Loss for Meaning Representation
Olga Kovaleva | Anna Rumshisky | Alexey Romanov
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

This paper addresses the problem of representation learning. Using an autoencoder framework, we propose and evaluate several loss functions that can be used as an alternative to the commonly used cross-entropy reconstruction loss. The proposed loss functions use similarities between words in the embedding space, and can be used to train any neural model for text generation. We show that the introduced loss functions amplify semantic diversity of reconstructed sentences, while preserving the original meaning of the input. We test the derived autoencoder-generated representations on paraphrase detection and language inference tasks and demonstrate performance improvement compared to the traditional cross-entropy loss.
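
One way to realize a similarity-based reconstruction loss is to move the comparison into embedding space: push the expected embedding under the decoder's output distribution toward the gold token's embedding. The cosine-distance variant below is a sketch under assumed shapes, not necessarily the exact loss proposed in the paper.

```python
import torch.nn.functional as F

def similarity_reconstruction_loss(logits, target_ids, embedding):
    """logits: (batch, seq, vocab); target_ids: (batch, seq);
    embedding: nn.Embedding with weight of shape (vocab, dim)."""
    probs = F.softmax(logits, dim=-1)
    expected = probs @ embedding.weight   # expected output embedding
    gold = embedding(target_ids)          # gold-token embeddings
    return (1 - F.cosine_similarity(expected, gold, dim=-1)).mean()
```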

2017

SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor
Peter Potash | Alexey Romanov | Anna Rumshisky
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes a new shared task for humor understanding that attempts to eschew the ubiquitous binary approach to humor detection and focus on comparative humor ranking instead. The task is based on a new dataset of funny tweets posted in response to shared hashtags, collected from the ‘Hashtag Wars’ segment of the TV show @midnight. The results are evaluated in two subtasks that require the participants to generate either the correct pairwise comparisons of tweets (subtask A), or the correct ranking of the tweets (subtask B) in terms of how funny they are. 7 teams participated in subtask A, and 5 teams participated in subtask B. The best accuracy in subtask A was 0.675. The best (lowest) rank edit distance for subtask B was 0.872.

HumorHawk at SemEval-2017 Task 6: Mixing Meaning and Sound for Humor Recognition
David Donahue | Alexey Romanov | Anna Rumshisky
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes the winning system for SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor. Humor detection has up until now been predominantly addressed using feature-based approaches. Our system utilizes recurrent deep learning methods with dense embeddings to predict humorous tweets from the @midnight show #HashtagWars. In order to include both meaning and sound in the analysis, GloVe embeddings are combined with a novel phonetic representation to serve as input to an LSTM component. The output is combined with a character-based CNN model, and an XGBoost component in an ensemble model which achieves 0.675 accuracy on the evaluation data.

Length, Interchangeability, and External Knowledge: Observations from Predicting Argument Convincingness
Peter Potash | Robin Bhattacharya | Anna Rumshisky
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

In this work, we provide insight into three key aspects related to predicting argument convincingness. First, we explicitly display the power that text length possesses for predicting convincingness in an unsupervised setting. Second, we show that a bag-of-words embedding model posts state-of-the-art results on a dataset of arguments annotated for convincingness, outperforming an SVM with numerous hand-crafted features as well as recurrent neural network models that attempt to capture semantic composition. Finally, we assess the feasibility of integrating external knowledge when predicting convincingness, as arguments are often more convincing when they contain abundant information and facts. We finish by analyzing the correlations between the various models we propose.

Temporal Information Extraction for Question Answering Using Syntactic Dependencies in an LSTM-based Architecture
Yuanliang Meng | Anna Rumshisky | Alexey Romanov
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

In this paper, we propose to use a set of simple LSTM-based models with a uniform architecture to recover different kinds of temporal relations from text. Using the shortest dependency path between entities as input, the same architecture is used to extract intra-sentence, cross-sentence, and document creation time relations. A “double-checking” technique reverses entity pairs in classification, boosting the recall of positive cases and reducing misclassifications between opposite classes. An efficient pruning algorithm resolves conflicts globally. Evaluated on QA-TempEval (SemEval-2015 Task 5), our proposed technique outperforms state-of-the-art methods by a large margin. We also conduct an intrinsic evaluation and post state-of-the-art results on TimeBank-Dense.
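
The shortest-dependency-path input can be illustrated with a toy parse; in practice the edges come from a dependency parser, and the sentence and edge list below are invented for the example.

```python
import networkx as nx

# Dependency edges as (head, dependent) pairs for
# "John said Mary left yesterday" (toy parse).
edges = [("said", "John"), ("said", "left"),
         ("left", "Mary"), ("left", "yesterday")]
graph = nx.Graph(edges)  # undirected, so paths can go up and down the tree
path = nx.shortest_path(graph, source="John", target="yesterday")
print(path)  # ['John', 'said', 'left', 'yesterday']
```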

Here’s My Point: Joint Pointer Architecture for Argument Mining
Peter Potash | Alexey Romanov | Anna Rumshisky
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

In order to determine argument structure in text, one must understand how individual components of the overall argument are linked. This work presents the first neural network-based approach to link extraction in argument mining. Specifically, we propose a novel architecture that applies Pointer Network sequence-to-sequence attention modeling to structural prediction in discourse parsing tasks. We then develop a joint model that extends this architecture to simultaneously address the link extraction task and the classification of argument components. The proposed joint model achieves state-of-the-art results on two separate evaluation corpora, far outperforming the previously proposed corpus-specific and heavily feature-engineered models. Furthermore, our results demonstrate that jointly optimizing for both tasks is crucial for high performance.

Towards Debate Automation: a Recurrent Model for Predicting Debate Winners
Peter Potash | Anna Rumshisky
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

In this paper we introduce a practical first step towards the creation of an automated debate agent: a state-of-the-art recurrent model for predicting debate winners. By having an accurate predictive model, we are able to objectively rate the quality of a statement made at a specific turn in a debate. The model is based on a recurrent neural network architecture with attention, which allows the model to effectively account for the entire debate when making its prediction. Our model achieves state-of-the-art accuracy on a dataset of debate transcripts annotated with audience favorability of the debate teams. Finally, we discuss how future work can leverage our proposed model for the creation of an automated debate agent. We accomplish this by determining the model input that will maximize audience favorability toward a given side of a debate at an arbitrary turn.

Tracking Bias in News Sources Using Social Media: the Russia-Ukraine Maidan Crisis of 2013–2014
Peter Potash | Alexey Romanov | Mikhail Gronas | Anna Rumshisky
Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism

This paper addresses the task of identifying the bias in news articles published during a political or social conflict. We create a silver-standard corpus based on the actions of users in social media. Specifically, we reconceptualize bias in terms of how likely a given article is to be shared or liked by each of the opposing sides. We apply our methodology to a dataset of links collected in relation to the Russia-Ukraine Maidan crisis of 2013-2014. We show that on the task of predicting which side is likely to prefer a given article, a Naive Bayes classifier can reach 90.3% accuracy looking only at the domain names of the news sources. The best accuracy of 93.5% is achieved by a feed-forward neural network. We also apply our methodology to a gold-labeled set of articles annotated for bias, where the aforementioned Naive Bayes classifier records 82.6% accuracy and a feed-forward neural network records 85.6% accuracy.
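
The domain-name baseline is simple enough to sketch end to end; the domains and labels below are invented placeholders (the actual supervision comes from shares and likes on the two sides of the conflict), and character n-grams are one reasonable featurization, not necessarily the paper's.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

domains = ["kremlin-news.ru", "maidan-press.ua",
           "kremlin-news.ru", "euro-reporter.ua"]
sides = ["side_A", "side_B", "side_A", "side_B"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
clf.fit(domains, sides)
print(clf.predict(["maidan-press.ua"]))  # -> ['side_B']
```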

2016

SimiHawk at SemEval-2016 Task 1: A Deep Ensemble System for Semantic Textual Similarity
Peter Potash | William Boag | Alexey Romanov | Vasili Ramanishka | Anna Rumshisky
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)
Anna Rumshisky | Kirk Roberts | Steven Bethard | Tristan Naumann
Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)

MUTT: Metric Unit TesTing for Language Generation Tasks
William Boag | Renan Campos | Kate Saenko | Anna Rumshisky
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

TwitterHawk: A Feature Bucket Based Approach to Sentiment Analysis
William Boag | Peter Potash | Anna Rumshisky
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

GhostWriter: Using an LSTM for Automatic Rap Lyric Generation
Peter Potash | Alexey Romanov | Anna Rumshisky
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Catching the Red Priest: Using Historical Editions of Encyclopaedia Britannica to Track the Evolution of Reputations
Yen-Fu Luo | Anna Rumshisky | Mikhail Gronas
Proceedings of the 9th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH)

2013

Mixing in Some Knowledge: Enriched Context Patterns for Bayesian Word Sense Induction
Rachel Chasin | Anna Rumshisky
Proceedings of the 6th International Conference on Generative Approaches to the Lexicon (GL2013)

2012

Word Sense Inventories by Non-Experts
Anna Rumshisky | Nick Botchan | Sophie Kushkuley | James Pustejovsky
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

In this paper, we explore different strategies for implementing a crowdsourcing methodology for a single-step construction of an empirically-derived sense inventory and the corresponding sense-annotated corpus. We report on the crowdsourcing experiments using implementation strategies with different HIT costs, worker qualification testing, and other restrictions. We describe multiple adjustments required to ensure successful HIT design, given significant changes within the crowdsourcing community over the last three years.

2011

Crowdsourcing Word Sense Definition
Anna Rumshisky
Proceedings of the 5th Linguistic Annotation Workshop

2010

SemEval-2010 Task 7: Argument Selection and Coercion
James Pustejovsky | Anna Rumshisky | Alex Plotnick | Elisabetta Jezek | Olga Batiukova | Valeria Quochi
Proceedings of the 5th International Workshop on Semantic Evaluation

2009

SemEval-2010 Task 7: Argument Selection and Coercion
James Pustejovsky | Anna Rumshisky
Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)

GLML: Annotating Argument Selection and Coercion
James Pustejovsky | Jessica Moszkowicz | Olga Batiukova | Anna Rumshisky
Proceedings of the Eighth International Conference on Computational Semantics

2008

Polysemy in Verbs: Systematic Relations between Senses and their Effect on Annotation
Anna Rumshisky | Olga Batiukova
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics

2006

Towards a Generative Lexical Resource: The Brandeis Semantic Ontology
James Pustejovsky | Catherine Havasi | Jessica Littman | Anna Rumshisky | Marc Verhagen
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we describe the structure and development of the Brandeis Semantic Ontology (BSO), a large generative lexicon ontology and lexical database. The BSO has been designed to allow for more widespread access to Generative Lexicon-based lexical resources and help researchers in a variety of computational tasks. The specification of the type system used in the BSO largely follows that proposed by the SIMPLE specification (Busa et al., 2001), which was adopted by the EU-sponsored SIMPLE project (Lenci et al., 2000).

Inducing Sense-Discriminating Context Patterns from Sense-Tagged Corpora
Anna Rumshisky | James Pustejovsky
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Traditionally, context features used in word sense disambiguation are based on collocation statistics and use only minimal syntactic and semantic information. Corpus Pattern Analysis is a technique for producing knowledge-rich context features that capture sense distinctions. It involves (1) identifying sense-carrying context patterns and (2) using the derived context features to discriminate between unseen instances. Both stages require manual seeding. In this paper, we show how to automate inducing sense-discriminating context features from a sense-tagged corpus.

Classification of Discourse Coherence Relations: An Exploratory Study using Multiple Knowledge Sources
Ben Wellner | James Pustejovsky | Catherine Havasi | Anna Rumshisky | Roser Saurí
Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue

2005

Automating Temporal Annotation with TARSQI
Marc Verhagen | Inderjeet Mani | Roser Sauri | Jessica Littman | Robert Knippen | Seok B. Jang | Anna Rumshisky | John Phillips | James Pustejovsky
Proceedings of the ACL Interactive Poster and Demonstration Sessions

2004

Automated Induction of Sense in Context
James Pustejovsky | Patrick Hanks | Anna Rumshisky
Proceedings of the 5th International Workshop on Linguistically Interpreted Corpora

Automated Induction of Sense in Context
James Pustejovsky | Patrick Hanks | Anna Rumshisky
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics
