Publications
Publications by category, in reverse chronological order.
2024
- Learning Representations for Hierarchies with Minimal Support. Benjamin Rozonoyer, Michael Boratko, Dhruvesh Patel, and 4 more authors. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence, most methods employ sampling. For very large digraphs, however, this means many (most) entries may be unobserved during training. In general, observing every entry would be necessary to uniquely identify a graph; however, if we know the graph has a certain property, some entries can be omitted - for example, only half the entries would be required for a symmetric graph. In this work, we develop a novel framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. We give an explicit algorithm to compute the provably minimal set of entries, and demonstrate empirically that one can train node embedding models with greater efficiency and performance, provided the energy function has an appropriate inductive bias. We achieve robust performance on synthetic hierarchies and a larger real-world taxonomy, observing improved convergence rates in a resource-constrained setting while reducing the set of training examples by as much as 99%.
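One piece of intuition behind the abstract above is a classical construction: for a DAG, the transitive reduction is the unique minimum edge set whose transitive closure recovers the full reachability relation, so a transitively closed hierarchy is pinned down by far fewer positive entries than its full adjacency matrix. A minimal stdlib sketch of that computation (illustrative only, not the paper's full algorithm, which characterizes the complete set of required entries):

```python
# Sketch: transitive reduction of a small DAG.
# An edge (u, v) is redundant if v is reachable from u via some other path.

def reachable(adj, src, dst, skip_edge):
    """DFS reachability from src to dst, ignoring the direct edge skip_edge."""
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u in seen:
            continue
        seen.add(u)
        for v in adj.get(u, ()):
            if (u, v) != skip_edge:
                stack.append(v)
    return False

def transitive_reduction(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    return {(u, v) for u, v in edges if not reachable(adj, u, v, (u, v))}

# A transitively closed chain a -> b -> c is identified by 2 of its 3 edges.
closure = [("a", "b"), ("b", "c"), ("a", "c")]
print(sorted(transitive_reduction(closure)))  # [('a', 'b'), ('b', 'c')]
```

On a deep transitively closed hierarchy the closure has quadratically many edges while the reduction stays linear, which is the regime where the paper's reported 99% reduction in training examples becomes plausible.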
- Quasi-random Multi-Sample Inference for Large Language Models. Aditya Parashar, Aditya Vikram Singh, Avinash Amballa, and 2 more authors. arXiv preprint arXiv:2411.06251, 2024.
Large language models (LLMs) are often equipped with multi-sample decoding strategies. An LLM implicitly defines an arithmetic code book, facilitating efficient and embarrassingly parallelizable arithmetic sampling to produce multiple samples using quasi-random codes. Traditional text generation methods, such as beam search and sampling-based techniques, have notable limitations: they lack parallelizability or diversity of sampled sequences. This study explores the potential of arithmetic sampling, contrasting it with ancestral sampling across two decoding tasks that employ multi-sample inference: chain-of-thought reasoning with self-consistency and machine translation with minimum Bayes risk decoding. Our results demonstrate that arithmetic sampling produces more diverse samples, significantly improving reasoning and translation performance as the sample size increases. We observe a 3-5 percentage point increase in accuracy on the GSM8K dataset and a 0.45-0.89 point increase in COMET score for WMT19 tasks using arithmetic sampling, without any significant computational overhead.
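The code-book idea above can be sketched in a few lines: each code in [0, 1) indexes a point on the cumulative distribution of the next token, and the code is rescaled into the chosen token's sub-interval at every step. The toy fixed per-step distributions and two-word vocabulary below are illustrative placeholders; a real decoder conditions each step's distribution on the generated prefix.

```python
# Toy sketch of code-based (arithmetic) sampling.

def decode_token(probs, code):
    """Map a code in [0, 1) onto the step's CDF, then rescale the code
    into the chosen token's sub-interval for the next step."""
    low = 0.0
    for token, p in probs:
        if code < low + p:
            return token, (code - low) / p
        low += p
    return probs[-1][0], 0.0  # guard against floating-point spill past 1.0

def arithmetic_sample(step_probs, code):
    seq = []
    for probs in step_probs:
        token, code = decode_token(probs, code)
        seq.append(token)
    return seq

# Evenly spaced (quasi-random) codes partition the code book, so a batch of
# samples covers distinct high-probability sequences instead of colliding.
steps = [[("the", 0.6), ("a", 0.4)], [("cat", 0.5), ("dog", 0.5)]]
samples = [arithmetic_sample(steps, (i + 0.5) / 4) for i in range(4)]
print(samples)  # [['the', 'cat'], ['the', 'dog'], ['a', 'cat'], ['a', 'dog']]
```

Because every sample depends only on its own code, the batch is embarrassingly parallelizable, and the even spacing of codes is what drives the diversity gains the abstract reports.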
- Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation. Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, and 5 more authors. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024.
We study semi-supervised sequence generation tasks, where the few labeled examples are too scarce to finetune a model, and meanwhile, few-shot prompted large language models (LLMs) exhibit room for improvement. In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks. We find that the student is able to learn a general pattern from the high-quality pseudolabels produced by the teacher during knowledge distillation (KD), and, favorably, to avoid inheriting the noise of the low-quality pseudolabels. Leveraging this discovery, we propose a new method, Multistage Collaborative Knowledge Distillation from an LLM (MCKD), for these tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. Then at each stage of an iterative KD process, a new pair of students is trained on disjoint partitions of the pseudolabeled data, and produces new and improved pseudolabels for their unseen partitions. We conduct extensive experiments on four syntactic and semantic parsing datasets and show the effectiveness of MCKD for low-resource semi-supervised sequence generation. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms an LLM teacher and vanilla KD by 7.5 and 3.7 points of parsing F1, respectively, and matches the performance of supervised finetuning with 500 labeled examples.
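The two-student relabeling loop described above can be sketched with stand-ins for the models. The integer-parity task, the stubbed teacher, and the memorizing "student" below are illustrative placeholders; in the paper the teacher is a few-shot prompted LLM and the students are finetuned sequence generation models.

```python
# Toy sketch of the MCKD loop: train a pair of students on disjoint halves
# of the pseudolabeled data, then have each relabel the half it has not seen.
from collections import Counter

def train_student(xs, ys):
    # Toy "student": learns the majority label for each input parity,
    # so it can generalize past isolated teacher errors.
    maj = {}
    for parity in (0, 1):
        votes = [y for x, y in zip(xs, ys) if x % 2 == parity]
        maj[parity] = Counter(votes).most_common(1)[0][0] if votes else 0
    return lambda x: maj[x % 2]

def mckd(unlabeled, teacher_predict, n_stages=2):
    half = len(unlabeled) // 2
    parts = [unlabeled[:half], unlabeled[half:]]
    # Stage 0: the teacher pseudolabels all unlabeled data once.
    labels = [[teacher_predict(x) for x in p] for p in parts]
    for _ in range(n_stages):
        students = [train_student(p, y) for p, y in zip(parts, labels)]
        # Cross-relabeling: each student labels the partition it did NOT train on.
        labels = [[students[1 - i](x) for x in parts[i]] for i in (0, 1)]
    return labels

# A teacher that mislabels x = 3; the students' relabeling corrects it.
teacher = lambda x: 0 if x == 3 else x % 2
labels = mckd(list(range(12)), teacher)
print(labels)  # [[0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 0, 1]]
```

The cross-relabeling is the design point: because each student predicts on data it never trained on, its generalization (rather than its memorization of teacher noise) is what produces the next stage's pseudolabels.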
- QueryBuilder: Human-in-the-Loop Query Development for Information Retrieval. Hemanth Kandula, Damianos Karakos, Haoling Qiu, and 4 more authors. arXiv preprint arXiv:2409.04667, 2024.
Frequently, users of an Information Retrieval (IR) system start with an overarching information need (a.k.a. an analytic task) and proceed to define finer-grained queries covering various important aspects (i.e., sub-topics) of that analytic task. We present a novel, interactive system called QueryBuilder, which allows a novice, English-speaking user to create queries with a small amount of effort, through efficient exploration of an English development corpus, in order to rapidly develop cross-lingual information retrieval queries corresponding to the user's information needs. QueryBuilder performs near real-time retrieval of documents based on user-entered search terms; the user looks through the retrieved documents and marks sentences as relevant to the information need. The marked sentences are used by the system as additional information in query formation and refinement: query terms (and, optionally, event features, which capture event 'triggers' (indicator terms) and agent/patient roles) are appropriately weighted, and a neural-based system, which better captures textual meaning, retrieves other relevant content. The process of retrieval and marking is repeated as many times as desired, giving rise to an increasingly refined query in each iteration. The final product is a fine-grained query used in Cross-Lingual Information Retrieval (CLIR). Our experiments using analytic tasks and requests from the IARPA BETTER IR datasets show that with a small amount of effort (at most 10 minutes per sub-topic), novice users can form useful fine-grained queries, including in languages they don't understand. QueryBuilder also provides beneficial capabilities to the traditional corpus exploration and query formation process. A demonstration video is available at this [URL](https://vimeo.com/734795835).
2023
- Claim Extraction via Subgraph Matching over Modal and Syntactic Dependencies. Benjamin Rozonoyer, Michael Selvaggio, David Zajic, and 1 more author. 2023.
We propose the use of modal dependency parses (MDPs) aligned with syntactic dependency parse trees as an avenue for the novel task of claim extraction. MDPs provide a document-level structure that links linguistic expression of events to the conceivers responsible for those expressions. By defining the event-conceiver links as claims and using subgraph pattern matching to exploit the complementarity of these modal links and syntactic claim patterns, we outline a method for aggregating and classifying claims, with the potential for supplying a novel perspective on large natural language data sets. Abstracting away from the task of claim extraction, we prototype an interpretable information extraction (IE) paradigm over sentence- and document-level parse structures, framing inference as subgraph matching and learning as subgraph mining. Our code is open-sourced at https://github.com/BBN-E/nlp-graph-pattern-matching-and-mining.
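The "inference as subgraph matching" framing above can be illustrated with a brute-force matcher over tiny labeled digraphs. The node labels, edge label, and graph below are illustrative stand-ins, not the paper's data structures (which are full modal and syntactic dependency parses matched at scale):

```python
# Toy sketch: match a claim pattern (a tiny labeled digraph) against a
# document graph by exhaustive search over node assignments.
from itertools import permutations

def match_pattern(pattern_nodes, pattern_edges, graph_nodes, graph_edges):
    """Return every mapping of pattern node ids onto graph node ids that
    preserves node labels and labeled edges."""
    hits = []
    for combo in permutations(list(graph_nodes), len(pattern_nodes)):
        m = dict(zip(pattern_nodes, combo))
        if all(pattern_nodes[p] == graph_nodes[m[p]] for p in pattern_nodes) and \
           all((m[u], m[v], lab) in graph_edges for u, v, lab in pattern_edges):
            hits.append(m)
    return hits

# Pattern: a CONCEIVER node governing an EVENT node via a "modal" edge --
# i.e., one event-conceiver link, the unit treated as a claim above.
pat_nodes = {"c": "CONCEIVER", "e": "EVENT"}
pat_edges = [("c", "e", "modal")]
g_nodes = {0: "CONCEIVER", 1: "EVENT", 2: "EVENT"}
g_edges = {(0, 1, "modal"), (0, 2, "modal")}
print(match_pattern(pat_nodes, pat_edges, g_nodes, g_edges))
```

Brute force is exponential and only serves to make the semantics concrete; practical systems use backtracking subgraph-isomorphism algorithms (e.g., VF2-style matchers) over the same formulation.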
2021
- Graph Convolutional Encoders for Syntax-aware AMR Parsing. Benjamin Rozonoyer. Brandeis University, 2021.
Graph Convolutional Networks (GCNs), a natural architecture for modeling graph-structured data, have recently entered the playing field of NLP as sentence encoders over dependency structure. Contemporary setups of semantic role labeling (SRL), neural machine translation (NMT), and event extraction have demonstrated the superiority of GCNs to CNN and RNN encoders, which expect inherently grid-like inputs. In this thesis, we explore GCN encoders in a fully neural paradigm of AMR parsing, taking Cai and Lam (2020)’s state-of-the-art parser as the framework. We hypothesize that GCN encoders are especially well suited for this problem, following the intuition that syntactic structure strongly informs graph-based semantic structure and can be viewed as an intermediate step towards obtaining it from sequential input. Unlike in previous setups, our GCN encoder has to compete with the extremely successful Transformer baseline (the parser’s default encoder), and performs only modestly worse while 1) having an order of magnitude fewer parameters, 2) incorporating explicit syntactic information, and 3) not relying on positional encoding. Our extensive experiments around GCN and Transformer (as well as BiLSTM and GAT) encoder configurations shed light on some of the settings that contribute to the successes of the respective architectures. We confirm that the “syntactic GCN” is the best-performing GCN layer, make empirical observations about Transformers and GCNs based on comparative results and dependency tree statistics, and draw parallels between the Transformer and GCN models in terms of their ability to learn relational structure.
- ExcavatorCovid: Extracting events and relations from text corpora for temporal and causal analysis for COVID-19. Bonan Min, Benjamin Rozonoyer, Haoling Qiu, and 2 more authors. arXiv preprint arXiv:2105.01819, 2021.
Timely responses from policy makers to mitigate the impact of the COVID-19 pandemic rely on a comprehensive grasp of events, their causes, and their impacts. These events are reported at such a speed and scale as to be overwhelming. In this paper, we present ExcavatorCovid, a machine reading system that ingests open-source text documents (e.g., news and scientific publications), extracts COVID-19 related events and relations between them, and builds a Temporal and Causal Analysis Graph (TCAG). Excavator will help government agencies alleviate the information overload, understand likely downstream effects of political and economic decisions and events related to the pandemic, and respond in a timely manner to mitigate the impact of COVID-19. We expect the utility of Excavator to outlive the COVID-19 pandemic: analysts and decision makers will be empowered by Excavator to better understand and solve complex problems in the future. A demonstration video is available at https://vimeo.com/528619007.
2020
- Aguaruna speculative clause: Evidentiality meets focus. Benjamin Rozonoyer. Proceedings of the Linguistic Society of America, 2020.
The speculative clause in Aguaruna presents us with two distinctive and interacting semantic phenomena – evidentiality and focus – both of which have been objects of recent interest cross-linguistically. Following the alternative semantics theory of focus developed by Rooth (1992), I analyze Aguaruna’s alternating speculative focus enclitics, and incorporate the evidentiality-focus complex into a compositional semantics for Aguaruna. By formally modeling the interplay of evidentiality and focus, this analysis hopes to glean a more precise understanding of each phenomenon individually, and to contribute to a more complete typology of both.
- A small Universal Dependencies treebank for Hittite. Erik Andersen and Benjamin Rozonoyer. In Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020), 2020.
We present the first Universal Dependencies treebank for Hittite. This paper expands on earlier efforts at Hittite corpus creation (Molina and Molin, 2016; Molina, 2016) and discussions of annotation guidelines for Hittite within the UD framework (Inglese, 2015; Inglese et al., 2018). We build on the expertise of the above works to create a small corpus which we hope will serve as a stepping-stone to more expansive UD treebanking for Hittite.
- Updates and Analysis of BBN Panorama for SM-KBP. Roger Bock, Jordan Hashemi, Ilana Heintz, and 1 more author. 2020.
We provide system updates and performance analysis regarding the 2020 version of the BBN Panorama multi-modal processing pipeline, as submitted to the 2020 Streaming Media Knowledge Base Population track.