Research Theme
I am interested in building reliable learning systems. A major source of unreliability in deployed systems is misspecification of the task. I work on alternative forms of supervision (beyond example-label demonstrations) that specify the classification task more precisely. During my PhD, I explored leveraging metadata that identifies the source domain of an example, a problem known as domain generalization [ICLR18][ICML20][ICML20a][NeurIPS21][ICLR22]. I also explored the related problems of domain adaptation ([ACL19], [EMNLP20]) and stratified evaluation ([NeurIPS21]).
Since my PhD, with the same goal of building reliable systems, I have been developing algorithms that let humans explain (to a machine) what causes a label. So far, we have studied supervision through local (i.e., example-level) explanations [NeurIPS23] and are making progress toward global (i.e., model-level) concept-based explanations [Preprint].
Check Google Scholar for an exhaustive list of publications.
- Estimation of Concept Explanations Should be Uncertainty Aware
Under review [forum].
with J Heo, K Collins, S Singh, A Weller.
Uncertainty-aware estimation improves the reliability of concept explanations.
- Use Perturbations when Learning from Explanations
NeurIPS 2023 [pdf]
with J Heo, M Wicker, A Weller.
Explanations are indispensable for avoiding learning nuisance features and for improving label efficiency. We studied robustness methods (such as adversarial and certified-robustness training) for learning from explanation constraints and found that they consistently and substantially outperform existing methods that regularize using an interpretability tool.
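A minimal sketch of the perturbation idea, assuming explanations arrive as binary masks marking input features a human deems irrelevant to the label; the function name, mask format, and noise scale below are illustrative placeholders rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def perturbation_explanation_loss(model, x, y, irrelevant_mask, eps=0.1, n_samples=4):
    """Penalize prediction changes when features marked irrelevant are perturbed.

    x:               (batch, d) inputs
    y:               (batch,) integer labels
    irrelevant_mask: (batch, d), 1 where the explanation marks a feature as irrelevant
    """
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    consistency = 0.0
    for _ in range(n_samples):
        # Perturb only the features the explanation says should not matter.
        noise = eps * torch.randn_like(x) * irrelevant_mask
        perturbed_logits = model(x + noise)
        consistency = consistency + F.kl_div(
            F.log_softmax(perturbed_logits, dim=-1),
            F.softmax(logits.detach(), dim=-1),
            reduction="batchmean",
        )
    return task_loss + consistency / n_samples
```

Adversarial or certified variants would replace the random noise with worst-case perturbations over the masked features, which is the robustness flavour advocated over gradient-based interpretability regularizers.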
- Human-in-the-loop Mixup
UAI 2023 [pdf]
with K Collins, U Bhatt, W Liu, I Sucholutsky, B Love, A Weller
Synthetic labels used in mixup are not consistently aligned with human perceptual judgments; relabeling examples with humans in the loop, and leveraging human uncertainty information, holds promise for increasing downstream model reliability.
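For context, a minimal sketch of standard mixup and the synthetic soft labels it creates; the study asks whether humans would assign these same labels to the mixed inputs (all names below are illustrative).

```python
import numpy as np

def mixup(x1, y1, x2, y2, num_classes, alpha=0.4):
    """Standard mixup: convex combination of two inputs and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2
    # The synthetic soft label; human relabeling would replace this interpolation.
    y_mix = lam * np.eye(num_classes)[y1] + (1 - lam) * np.eye(num_classes)[y2]
    return x_mix, y_mix
```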
- Robustness, Evaluation and Adaptation of Machine Learning Models in the Wild
PhD Thesis [arxiv]
- Focus on the Common Good: Group Distributional Robustness Follows.
ICLR 2022 [pdf] [code][slides]
with P Netrapalli, S Sarawagi
When training group sizes are highly disproportionate, how can we train so that we generalize well to all groups irrespective of their size? We found that simply focusing training on the groups whose improvement also benefits the other groups (the common good) yields better group robustness.
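A rough sketch of the group-selection intuition, assuming per-group minibatches and a shared model: groups whose gradients align with the average group direction (i.e., whose progress also helps the others) are upweighted. The specific weighting rule below illustrates the idea and is not the exact update from the paper.

```python
import torch

def common_good_weights(model, group_batches, loss_fn):
    """Score each group by how much its gradient agrees with the other groups'."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for x, y in group_batches:                 # one (inputs, labels) batch per group
        loss = loss_fn(model(x), y)
        g = torch.autograd.grad(loss, params)
        grads.append(torch.cat([gi.flatten() for gi in g]))

    grads = torch.stack(grads)                 # (num_groups, num_params)
    alignment = grads @ grads.mean(dim=0)      # inner product with the average direction
    weights = torch.softmax(alignment, dim=0)  # focus training on "common good" groups
    return weights                             # used to reweight per-group losses
```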
- Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time.
NeurIPS 2021 [pdf][code]
with A Nasery, S Thakur, A De, S Sarawagi
How can we prepare models for the future by training only on past data? Standard approaches overfit to the past and fail to extrapolate to the future. We propose to parameterize the predictor on time and to regularize its dependence on time so that it does not overfit to past data.
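A minimal sketch of a time-conditioned predictor whose sensitivity to the time input is penalized; this illustrates parameterizing on time and regularizing the time dependence, not the exact gradient-interpolation loss from the paper (the architecture and penalty weight are placeholders).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeConditionedClassifier(nn.Module):
    """Predictor that takes an example's (scalar) timestamp as an extra input."""
    def __init__(self, in_dim, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t.unsqueeze(-1)], dim=-1))

def train_step(model, x, t, y, reg=0.1):
    t = t.clone().requires_grad_(True)   # enable gradients w.r.t. time
    logits = model(x, t)
    task_loss = F.cross_entropy(logits, y)
    # Penalize how sharply predictions change with time, to avoid overfitting to the past.
    dlogits_dt = torch.autograd.grad(logits.sum(), t, create_graph=True)[0]
    return task_loss + reg * dlogits_dt.pow(2).mean()
```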
- Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations.
NeurIPS 2021 [pdf][code][slides]
with S Chakrabarti, S Sarawagi
We argue that prediction services should go beyond leaderboards and standard benchmarks and report performance for every combination of prespecified, interpretable attributes. How can we efficiently estimate performance over combinatorially many attribute combinations without an equally large labeled dataset? See the paper for details.
- NLP Service APIs and Models for Efficient Registration of New Clients.
EMNLP Findings 2020 [pdf][code]
with S Shah, S Chakrabarti, S Sarawagi
Prediction-service APIs are expected to cater to millions of client distributions. Adapting a separate model to each client does not scale in computation or storage. We propose an on-the-fly adaptation technique that uses a client-side corpus signature.
- Untapped Potential of Data Augmentation: A Domain Generalization Viewpoint.
ICML UDL Workshop 2020 [pdf][slides].
with S Shankar
We argue that plain augmentation leads to only shallow parameter sharing between original and augmented examples. We show that even state-of-the-art augmentation methods can still overfit to the augmentations, and that this overfitting can be mitigated with domain-generalization techniques.
- Efficient Domain Generalization via Common-Specific Low-Rank Decomposition.
ICML 2020 [pdf][code][talk][slides]
with P Netrapalli, S Sarawagi
When a model is trained on multiple source domains, its learned parameters are composed of both shared and domain-specific components. While the domain-specific components improve performance on the seen training domains, they impede generalization to new domains. We propose CSD, a drop-in replacement for the final linear layer that identifies and removes the domain-specific components, retaining only the component common to all domains. The method is simple, efficient, and yet competitive.
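A minimal sketch of the decomposition behind CSD: per-domain classifier weights are written as a common component plus low-rank domain-specific components, both heads are trained, and only the common component is used at test time. The rank, initialization, and loss weighting below are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSDHead(nn.Module):
    """Final linear layer decomposed into common + low-rank domain-specific parts."""
    def __init__(self, feat_dim, num_classes, num_domains, rank=1):
        super().__init__()
        self.w_common = nn.Parameter(0.01 * torch.randn(feat_dim, num_classes))
        self.w_specific = nn.Parameter(0.01 * torch.randn(rank, feat_dim, num_classes))
        self.domain_coeff = nn.Parameter(torch.zeros(num_domains, rank))

    def forward(self, feats, domain_ids=None):
        common_logits = feats @ self.w_common
        if domain_ids is None:              # test time: specific components discarded
            return common_logits
        # Per-example specific weight: sum_k coeff[d, k] * w_specific[k]
        w_d = torch.einsum("dk,kfc->dfc", self.domain_coeff, self.w_specific)[domain_ids]
        specific_logits = common_logits + torch.einsum("bf,bfc->bc", feats, w_d)
        return common_logits, specific_logits

def csd_loss(common_logits, specific_logits, y, alpha=0.5):
    # Both the common head alone and the domain-augmented head must solve the task.
    return alpha * F.cross_entropy(common_logits, y) + \
           (1 - alpha) * F.cross_entropy(specific_logits, y)
```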
- Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings.
ACL 2019 [pdf][code][talk][slides]
with S Chakrabarti, S Sarawagi
We answer three questions about adapting word embeddings.
- Generalizing across domains via cross-gradient training.
ICLR 2018 [pdf][code]
with S Shankar, S Chaudhuri, P Jyothi, S Chakrabarti, S Sarawagi
We can avoid overfitting to the training domains by hallucinating examples from unseen domains. We propose CrossGrad, which augments training with label-consistent examples from new domains and thereby improves domain generalization.
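A minimal sketch of the cross-gradient augmentation: inputs are perturbed along the gradient of a separate domain classifier's loss, producing examples that look as if they come from a different domain while keeping the original label. The step size is a placeholder, and the symmetric perturbation used to train the domain classifier is omitted here.

```python
import torch
import torch.nn.functional as F

def crossgrad_label_loss(label_model, domain_model, x, y, d, eps=0.5):
    """Label-classifier loss on original and domain-perturbed copies of a batch."""
    x = x.clone().requires_grad_(True)

    # Move the input along the domain classifier's gradient: a "new domain", same label.
    domain_loss = F.cross_entropy(domain_model(x), d)
    grad_d = torch.autograd.grad(domain_loss, x)[0]
    x_perturbed = (x + eps * grad_d).detach()

    # Train the label classifier on both the original and the hallucinated-domain input.
    return F.cross_entropy(label_model(x), y) + F.cross_entropy(label_model(x_perturbed), y)
```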