


Chirag Agarwal
2020 – today
- 2025
- [i38] Tarun Ram Menta, Susmit Agrawal, Chirag Agarwal: Analyzing Memorization in Large Language Models through the Lens of Model Attribution. CoRR abs/2501.05078 (2025)
- [i37] Akash Ghosh, Debayan Datta, Sriparna Saha, Chirag Agarwal: The Multilingual Mind: A Survey of Multilingual Reasoning in Language Models. CoRR abs/2502.09457 (2025)
- 2024
- [c30] Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju: On the Trade-offs between Adversarial Robustness and Actionable Explanations. AIES (1) 2024: 784-795
- [c29] Sree Harsha Tanneru, Chirag Agarwal, Himabindu Lakkaraju: Quantifying Uncertainty in Natural Language Explanations of Large Language Models. AISTATS 2024: 1072-1080
- [c28] Tarun Ram Menta, Surgan Jandial, Akash Patil, Saketh Bachu, Vimal K. B., Balaji Krishnamurthy, Vineeth N. Balasubramanian, Mausoom Sarkar, Chirag Agarwal: Active Transferability Estimation. CVPR Workshops 2024: 2659-2670
- [c27] Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju: Understanding the Effects of Iterative Prompting on Truthfulness. ICML 2024
- [c26] Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju: MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models. NeurIPS 2024
- [i36] Chirag Agarwal, Sree Harsha Tanneru, Himabindu Lakkaraju: Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models. CoRR abs/2402.04614 (2024)
- [i35] Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju: Understanding the Effects of Iterative Prompting on Truthfulness. CoRR abs/2402.06625 (2024)
- [i34] Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju: Towards Safe and Aligned Large Language Models for Medicine. CoRR abs/2403.03744 (2024)
- [i33] Sree Harsha Tanneru, Dan Ley, Chirag Agarwal, Himabindu Lakkaraju: On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models. CoRR abs/2406.10625 (2024)
- [i32] Abhinav Java, Simra Shahid, Chirag Agarwal: Towards Operationalizing Right to Data Protection. CoRR abs/2411.08506 (2024)
- [i31] Elita A. Lobo, Chirag Agarwal, Himabindu Lakkaraju: On the Impact of Fine-Tuning on Chain-of-Thought Reasoning. CoRR abs/2411.15382 (2024)
- [i30] Ashish Seth, Dinesh Manocha, Chirag Agarwal: HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models. CoRR abs/2412.20622 (2024)
- 2023
- [c25] Ashish Seth, Mayur Hemani, Chirag Agarwal: DeAR: Debiasing Vision-Language Models with Additive Residuals. CVPR 2023: 6820-6829
- [c24] Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik: GNNDelete: A General Strategy for Unlearning in Graph Neural Networks. ICLR 2023
- [c23] Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian: Explaining RL Decisions with Trajectories. ICLR 2023
- [c22] Michael Llordes, Debasis Ganguly, Sumit Bhatia, Chirag Agarwal: Explain Like I am BM25: Interpreting a Dense Model's Ranked-List with a Sparse Approximation. SIGIR 2023: 1976-1980
- [i29] Tarun Ram Menta, Surgan Jandial, Akash Patil, Vimal KB, Saketh Bachu, Balaji Krishnamurthy, Vineeth N. Balasubramanian, Chirag Agarwal, Mausoom Sarkar: Towards Estimating Transferability using Hard Subsets. CoRR abs/2301.06928 (2023)
- [i28] Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik: GNNDelete: A General Strategy for Unlearning in Graph Neural Networks. CoRR abs/2302.13406 (2023)
- [i27] Ashish Seth, Mayur Hemani, Chirag Agarwal: DeAR: Debiasing Vision-Language Models with Additive Residuals. CoRR abs/2303.10431 (2023)
- [i26] Michael Llordes, Debasis Ganguly, Sumit Bhatia, Chirag Agarwal: Explain like I am BM25: Interpreting a Dense Model's Ranked-List with a Sparse Approximation. CoRR abs/2304.12631 (2023)
- [i25] Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian: Explaining RL Decisions with Trajectories. CoRR abs/2305.04073 (2023)
- [i24] Shripad Vilasrao Deshmukh, Srivatsan R, Supriti Vijay, Jayakumar Subramanian, Chirag Agarwal: Counterfactual Explanation Policies in RL. CoRR abs/2307.13192 (2023)
- [i23] Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Soheil Feizi, Hima Lakkaraju: Certifying LLM Safety against Adversarial Prompting. CoRR abs/2309.02705 (2023)
- [i22] Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju: On the Trade-offs between Adversarial Robustness and Actionable Explanations. CoRR abs/2309.16452 (2023)
- [i21] Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju: Are Large Language Models Post Hoc Explainers? CoRR abs/2310.05797 (2023)
- [i20] Sree Harsha Tanneru, Chirag Agarwal, Himabindu Lakkaraju: Quantifying Uncertainty in Natural Language Explanations of Large Language Models. CoRR abs/2311.03533 (2023)
- 2022
- [c21] Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, Himabindu Lakkaraju: Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis. AISTATS 2022: 4574-4594
- [c20] Chirag Agarwal, Marinka Zitnik, Himabindu Lakkaraju: Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods. AISTATS 2022: 8969-8996
- [c19] Chirag Agarwal, Daniel D'souza, Sara Hooker: Estimating Example Difficulty using Variance of Gradients. CVPR 2022: 10358-10368
- [c18] Valentina Giunchiglia, Chirag Varun Shukla, Guadalupe Gonzalez, Chirag Agarwal: Towards Training GNNs Using Explanation Directed Message Passing. LoG 2022: 28
- [c17] Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju: OpenXAI: Towards a Transparent Evaluation of Model Explanations. NeurIPS 2022
- [i19] Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju: Rethinking Stability for Attribution-based Explanations. CoRR abs/2203.06877 (2022)
- [i18] Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju: OpenXAI: Towards a Transparent Evaluation of Model Explanations. CoRR abs/2206.11104 (2022)
- [i17] Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik: Evaluating Explainability for Graph Neural Networks. CoRR abs/2208.09339 (2022)
- [i16] Valentina Giunchiglia, Chirag Varun Shukla, Guadalupe Gonzalez, Chirag Agarwal: Towards Training GNNs using Explanation Directed Message Passing. CoRR abs/2211.16731 (2022)
- 2021
- [c16] Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, Himabindu Lakkaraju: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations. ICML 2021: 110-119
- [c15] Chirag Agarwal, Shahin Khobahi, Dan Schonfeld, Mojtaba Soltanalian: CoroNet: a deep network architecture for enhanced identification of COVID-19 from chest x-ray images. Computer-Aided Diagnosis 2021
- [c14] Chirag Agarwal, Himabindu Lakkaraju, Marinka Zitnik: Towards a unified framework for fair and stable graph representation learning. UAI 2021: 2114-2124
- [i15] Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations. CoRR abs/2102.10618 (2021)
- [i14] Chirag Agarwal, Himabindu Lakkaraju, Marinka Zitnik: Towards a Unified Framework for Fair and Stable Graph Representation Learning. CoRR abs/2102.13186 (2021)
- [i13] Chirag Agarwal, Marinka Zitnik, Himabindu Lakkaraju: Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations. CoRR abs/2106.09078 (2021)
- [i12] Martin Pawelczyk, Shalmali Joshi, Chirag Agarwal, Sohini Upadhyay, Himabindu Lakkaraju: On the Connections between Counterfactual Explanations and Adversarial Examples. CoRR abs/2106.09992 (2021)
- [i11] Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker: A Tale Of Two Long Tails. CoRR abs/2107.13098 (2021)
- 2020
- [c13] Chirag Agarwal, Anh Nguyen: Explaining Image Classifiers by Removing Input Features Using Generative Models. ACCV (6) 2020: 101-118
- [c12] Naman Bansal, Chirag Agarwal, Anh Nguyen: SAM: The Sensitivity of Attribution Methods to Hyperparameters. CVPR Workshops 2020: 11-21
- [c11] Naman Bansal, Chirag Agarwal, Anh Nguyen: SAM: The Sensitivity of Attribution Methods to Hyperparameters. CVPR 2020: 8670-8680
- [c10] Chirag Agarwal, Shahin Khobahi, Arindam Bose, Mojtaba Soltanalian, Dan Schonfeld: DEEP-URL: A Model-Aware Approach to Blind Deconvolution Based on Deep Unfolded Richardson-Lucy Network. ICIP 2020: 3299-3303
- [i10] Chirag Agarwal, Shahin Khobahi, Arindam Bose, Mojtaba Soltanalian, Dan Schonfeld: Deep-URL: A Model-Aware Approach To Blind Deconvolution Based On Deep Unfolded Richardson-Lucy Network. CoRR abs/2002.01053 (2020)
- [i9] Naman Bansal, Chirag Agarwal, Anh Nguyen: SAM: The Sensitivity of Attribution Methods to Hyperparameters. CoRR abs/2003.08754 (2020)
- [i8] Chirag Agarwal, Peijie Chen, Anh Nguyen: Intriguing generalization and simplicity of adversarially trained neural networks. CoRR abs/2006.09373 (2020)
- [i7] Chirag Agarwal, Sara Hooker: Estimating Example Difficulty using Variance of Gradients. CoRR abs/2008.11600 (2020)
2010 – 2019
- 2019
- [c9] Chirag Agarwal, Anh Nguyen, Dan Schonfeld: Improving Robustness to Adversarial Examples by Encouraging Discriminative Features. ICIP 2019: 3801-3805
- [c8] Mohammed Aloraini, Mehdi Sharifzadeh, Chirag Agarwal, Dan Schonfeld: Statistical Sequential Analysis for Object-based Video Forgery Detection. Media Watermarking, Security, and Forensics 2019
- [i6] Chirag Agarwal, Dan Schonfeld, Anh Nguyen: Removing input features via a generative model to explain their attributions to classifier's decisions. CoRR abs/1910.04256 (2019)
- 2018
- [c7] Nivedita Khobragade, Chirag Agarwal: Multi-class segmentation of neuronal electron microscopy images using deep learning. Image Processing 2018: 105742W
- [c6] Chirag Agarwal, Mehdi Sharifzadeh, Dan Schonfeld: CrossEncoders: A complex neural network compression framework. Visual Information Processing and Communication 2018
- [i5] Chirag Agarwal, Bo Dong, Dan Schonfeld, Anthony Hoogs: An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks. CoRR abs/1806.01477 (2018)
- [i4] Chirag Agarwal, Anh Nguyen, Dan Schonfeld: Improving Adversarial Robustness by Encouraging Discriminative Features. CoRR abs/1811.00621 (2018)
- 2017
- [c5] Ahmed H. Dallal, Chirag Agarwal, Mohammad R. Arbabshirani, Aalpen Patel, Gregory J. Moore: Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images. Computer-Aided Diagnosis 2017: 101340K
- [c4] Chirag Agarwal, Ahmed H. Dallal, Mohammad R. Arbabshirani, Aalpen Patel, Gregory J. Moore: Unsupervised quantification of abdominal fat from CT images using Greedy Snakes. Image Processing 2017: 101332T
- [c3] Mohammad R. Arbabshirani, Ahmed H. Dallal, Chirag Agarwal, Aalpen Patel, Gregory J. Moore: Accurate segmentation of lung fields on chest radiographs using deep convolutional networks. Image Processing 2017: 1013305
- [c2] Mehdi Sharifzadeh, Chirag Agarwal, Mohammed Aloraini, Dan Schonfeld: Convolutional neural network steganalysis's application to steganography. VCIP 2017: 1-4
- [i3] Chirag Agarwal, Mehdi Sharifzadeh, Joe Klobusicky, Dan Schonfeld: CrossNets: A New Approach to Complex Learning. CoRR abs/1705.07404 (2017)
- [i2] Mehdi Sharifzadeh, Chirag Agarwal, Mahdi Salarian, Dan Schonfeld: A New Parallel Message-distribution Technique for Cost-based Steganography. CoRR abs/1705.08616 (2017)
- [i1] Mehdi Sharifzadeh, Chirag Agarwal, Mohammed Aloraini, Dan Schonfeld: Convolutional Neural Network Steganalysis's Application to Steganography. CoRR abs/1711.02581 (2017)
- 2015
- [c1] Chirag Agarwal, Paul Hylander, Yogesh Mahajan, Jonathan Michelson, Vigyan Singhal: Compositional Reasoning Gotchas in Practice. FMCAD 2015: 17-24
Coauthor Index
- Himabindu Lakkaraju (aka: Hima Lakkaraju)

last updated on 2025-03-13 20:20 CET by the dblp team
all metadata released as open data under CC0 1.0 license