Sarath Chandar
Person information
- affiliation: University of Montreal, Department of Computer Science and Operations Research, Canada
2020 – today
- 2025
- [i85]Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar:
Do Large Language Models Know How Much They Know? CoRR abs/2502.19573 (2025) - [i84]Lola Le Breton, Quentin Fournier, Mariam El Mezouar, Sarath Chandar:
NeoBERT: A Next-Generation BERT. CoRR abs/2502.19587 (2025) - 2024
- [j7]Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar:
Promoting Exploration in Memory-Augmented Adam using Critical Momenta. Trans. Mach. Learn. Res. 2024 (2024) - [c58]Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar:
Fairness-Aware Structured Pruning in Transformers. AAAI 2024: 22484-22492 - [c57]Andreas Madsen, Sarath Chandar, Siva Reddy:
Are self-explanations from Large Language Models faithful? ACL (Findings) 2024: 295-337 - [c56]Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar:
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques. ACL (1) 2024: 5732-5745 - [c55]Abdelrahman Zayed, Gonçalo Mordido, Ioana Baldini, Sarath Chandar:
Why Don't Prompt-Based Fairness Metrics Correlate? ACL (1) 2024: 9002-9019 - [c54]Louis Clouâtre, Amal Zouaq, Sarath Chandar:
MVP: Minimal Viable Phrase for Long Text Understanding. LREC/COLING 2024: 12016-12026 - [c53]Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté:
Sub-goal Distillation: A Method to Improve Small Language Agents. CoLLAs 2024: 1053-1075 - [c52]Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar:
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models. EMNLP 2024: 5817-5830 - [c51]Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar:
Do Large Language Models Know How Much They Know? EMNLP 2024: 6054-6070 - [c50]Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, Sarath Chandar:
Exploring Quantization for Efficient Pre-Training of Transformer Language Models. EMNLP (Findings) 2024: 13473-13487 - [c49]Darshan Patil, Janarthanan Rajendran, Glen Berseth, Sarath Chandar:
Intelligent Switching for Reset-Free RL. ICLR 2024 - [c48]Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar:
Mastering Memory Tasks with World Models. ICLR 2024 - [c47]Andreas Madsen, Siva Reddy, Sarath Chandar:
Faithfulness Measurable Masked Language Models. ICML 2024 - [c46]Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar:
Lookbehind-SAM: k steps back, 1 step forward. ICML 2024 - [c45]Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, Mojtaba Faramarzi, Sarath Chandar, Bram Adams:
On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics - Empirical Study on Brown Build and Risk Prediction. ICSE-SEIP 2024: 275-286 - [c44]Rached Bouchoucha, Ahmed Haj Yahmed, Darshan Patil, Janarthanan Rajendran, Amin Nikanjam, Sarath Chandar, Foutse Khomh:
Toward Debugging Deep Reinforcement Learning Programs with RLExplorer. ICSME 2024: 87-99 - [c43]Matthew Riemer, Khimya Khetarpal, Janarthanan Rajendran, Sarath Chandar:
Balancing Context Length and Mixing Times for Reinforcement Learning at Scale. NeurIPS 2024 - [e3]Vincenzo Lomonaco, Stefano Melacci, Tinne Tuytelaars, Sarath Chandar, Razvan Pascanu:
Conference on Lifelong Learning Agents, 29 July - 1 August 2024, University of Pisa, Pisa, Italy. Proceedings of Machine Learning Research 274, PMLR 2024 [contents] - [i83]Andreas Madsen, Sarath Chandar, Siva Reddy:
Are self-explanations from Large Language Models faithful? CoRR abs/2401.07927 (2024) - [i82]Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar:
Mastering Memory Tasks with World Models. CoRR abs/2403.04253 (2024) - [i81]Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar:
Towards Practical Tool Usage for Continually Learning LLMs. CoRR abs/2404.09339 (2024) - [i80]Darshan Patil, Janarthanan Rajendran, Glen Berseth, Sarath Chandar:
Intelligent Switching for Reset-Free RL. CoRR abs/2405.01684 (2024) - [i79]Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté:
Sub-goal Distillation: A Method to Improve Small Language Agents. CoRR abs/2405.02749 (2024) - [i78]Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, Sarath Chandar:
Interpretability Needs a New Paradigm. CoRR abs/2405.05386 (2024) - [i77]Pranshu Malviya, Jerry Huang, Quentin Fournier, Sarath Chandar:
Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective. CoRR abs/2405.15895 (2024) - [i76]Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar, Alex Zhavoronkov:
BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning. CoRR abs/2406.03686 (2024) - [i75]Megh Thakkar, Quentin Fournier, Matthew D. Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar:
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques. CoRR abs/2406.04879 (2024) - [i74]Abdelrahman Zayed, Gonçalo Mordido, Ioana Baldini, Sarath Chandar:
Why Don't Prompt-Based Fairness Metrics Correlate? CoRR abs/2406.05918 (2024) - [i73]Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, Sarath Chandar:
Exploring Quantization for Efficient Pre-Training of Transformer Language Models. CoRR abs/2407.11722 (2024) - [i72]Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar:
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models. CoRR abs/2408.08470 (2024) - [i71]Rached Bouchoucha, Ahmed Haj Yahmed, Darshan Patil, Janarthanan Rajendran, Amin Nikanjam, Sarath Chandar, Foutse Khomh:
Toward Debugging Deep Reinforcement Learning Programs with RLExplorer. CoRR abs/2410.04322 (2024) - [i70]Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Boxing Chen, Sarath Chandar:
Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination. CoRR abs/2410.17477 (2024) - [i69]Megh Thakkar, Yash More, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar:
Combining Domain and Alignment Vectors to Achieve Better Knowledge-Safety Trade-offs in LLMs. CoRR abs/2411.06824 (2024) - [i68]Mohammad Reza Samsami, Mats Leon Richter, Juan Rodriguez, Megh Thakkar, Sarath Chandar, Maxime Gasse:
Too Big to Fool: Resisting Deception in Language Models. CoRR abs/2412.10558 (2024) - [i67]Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Gintare Karolina Dziugaite, Razvan Pascanu, Sarath Chandar:
Torque-Aware Momentum. CoRR abs/2412.18790 (2024) - 2023
- [j6]Andreas Madsen, Siva Reddy, Sarath Chandar:
Post-hoc Interpretability for Neural NLP: A Survey. ACM Comput. Surv. 55(8): 155:1-155:42 (2023) - [j5]Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell:
An Empirical Investigation of the Role of Pre-training in Lifelong Learning. J. Mach. Learn. Res. 24: 214:1-214:50 (2023) - [c42]Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar:
Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness. AAAI 2023: 14593-14601 - [c41]Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar:
Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning. CoLLAs 2023: 21-42 - [c40]Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar:
Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning. CoLLAs 2023: 376-398 - [c39]Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, Sarath Chandar:
Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi. CoLLAs 2023: 861-877 - [c38]Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, Partha Talukdar:
Self-Influence Guided Data Reweighting for Language Model Pre-training. EMNLP 2023: 2033-2045 - [c37]Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar:
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. EMNLP (Findings) 2023: 4305-4319 - [c36]Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar:
EpiK-Eval: Evaluation for Language Models as Epistemic Models. EMNLP 2023: 9523-9557 - [c35]Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran:
Conditionally optimistic exploration for cooperative deep multi-agent reinforcement learning. UAI 2023: 2529-2540 - [e2]Sarath Chandar, Razvan Pascanu, Hanie Sedghi, Doina Precup:
Conference on Lifelong Learning Agents, 22-25 August 2023, McGill University, Montréal, Québec, Canada. Proceedings of Machine Learning Research 232, PMLR 2023 [contents] - [i66]Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar:
Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning. CoRR abs/2302.02792 (2023) - [i65]Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar:
Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning. CoRR abs/2303.08690 (2023) - [i64]Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran:
Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning. CoRR abs/2303.09032 (2023) - [i63]Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, Mojtaba Faramarzi, Sarath Chandar, Bram Adams:
Towards Lifelong Learning for Software Analytics Models: Empirical Study on Brown Build and Risk Prediction. CoRR abs/2305.09824 (2023) - [i62]Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar:
Should We Attend More or Less? Modulating Attention for Fairness. CoRR abs/2305.13088 (2023) - [i61]Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar:
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. CoRR abs/2305.14775 (2023) - [i60]Jarrid Rector-Brooks, Kanika Madan, Moksh Jain, Maksym Korablyov, Cheng-Hao Liu, Sarath Chandar, Nikolay Malkin, Yoshua Bengio:
Thompson sampling for improved exploration in GFlowNets. CoRR abs/2306.17693 (2023) - [i59]Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar:
Promoting Exploration in Memory-Augmented Adam using Critical Momenta. CoRR abs/2307.09638 (2023) - [i58]Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar:
Lookbehind Optimizer: k steps back, 1 step forward. CoRR abs/2307.16704 (2023) - [i57]Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, Sarath Chandar:
Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi. CoRR abs/2308.10284 (2023) - [i56]Andreas Madsen, Siva Reddy, Sarath Chandar:
Faithfulness Measurable Masked Language Models. CoRR abs/2310.07819 (2023) - [i55]Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar:
EpiK-Eval: Evaluation for Language Models as Epistemic Models. CoRR abs/2310.15372 (2023) - [i54]Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, Partha Talukdar:
Self-Influence Guided Data Reweighting for Language Model Pre-training. CoRR abs/2311.00913 (2023) - [i53]Arjun Vaithilingam Sudhakar, Prasanna Parthasarathi, Janarthanan Rajendran, Sarath Chandar:
Language Model-In-The-Loop: Data Optimal Approach to Learn-To-Recommend Actions in Text Games. CoRR abs/2311.07687 (2023) - [i52]Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar:
Fairness-Aware Structured Pruning in Transformers. CoRR abs/2312.15398 (2023) - 2022
- [c34]Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar:
PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks. AAAI 2022: 589-597 - [c33]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Local Structure Matters Most: Perturbation Study in NLU. ACL (Findings) 2022: 3712-3731 - [c32]Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar:
Improving Meta-Learning Generalization with Activation-Based Early-Stopping. CoLLAs 2022: 213-230 - [c31]Pranshu Malviya, Balaraman Ravindran, Sarath Chandar:
TAG: Task-based Accumulated Gradients for Lifelong learning. CoLLAs 2022: 366-389 - [c30]Daphné Lafleur, Sarath Chandar, Gilles Pesant:
Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints. CP 2022: 30:1-30:16 - [c29]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. EMNLP (Findings) 2022: 5375-5396 - [c28]Paul-Aymeric Martin McRae, Prasanna Parthasarathi, Mido Assran, Sarath Chandar:
Memory Augmented Optimizers for Deep Learning. ICLR 2022 - [c27]Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen:
Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods. ICML 2022: 22536-22561 - [c26]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Local Structure Matters Most in Most Languages. AACL/IJCNLP (2) 2022: 285-294 - [e1]Sarath Chandar, Razvan Pascanu, Doina Precup:
Conference on Lifelong Learning Agents, CoLLAs 2022, 22-24 August 2022, McGill University, Montréal, Québec, Canada. Proceedings of Machine Learning Research 199, PMLR 2022 [contents] - [i51]Amir Ardalan Kalantari, Mohammad Amini, Sarath Chandar, Doina Precup:
Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers. CoRR abs/2202.00710 (2022) - [i50]Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen:
Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods. CoRR abs/2204.11464 (2022) - [i49]Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed A. Abdelsalam, Janarthanan Rajendran, Sarath Chandar:
An Introduction to Lifelong Supervised Learning. CoRR abs/2207.04354 (2022) - [i48]Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar:
Improving Meta-Learning Generalization with Activation-Based Early-Stopping. CoRR abs/2208.02377 (2022) - [i47]Enamundram Naga Karthik, Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, Sarath Chandar, Julien Cohen-Adad:
Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch? CoRR abs/2210.15091 (2022) - [i46]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. CoRR abs/2211.05015 (2022) - [i45]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Local Structure Matters Most in Most Languages. CoRR abs/2211.05025 (2022) - [i44]Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar:
Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness. CoRR abs/2211.11109 (2022) - [i43]Gonçalo Mordido, Sarath Chandar, François Leduc-Primeau:
Sharpness-Aware Training for Accurate Inference on Noisy DNN Accelerators. CoRR abs/2211.11561 (2022) - [i42]Gabriele Prato, Yale Song, Janarthanan Rajendran, R. Devon Hjelm, Neel Joshi, Sarath Chandar:
PatchBlender: A Motion Prior for Video Transformers. CoRR abs/2211.14449 (2022) - 2021
- [c25]Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, Sarath Chandar:
Towered Actor Critic For Handling Multiple Action Types In Reinforcement Learning For Drug Discovery. AAAI 2021: 142-150 - [c24]Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy:
A Survey of Data Augmentation Approaches for NLP. ACL/IJCNLP (Findings) 2021: 968-988 - [c23]Louis Clouâtre, Philippe Trempe, Amal Zouaq, Sarath Chandar:
MLMLM: Link Prediction with Mean Likelihood Masked Language Model. ACL/IJCNLP (Findings) 2021: 4321-4331 - [c22]Mohamed A. Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar:
IIRC: Incremental Implicitly-Refined Classification. CVPR 2021: 11038-11047 - [c21]Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron C. Courville, Sarath Chandar:
Continuous Coordination As a Realistic Scenario for Lifelong Learning. ICML 2021: 8016-8024 - [c20]Prasanna Parthasarathi, Mohamed A. Abdelsalam, Sarath Chandar, Joelle Pineau:
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss. SIGDIAL 2021: 469-476 - [c19]Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar:
Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task ? SIGDIAL 2021: 477-488 - [i41]Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron C. Courville, Sarath Chandar:
Continuous Coordination As a Realistic Scenario for Lifelong Learning. CoRR abs/2103.03216 (2021) - [i40]Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy:
A Survey of Data Augmentation Approaches for NLP. CoRR abs/2105.03075 (2021) - [i39]Pranshu Malviya, Balaraman Ravindran, Sarath Chandar:
TAG: Task-based Accumulated Gradients for Lifelong learning. CoRR abs/2105.05155 (2021) - [i38]Prasanna Parthasarathi, Mohamed A. Abdelsalam, Joelle Pineau, Sarath Chandar:
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss. CoRR abs/2106.10619 (2021) - [i37]Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar:
Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task ? CoRR abs/2106.10622 (2021) - [i36]Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar:
Memory Augmented Optimizers for Deep Learning. CoRR abs/2106.10708 (2021) - [i35]Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar:
Demystifying Neural Language Models' Insensitivity to Word-Order. CoRR abs/2107.13955 (2021) - [i34]Andreas Madsen, Siva Reddy, Sarath Chandar:
Post-hoc Interpretability for Neural NLP: A Survey. CoRR abs/2108.04840 (2021) - [i33]Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish, Sarath Chandar:
Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers. CoRR abs/2110.06990 (2021) - [i32]Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell:
An Empirical Investigation of the Role of Pre-training in Lifelong Learning. CoRR abs/2112.09153 (2021) - 2020
- [j4]Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, Michael Bowling:
The Hanabi challenge: A new frontier for AI research. Artif. Intell. 280: 103216 (2020) - [j3]Shagun Sodhani, Sarath Chandar, Yoshua Bengio:
Toward Training Recurrent Neural Networks for Lifelong Learning. Neural Comput. 32(1): 1-35 (2020) - [c18]Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Simon Blackburn, Karam M. J. Thomas, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio:
Learning to Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning. ICML 2020: 3668-3679 - [c17]Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar:
The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning. NeurIPS 2020 - [i31]Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio:
Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning. CoRR abs/2004.12485 (2020) - [i30]Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar:
PatchUp: A Regularization Technique for Convolutional Neural Networks. CoRR abs/2006.07794 (2020) - [i29]Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar:
The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning. CoRR abs/2007.03158 (2020) - [i28]Evan Racah, Sarath Chandar:
Slot Contrastive Networks: A Contrastive Approach for Representing Objects. CoRR abs/2007.09294 (2020) - [i27]Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar:
How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics. CoRR abs/2008.10427 (2020) - [i26]Louis Clouâtre, Philippe Trempe, Amal Zouaq, Sarath Chandar:
MLMLM: Link Prediction with Mean Likelihood Masked Language Model. CoRR abs/2009.07058 (2020) - [i25]Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar:
Maximum Reward Formulation In Reinforcement Learning. CoRR abs/2010.03744 (2020) - [i24]Mohamed A. Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar:
IIRC: Incremental Implicitly-Refined Classification. CoRR abs/2012.12477 (2020)
2010 – 2019
- 2019
- [c16]Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio:
Towards Non-Saturating Recurrent Units for Modelling Long-Term Dependencies. AAAI 2019: 3280-3287 - [c15]Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio:
Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. ACL (1) 2019: 32-37 - [c14]Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp:
Towards Lossless Encoding of Sentences. ACL (1) 2019: 1577-1583 - [c13]Vardaan Pahuja, Jie Fu, Sarath Chandar, Christopher Joseph Pal:
Structure Learning for Neural Module Networks. LANTERN@EMNLP-IJCNLP 2019: 1-10 - [c12]Revanth Reddy, Sarath Chandar, Balaraman Ravindran:
Edge Replacement Grammars : A Formal Language Approach for Generating Graphs. SDM 2019: 351-359 - [i23]Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, Michael Bowling:
The Hanabi Challenge: A New Frontier for AI Research. CoRR abs/1902.00506 (2019) - [i22]Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio:
Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies. CoRR abs/1902.06704 (2019) - [i21]Revanth Reddy, Sarath Chandar, Balaraman Ravindran:
Edge Replacement Grammars: A Formal Language Approach for Generating Graphs. CoRR abs/1902.07159 (2019) - [i20]Vardaan Pahuja, Jie Fu, Sarath Chandar, Christopher J. Pal:
Structure Learning for Neural Module Networks. CoRR abs/1905.11532 (2019) - [i19]Chinnadhurai Sankar, Sandeep Subramanian, Christopher J. Pal, Sarath Chandar, Yoshua Bengio:
Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. CoRR abs/1906.01603 (2019) - [i18]Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp:
Towards Lossless Encoding of Sentences. CoRR abs/1906.01659 (2019) - 2018
- [j2]Çaglar Gülçehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio:
Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes. Neural Comput. 30(4) (2018) - [c11]Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar:
Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. AAAI 2018: 705-713 - [i17]Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeswar, Alexandre de Brébisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, Yoshua Bengio:
A Deep Reinforcement Learning Chatbot (Short Version). CoRR abs/1801.06700 (2018) - [i16]Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar:
Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. CoRR abs/1801.10314 (2018) - [i15]Ghulam Ahmed Ansari, Sagar J. P, Sarath Chandar, Balaraman Ravindran:
Language Expansion In Text-Based Games. CoRR abs/1805.07274 (2018) - [i14]Shagun Sodhani, Sarath Chandar, Yoshua Bengio:
On Training Recurrent Neural Networks for Lifelong Learning. CoRR abs/1811.07017 (2018) - [i13]Khimya Khetarpal, Shagun Sodhani, Sarath Chandar, Doina Precup:
Environments for Lifelong Reinforcement Learning. CoRR abs/1811.10732 (2018) - 2017
- [c10]Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron C. Courville:
GuessWhat?! Visual Object Discovery through Multi-modal Dialogue. CVPR 2017: 4466-4475 - [i12]Çaglar Gülçehre, Sarath Chandar, Yoshua Bengio:
Memory Augmented Neural Networks with Wormhole Connections. CoRR abs/1701.08718 (2017) - [i11]Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Mudumba, Alexandre de Brébisson, Jose Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, Yoshua Bengio:
A Deep Reinforcement Learning Chatbot. CoRR abs/1709.02349 (2017) - 2016
- [j1]Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran:
Correlational Neural Networks. Neural Comput. 28(2): 257-285 (2016) - [c9]Iulian Vlad Serban, Alberto García-Durán, Çaglar Gülçehre, Sungjin Ahn, Sarath Chandar, Aaron C. Courville, Yoshua Bengio:
Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. ACL (1) 2016 - [c8]Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho:
A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation. COLING 2016: 109-118 - [c7]Mitesh M. Khapra, Sarath Chandar:
Multilingual Multimodal Language Processing Using Neural Networks. HLT-NAACL Tutorials 2016: 6-7 - [c6]Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran
:
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning. HLT-NAACL 2016: 171-181 - [i10]Iulian Vlad Serban, Alberto García-Durán, Çaglar Gülçehre, Sungjin Ahn, Sarath Chandar, Aaron C. Courville, Yoshua Bengio:
Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. CoRR abs/1603.06807 (2016) - [i9]Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio:
Hierarchical Memory Networks. CoRR abs/1605.07427 (2016) - [i8]Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho:
A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation. CoRR abs/1606.04754 (2016) - [i7]Çaglar Gülçehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio:
Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes. CoRR abs/1607.00036 (2016) - [i6]Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron C. Courville:
GuessWhat?! Visual object discovery through multi-modal dialogue. CoRR abs/1611.08481 (2016) - 2015
- [c5]Subendhu Rongali, A. P. Sarath Chandar, Balaraman Ravindran:
From multiple views to single view: a neural network approach. CODS 2015: 104-109 - [i5]Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran:
Correlational Neural Networks. CoRR abs/1504.07225 (2015) - [i4]Sridhar Mahadevan, Sarath Chandar:
Reasoning about Linguistic Regularities in Word Embeddings using Matrix Manifolds. CoRR abs/1507.07636 (2015) - [i3]P. Prasanna, Sarath Chandar, Balaraman Ravindran:
TSEB: More Efficient Thompson Sampling for Policy Learning. CoRR abs/1510.02874 (2015) - [i2]Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran:
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning. CoRR abs/1510.03519 (2015) - 2014
- [c4]A. P. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, Amrita Saha:
An Autoencoder Approach to Learning Bilingual Word Representations. NIPS 2014: 1853-1861 - [i1]A. P. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, Amrita Saha:
An Autoencoder Approach to Learning Bilingual Word Representations. CoRR abs/1402.1454 (2014) - 2011
- [c3]Susan Elias, A. P. Sarath Chandar, Kamala Krithivasan, S. V. Raghavan:
An Adaptive e-Learning Environment Using Distributed Spiking Neural P Systems. T4E 2011: 56-60 - 2010
- [c2]A. P. Sarath Chandar, S. G. Dheeban, Deepak V, Susan Elias:
Personalized e-course composition approach using digital pheromones in improved particle swarm optimization. ICNC 2010: 2677-2681 - [c1]A. P. Sarath Chandar, S. Arun Balaji, G. Venkatesh, Susan Elias:
CDPN: Communicating Dynamic Petri Net for Adaptive Multimedia Presentation. ICT 2010: 165-170