Sparks of artificial general intelligence: Early experiments with GPT-4 | S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ... | arXiv preprint arXiv:2303.12712, 2023 | 3592 | 2023
Machine unlearning | L Bourtoule, V Chandrasekaran, CA Choquette-Choo, H Jia, A Travers, ... | 2021 IEEE Symposium on Security and Privacy (SP), 141-159, 2021 | 854 | 2021
Entangled watermarks as a defense against model extraction | H Jia, CA Choquette-Choo, V Chandrasekaran, N Papernot | 30th USENIX Security Symposium (USENIX Security 21), 1937-1954, 2021 | 280 | 2021
Exploring connections between active learning and model extraction | V Chandrasekaran, K Chaudhuri, I Giacomelli, S Jha, S Yan | 29th USENIX Security Symposium (USENIX Security 20), 1309-1326, 2020 | 185 | 2020
Unrolling SGD: Understanding factors influencing machine unlearning | A Thudi, G Deza, V Chandrasekaran, N Papernot | 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), 303-319, 2022 | 150 | 2022
On the effectiveness of mitigating data poisoning attacks with gradient shaping | S Hong, V Chandrasekaran, Y Kaya, T Dumitraş, N Papernot | arXiv preprint arXiv:2002.11497, 2020 | 149 | 2020
Proof-of-learning: Definitions and practice | H Jia, M Yaghini, CA Choquette-Choo, N Dullerud, A Thudi, ... | 2021 IEEE Symposium on Security and Privacy (SP), 1039-1056, 2021 | 106 | 2021
Face-off: Adversarial face obfuscation | V Chandrasekaran, C Gao, B Tang, K Fawaz, S Jha, S Banerjee | arXiv preprint arXiv:2003.08861, 2020 | 51 | 2020
A general framework for detecting anomalous inputs to DNN classifiers | J Raghuram, V Chandrasekaran, S Jha, S Banerjee | International Conference on Machine Learning, 8764-8775, 2021 | 45* | 2021
Powercut and obfuscator: An exploration of the design space for privacy-preserving interventions for voice assistants | V Chandrasekaran, S Banerjee, B Mutlu, K Fawaz | arXiv preprint arXiv:1812.00263, 2018 | 40* | 2018
Traversing the quagmire that is privacy in your smart home | C Gao, V Chandrasekaran, K Fawaz, S Banerjee | Proceedings of the 2018 Workshop on IoT Security and Privacy, 22-28, 2018 | 35 | 2018
Analyzing and improving neural networks by generating semantic counterexamples through differentiable rendering | L Jain, V Chandrasekaran, U Jang, W Wu, A Lee, A Yan, S Chen, S Jha, ... | arXiv preprint arXiv:1910.00727, 2019 | 32* | 2019
Attention satisfies: A constraint-satisfaction lens on factual errors of language models | M Yuksekgonul, V Chandrasekaran, E Jones, S Gunasekar, R Naik, ... | arXiv preprint arXiv:2309.15098, 2023 | 30 | 2023
Proof-of-learning is currently more broken than you think | C Fang, H Jia, A Thudi, M Yaghini, CA Choquette-Choo, N Dullerud, ... | 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 797-816, 2023 | 24* | 2023
SoK: Machine learning governance | V Chandrasekaran, H Jia, A Thudi, A Travers, M Yaghini, N Papernot | arXiv preprint arXiv:2109.10870, 2021 | 24 | 2021
Teaching language models to hallucinate less with synthetic tasks | E Jones, H Palangi, C Simões, V Chandrasekaran, S Mukherjee, A Mitra, ... | arXiv preprint arXiv:2310.06827, 2023 | 23 | 2023
Verifiable and provably secure machine unlearning | T Eisenhofer, D Riepel, V Chandrasekaran, E Ghosh, O Ohrimenko, ... | arXiv preprint arXiv:2210.09126, 2022 | 22 | 2022
A framework for analyzing spectrum characteristics in large spatio-temporal scales | Y Zeng, V Chandrasekaran, S Banerjee, D Giustiniano | The 25th Annual International Conference on Mobile Computing and Networking …, 2019 | 21 | 2019
Confidant: A privacy controller for social robots | B Tang, D Sullivan, B Cagiltay, V Chandrasekaran, K Fawaz, B Mutlu | 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2022 | 17 | 2022
Diversity of thought improves reasoning abilities of large language models | R Naik, V Chandrasekaran, M Yuksekgonul, H Palangi, B Nushi | arXiv preprint arXiv:2310.07088, 2023 | 11 | 2023