Haoran Li 李浩然, Ph.D.
Computer Science
I have just passed my thesis examination and will be conferred a Ph.D. degree in Computer Science by the Hong Kong University of Science and Technology on 31 January 2025. I am fortunate to be advised by Prof. Yangqiu Song during my Ph.D. studies. I obtained my B.S. degree in Computer Science (Math-CS track) from the Hong Kong University of Science and Technology in 2020.
I was an intern in the LLM Alignment team at MiniMax from July to November 2024, where my task was to implement the instruction hierarchy for their models with enhanced robustness against prompt injection attacks. In addition, I was a research intern in the Toutiao AI Lab, ByteDance, working on NLP research during the summer of 2022. My research interests mainly focus on privacy studies in NLP, including:
I am actively looking for postdoc positions and collaborations on LLM privacy and safety. If you are interested in these topics, feel free to reach out via email.
Simulate and Eliminate: Revoke Backdoors for Generative Large Language Models
Haoran Li*, Yulin Chen*, Zihao Zheng, Qi Hu, Chunkit Chan, Heshan Liu, Yangqiu Song. AAAI 2025. [ Code ] [ Paper ] In this paper, we present Simulate and Eliminate (SANDE) to erase undesired backdoored mappings from generative LLMs. Unlike other works that assume access to cleanly trained models, our safety-enhanced LLMs are able to revoke backdoors without any reference.
GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory
Wei Fan, Haoran Li*†, Zheye Deng, Weiqi Wang, Yangqiu Song († denotes the Corresponding Author). EMNLP 2024 Outstanding Paper Award. [ Code ] [ Paper ] In this paper, we introduce a novel framework, GOLDCOIN, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA) to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world.
PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
Haoran Li*, Dadi Guo*, Donghao Li*, Wei Fan, Qi Hu, Xin Liu, Chunkit Chan, Duanyi Yao, Yuan Yao, Yangqiu Song. ACL 2024 (Oral). [ Code ] [ Paper ] In this paper, we present PrivLM-Bench, a multi-perspective privacy evaluation benchmark to empirically and intuitively quantify the privacy leakage of LMs. Instead of only reporting DP parameters, PrivLM-Bench sheds light on the neglected inference data privacy during actual usage. In addition, we integrate PEFT into PrivLM-Bench.
Privacy-Preserving Neural Graph Databases
Qi Hu, Haoran Li, Jiaxin Bai, Yangqiu Song. KDD 2024. [ Code ] [ Paper ] The use of neural embedding storage and complex neural logical query answering (CQA) provides neural graph databases (NGDBs) with generalization ability. When the graph is incomplete, neural graph databases can extract latent patterns and representations to fill gaps in the graph structure, revealing hidden relationships and enabling accurate query answering.
Multi-step Jailbreaking Privacy Attacks on ChatGPT
Haoran Li*, Dadi Guo*, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, Yangqiu Song. Findings of EMNLP 2023. [ Code ] [ Paper ] In this paper, we study the privacy threats from OpenAI's model APIs and New Bing enhanced by ChatGPT, and show that application-integrated LLMs may cause more severe privacy threats than ever before.
Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence
Haoran Li, Mingshi Xu, Yangqiu Song. Findings of ACL 2023. [ Code ] [ Paper ] In this work, we further investigate the information leakage issue and propose a generative embedding inversion attack (GEIA) that aims to reconstruct input sequences based only on their sentence embeddings.
You Don't Know My Favorite Color: Preventing Dialogue Representations from Revealing Speakers' Private Personas
Haoran Li, Yangqiu Song, Lixin Fan. NAACL 2022 (Oral). [ Code ] [ Paper ] We investigate the privacy leakage of the hidden states of chatbots trained by language modeling, which has not yet been well studied. We show that speakers' personas can be inferred through a simple neural network with high accuracy. To this end, we propose effective defense objectives to protect persona leakage from hidden states.
FedAssistant: Dialog Agents with Two-side Modeling
Haoran Li*, Ying Su*, Qi Hu, Jiaxin Bai, Yilun Jin, Yangqiu Song. FL-IJCAI'22. [ Code ] [ Paper ] We propose a framework named FedAssistant for training neural dialog systems in a federated learning setting. Our framework can be trained across multiple data owners with no raw data leakage during training and inference. (Code and paper will appear later.)
Differentially Private Federated Knowledge Graphs Embedding
Hao Peng*, Haoran Li*, Yangqiu Song, Vincent Zheng, Jianxin Li. CIKM 2021 (Oral). [ Code ] [ Paper ] We propose a novel decentralized scalable learning framework, Federated Knowledge Graphs Embedding (FKGE), where embeddings from different knowledge graphs can be learned in an asynchronous and peer-to-peer manner while being privacy-preserving.
Self-supervised Dance Video Synthesis Conditioned on Music
Xuanchi Ren, Haoran Li, Zijian Huang, Qifeng Chen. ACM International Conference on Multimedia (ACM MM), 2020 (Oral). [ Code ] [ Paper ] Undergraduate Final Year Project.
Frequent reviewer for ACL Rolling Review (ARR)
Reviewer at KDD 2023, KDD 2024
Reviewer at NeurIPS 2024
Reviewer at ICLR 2025
Reviewer at AAAI 2025