Squeeze-Loss: A Utility-Free Defense Against Membership Inference Attacks

Y Zhang, H Yan, G Lin, S Peng, Z Zhang, Y Wang
International Symposium on Security and Privacy in Social Networks and Big Data, 2022, Springer
Abstract
Membership inference attacks infer whether a data sample was part of the target model's training set using only limited adversary knowledge, resulting in serious privacy leakage. Many recent studies have shown that model overfitting is one of the main reasons membership inference attacks succeed. Consequently, classic techniques for mitigating overfitting, such as dropout, spatial dropout, and differential privacy, have been used to defend against membership inference attacks. However, these defenses struggle to achieve an acceptable trade-off between defense success rate and model utility. In this paper, we focus on the impact of the model's training loss on overfitting, and we design a Squeeze-Loss strategy that dynamically finds the training loss achieving the best balance between model utility and privacy. Extensive experimental results show that our strategy limits the success rate of membership inference attacks to the level of random guessing with almost no loss of model utility, consistently outperforming other defense methods.
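
The abstract does not specify how the training loss is "squeezed." The following is a minimal, hypothetical sketch assuming the idea resembles clamping the training loss around a target level (in the spirit of loss flooding), so that the loss cannot collapse toward zero and over-memorize training samples. The names squeeze_loss, target_loss, and train_one_epoch are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: the paper's abstract does not give the algorithm.
# Assumption: keep the effective training loss near a target level so the
# model cannot drive its loss to ~0 on training data (a symptom of the
# overfitting that enables membership inference).
import torch
import torch.nn as nn

def squeeze_loss(raw_loss: torch.Tensor, target_loss: float) -> torch.Tensor:
    # Below the target, gradients push the loss back up toward `target_loss`;
    # above it, training proceeds as usual.
    return (raw_loss - target_loss).abs() + target_loss

def train_one_epoch(model, loader, optimizer, target_loss=0.3, device="cpu"):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for inputs, labels in loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        raw_loss = criterion(model(inputs), labels)
        loss = squeeze_loss(raw_loss, target_loss)  # hypothetical regularizer
        loss.backward()
        optimizer.step()
```

Per the abstract, the strategy finds the balancing loss level dynamically; this sketch fixes target_loss as a constant only for simplicity.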