Deepmarks: A secure fingerprinting framework for digital rights management of deep learning models
Proceedings of the 2019 International Conference on Multimedia Retrieval (ICMR '19), 2019. dl.acm.org
Deep Neural Networks (DNNs) are revolutionizing various critical fields by providing an unprecedented leap in accuracy and functionality. Because of the costly training procedure, high-performance DNNs are typically considered the Intellectual Property (IP) of the model builder and need to be protected. As DNNs are increasingly commercialized, pre-trained models might be illegally copied or redistributed after they are delivered to malicious users. In this paper, we introduce DeepMarks, the first end-to-end collusion-secure fingerprinting framework that enables the owner to retrieve model authorship information and identify unique users in the context of deep learning (DL). DeepMarks consists of two main modules: (i) designing unique fingerprints for individual users using anti-collusion codebooks; and (ii) encoding each constructed fingerprint (FP) in the probability density function (pdf) of the weights by incorporating an FP-specific regularization loss during DNN re-training. We evaluate DeepMarks on various datasets and DNN architectures. Experimental results show that the embedded FP preserves the accuracy of the host DNN and is robust against different model modifications that a malicious user might perform. Furthermore, the framework is scalable and, with theoretical guarantees, yields perfect detection rates and no false alarms when identifying the participants in FP collusion attacks. The runtime overhead of retrieving the embedded FP from the marked DNN can be as low as 0.056%.
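To make module (ii) concrete, the sketch below illustrates one plausible way to embed a fingerprint through an added regularization term during re-training. It is not the authors' implementation: it assumes a projection-based embedding in which a secret owner-held matrix (PROJ), a stand-in binary codevector (fingerprint, which in DeepMarks would come from the anti-collusion codebook of module (i)), the carrier layer, and the strength gamma are all illustrative choices.

# Minimal sketch (not the DeepMarks code): embed a user-specific FP into one
# layer's weights via an extra regularization loss, then retrieve it.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy host model; the first conv layer's weights carry the fingerprint.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
carrier = model[0].weight                       # FP-carrying weights (assumption)

fp_len = 16                                     # fingerprint length (assumption)
fingerprint = torch.randint(0, 2, (fp_len,)).float()   # stand-in for an ACC codevector
PROJ = torch.randn(fp_len, carrier.numel())     # secret, owner-held projection matrix

def fp_regularizer(weights: torch.Tensor) -> torch.Tensor:
    """FP-specific loss: push the projected weight vector toward the FP bits."""
    logits = PROJ @ weights.flatten()
    return F.binary_cross_entropy_with_logits(logits, fingerprint)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
gamma = 0.1                                     # embedding strength (assumption)

for _ in range(200):                            # stand-in for re-training on real data
    x = torch.randn(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    task_loss = F.cross_entropy(model(x), y)    # original task objective
    loss = task_loss + gamma * fp_regularizer(carrier)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Retrieval: project the (possibly modified) carrier weights and threshold.
extracted = (PROJ @ carrier.flatten().detach() > 0).float()
print("bit agreement:", (extracted == fingerprint).float().mean().item())

Retrieval mirrors embedding: projecting the carrier weights with the secret matrix and thresholding recovers the FP bits without running any inference, which suggests why the reported retrieval overhead can be so small.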