Using Differential Privacy Technique To Measure Incrementality of Ads Performance
Abstract
Incrementality measurement is critical for evaluating the causal impact of digital ad campaigns.
Typically, these analyses rely on precise user-level data for randomized control trials (RCTs), often
necessitating the exchange of Personally Identifiable Information (PII) between ad platforms and
advertisers. Differential Privacy (DP) offers a robust solution to this challenge by introducing noise
into the data, thereby ensuring privacy without the need for PII exchange. This paper presents a
detailed methodology for applying DP to incrementality measurement in digital advertising. We
formulate the problem mathematically, outline a framework for incorporating DP mechanisms, and
explore practical considerations such as privacy budget management, noise scaling, and the balance
between privacy and utility. We also provide an in-depth simulation study to quantify the effectiveness
of DP in protecting user privacy while maintaining accurate causal lift estimation.
Keywords: Privacy-enhancing technologies (PETs), Causal inference, Randomized Control Trials, Differential Privacy
Introduction
In the digital advertising ecosystem, understanding the incremental effect of an advertising campaign is
paramount. Incrementality refers to the causal lift produced by an ad campaign compared to a control group.
Traditionally, ad platforms and advertisers conduct randomized control trials (RCTs) to estimate lift, with
randomization at the user level and conversion measurement at the advertiser level. However, this approach
requires extensive access to granular user data, raising significant privacy concerns.
Differential Privacy (DP) is a mathematical framework that allows organizations to aggregate and analyze
data without exposing individual user information. By adding controlled noise to the data, DP ensures that
the inclusion or exclusion of any single data point (such as a user) does not significantly affect the overall
result. In this paper, we propose an approach that applies DP to incrementality measurement, focusing on
preserving user privacy while ensuring accurate causal inferences.
Differential Privacy: Definition
A randomized mechanism M satisfies ε-differential privacy if

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]

Where:
• D and D' are neighboring datasets differing by one individual,
• S is any subset of the output space,
• ε is the privacy parameter that governs the amount of noise added to the data.
The mechanism typically adds noise that scales with the "sensitivity" of the function being computed.
Sensitivity measures how much the output of a function can change when one individual’s data is altered.
Laplace Mechanism
For functions that involve counting or summing over users, the Laplace mechanism is commonly used,
adding noise drawn from the Laplace distribution:

M(D) = f(D) + Lap(Δf / ε)

where Δf is the sensitivity of the function f and Lap(b) denotes noise drawn from the Laplace
distribution with mean 0 and scale b = Δf / ε.
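To make the mechanism concrete, the following is a minimal Python sketch; the function name and parameters are illustrative and not part of the original paper.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Release true_value + Laplace(0, b) noise with scale b = sensitivity / epsilon,
    # which satisfies epsilon-differential privacy for a query with that sensitivity.
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a count of converting users has sensitivity 1, because adding or
# removing a single user changes the count by at most 1.
noisy_count = laplace_mechanism(12345, sensitivity=1.0, epsilon=0.5)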
1. Computing Conversion Rates and Lift
The conversion rate of each group is the number of converting users divided by the group size,
p_T = c_T / n_T for the test group and p_C = c_C / n_C for the control group, and the lift is
estimated by comparing the two rates. In the DP context, the conversion rates for the test and
control groups are computed with added noise to preserve privacy.
2. Adjusting for Differential Privacy
In practice, to ensure that the conversion rates computed for the test and control groups adhere to differential
privacy, noise is added as follows:
For the test group:

p̃_T = p_T + Lap(1 / (n_T · ε))

and for the control group:

p̃_C = p_C + Lap(1 / (n_C · ε))

where n_T and n_C are the group sizes. When each user contributes at most one conversion and the
group size n is treated as fixed, the sensitivity of a conversion rate is 1/n, so the Laplace scale
is 1/(n·ε). The lift is then estimated from the noisy rates p̃_T and p̃_C.
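A short sketch of this adjustment, assuming lift is reported as the relative difference between the test and control rates (our assumption; the function and variable names are illustrative):

import numpy as np

def noisy_rate(conversions, n, epsilon, rng):
    # A conversion rate over n users has sensitivity 1/n when each user
    # contributes at most one conversion, so the Laplace scale is 1 / (n * epsilon).
    return conversions / n + rng.laplace(scale=1.0 / (n * epsilon))

def noisy_lift(conv_test, n_test, conv_ctrl, n_ctrl, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    # Test and control groups contain disjoint users, so by parallel composition
    # each released rate can consume the full epsilon budget.
    p_t = noisy_rate(conv_test, n_test, epsilon, rng)
    p_c = noisy_rate(conv_ctrl, n_ctrl, epsilon, rng)
    return (p_t - p_c) / p_c  # relative lift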
Simulation Study
To quantify how the added noise affects lift estimation, we simulated a randomized control trial
with the following parameters (an illustrative simulation sketch follows this list):
• True Lift: 5%
• Privacy Parameter (ε): 0.1, 0.5, 1.0
• Mechanism: Laplace mechanism applied to conversion rates
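A minimal sketch of such a simulation; the per-group sample size, the 5% baseline conversion rate, and the number of replications are our own assumptions, since they are not specified above.

import numpy as np

rng = np.random.default_rng(0)

def simulate_measured_lift(n_per_group, base_rate, true_lift, epsilon, n_reps=1000):
    # Repeatedly simulate an RCT, add Laplace noise to each group's conversion
    # rate (sensitivity 1/n), and record the resulting noisy relative lift.
    lifts = []
    for _ in range(n_reps):
        conv_c = rng.binomial(n_per_group, base_rate)
        conv_t = rng.binomial(n_per_group, base_rate * (1 + true_lift))
        p_c = conv_c / n_per_group + rng.laplace(scale=1.0 / (n_per_group * epsilon))
        p_t = conv_t / n_per_group + rng.laplace(scale=1.0 / (n_per_group * epsilon))
        lifts.append((p_t - p_c) / p_c)
    return np.array(lifts)

for eps in (0.1, 0.5, 1.0):
    measured = simulate_measured_lift(n_per_group=100_000, base_rate=0.05,
                                      true_lift=0.05, epsilon=eps)
    print(f"epsilon={eps}: mean lift={measured.mean():.2%}, std={measured.std():.2%}")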
Results
The simulation revealed that as the privacy parameter ε decreases, the variance in the measured lift increases
due to the higher noise introduced by the DP mechanism. However, when the number of users in the sample
increases, the noise impact diminishes, improving the reliability of the incrementality estimate.
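The same point can be seen analytically: the Laplace scale applied to each released rate is b = 1/(n·ε), and the standard deviation of Laplace noise is √2·b, so the noise shrinks linearly as the group size n grows. A quick illustration (the sample sizes are chosen arbitrarily):

import math

for n in (10_000, 100_000, 1_000_000):
    for eps in (0.1, 0.5, 1.0):
        b = 1.0 / (n * eps)            # Laplace scale on one conversion rate
        noise_std = math.sqrt(2) * b   # standard deviation of Laplace(0, b)
        print(f"n={n:>9,} epsilon={eps}: noise std on the rate = {noise_std:.1e}")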
Impact of Misclassification
The misclassification due to randomization misalignment was shown to cause a significant understatement
in measured lift, especially for households with multiple users. Adjustments using DP noise compounded
this effect but also ensured that the privacy of user-level data was maintained.
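To illustrate why such misalignment understates lift, the following sketch uses a simple contamination model of our own (not from the original paper), in which a fraction m of the control group is effectively exposed through another household member:

# Contamination model: a fraction m of the "control" group was effectively
# exposed via another household member, pulling its conversion rate upward.
p_c, true_lift = 0.05, 0.05
p_t = p_c * (1 + true_lift)

for m in (0.0, 0.1, 0.3):
    p_c_observed = (1 - m) * p_c + m * p_t
    measured_lift = (p_t - p_c_observed) / p_c_observed
    print(f"contamination={m:.0%}: measured lift = {measured_lift:.2%} (true lift: 5.00%)")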
Conclusion
Differential Privacy provides a powerful tool for ensuring privacy-preserving incrementality measurement
in digital advertising. While challenges remain in mitigating the impact of misclassification and
household-level effects, the proposed DP framework offers a mathematically sound and scalable solution to privacy
concerns in causal lift analysis. By balancing the privacy budget and carefully managing noise scaling, DP
can be used to provide reliable, privacy-compliant incrementality insights.
Future work will focus on refining the models to better account for household-level effects and optimize the
trade-off between privacy and utility in real-world advertising contexts.