
IJIRMPS | Volume 9, Issue 4, July-August 2021 | ISSN: 2349-7300

Use of Differential Privacy Techniques to Measure Incrementality of Ad Performance on Digital Platforms without Exchange of PII

Varun Chivukula
[email protected]

Abstract
Incrementality measurement is critical for evaluating the causal impact of digital ad campaigns.
Typically, these analyses rely on precise user-level data for randomized control trials (RCTs), often
necessitating the exchange of Personally Identifiable Information (PII) between ad platforms and
advertisers. Differential Privacy (DP) offers a robust solution to this challenge by introducing noise
into the data, thereby ensuring privacy without the need for PII exchange. This paper presents a
detailed methodology for applying DP to incrementality measurement in digital advertising. We
formulate the problem mathematically, outline a framework for incorporating DP mechanisms, and
explore practical considerations such as privacy budget management, noise scaling, and the balance
between privacy and utility. We also provide an in-depth simulation study to quantify the effectiveness
of DP in protecting user privacy while maintaining accurate causal lift estimation.
Keywords: Privacy-enhancing technologies (PETs), Causal inference, Randomized control trials, Differential privacy
Introduction
In the digital advertising ecosystem, understanding the incremental effect of an advertising campaign is
paramount. Incrementality refers to the causal lift produced by an ad campaign compared to a control group.
Traditionally, ad platforms and advertisers conduct randomized control trials (RCTs) to estimate lift, with
randomization at the user level and conversion measurement at the advertiser level. However, this approach
requires extensive access to granular user data, raising significant privacy concerns.
Differential Privacy (DP) is a mathematical framework that allows organizations to aggregate and analyze
data without exposing individual user information. By adding controlled noise to the data, DP ensures that
the inclusion or exclusion of any single data point (such as a user) does not significantly affect the overall
result. In this paper, we propose an approach that applies DP to incrementality measurement, focusing on
preserving user privacy while ensuring accurate causal inferences.

Mathematical Framework for Differential Privacy


1. Differential Privacy Mechanisms
A randomized mechanism is said to be differentially private if the presence or absence of any single
individual in the dataset does not significantly change the outcome of any analysis. Formally, a
mechanism M provides ε-Differential Privacy if:

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]   for all S ⊆ Range(M)


Where:
• D and D' are neighboring datasets differing by one individual,
• S is any subset of the output space,
• ε is the privacy parameter: smaller values of ε provide stronger privacy guarantees and require more noise to be added to the data.
The mechanism typically adds noise that scales with the "sensitivity" of the function being computed.
Sensitivity measures how much the output of a function can change when one individual’s data is altered.
Laplace Mechanism
For functions that involve counting or summing over users, the Laplace mechanism is commonly used,
adding noise drawn from the Laplace distribution:

M(D) = f(D) + Laplace(0, Δf/ε)


Where f(D) is the function (e.g., conversion rate), and Δf is the sensitivity of the function, which measures
the maximum possible change in the output when any individual’s data is altered.
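As a brief illustration (not part of the original paper; the function and variable names below are illustrative), the following Python sketch applies the Laplace mechanism to a conversion count, whose sensitivity is 1 because adding or removing one user changes the count by at most one:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return an epsilon-DP estimate of true_value using the Laplace mechanism."""
    scale = sensitivity / epsilon            # noise scale b = Δf / ε
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
n_users, conversions = 100_000, 1_000        # hypothetical aggregates

# The count has sensitivity 1; the derived rate is just count / n_users.
noisy_count = laplace_mechanism(conversions, sensitivity=1.0, epsilon=0.5, rng=rng)
noisy_rate = noisy_count / n_users
print(f"DP conversion rate: {noisy_rate:.5f}")
```

Dividing a noisy count by the group size (assumed public) is one simple way to obtain a DP conversion rate; it is equivalent to adding Laplace noise directly to the rate with sensitivity 1/n.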
Gaussian Mechanism
When a small failure probability δ can be tolerated, the Gaussian mechanism provides (ε, δ)-Differential Privacy and often yields higher utility. Noise is drawn from a Gaussian distribution whose standard deviation scales with the sensitivity:

M(D) = f(D) + N(0, σ²),  with σ ≥ √(2 ln(1.25/δ)) · Δf / ε
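A corresponding sketch for the Gaussian mechanism, using the classical calibration above (again illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng):
    """Return an (epsilon, delta)-DP estimate using the classical Gaussian mechanism."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return true_value + rng.normal(loc=0.0, scale=sigma)

rng = np.random.default_rng(7)
noisy_count = gaussian_mechanism(1_000, sensitivity=1.0, epsilon=0.5, delta=1e-6, rng=rng)
```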

Application to Incrementality Measurement


1. Definition of Incrementality (Lift)
Incrementality measures the causal effect of an ad campaign by comparing the behavior of users exposed to
the ad (test group) against those not exposed (control group). The lift (incrementality) is typically expressed
as the percentage difference between the conversion rates of the test and control groups:

L = (CR_test − CR_control) / CR_control

Where:
• CR_test is the conversion rate of the test group (ad-exposed users),
• CR_control is the conversion rate of the control group (ad-unexposed users).

In the DP context, the conversion rates for the test and control groups are computed with added noise to
preserve privacy.
2. Adjusting for Differential Privacy
In practice, to ensure that the conversion rates computed for the test and control groups adhere to differential
privacy, noise is added as follows:
For the test group:

ĈR_test = CR_test + Laplace(0, Δf/ε)


For the control group:

ĈR_control = CR_control + Laplace(0, Δf/ε)


Thus, the DP-adjusted lift is:

L̂ = (ĈR_test − ĈR_control) / ĈR_control

Where L̂ and ĈR denote the noisy, differentially private estimates of the lift and the conversion rates.
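To make the computation concrete, the sketch below (illustrative code, not the authors' implementation) adds Laplace noise to each group's conversion count and then forms the lift; splitting the privacy budget equally between the two groups is one simple allocation choice:

```python
import numpy as np

def dp_conversion_rate(conversions, n_users, epsilon, rng):
    """epsilon-DP conversion rate: noise the count (sensitivity 1), then normalize."""
    noisy_count = conversions + rng.laplace(scale=1.0 / epsilon)
    return noisy_count / n_users

def dp_lift(conv_test, n_test, conv_control, n_control, epsilon, rng):
    """DP-adjusted lift; each conversion rate consumes half of the total budget."""
    cr_test_hat = dp_conversion_rate(conv_test, n_test, epsilon / 2, rng)
    cr_control_hat = dp_conversion_rate(conv_control, n_control, epsilon / 2, rng)
    return (cr_test_hat - cr_control_hat) / cr_control_hat

rng = np.random.default_rng(0)
print(dp_lift(525, 50_000, 500, 50_000, epsilon=1.0, rng=rng))
```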


3. Privacy Budget and Noise Scaling
Managing the privacy budget is a crucial part of the DP framework. The total privacy loss is accumulated
over multiple queries or mechanisms. If multiple lift measurements are performed (e.g., across different
campaigns), the privacy budget is allocated accordingly. This can be controlled using advanced composition
theorems that help track the total privacy loss across multiple uses of the mechanism.
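As a minimal sketch of budget management (assuming basic sequential composition, under which the per-query ε values simply add up; the class and variable names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PrivacyBudget:
    """Tracks cumulative privacy loss under basic sequential composition,
    where the epsilons of individual queries add up to the total loss."""
    total_epsilon: float
    spent: float = 0.0

    def spend(self, epsilon: float) -> float:
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        return epsilon

# Example: split a total budget of 1.0 across lift measurements for 4 campaigns.
budget = PrivacyBudget(total_epsilon=1.0)
per_campaign_eps = budget.total_epsilon / 4
for campaign in ["A", "B", "C", "D"]:
    eps = budget.spend(per_campaign_eps)
    # ... run the DP lift measurement for this campaign with epsilon = eps ...
```

Advanced composition theorems give tighter bounds than this simple sum, at the cost of a small δ term.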

Challenges in DP-based Incrementality Measurement


1. Misclassification Due to Randomization Misalignment
A key challenge in applying DP to incrementality measurement is the misclassification that arises from the
misalignment of randomization units and measurement units. For example, if randomization occurs at the
user level on the ad platform, but measurement occurs at the account level on the advertiser side, households
with multiple users can be randomly split between test and control groups. This misclassification can distort
lift estimates, and the added DP noise further complicates this issue.
To address this, we propose modeling the misclassification as a probabilistic event where users in the test
group may be attributed to the control group and vice versa. Statistical techniques, such as Bayesian models
or Markov Chain Monte Carlo (MCMC) methods, could be used to correct for these biases and refine lift
estimates under DP constraints.
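As a simplified illustration of the idea (a method-of-moments correction rather than the full Bayesian/MCMC treatment, and assuming a known, symmetric misclassification probability), the observed group rates can be treated as a linear mixture of the true rates and inverted:

```python
import numpy as np

def correct_misclassification(cr_obs_test, cr_obs_control, p_flip):
    """Invert the 2x2 mixing of true rates implied by symmetric misclassification.

    Assumes each unit is measured in the wrong group independently with
    probability p_flip, so observed rates are a known mixture of true rates.
    """
    mixing = np.array([[1 - p_flip, p_flip],
                       [p_flip, 1 - p_flip]])
    observed = np.array([cr_obs_test, cr_obs_control])
    cr_test, cr_control = np.linalg.solve(mixing, observed)
    return cr_test, cr_control

# Observed rates diluted by 10% misclassification; the corrected lift is larger.
cr_t, cr_c = correct_misclassification(0.01045, 0.01005, p_flip=0.10)
print((cr_t - cr_c) / cr_c)   # recovers roughly the 5% true lift
```

When DP noise is added on top of the observed rates, the same inversion applies, but the variance of the corrected estimate grows accordingly.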
2. Privacy-Utility Trade-Off
Differential Privacy inherently introduces noise into the analysis, which reduces the accuracy of lift
estimates. The extent of noise depends on the privacy parameter ε, with smaller values of ε offering stronger
privacy guarantees but introducing greater noise. Finding the optimal balance between privacy and utility is
critical.
3. Household-Level Effects
Another challenge is the presence of household-level effects, where one member of a household is exposed
to an ad, and another is in the control group. This introduces potential "halo effects" where the exposure of
one individual influences the behavior of others, which may lead to under- or over-estimation of the true lift.
Simulation Study and Results
Setup
• Number of Users: 100,000
• Test-Control Split: 50% test, 50% control
• Baseline Conversion Rate: 1% for control

• True Lift: 5%
• Privacy Parameter (ε): 0.1, 0.5, 1.0
• Mechanism: Laplace mechanism applied to conversion rates (see the simulation sketch below)
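The original simulation code is not included in the paper; the following Python sketch reproduces the described setup under stated assumptions (binomial conversions, Laplace noise on counts with the budget split between the two groups, 1,000 repetitions per ε) so that the variance of the lift estimate can be observed:

```python
import numpy as np

rng = np.random.default_rng(2021)
n_users = 100_000
n_test = n_control = n_users // 2
cr_control_true = 0.01                   # 1% baseline conversion rate
cr_test_true = cr_control_true * 1.05    # 5% true lift

for epsilon in (0.1, 0.5, 1.0):
    lifts = []
    for _ in range(1_000):               # repeat to observe estimator variance
        conv_test = rng.binomial(n_test, cr_test_true)
        conv_control = rng.binomial(n_control, cr_control_true)
        # Laplace mechanism on each count (sensitivity 1), half the budget each.
        noisy_test = conv_test + rng.laplace(scale=2.0 / epsilon)
        noisy_control = conv_control + rng.laplace(scale=2.0 / epsilon)
        cr_test = noisy_test / n_test
        cr_control = noisy_control / n_control
        lifts.append((cr_test - cr_control) / cr_control)
    print(f"eps={epsilon}: mean lift={np.mean(lifts):.3f}, std={np.std(lifts):.3f}")
```

Exact means and standard deviations depend on the random seed; the qualitative pattern (more variance at smaller ε, less at larger sample sizes) matches the results reported below.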
Results
The simulation revealed that as the privacy parameter ε decreases, the variance in the measured lift increases
due to the higher noise introduced by the DP mechanism. However, when the number of users in the sample
increases, the noise impact diminishes, improving the reliability of the incrementality estimate.
Impact of Misclassification
The misclassification due to randomization misalignment was shown to cause a significant understatement
in measured lift, especially for households with multiple users. Adjustments using DP noise compounded
this effect but also ensured that the privacy of user-level data was maintained.

Conclusion
Differential Privacy provides a powerful tool for ensuring privacy-preserving incrementality measurement
in digital advertising. While challenges remain in mitigating the impact of misclassification and household-
level effects, the proposed DP framework offers a mathematically sound and scalable solution to privacy
concerns in causal lift analysis. By balancing the privacy budget and carefully managing noise scaling, DP
can be used to provide reliable, privacy-compliant incrementality insights.
Future work will focus on refining the models to better account for household-level effects and optimize the
trade-off between privacy and utility in real-world advertising contexts.
