
IEEE CONTROL SYSTEMS LETTERS, VOL. 6, 2022, pp. 896–901

Bayesian Differential Privacy for Linear Dynamical Systems

Genki Sugiura, Kaito Ito, Student Member, IEEE, and Kenji Kashima, Senior Member, IEEE

Manuscript received March 3, 2021; revised May 11, 2021; accepted June 2, 2021. Date of publication June 7, 2021; date of current version June 30, 2021. This work was supported in part by JSPS KAKENHI under Grant JP21H04875. Recommended by Senior Editor S. Tarbouriech. (Corresponding author: Kenji Kashima.) The authors are with the Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan (e-mail: [email protected]). Digital Object Identifier 10.1109/LCSYS.2021.3087096

Abstract—Differential privacy is a privacy measure based on the difficulty of discriminating between similar input data. In differential privacy analysis, similar data usually means that their distance does not exceed a predetermined threshold. Consequently, it does not take into account the difficulty of distinguishing data sets that are far apart, which often contain highly private information. This problem has been pointed out in the research on differential privacy for static data, and Bayesian differential privacy has been proposed, which provides a privacy protection level even for outlier data by utilizing the prior distribution of the data. In this letter, we introduce this Bayesian differential privacy to dynamical systems, provide privacy guarantees for distant input data pairs, and reveal its fundamental properties. For example, we design a mechanism that satisfies a desired level of privacy protection, which characterizes the trade-off between privacy and information utility.

Index Terms—Control system security, differential privacy, stochastic system.

I. INTRODUCTION

As the Internet-of-Things (IoT) and cloud computing attract more and more attention for their convenience, privacy protection and security have become key technologies in control systems. To cope with privacy threats, many privacy protection methods have been studied so far [1]–[3]. Among them, differential privacy [4] has been used to solve many privacy-related problems in areas such as smart grids [5], health management [6], and blockchain [7], because it can mathematically quantify privacy guarantees. Differential privacy was originally applied to static data, but, as the power-system example above shows, there is an urgent need to establish privacy protection techniques for dynamical systems. In recent years, the concept of differential privacy has been introduced to dynamical systems [8]; from the viewpoint of control systems theory, the relationship between privacy protection and the observability of systems has been clarified, and methods of controller design with privacy protection in mind have been studied [9]–[11].

Conventional differential privacy is a privacy measure based on the difficulty of distinguishing similar data, where x and x' are regarded as being similar if |x − x'| ≤ c for a prescribed c > 0. Conversely, there is no indistinguishability guarantee for x and x' if |x − x'| > c. This implies a risk of information leakage when there are outliers from normal data, as pointed out in [12]. For example, unusual electricity consumption patterns may contain highly private information about a household's lifestyle. In [13], a new concept called Bayesian differential privacy is developed for static data to solve this problem. Bayesian differential privacy takes the underlying probability distribution of the data into account and attempts to guarantee privacy even for data sets that are far apart.

In this letter, we consider a prior distribution for the signal that we want to keep secret and introduce Bayesian differential privacy for linear dynamical systems. Similar to the conventional differential privacy case [9], we consider a mechanism where stochastic noise is added to the output data. Note that applying a large noise increases the privacy protection level but decreases the information usefulness [14]. In Theorem 2 below, a lower bound on the noise scale that guarantees a prescribed Bayesian differential privacy level is derived. Other properties, including the relation to the conventional case, are investigated based on this result. The rest of this letter is organized as follows. In Section II, we introduce differential privacy for dynamical systems. In Section III, we propose Bayesian differential privacy for dynamical systems and derive a sufficient condition on the added noise to achieve its privacy guarantee. In Section IV, considering the trade-off between privacy and information utility, we derive the Gaussian noise with the minimum energy while guaranteeing Bayesian differential privacy. In Section V, the usefulness of Bayesian differential privacy is illustrated via a numerical example. Some concluding remarks are given in Section VI.

Notations: The sets of real numbers and nonnegative integers are denoted by R and Z_+, respectively. The imaginary unit is denoted by j. For vectors x_1, ..., x_m ∈ R^n, the collective vector [x_1^⊤ ··· x_m^⊤]^⊤ ∈ R^{nm} is written [x_1; ···; x_m] for simplicity of description. For a sequence u(t) ∈ R^n, t = 0, 1, ..., T, the collective vector is denoted by U_T := [u(0); ···; u(T)] ∈ R^{(T+1)n} using a capital letter.

For a square matrix A ∈ R^{n×n}, its determinant is denoted by det(A), and when its eigenvalues are real, its maximum and minimum eigenvalues are denoted by λ_max(A) and λ_min(A), respectively. We write A ≻ 0 (resp. A ⪰ 0) if A is positive definite (resp. semidefinite). For A ⪰ 0, the principal square root of A is denoted by A^{1/2}. The identity matrix of size n is denoted by I_n; the subscript n is omitted when it is clear from the context. The Euclidean norm of a vector x ∈ R^n is denoted by |x|, and its weighted norm with A ≻ 0 is denoted by |x|_A := (x^⊤ A x)^{1/2}. The indicator function of a set S ⊂ R^n is denoted by 1_S, i.e., 1_S(x) = 1 if x ∈ S, and 0 otherwise. For a topological space X, the Borel algebra on X is denoted by B(X). Fix some complete probability space (Ω, F, P), and let E be the expectation with respect to P. For an R^n-valued random vector w, w ∼ N_n(μ, Σ) means that w has a nondegenerate multivariate Gaussian distribution with mean μ ∈ R^n and covariance matrix Σ ≻ 0. The so-called Q-function is defined by

Q(c) := (1/√(2π)) ∫_c^∞ e^{−v²/2} dv,

where Q(c) < 1/2 for c > 0, and R(ε, δ) := (Q^{−1}(δ) + √((Q^{−1}(δ))² + 2ε)) / (2ε). The gamma function is denoted by Γ(·). A random variable z is said to have a χ² distribution with k degrees of freedom, denoted by z ∼ χ²_k, if its distribution has the probability density

p(z; k) = z^{k/2−1} e^{−z/2} / (2^{k/2} Γ(k/2)),  z ≥ 0,  k ∈ {1, 2, ...}.

II. CONVENTIONAL DIFFERENTIAL PRIVACY FOR DYNAMICAL SYSTEMS

In this section, we briefly overview fundamental results on differential privacy for dynamical systems. Consider the following discrete-time linear system:

x(t + 1) = A x(t) + B u(t),
y(t) = C x(t) + D u(t),   (1)

for t ∈ Z_+, where x(t) ∈ R^n, u(t) ∈ R^m, and y(t) ∈ R^q denote the state, input, and output, respectively, and A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{q×n}, and D ∈ R^{q×m}. For simplicity, we assume x(0) = 0, and the information to be kept secret is the input sequence U_T up to a finite time T ∈ Z_+. For (1), the output sequence Y_T ∈ R^{(T+1)q} is described by

Y_T = N_T U_T,   (2)

where N_T ∈ R^{(T+1)q×(T+1)m} is

N_T = T_T(A, B, C, D)   (3)

    := [ D            0            0     ...   0
         CB           D            0     ...   0
         CAB          CB           D     ...   0
         ...          ...          ...   ...   0
         CA^{T−1}B    CA^{T−2}B    ...    CB    D ].   (4)
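The block lower-triangular Toeplitz operator T_T(A, B, C, D) in (3)–(4) is easy to assemble numerically from the Markov parameters D, CB, CAB, ..., CA^{T−1}B. The following Python sketch illustrates one way to do this; the helper name toeplitz_operator and the example matrices are ours, not part of the letter.

import numpy as np

def toeplitz_operator(A, B, C, D, T):
    """Assemble N_T = T_T(A, B, C, D) from (4): the block lower-triangular
    Toeplitz map from the stacked input U_T to the stacked output Y_T."""
    q, m = D.shape
    # Markov parameters: D, CB, CAB, ..., CA^{T-1}B.
    blocks = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(T)]
    NT = np.zeros(((T + 1) * q, (T + 1) * m))
    for i in range(T + 1):          # block row
        for jj in range(i + 1):     # block column (lower triangular part)
            NT[i*q:(i+1)*q, jj*m:(jj+1)*m] = blocks[i - jj]
    return NT

# Tiny example (matrices are illustrative only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
NT = toeplitz_operator(A, B, C, D, T=4)
print(NT.shape)   # (5, 5): (T+1)q rows, (T+1)m columns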

To proceed with the differential privacy analysis, we consider the output y_w(t) := y(t) + w(t) after adding the noise w(t) ∈ R^q; see Fig. 1. From (2), Y_{w,T} ∈ R^{(T+1)q} can be described by

Y_{w,T} = N_T U_T + W_T,   (5)

which defines a mapping M : R^{(T+1)m} × Ω ∋ (U_T, ω) ↦ Y_{w,T} ∈ R^{(T+1)q}. In differential privacy analysis, this mapping is called a mechanism.

Fig. 1. Mechanism with output noise.

Next, the definition of differential privacy is given. We begin with the definition of data similarity.

Definition 1: Given a positive definite matrix K ∈ R^{(T+1)m×(T+1)m}, a pair of input data (U_T, U'_T) ∈ R^{(T+1)m} × R^{(T+1)m} is said to belong to the binary relation K-adjacency if

|U_T − U'_T|_K ≤ 1.   (6)

The set of all pairs of input data that are K-adjacent is denoted by Adj_K.

This K-adjacency is an extension of the c-adjacency for the 2-norm in previous work [9], which corresponds to K = I/c². Next, we describe the definition of (K, ε, δ)-differential privacy for dynamical systems in the same way as for static data [9, Definition 2.4].

Definition 2 [(K, ε, δ)-Differential Privacy]: Given ε > 0 and δ ≥ 0, the mechanism (5) is said to be (K, ε, δ)-differentially private ((K, ε, δ)-DP) at a finite time instant T ∈ Z_+ if

P[N_T U_T + W_T ∈ S] ≤ e^ε P[N_T U'_T + W_T ∈ S] + δ,  ∀S ∈ B(R^{(T+1)q})   (7)

for any (U_T, U'_T) ∈ Adj_K.

Suppose that the output sequence Y_{w,T} and the state equation (1) are available to an attacker trying to estimate the value of the input sequence U_T. Differential privacy requires that the output sequence statistics be close enough, at least for adjacent data pairs. A sufficient condition for the mechanism induced by Gaussian noise to be (ε, δ)-differentially private under c-adjacency is derived in [9, Th. 2.6]. This result can be straightforwardly extended as follows.

Theorem 1: The Gaussian mechanism (5) induced by W_T ∼ N_{(T+1)q}(μ_w, Σ_w) is (K, ε, δ)-differentially private at a finite time T ∈ Z_+ with ε > 0 and 1/2 > δ > 0, if the covariance matrix Σ_w ≻ 0 is chosen such that

λ_max^{−1/2}(O_{K,Σ_w,T}) ≥ R(ε, δ),   (8)

where

O_{K,Σ_w,T} := K^{−1/2} N_T^⊤ Σ_w^{−1} N_T K^{−1/2}.   (9)

Larger noise W_T and a lower threshold for the adjacency, in the sense of Σ_w and K, respectively, make the left-hand side of (8) larger. This implies that differential privacy is ensured for smaller ε and δ, because R is a decreasing function. Further insight into K will be revealed in Corollary 1 below.
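To make the quantities in Theorem 1 concrete, R(ε, δ) can be evaluated with the upper-tail inverse of the standard normal distribution, and condition (8) reduces to an eigenvalue test on O_{K,Σ_w,T}. The sketch below is illustrative only; the helper names and the toy N_T are our assumptions.

import numpy as np
from scipy.stats import norm

def R(eps, delta):
    """R(eps, delta) = (Q^{-1}(delta) + sqrt((Q^{-1}(delta))^2 + 2*eps)) / (2*eps)."""
    qinv = norm.isf(delta)  # Q^{-1}(delta): upper-tail inverse of the standard normal CDF
    return (qinv + np.sqrt(qinv**2 + 2.0 * eps)) / (2.0 * eps)

def sym_inv_sqrt(M):
    """M^{-1/2} for a symmetric positive definite matrix M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

def satisfies_theorem1(NT, K, Sigma_w, eps, delta):
    """Sufficient condition (8): lambda_max^{-1/2}(O_{K,Sigma_w,T}) >= R(eps, delta),
    with O_{K,Sigma_w,T} = K^{-1/2} N_T' Sigma_w^{-1} N_T K^{-1/2} as in (9)."""
    Kih = sym_inv_sqrt(K)
    O = Kih @ NT.T @ np.linalg.solve(Sigma_w, NT) @ Kih
    return np.linalg.eigvalsh(O).max() ** -0.5 >= R(eps, delta)

# Illustrative check with c-adjacency (K = I / c^2) and i.i.d. output noise.
NT = np.tril(np.ones((5, 5)))            # toy N_T, for illustration only
c, sigma2 = 0.1, 4.0
print(R(2.0, 0.05))
print(satisfies_theorem1(NT, np.eye(5) / c**2, sigma2 * np.eye(5), 2.0, 0.05))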


III. BAYESIAN DIFFERENTIAL PRIVACY FOR DYNAMICAL SYSTEMS

A. Formulation

In Definition 2, the difficulty of distinguishing data pairs whose K-weighted distance is larger than the threshold 1 is not taken into account. Note also that there is no design guideline for K. In this section, we introduce Bayesian differential privacy for dynamical systems. To this end, we assume the following availability of the prior distribution of the data to be protected, and provide a privacy guarantee that takes into account the discrimination difficulty of data pairs based on the prior.

Assumption 1: The input data to the mechanism, U_T, is an R^{(T+1)m}-valued random variable with distribution P_{U_T}. In addition, one can use the prior P_{U_T} to design a mechanism.

The following is a typical example where a private input signal is a realization of a random variable.

Example 1: Suppose that the input data u(t) to be protected is the reference for tracking control; see [14]. In many applications, tracking of the reference signal over specified frequency ranges is required. Such a control objective can be represented by filtering white noise ξ(t). To be more precise, we assume u(t) = r(t) is generated by

x_r(t + 1) = A_r x_r(t) + B_r ξ(t),  x_r(0) = 0,   (10)
r(t) = C_r x_r(t) + D_r ξ(t),   (11)
ξ(t) ∼ N(0, I),  t ∈ {0, 1, ..., T},   (12)

where A_r ∈ R^{l×l}, B_r ∈ R^{l×m}, C_r ∈ R^{m×l}, D_r ∈ R^{m×m}. The power spectrum of u(t) is characterized by the frequency transfer function

G_r(e^{jλ}) := C_r(e^{jλ} I − A_r)^{−1} B_r + D_r,  λ ∈ [0, π).   (13)

In this case,

r(t) = D_r ξ(0) for t = 0,  r(t) = D_r ξ(t) + Σ_{j=1}^{t} C_r A_r^{j−1} B_r ξ(t − j) for t ≥ 1,   (14)

and the prior distribution is given by U_T ∼ N(0, Σ_U) with

Σ_U := T_T T_T^⊤,   (15)
T_T := T_T(A_r, B_r, C_r, D_r).   (16)

Note that the step reference signal whose value obeys r(t) ≡ r̄ ∼ N(0, Σ_s) can be modeled by setting A_r = C_r = I, B_r = D_r = 0 with x_r(0) ∼ N(0, Σ_s) in (10), (11). This corresponds to the case where the initial state is the private information rather than the input sequence U_T, as discussed in [9].
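Under the reference model (10)–(12), the stacked reference U_T is a linear map of the stacked white noise, so its prior covariance is Σ_U = T_T T_T^⊤ as in (15)–(16). A minimal Python sketch of this computation follows; the first-order lowpass model and the helper name are illustrative assumptions, not the filter used in Section V.

import numpy as np

def toeplitz_operator(A, B, C, D, T):
    """Block lower-triangular Toeplitz map T_T(A, B, C, D), as in (4)."""
    q, m = D.shape
    blocks = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(T)]
    M = np.zeros(((T + 1) * q, (T + 1) * m))
    for i in range(T + 1):
        for jj in range(i + 1):
            M[i*q:(i+1)*q, jj*m:(jj+1)*m] = blocks[i - jj]
    return M

# Reference model (10)-(12): r is white noise xi filtered through (Ar, Br, Cr, Dr).
# A simple first-order lowpass filter is used here purely for illustration.
Ar = np.array([[0.95]]); Br = np.array([[1.0]])
Cr = np.array([[0.05]]); Dr = np.array([[0.0]])
T = 50
TT = toeplitz_operator(Ar, Br, Cr, Dr, T)
Sigma_U = TT @ TT.T          # prior covariance of U_T, eq. (15)
print(Sigma_U.shape)         # ((T+1)m, (T+1)m)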
Then, based on the Bayesian differential privacy for static data [13], we define (P_{U_T}, γ, ε, δ)-Bayesian differential privacy, which is an extension of differential privacy for dynamical systems.

Definition 3 [(P_{U_T}, γ, ε, δ)-Bayesian Differential Privacy]: Assume that the random variables U_T, U'_T are independent and both follow the distribution P_{U_T}. Given 1 ≥ γ ≥ 0, ε > 0, and δ ≥ 0, the mechanism (5) is said to be (P_{U_T}, γ, ε, δ)-Bayesian differentially private ((P_{U_T}, γ, ε, δ)-BDP) at a finite time instant T ∈ Z_+ if

P[ P[N_T U_T + W_T ∈ S | U_T] ≤ e^ε P[N_T U'_T + W_T ∈ S | U'_T] + δ ] ≥ γ,  ∀S ∈ B(R^{(T+1)q}).   (17)

In (17), the outer (resp. inner) P is taken with respect to (U_T, U'_T) (resp. W_T). Roughly speaking, BDP requires that the probability that the mechanism satisfies the (K, ε, δ)-DP inequality be greater than or equal to γ. Note that this definition places no direct restriction on the distance between a pair of input data U_T, U'_T.

B. Sufficient Condition for Noise Scale

It is desirable that the added noise w be small so as to retain the data usefulness; see, e.g., Section V. The following theorem gives a sufficient condition on the noise scale that guarantees (P_{U_T}, γ, ε, δ)-Bayesian differential privacy.

Theorem 2: Suppose that the prior distribution P_{U_T} of U_T is N_{(T+1)m}(0, Σ). The Gaussian mechanism (5) induced by W_T ∼ N_{(T+1)q}(μ_w, Σ_w) is (P_{U_T}, γ, ε, δ)-Bayesian differentially private at a finite time T ∈ Z_+ with 1 ≥ γ ≥ 0, ε > 0, and 1/2 > δ > 0, if the covariance matrix Σ_w ≻ 0 is chosen such that

λ_max^{−1/2}(O_{Σ,Σ_w,T}) ≥ c(γ, T) R(ε, δ),   (18)

where O_{Σ,Σ_w,T} is defined by

O_{Σ,Σ_w,T} := Σ^{1/2} N_T^⊤ Σ_w^{−1} N_T Σ^{1/2}   (19)

and c(γ, T) is the unique c > 0 that satisfies

γ = ∫_0^{c²/2} x^{(T+1)m/2 − 1} e^{−x/2} / (2^{(T+1)m/2} Γ((T+1)m/2)) dx.   (20)

Proof: Using a similar argument as in the proof of [9, Th. 2.6], for any fixed U, U' ∈ R^{(T+1)m}, one has

P[N_T U_T + W_T ∈ S | U_T = U]
  ≤ e^ε P[N_T U_T + W_T ∈ S | U_T = U'] + P[ Z ≥ εh − 1/(2h) | U_T = U, U'_T = U' ],

where

h := |Y − Y'|^{−1}_{Σ_w^{−1}},  Y := N_T U_T,  Y' := N_T U'_T,

and Z ∼ N(0, 1). Then, the mechanism is (P_{U_T}, γ, ε, δ)-Bayesian differentially private if Q(εh − 1/(2h)) ≤ δ with probability at least γ, i.e.,

P[h ≥ R(ε, δ)] ≥ γ.   (21)

The inequality (21) holds if (18) is satisfied. This is because

h^{−2} = |N_T(U_T − U'_T)|²_{Σ_w^{−1}}
       = (U_T − U'_T)^⊤ Σ^{−1/2} O_{Σ,Σ_w,T} Σ^{−1/2} (U_T − U'_T)
       ≤ |Σ^{−1/2}(U_T − U'_T)|² λ_max(O_{Σ,Σ_w,T}),

and then, from the fact that |Σ^{−1/2}(U_T − U'_T)|²/2 follows a χ² distribution with (T + 1)m degrees of freedom and the definition of c(γ, T), |Σ^{−1/2}(U_T − U'_T)|² ≤ c(γ, T)² holds with probability γ.

In order to clarify the connection between conventional and Bayesian DP, it is worthwhile to compare Theorems 1 and 2. Bayesian differential privacy with the prior distribution P_{U_T} ∼ N_{(T+1)m}(0, Σ) corresponds to differential privacy with an adjacency weight K.
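By (20), c(γ, T) is determined by the χ² distribution with (T + 1)m degrees of freedom: γ equals its cumulative distribution function evaluated at c²/2, so c(γ, T) = √(2 F⁻¹(γ)) with F the χ² CDF. This quantity also enters the adjacency weight K in Corollary 1 below. A short illustrative sketch (the function name is ours):

import numpy as np
from scipy.stats import chi2

def c_gamma_T(gamma, T, m):
    """c(gamma, T) from (20): gamma equals the chi-square CDF with (T+1)m degrees
    of freedom evaluated at c^2 / 2, hence c = sqrt(2 * chi2.ppf(gamma, (T+1)*m))."""
    return np.sqrt(2.0 * chi2.ppf(gamma, (T + 1) * m))

# c(gamma, T) grows with the horizon T (cf. Fig. 2 and Section III-C below).
for T in [10, 50, 100, 500]:
    print(T, c_gamma_T(0.5, T, m=1))
# With gamma = 0.5, T = 100, m = 1 this returns roughly 14.17,
# matching the value used in the numerical example of Section V.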


Corollary 1: Suppose that the prior distribution P_{U_T} of U_T is N_{(T+1)m}(0, Σ). Let a finite time T ∈ Z_+ with 1 ≥ γ ≥ 0, ε > 0, and 1/2 > δ > 0 be given. Then, the Gaussian mechanism (5) induced by W_T ∼ N_{(T+1)q}(μ_w, Σ_w) is (P_{U_T}, γ, ε, δ)-Bayesian differentially private at the time T if the mechanism is (K, ε, δ)-differentially private at the time T with

K := Σ^{−1}/c(γ, T)²,   (22)

with c(γ, T) defined in Theorem 2.

Proof: Let us define Y_{w,T} := N_T U_T + W_T and Y'_{w,T} := N_T U'_T + W_T. If the mechanism is (K, ε, δ)-differentially private with K defined in (22), then

P[ P[Y_{w,T} ∈ S | U_T] ≤ e^ε P[Y'_{w,T} ∈ S | U'_T] + δ ]
  ≥ P[ P[Y_{w,T} ∈ S | U_T] ≤ e^ε P[Y'_{w,T} ∈ S | U'_T] + δ  |  |U_T − U'_T|_{Σ^{−1}} ≤ c(γ, T) ]
    × P[ |U_T − U'_T|_{Σ^{−1}} ≤ c(γ, T) ].

Note that it holds that

P[ P[Y_{w,T} ∈ S | U_T] ≤ e^ε P[Y'_{w,T} ∈ S | U'_T] + δ  |  |U_T − U'_T|_{Σ^{−1}} ≤ c(γ, T) ] = 1,

since the mechanism satisfies

P[Y_{w,T} ∈ S | U_T] ≤ e^ε P[Y'_{w,T} ∈ S | U'_T] + δ

whenever |U_T − U'_T|_{Σ^{−1}} ≤ c(γ, T) (i.e., (U_T, U'_T) ∈ Adj_K). Next, by the definition of c(γ, T), we have

P[ |U_T − U'_T|_{Σ^{−1}} ≤ c(γ, T) ] = γ.

Consequently, we obtain the desired result.

It should be emphasized that such a simple relation is obtained because the prior is Gaussian and the system is linear.

C. Asymptotic Analysis

For conventional DP, it is known that when the system (1) is asymptotically stable, one can design a Gaussian noise that makes the induced mechanism differentially private for any time horizon T [9, Corollary 2.9]. This is because, for an asymptotically stable system, the incremental gain from |U_T − U'_T| to |Y_T − Y'_T| is bounded by its H∞-norm for any T, and, by the definition of DP, the distance |U_T − U'_T| is also bounded by a predetermined threshold. That is, even when the horizon of the data to be protected becomes longer, the distance between data sets whose indistinguishability is guaranteed stays fixed.

On the other hand, for the proposed BDP, as T becomes larger, |U_T − U'_T| tends to take larger values according to the prior P_{U_T}. Consequently, to achieve BDP for a large time horizon T, large noise is required. To see this from Theorem 2, c(γ, T) > 0 is plotted in Fig. 2 as a function of T. As can be seen, c(γ, T) becomes large as T increases, and therefore, from (18), the scale parameter Σ_w of the noise is required to be large to guarantee BDP. This fact suggests that the privacy requirement of BDP (with fixed ε, γ, δ) for the long (possibly infinite) horizon case is too strong. This issue can be resolved by an appropriate scaling with T to quantify the long-time average privacy.

Fig. 2. Graph of c(γ, T).

IV. DESIGN OF MECHANISM

To motivate the additional analysis in this section, let us consider a feedback interconnection of the plant P and the controller C:

P : x_p(t + 1) = A_p x_p(t) + B_p u_p(t),
    y_p(t) = C_p x_p(t),

C : x_c(t + 1) = A_c x_c(t) + B_c e(t),
    u_p(t) = C_c x_c(t),

where the control objective is to make the tracking error e(t) := r(t) − y_p(t) small, and the private information is the reference signal r(t). The attacker, who can access the output y_p(t), attempts to estimate r(t). To prevent this inference, we add noise v to r, which leads to the following closed-loop dynamics:

x̄(t + 1) = Ā x̄(t) + B̄ (r(t) + v(t)),
y_p(t) = C̄ x̄(t),   (23)
e(t) = r(t) − C̄ x̄(t),

where x̄(t) := [x_p(t); x_c(t)] and

Ā := [ A_p       B_p C_c
       −B_c C_p  A_c ],   B̄ := [ 0; B_c ],   C̄ := [ C_p  0 ].

Suppose that the distribution of V_T is given by N(0, Σ_v). Then, larger noise v fluctuates e more, so that the variance of E_T := [e(0); ···; e(T)] caused by the noise is given by

T_T Σ_v T_T^⊤  with  T_T := T_T(Ā, B̄, −C̄, 0).   (24)
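The closed-loop matrices in (23) and the noise-induced error variance (24) can be formed directly from the plant and controller data. The sketch below uses the numerical matrices (32)–(33) of Section V; the helper names, the horizon T = 20, and the noise covariance Σ_v are illustrative assumptions.

import numpy as np

def toeplitz_operator(A, B, C, D, T):
    """Block lower-triangular Toeplitz map T_T(A, B, C, D), as in (4)."""
    q, m = D.shape
    blocks = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(T)]
    M = np.zeros(((T + 1) * q, (T + 1) * m))
    for i in range(T + 1):
        for jj in range(i + 1):
            M[i*q:(i+1)*q, jj*m:(jj+1)*m] = blocks[i - jj]
    return M

def closed_loop(Ap, Bp, Cp, Ac, Bc, Cc):
    """Closed-loop matrices of (23): state x_bar = [x_p; x_c], input r + v, output y_p."""
    A_bar = np.block([[Ap, Bp @ Cc], [-Bc @ Cp, Ac]])
    B_bar = np.vstack([np.zeros((Ap.shape[0], Bc.shape[1])), Bc])
    C_bar = np.hstack([Cp, np.zeros((Cp.shape[0], Ac.shape[0]))])
    return A_bar, B_bar, C_bar

# Plant and controller of (32)-(33).
Ap = np.array([[1.2, -0.5], [1.0, 0.0]]); Bp = np.array([[-0.3], [0.0]]); Cp = np.array([[0.2, 0.0]])
Ac = np.array([[1.0, 1.0], [0.0, 0.1]]);  Bc = np.array([[0.0], [-1.0]]);  Cc = np.array([[1.5, 0.0]])
A_bar, B_bar, C_bar = closed_loop(Ap, Bp, Cp, Ac, Bc, Cc)

# Noise-induced fluctuation of the tracking error, eq. (24): Theta Sigma_v Theta'.
T = 20
Theta = toeplitz_operator(A_bar, B_bar, -C_bar, np.zeros((1, 1)), T)
Sigma_v = 0.01 * np.eye(T + 1)              # illustrative noise covariance
print(np.trace(Theta @ Sigma_v @ Theta.T))  # scalar measure of error fluctuation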
A. Minimization of the Noise

The expression (24) motivates us to seek the Gaussian noise with the minimum energy among those satisfying the sufficient condition (18) for Bayesian differential privacy derived in Theorem 2. More specifically, we consider the following optimization problem with the covariance matrix of the Gaussian noise Σ_w ≻ 0 as the decision variable.

Problem 1:

minimize_{Σ_w ≻ 0}  Tr(Σ_w)   (25)
subject to  λ_max(Σ^{1/2} N_T^⊤ Σ_w^{−1} N_T Σ^{1/2}) ≤ 1 / (c(γ, T)² R(ε, δ)²).   (26)


The constraint (26) is an inequality that is equivalent to the inequality (18). Under certain assumptions, the solution of Problem 1 can be obtained as follows.

Theorem 3: Assume that N_T is full row rank. The optimal solution to Problem 1 is Σ_w* := c(γ, T)² R(ε, δ)² N_T Σ N_T^⊤.

Proof: Denote N := c(γ, T) R(ε, δ) N_T Σ^{1/2} so that Σ_w* = N N^⊤. Then, (26) is equivalent to I − N^⊤ Σ_w^{−1} N ⪰ 0. By the Schur complement,

[ Σ_w  N
  N^⊤  I ] ⪰ 0.   (27)

This implies Σ_w ⪰ Σ_w*, and consequently Tr(Σ_w) ≥ Tr(Σ_w*).

The obtained optimal solution Σ_w* is a constant multiple of the covariance matrix of the distribution of the output data Y_T = N_T U_T when the covariance matrix of the input data U_T is Σ. This means that it is possible to efficiently conceal the input data from the output data by applying noise having the same statistics (up to scaling) as the observed data.
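Theorem 3 can be checked numerically: with Σ_w = Σ_w*, the constraint (26) is attained with equality when N_T is square and nonsingular. A hedged sketch follows; the data below are synthetic and only for illustration.

import numpy as np
from scipy.stats import chi2, norm
from scipy.linalg import sqrtm

def R(eps, delta):
    qinv = norm.isf(delta)
    return (qinv + np.sqrt(qinv**2 + 2.0 * eps)) / (2.0 * eps)

def c_gamma_T(gamma, T, m):
    return np.sqrt(2.0 * chi2.ppf(gamma, (T + 1) * m))

# Illustrative data: a square, nonsingular N_T and a prior covariance Sigma.
rng = np.random.default_rng(0)
T, m = 9, 1
n = (T + 1) * m
NT = np.tril(0.3 * rng.standard_normal((n, n))) + np.eye(n)
Sigma = np.eye(n)

gamma, eps, delta = 0.5, 2.0, 0.05
scale = (c_gamma_T(gamma, T, m) * R(eps, delta)) ** 2

# Optimal solution of Problem 1 (Theorem 3): Sigma_w* = c^2 R^2 N_T Sigma N_T'.
Sigma_w_opt = scale * NT @ Sigma @ NT.T

# Constraint (26): lambda_max(Sigma^{1/2} N_T' Sigma_w^{-1} N_T Sigma^{1/2}) <= 1/(c^2 R^2).
S_half = np.real(sqrtm(Sigma))
O = S_half @ NT.T @ np.linalg.solve(Sigma_w_opt, NT) @ S_half
print(np.linalg.eigvalsh(O).max(), 1.0 / scale)   # equal (up to rounding) at the optimum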
B. Input Noise Mechanism

In this subsection, we study the case where noise is added to the input channel; see Fig. 3.

Fig. 3. Mechanism with input noise.

Consider the following system with input noise:

x(t + 1) = A x(t) + B (u(t) + v(t)),
y_v(t) = C x(t) + D (u(t) + v(t)).   (28)

As in the preceding section, we assume x(0) = 0. The output sequence Y_{v,T} = [y_v(0); ···; y_v(T)] can be described as

Y_{v,T} = N_T U_T + N_T V_T.   (29)

For the system (1), adding noise V_T to the input channel is thus equivalent to adding noise W_T = N_T V_T to the output channel. For simplicity, we assume that N_T is square and nonsingular; this assumption can be relaxed as in [9, Corollary 2.16]. From Theorem 2, we obtain the following corollary.

Corollary 2: Suppose that the prior distribution P_{U_T} of U_T is N_{(T+1)m}(0, Σ). The Gaussian mechanism (29) induced by V_T ∼ N_{(T+1)q}(μ_v, Σ_v) is (P_{U_T}, γ, ε, δ)-Bayesian differentially private at a finite time T ∈ Z_+ with 1 ≥ γ ≥ 0, ε > 0, and 1/2 > δ > 0, if the covariance matrix Σ_v ≻ 0 is chosen such that

λ_min^{1/2}(Σ^{−1/2} Σ_v Σ^{−1/2}) ≥ c(γ, T) R(ε, δ),   (30)

with c(γ, T) > 0 defined in Theorem 2.

Proof: The desired result is a straightforward consequence of Theorem 2.

In [9, Corollary 2.16], a sufficient condition for (ε, δ)-differential privacy in the sense of Definition 2 is given. That result concludes that the differential privacy level of the input noise mechanism does not depend on the system itself. Similarly, (30) does not depend on the system matrices in (1) either. The difference in the Bayesian case is that (30) depends on the covariance Σ of the prior distribution of the signals to be protected.

Remark 1: It is clear from Corollary 2 and Theorem 3 that the minimum energy Gaussian noise satisfying the sufficient condition (30) for the privacy guarantee of the input noise mechanism can be easily obtained as

Σ_v* = c(γ, T)² R(ε, δ)² Σ.   (31)

This characterization allows the natural interpretation that large noise is needed to protect large inputs; see also the next section.

V. NUMERICAL EXAMPLE

Consider the feedback system in Fig. 4, where the plant and controller in (23) are given by

A_p = [ 1.2  −0.5 ; 1  0 ],  B_p = [ −0.3 ; 0 ],  C_p = [ 0.2  0 ],   (32)
A_c = [ 1  1 ; 0  0.1 ],  B_c = [ 0 ; −1 ],  C_c = [ 1.5  0 ].   (33)

Fig. 4. Feedback system with input noise mechanism.

The integral property of the controller enhances the low-frequency tracking performance. The Bode gain diagram is shown in Fig. 5.

Fig. 5. Bode gain diagram for the frequency domain reference model G_r and the transfer functions for the feedback system (23) with (32), (33).

Suppose that the reference r is the signal to be protected, and that it is public information that its spectrum is concentrated over the frequency range below 3×10⁻² rad/s. To represent this prior information, we took the frequency model for r as in Example 1, set to be a lowpass filter (generated by lowpass(xi, 3e-2) in MATLAB). Recall that U_T ∼ N(0, Σ_U) with (15) and (16).

We design input noise that makes the system Bayesian differentially private for γ = 0.5, T = 100, ε = 100, δ = 0.1. This leads to

c(γ, T) = 14.1657,  R(ε, δ) = 0.0774.   (34)


Fig. 6. Reference signal r(t) and plant output y_p(t) for the three mechanisms.

In what follows we compare the following three cases:
• noise-free;
• i.i.d. noise: V_T ∼ N_{(T+1)m}(0, σ_iid² I) with

Tr(σ_iid² I) = c(γ, T)² R(ε, δ)² λ_max(Σ_U)(T + 1)m = 121.604,   (35)

where σ_iid is the minimum σ satisfying (30); and
• the minimum noise obtained in Theorem 3: V_T ∼ N_{(T+1)m}(0, Σ_v*), Σ_v* := c(γ, T)² R(ε, δ)² Σ_U, with

Tr(Σ_v*) = 8.2574.   (36)

Fig. 6 shows the reference signal r(t) and the plant output y_p(t) for these three cases. It can be seen that the output error for the noise-free case is the smallest. This is because the (realized) trajectory of r is fully utilized, allowing for some possibility that information about r may leak from y_p. On the other hand, the other two cases guarantee the same level of Bayesian differential privacy. Note that the error fluctuation is suppressed in the minimum noise case. Statistically, the output fluctuation caused by the added noise v can be evaluated by (24). The value Tr(T_T Σ_v T_T^⊤)/(c(γ, T) R(ε, δ))² is given by Tr(T_T Σ_U T_T^⊤) = 8.1998 for the minimum noise case, which is smaller than λ_max(Σ_U) Tr(T_T T_T^⊤) = 55.4202 for the i.i.d. case.
case. Autom. Control, vol. 59, no. 2, pp. 341–354, Feb. 2014.
The interpretation is as follows: The i.i.d. noise has uniform [9] Y. Kawano and M. Cao, “Design of privacy-preserving dynamic con-
trollers,” IEEE Trans. Autom. Control, vol. 65, no. 9, pp. 3863–3878,
frequency distribution, which implies it adds more out-of-band Sep. 2020.
noise than the minimum one. However, this out-of-band com- [10] J. Cortés, G. E. Dullerud, S. Han, J. Le Ny, S. Mitra, and G. J. Pappas,
ponent does not contribute to the protection of r since it is “Differential privacy in control and network systems,” in Proc. 55th
IEEE Conf. Decis. Control, Dec. 2016, pp. 4252–4272.
easily distinguished from r thanks to its prior information in [11] V. Katewa, F. Pasqualetti, and V. Gupta, “On privacy vs cooperation
Fig. 5. Nevertheless, this out-of-band noise largely degrades in multi-agent systems,” Int. J. Control, vol. 91, no. 7, pp. 1693–1707,
the tracking performance. 2018.
[12] K. Ito, Y. Kawano, and K. Kashima, “Privacy protection with heavy-
Remark 2: The out-of-band noise as in the i.i.d. case is tailed noise for linear dynamical systems,” Automatica, to be published.
effective when the prior distribution of the signals to be pro- [13] A. Triastcyn and B. Faltings, “Bayesian differential privacy for machine
tected is not public information. That is, this noise can prevent learning,” in Proc. Int. Conf. Mach. Learn., Nov. 2020, pp. 9583–9592.
[14] Y. Kawano, K. Kashima, and M. Cao, “Modular control under privacy
the attacker from inferring the prior distribution, e.g., via the protection: Fundamental trade-offs,” Automatica, vol. 127, May 2021,
empirical Bayes. Art. no. 109518.
