Adjacent Initial States Based Differential Privacy - 2024 - Expert Systems With Applications
Keywords: Discrete event system; Privacy protection; Probability distribution; Supervisory control

Abstract: Privacy protection has received widespread attention from the community of discrete event systems to protect the sensitive information of users or organizations from being leaked. The existing privacy protection methods cannot protect the state information of probabilistic discrete event systems via repeated observations, which represents the information pertaining to system resource configurations. This work introduces differential privacy into the framework of probabilistic labeled Petri nets to solve the problems pertaining to initial state protection. For two initial states that are adjacent under a specified measure, a state differential privacy verification method is proposed by determining whether the probability distributions of observations generated from adjacent initial states are similar. An external attacker is unlikely to infer the initial state via repeated observations if the system satisfies state differential privacy for certain adjacent initial states. For a probabilistic labeled Petri net that does not satisfy state differential privacy, a supervisory control method is proposed for enforcement. A maximally permissive controller can be constructed based on the control specification proposed in this paper. Experimental studies show that the proposed method can effectively protect the privacy of given adjacent initial states.
∗ Corresponding author.
E-mail addresses: [email protected] (Y. Teng), [email protected] (L. Yin), [email protected] (Z. Li), [email protected]
(N. Wu).
https://doi.org/10.1016/j.eswa.2023.121454
Received 14 June 2023; Received in revised form 2 August 2023; Accepted 2 September 2023
Available online 7 September 2023
0957-4174/© 2023 Elsevier Ltd. All rights reserved.
To protect the language information of a deterministic transition system (a standard model of DESs) whose output is strings or words of symbols (non-numerical), an exponential mechanism is developed to obfuscate a sensitive string using a randomly chosen string that is likely to be near to it (Jones et al., 2019). The notion of ''near'' is evaluated by the Levenshtein distance, which is used to control the similarity or nearness of a sensitive string and its output counterpart. By using the framework of differential privacy, that work guarantees that the privatized behavior (information) reveals nothing meaningful regarding the underlying sensitive behavior (information). However, this method neither defines nor protects the state information of a symbolic control system, which motivates us to bring the idea of differential privacy to the DES area for state information protection. Petri nets are a graphical modeling tool for DESs with solid mathematical support and are well suited to analyzing stochastic or timed systems (Cabasino et al., 2014; Ding et al., 2018; Ma et al., 2020). In this work, we focus on the privatization of the initial state of DESs modeled with probabilistic Petri nets.

The initial state of a real-world DES usually represents its initial resource configuration, whose protection is of great significance. For example, in sequential auctions, the pre-determined auction sequence of items to be auctioned tremendously affects the strategic considerations of both bidders and auctioneers. Auctioneers do not want the auction sequence to be estimated by bidders based on the auctioned items. Initial state protection has received much attention from the community of DESs. The existing methods for protecting the initial state of a DES are developed through the notion of initial-state opacity (Basile et al., 2023; Tong et al., 2017a; Zhu et al., 2018). It is assumed that a malicious attacker (observer) fully knows the structure of a system but only partially observes the event occurrences in it (unobservable events are invisible to the attacker). The attacker does not know the initial state unless it can be inferred from observations. Given a secret described by a set of states, a system is said to be initial-state opaque with respect to the secret if the attacker is never able to infer that the initial state of the system is within the secret. However, some systems may be repeatedly observed by an external observer. An external observer can initialize a system multiple times to observe its behavior for a certain initial state. The initial state information of a DES under the probabilistic framework cannot be protected by the existing methods, as the initial state can be inferred by an external attacker via the probability distribution of the language generated from the system. Compared with the initial-state opacity methods, differential privacy can protect similar initial resource configurations modeled by adjacent initial states.

Research Gap and Innovation: The verification and implementation of differential privacy in DESs have not been well defined and fully explored. The documented results listed in Table 1 do not consider differential privacy and state information protection simultaneously in the DES framework. The existing literature uses the framework of differential privacy to protect the sensitive language information of a symbolic control system, but it does not consider the problems pertaining to initial state protection for repeatedly observable DESs. We propose a probabilistic model, called probabilistic labeled Petri nets (PLPNs), to estimate the likelihood of specific events occurring. If the probability distributions of observations generated from two initial states that are adjacent under a specified measure are similar, an attacker is unlikely to distinguish the two initial states via repeated observations. To this end, differential privacy is introduced to the community of DESs that are, in this research, modeled by PLPNs.

Contributions: This work addresses the problem of state information protection for repeatedly observable DESs. We achieve differential privacy verification and enforcement in the framework of PLPNs to protect adjacent initial states. The main contributions of this work are summarized as follows:

(1) The notion of state differential privacy is formulated to protect adjacent initial states, which are defined to represent similar initial resource configurations of a Petri net. We present the concept of PLPNs and provide a method for computing the probability distributions of the behavior of two PLPNs with adjacent initial states. A state differential privacy verification method is proposed to analyze whether the adjacent initial states are protected using PLPNs.

(2) A supervisory control method is proposed for the enforcement of state differential privacy to control the behavior of a PLPN. The strictest control specification is proposed to make a system satisfy 0-state differential privacy with adjacent initial states. A controller constructed by our supervisory control method is maximally permissive for the enforcement of 0-state differential privacy.

(3) Experimental studies show that the proposed method achieves the verification and enforcement of state differential privacy in the considered class of PLPNs.

The rest of the paper is organized as follows. Section 2 introduces the backgrounds of PLPNs and the notion of differential privacy in sensitive data security. Section 3 is a problem statement, which presents the notion of state differential privacy and two problems involving state differential privacy in PLPNs. The verification of state differential privacy is formulated in Section 4. Section 5 designs a controller for state differential privacy enforcement. A numerical example is shown in Section 6. Section 7 concludes this paper.

2. Preliminaries

In this section, we introduce the backgrounds of PLPNs and the standard concept of differential privacy. Let N be the set of non-negative integers and N+ be the set of positive integers.

2.1. Probabilistic labeled Petri nets

A Petri net is a structure 𝑁 = (𝑃, 𝑇, 𝑃𝑟𝑒, 𝑃𝑜𝑠𝑡), where 𝑃 is a finite set of 𝑚 places and 𝑇 is a finite set of 𝑛 transitions with 𝑃 ∪ 𝑇 ≠ ∅ and 𝑃 ∩ 𝑇 = ∅. 𝑃𝑟𝑒 ∶ 𝑃 × 𝑇 → N and 𝑃𝑜𝑠𝑡 ∶ 𝑃 × 𝑇 → N are the pre- and post-incidence functions that respectively specify the arcs directed from places to transitions and from transitions to places. Thanks to their definitions, the functions 𝑃𝑟𝑒 and 𝑃𝑜𝑠𝑡 can be tabulated in rectangular arrays and further represented by 𝑚 × 𝑛 matrices indexed by 𝑃 and 𝑇. The incidence matrix 𝐶 of a net is defined by 𝐶 = 𝑃𝑜𝑠𝑡 − 𝑃𝑟𝑒.

Let 𝑥 ∈ 𝑃 ∪ 𝑇 be a node in a Petri net. Its pre-set is defined as ∙𝑥 = {𝑦 ∈ 𝑃 ∪ 𝑇 ∣ 𝑃𝑟𝑒(𝑦, 𝑥) > 0} and its post-set is defined as 𝑥∙ = {𝑦 ∈ 𝑃 ∪ 𝑇 ∣ 𝑃𝑜𝑠𝑡(𝑦, 𝑥) > 0}. A node sequence 𝑥1𝑥2⋯𝑥𝑟 ∈ (𝑃 ∪ 𝑇)∗ in a Petri net is called a path if 𝑥𝑖 ∈ 𝑥𝑖−1∙ holds for 𝑖 = 2, …, 𝑟. A path 𝑥1𝑥2⋯𝑥𝑟 is said to be a cycle if 𝑥1 = 𝑥𝑟. A net is said to be acyclic or cycle-free if it has no cycle.

A marking or state of a Petri net is a mapping 𝑀 ∶ 𝑃 → N that assigns to each place of the net a non-negative number of tokens. A marking 𝑀 can be treated as a column vector indexed by 𝑃 for the sake of mathematical convenience. We use 𝑀(𝑝) to indicate the number of tokens in place 𝑝 at marking 𝑀. For economy of space, 𝑀 can be compactly written as a multi-set over the place set 𝑃, i.e., 𝑀 = ∑𝑝∈𝑃 𝑀(𝑝)𝑝. A Petri net system ⟨𝑁, 𝑀0⟩ is a net structure 𝑁 with an initial marking 𝑀0.

A transition 𝑡 is enabled at a marking 𝑀, denoted by 𝑀[𝑡⟩, if 𝑀 ≥ 𝑃𝑟𝑒(⋅, 𝑡), and may fire, yielding a marking 𝑀′ = 𝑀 + 𝐶(⋅, 𝑡), denoted as 𝑀[𝑡⟩𝑀′. The set of transitions enabled at a marking 𝑀 is 𝑇(𝑀) = {𝑡 ∈ 𝑇 ∣ 𝑀[𝑡⟩}. We write 𝑀[𝜎⟩ to denote that the sequence of transitions 𝜎 = 𝑡1⋯𝑡𝑘 ∈ 𝑇∗ (𝑘 ∈ N) is enabled at 𝑀, and 𝑀[𝜎⟩𝑀′ to denote that the firing of 𝜎 from 𝑀 yields 𝑀′. A marking 𝑀 is reachable in ⟨𝑁, 𝑀0⟩ if there exists a sequence 𝜎 such that 𝑀0[𝜎⟩𝑀 holds. The set of all markings reachable from 𝑀0 defines the reachability set of ⟨𝑁, 𝑀0⟩, denoted by 𝑅(𝑁, 𝑀0), i.e., 𝑅(𝑁, 𝑀0) = {𝑀 ∈ N^{|𝑃|} ∣ (∃𝜎 ∈ 𝑇∗) 𝑀0[𝜎⟩𝑀}.
Table 1
Recent research contributions.

Literature | Focused differential privacy | Focused DES privacy protection | Focused language information protection | Focused state information protection
Hassan et al. (2020) | * | | |
Jiang et al. (2023) | * | | |
Gu and Zhang (2023) | * | | |
McSherry (2010) | * | | |
Soria-Comas et al. (2017) | * | | |
Yin et al. (2018) | * | | |
Zhu et al. (2017) | * | | |
Dwork (2008) | * | | |
Jones et al. (2019) | * | * | * |
Basile et al. (2023) | | * | | *
Tong et al. (2017a) | | * | | *
Zhu et al. (2018) | | * | | *
Current research | * | * | | *
A labeled Petri net (LPN) is a four-tuple (𝑁, 𝑀0, 𝐸, 𝓁), where ⟨𝑁, 𝑀0⟩ is a Petri net system, 𝐸 is the alphabet (a finite set of labels), and 𝓁 ∶ 𝑇 → 𝐸 ∪ {𝜀} is the labeling function that assigns to each transition 𝑡 ∈ 𝑇 either a symbol from 𝐸 or the empty word 𝜀. The transitions of an LPN can be partitioned into two disjoint sets 𝑇 = 𝑇𝑜 ∪ 𝑇𝑢, where 𝑇𝑜 = {𝑡 ∈ 𝑇 ∣ 𝓁(𝑡) ∈ 𝐸} is the set of observable transitions and 𝑇𝑢 = 𝑇 ∖ 𝑇𝑜 = {𝑡 ∈ 𝑇 ∣ 𝓁(𝑡) = 𝜀} is the set of unobservable transitions. The labeling function can be extended to firing sequences, 𝓁 ∶ 𝑇∗ → 𝐸∗, i.e., 𝓁(𝜀) = 𝜀 and 𝓁(𝜎𝑡) = 𝓁(𝜎)𝓁(𝑡) with 𝜎 ∈ 𝑇∗ and 𝑡 ∈ 𝑇.

Given an LPN (𝑁, 𝑀0, 𝐸, 𝓁) and a marking 𝑀 ∈ 𝑅(𝑁, 𝑀0), we define the language generated from 𝑀 as

ℒ(𝑁, 𝑀) = {𝜔 ∈ 𝐸∗ ∣ ∃𝜎 ∈ 𝑇∗ ∶ 𝑀[𝜎⟩ ∧ 𝓁(𝜎) = 𝜔}.

A string belonging to ℒ(𝑁, 𝑀0) is called an observation. To investigate the differential privacy problems in DESs, this paper proposes a new formalism called probabilistic labeled Petri nets, motivated by the work in Cabasino et al. (2015).

Definition 1. [Probabilistic Labeled Petri Nets] A probabilistic labeled Petri net 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵) is a five-tuple such that the following statements hold:

1. (𝑁, 𝑀0, 𝐸, 𝓁) is a labeled Petri net.
2. 𝐵 ∶ 𝑅(𝑁, 𝑀0) × 𝑇 → [0, 1] is a mapping that assigns to a pair of a reachable marking 𝑀 and a transition 𝑡 ∈ 𝑇 a real number between 0 and 1 such that ∑𝑡∈𝑇(𝑀) 𝐵(𝑀, 𝑡) = 1; 𝐵(𝑀, 𝑡) indicates the firing probability of the transition 𝑡 at the marking 𝑀. ⋄

In what follows, 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) is called a PLPN structure, i.e., a PLPN 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵) is a net structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) with an initial marking 𝑀0. We write 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵) as 𝐺(𝑀0) if 𝐺𝑠 is implicitly defined.

2.2. Differential privacy

Traditionally, a randomized algorithm is said to satisfy differential privacy if an attacker is unlikely to distinguish the output distributions of two data sets differing on at most one element. It guarantees that a randomized algorithm behaves similarly on similar (or adjacent) input data sets. Let R be the set of real numbers.

Two data sets 𝐷1 and 𝐷2 are said to be adjacent if they differ on at most one element, i.e., either 𝐷1 = 𝐷2 or there exists a datum 𝑑 such that 𝐷1 ∪ {𝑑} = 𝐷2 or 𝐷2 ∪ {𝑑} = 𝐷1 (Dwork & Roth, 2013).

By Chaudhuri et al. (2011), a randomized algorithm 𝒜 satisfies 𝜖-differential privacy if for any two adjacent input data sets 𝐷1 and 𝐷2, and for any set of outputs 𝑂,

exp(−𝜖) × P(𝒜(𝐷2) ∈ 𝑂) ≤ P(𝒜(𝐷1) ∈ 𝑂) ≤ exp(𝜖) × P(𝒜(𝐷2) ∈ 𝑂),

where 𝒜(𝐷1) (or 𝒜(𝐷2)) is the output of 𝒜 on input 𝐷1 (or 𝐷2), P ∶ 𝑂 → (0, 1] is the probability function, mapping an output of 𝒜 to a real number between zero and one (including one), and 𝜖 ∈ R with 𝜖 ≥ 0 is the privacy budget parameter that stipulates the level of privacy protection.

3. Problem statement

In this section, differential privacy is introduced to protect the initial state of DESs modeled by PLPNs. We first define our notation and establish its mathematical developments below.

3.1. State differential privacy

Let 𝑎, 𝑏 ∈ N be two non-negative integers. We associate 𝑎 and 𝑏 with a binary scalar 𝜉(𝑎, 𝑏), defined as follows:

𝜉(𝑎, 𝑏) = 1 if 𝑎 ≠ 𝑏, and 𝜉(𝑎, 𝑏) = 0 if 𝑎 = 𝑏.

Definition 2. [Adjacent States] Given a net structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵), a set of possible initial markings ℳ0, and two markings (states) 𝑀1, 𝑀2 ∈ ℳ0, 𝑀1 and 𝑀2 are said to be 𝜃-adjacent if

∑𝑝∈𝑃 𝜉(𝑀1(𝑝), 𝑀2(𝑝)) = 𝜃 and ∑𝑝∈𝑃 𝑀1(𝑝) = ∑𝑝∈𝑃 𝑀2(𝑝),

where 𝜃 ∈ N+ is usually close to zero. ⋄

In plain words, two markings or states 𝑀1 and 𝑀2 are 𝜃-adjacent if (1) the number of places containing different numbers of tokens at 𝑀1 and 𝑀2 is 𝜃, and (2) the token sums at 𝑀1 and 𝑀2 are equal. Accordingly, either of two adjacent states 𝑀1 and 𝑀2 can be assigned to a PLPN as an initial marking. In this case, they are called 𝜃-adjacent initial states for the underlying PLPN structure.

Given a transition sequence 𝜎 = 𝑡1𝑡2⋯𝑡𝑘 ∈ 𝑇∗, where for all 𝑖, 𝑗 ∈ {1, 2, …, 𝑘}, 𝑖 ≠ 𝑗 does not necessarily imply 𝑡𝑖 ≠ 𝑡𝑗, 𝑘 is said to be the length of 𝜎, denoted by |𝜎|. The 𝑖th element of 𝜎 is denoted by 𝜎[𝑖].

Suppose that an attacker observes an observation 𝜔 ∈ 𝐸∗ at a time instance, where the length of 𝜔 is 𝑘 ∈ N (called a current observation length), i.e., |𝜔| = 𝑘. We now find the firing transition sequences from the initial marking whose corresponding observations are bounded by length 𝑘.

Definition 3. [Firing Transition Sequences] Given a PLPN 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵), the set of firing transition sequences generated from 𝑀0 bounded by an observation length 𝑘 ∈ N is defined as

ℒ𝑓(𝑀0, 𝑘) = {𝜎 ∈ 𝑇∗ ∣ 𝑀0[𝜎⟩, 𝓁(𝜎) = 𝜔, |𝜔| ≤ 𝑘}. ⋄
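Definition 2 above can be checked mechanically. The sketch below, with hypothetical function names, tests 𝜃-adjacency of two markings given as integer tuples indexed by the place set 𝑃: it counts the places whose token counts differ via 𝜉 and compares the token sums.

```python
from typing import Tuple

def xi(a: int, b: int) -> int:
    """Binary scalar xi(a, b): 1 if a != b, 0 if a = b."""
    return 1 if a != b else 0

def is_theta_adjacent(m1: Tuple[int, ...], m2: Tuple[int, ...], theta: int) -> bool:
    assert len(m1) == len(m2), "markings must be indexed by the same place set P"
    differing_places = sum(xi(a, b) for a, b in zip(m1, m2))   # sum of xi(M1(p), M2(p))
    same_token_sum = sum(m1) == sum(m2)                        # equal total token counts
    return differing_places == theta and same_token_sum

# Hypothetical 4-place example: one token moves from the first place to the
# second, so the two markings are 2-adjacent.
assert is_theta_adjacent((1, 0, 1, 0), (0, 1, 1, 0), theta=2)
```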
Definition 4. [Observations] Given a PLPN 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵), the set of all observations generated from 𝑀0 bounded by 𝑘 ∈ N is defined as

ℒ𝑜(𝑀0, 𝑘) = {𝜔 ∈ 𝐸∗ ∣ 𝜔 = 𝓁(𝜎), 𝜎 ∈ ℒ𝑓(𝑀0, 𝑘)}. ⋄

Definition 5. [Probability Distributions] Given a PLPN structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵), a set of possible initial markings ℳ0, and 𝑀 ∈ ℳ0, a function Pr ∶ {𝑀} × ℒ𝑜(𝑀, 𝑘) → (0, 1] is the probability distribution, mapping a pair of an initial marking and an observation to a real number between zero and one (including one). ⋄

Given a net structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) and two 𝜃-adjacent initial states 𝑀1, 𝑀2 (leading to two PLPNs 𝐺(𝑀1) and 𝐺(𝑀2)), let ℳ0 = {𝑀1, 𝑀2}. The probability distribution function of adjacent initial states can be extended to Pr ∶ ℳ0 × (ℒ𝑜(𝑀1, 𝑘) ∪ ℒ𝑜(𝑀2, 𝑘)) → [0, 1], mapping a pair of an adjacent initial state and an observation generated from the two 𝜃-adjacent initial states to a real number between zero and one (including zero and one).

Definition 6. [State Differential Privacy] Given a net structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) and two 𝜃-adjacent initial states 𝑀1, 𝑀2 (leading to two PLPNs 𝐺(𝑀1) and 𝐺(𝑀2)), 𝐺(𝑀1) and 𝐺(𝑀2) satisfy 𝜖-state differential privacy if for all 𝑘 ∈ N and for all 𝜔 ∈ ℒ𝑜(𝑀1, 𝑘) ∪ ℒ𝑜(𝑀2, 𝑘), it holds that

exp(−𝜖) × Pr(𝑀2, 𝜔) ≤ Pr(𝑀1, 𝜔) ≤ exp(𝜖) × Pr(𝑀2, 𝜔),

where the parameter 𝜖 ∈ R with 𝜖 ≥ 0 stipulates the level of privacy protection. ⋄

If, under observation for an arbitrarily long time, the probability distributions of generating all observations from two 𝜃-adjacent initial states are similar, the PLPN satisfies state differential privacy with the adjacent initial states, and an attacker is unlikely to distinguish the two adjacent initial states. In this paper, we restrict ourselves to state differential privacy.

3.2. Problems

In this paper, we consider two problems involving the previously defined state differential privacy in PLPNs. First, this work verifies whether the private information regarding the initial state of a DES modeled with a PLPN is protected.

Problem 1. Given a PLPN structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) and two 𝜃-adjacent initial states 𝑀0 and 𝑀0′, verify whether the PLPN satisfies 𝜖-state differential privacy with 𝑀0 and 𝑀0′ by the probability distributions of generating observations bounded by the length 𝑘 from 𝑀0 and 𝑀0′.

Next, we propose a controller to supervise the behavior of a PLPN such that the controlled system satisfies state differential privacy.

Problem 2. Given a PLPN structure 𝐺𝑠 = (𝑁, 𝐸, 𝓁, 𝐵) and two 𝜃-adjacent initial states 𝑀0 and 𝑀0′, construct a controller 𝐺𝑐 = (ℳ𝑐, 𝑇𝑐, 𝛥𝑐, 𝑀𝑐0) such that the PLPN satisfies 0-state differential privacy with 𝑀0 and 𝑀0′ and the controller is maximally permissive.

A solution to Problem 2 would ensure that an attacker is unlikely to infer the initial state of the PLPN by observation for an arbitrarily long time.

4. Verification of state differential privacy

This section verifies whether a PLPN satisfies state differential privacy with respect to two adjacent initial states by means of the probability distributions of observations generated from the two adjacent initial states. A method for computing the probability distribution of observations generated from an initial state in a PLPN is provided first.

Given a finite set of transitions 𝑇, the concatenation operation over 𝑇∗ is a function 𝑐𝑎𝑡 ∶ 𝑇∗ × 𝑇∗ → 𝑇∗ defined by 𝑐𝑎𝑡(𝑥, 𝑥′) = 𝑥𝑥′, where 𝑥, 𝑥′ ∈ 𝑇∗. For example, given 𝜎 = 𝑡1𝑡2𝑡3𝑡4 ∈ 𝑇∗ and 𝑡5 ∈ 𝑇∗, 𝑐𝑎𝑡(𝜎, 𝑡5) = 𝑡1𝑡2𝑡3𝑡4𝑡5 ∈ 𝑇∗ holds.

The set of firing transition sequences that are consistent with an observation 𝜔 generated at 𝑀0 is defined as

𝑆(𝜔, 𝑀0) = {𝜎 ∈ 𝑇∗ ∣ 𝑀0[𝜎⟩, 𝓁(𝜎) = 𝜔}.

Before the probability of generating an observation is computed, let us first present the probability computation of a firing transition sequence. Given a PLPN 𝐺 = (𝑁, 𝑀0, 𝐸, 𝓁, 𝐵), let 𝜎 ∈ ℒ𝑓(𝑀0, 𝑘) be a (feasible) firing transition sequence such that 𝑀0[𝜎[1]⟩𝑀1[𝜎[2]⟩𝑀2 ⋯ [𝜎[|𝜎|]⟩𝑀|𝜎|. The probability of generating 𝜎 from 𝑀0 is defined as

Pr(𝑀0, 𝜎) = ∏𝑖=1…|𝜎| 𝐵(𝑀𝑖−1, 𝜎[𝑖]) × 𝐵(𝑀|𝜎|, 𝜀),   (1)

where 𝐵(𝑀|𝜎|, 𝜀) ∈ (0, 1] is the probability that no transition fires at 𝑀|𝜎|.

The probability of generating a firing transition sequence 𝜎 from the initial state 𝑀0 is thus the product of the firing probabilities of the transitions contained in 𝜎 and the probability, denoted by 𝐵(𝑀|𝜎|, 𝜀), that no transition fires at 𝑀|𝜎| after generating 𝜎.

Since the generation of 𝜎 and of the firing transition sequences of the form 𝑐𝑎𝑡(𝜎, 𝜎′) (𝜎′ ∈ 𝑇𝑇∗) are independent, 𝐵(𝑀|𝜎|, 𝜀) indicates the probability that no sequence 𝜎′ ∈ 𝑇∗ is generated from 𝑀|𝜎| (𝑀0[𝜎⟩𝑀|𝜎|). For example, given two firing transition sequences 𝑡1𝑡2 such that 𝑀0[𝑡1⟩𝑀1[𝑡2⟩𝑀2 and 𝑡1𝑡2𝑡3 such that 𝑀0[𝑡1⟩𝑀1[𝑡2⟩𝑀2[𝑡3⟩𝑀3, we have Pr(𝑀0, 𝑡1𝑡2) = 𝐵(𝑀0, 𝑡1) × 𝐵(𝑀1, 𝑡2) × 𝐵(𝑀2, 𝜀), where 𝐵(𝑀2, 𝜀), expressing that no transition fires at 𝑀2, is exactly the probability that 𝑡3 does not fire at 𝑀2.

The probability of generating an observation 𝜔 is the sum of the firing probabilities of all the firing transition sequences that are consistent with 𝜔. For an observation 𝜔 ∈ ℒ𝑜(𝑀0, 𝑘), the probability of 𝜔 being generated from 𝑀0 is defined as

Pr(𝑀0, 𝜔) = ∑𝜎∈𝑆(𝜔,𝑀0) Pr(𝑀0, 𝜎).   (2)

Algorithm 1 computes the probability distribution of generating all the observations bounded by length 𝑘 from an initial state. The number of transitions enabled at a state 𝑀 is denoted by |𝑇(𝑀)|. The firing probability of any enabled transition 𝑡 at 𝑀 with 𝑀[𝑡⟩ is 𝐵(𝑀, 𝑡) = 1∕(|𝑇(𝑀)| + 1). For a firing transition sequence 𝜎 ∈ ℒ𝑓(𝑀0, 𝑘), the probability of generating 𝜎 from the initial state 𝑀0 is denoted by Pr(𝑀0, 𝜎). If 𝑀 is the marking obtained by firing 𝜎 from 𝑀0, i.e., 𝑀0[𝜎⟩𝑀, and the length of 𝓁(𝜎) is less than 𝑘, the probability that no transition fires at 𝑀 is 𝐵(𝑀, 𝜀) = 1∕(|𝑇(𝑀)| + 1). If the length of 𝓁(𝜎) is equal to 𝑘, then no transition fires at 𝑀, i.e., 𝐵(𝑀, 𝜀) = 1. Let Pr(𝑀0, 𝜎) = ∏𝑖 𝐵(𝑀𝑖−1, 𝜎[𝑖]) × 𝐵(𝑀, 𝜀), where 1 ≤ 𝑖 ≤ |𝜎|, 𝑖 ∈ N+, 𝑀𝑖−1[𝜎[𝑖]⟩𝑀𝑖, and 𝑀0[𝜎⟩𝑀. The obtained Pr(𝑀0, 𝜎) is the product of the firing probabilities of the transitions contained in 𝜎 and the probability that no transition fires at 𝑀 with 𝑀0[𝜎⟩𝑀, as shown in Eq. (1). The probability of generating an observation 𝜔, denoted by Pr(𝑀0, 𝜔), is the sum of the firing probabilities of all firing transition sequences that are consistent with 𝜔 from 𝑀0, as shown in Eq. (2). Algorithm 1 formalizes the computation given in Eqs. (1) and (2).

Notice that to apply the proposed approach, two assumptions are made:

(A1) there is no unobservable cycle in a PLPN; and
(A2) all enabled transitions at a reachable state in a PLPN are equally likely to fire.

Assumption (A1) guarantees that the sets ℒ𝑓(𝑀0, 𝑘) and ℒ𝑜(𝑀0, 𝑘) are finite, and thus Algorithm 1 terminates within finite time. Assumption (A2) provides a basis for computing the probability distributions of generating observations from an initial state.
(as defined in Tong et al. (2017b)) based on the reachability graphs (RGs) of Petri nets.

An RG of a bounded Petri net system (𝑃, 𝑇, 𝑃𝑟𝑒, 𝑃𝑜𝑠𝑡, 𝑀0) can be regarded as a finite state automaton 𝐺𝑟 = (ℳ, 𝑇, 𝛥, 𝑀0), where ℳ = 𝑅(𝑁, 𝑀0) is a finite set of states, the set of transitions 𝑇 is the alphabet, 𝑀0 ∈ ℳ is the initial state, and 𝛥 ⊆ ℳ × 𝑇 × ℳ is the transition relation (it is actually a partial function). Given a PLPN structure 𝐺𝑠 with transition set 𝑇 and two 𝜃-adjacent initial states 𝑀0, 𝑀0′ (leading to two PLPNs 𝐺(𝑀0) and 𝐺(𝑀0′)), we accordingly obtain two RGs 𝐺𝑟1 = (ℳ1, 𝑇, 𝛥1, 𝑀0) and 𝐺𝑟2 = (ℳ2, 𝑇, 𝛥2, 𝑀0′), respectively.

Given a state 𝑀 ∈ ℳ in 𝐺𝑟 = (ℳ, 𝑇, 𝛥, 𝑀0), the set of one-step reachable states of 𝑀 is defined as

𝒩(𝑀) = {𝑀′ ∈ ℳ ∣ (∃𝑡 ∈ 𝑇) (𝑀, 𝑡, 𝑀′) ∈ 𝛥}.

Accordingly, the set of transitions firing from 𝑀 to 𝑀′ is defined as

𝒯(𝑀, 𝑀′) = {𝑡 ∈ 𝑇 ∣ (𝑀, 𝑡, 𝑀′) ∈ 𝛥}.

Algorithm 2: Construction of an extended RG
Input: Two RGs 𝐺𝑟1 = (ℳ1, 𝑇, 𝛥1, 𝑀0) and 𝐺𝑟2 = (ℳ2, 𝑇, 𝛥2, 𝑀0′)
Output: An extended RG 𝐺𝑒 = (ℳ𝑒, 𝑇𝑒, 𝛥𝑒, 𝑀𝑒0) obtained from 𝐺𝑟1 and 𝐺𝑟2
1 𝑀𝑒0 ← (𝑀0, 𝑀0′); ℳ𝑒 ← {𝑀𝑒0}; ℳ𝑒′ ← {𝑀𝑒0}; 𝑇𝑒 ← ∅; 𝛥𝑒 ← ∅;
2 foreach (𝑀𝑎, 𝑀𝑏) ∈ ℳ𝑒′ do
3   foreach 𝑀𝑎′ ∈ 𝒩(𝑀𝑎) do
4     foreach 𝑀𝑏′ ∈ 𝒩(𝑀𝑏) do
5       ℳ𝑒 ← ℳ𝑒 ∪ {(𝑀𝑎′, 𝑀𝑏′)};
6       ℳ𝑒′ ← ℳ𝑒′ ∪ {(𝑀𝑎′, 𝑀𝑏′)};
7       𝑇𝑒 ← 𝑇𝑒 ∪ (𝒯(𝑀𝑎, 𝑀𝑎′) × 𝒯(𝑀𝑏, 𝑀𝑏′));
8       𝛥𝑒 ← 𝛥𝑒 ∪ {((𝑀𝑎, 𝑀𝑏), 𝒯(𝑀𝑎, 𝑀𝑎′) × 𝒯(𝑀𝑏, 𝑀𝑏′), (𝑀𝑎′, 𝑀𝑏′))};
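The loop structure of Algorithm 2, as far as it is reproduced above, pairs every one-step successor of 𝑀𝑎 with every one-step successor of 𝑀𝑏 and labels the edge with the Cartesian product of the corresponding candidate transition sets. The following is a hedged sketch of that construction with hypothetical names; RGs are given as sets of (𝑀, 𝑡, 𝑀′) triples, a visited-state check (not shown in the extracted pseudocode) is added so the exploration terminates, and any filtering performed later by the paper's method is omitted.

```python
from collections import deque

def one_step(delta, m):
    """N(M): one-step reachable states of M in an RG."""
    return {m2 for (m1, _t, m2) in delta if m1 == m}

def trans_between(delta, m, m2):
    """T(M, M'): transitions firing from M to M'."""
    return {t for (m1, t, m3) in delta if m1 == m and m3 == m2}

def extended_rg(delta1, m0, delta2, m0p):
    """Pair-wise exploration following lines 1-8 of Algorithm 2 as extracted."""
    me0 = (m0, m0p)
    states, edges = {me0}, set()
    work = deque([me0])
    while work:
        ma, mb = work.popleft()
        for ma2 in one_step(delta1, ma):
            for mb2 in one_step(delta2, mb):
                # Edge label: Cartesian product of the two candidate transition sets.
                cand = frozenset((ta, tb)
                                 for ta in trans_between(delta1, ma, ma2)
                                 for tb in trans_between(delta2, mb, mb2))
                edges.add(((ma, mb), cand, (ma2, mb2)))
                if (ma2, mb2) not in states:      # visited check added for termination
                    states.add((ma2, mb2))
                    work.append((ma2, mb2))
    return me0, states, edges
```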
production starting from 𝑀𝑒0 and ending at 𝑀𝑒 such that the length of any production from 𝑀𝑒0 to any other state in the cycle is greater than that of the production from 𝑀𝑒0 to 𝑀𝑒.

Definition 8. [Profit Function] Given an extended RG 𝐺𝑒 = (ℳ𝑒, 𝑇𝑒, 𝛥𝑒, 𝑀𝑒0) obtained from two RGs 𝐺𝑟1 = (ℳ1, 𝑇, 𝛥1, 𝑀0) and 𝐺𝑟2 = (ℳ2, 𝑇, 𝛥2, 𝑀0′), 𝒫 ∶ ℳ𝑒 → N ∪ {−1} is said to be a profit function, mapping a state in 𝐺𝑒 to a natural number or −1. ⋄

The computation of a profit function can be conducted by the following steps in a recursive way.

(1) All the states in 𝐺𝑒 are first initialized to a natural number or −1.
(1.1) For all 𝑀𝑒 ∈ ℳ𝑒, if 𝒩(𝑀𝑒) = ∅, the profit of 𝑀𝑒 is initialized to zero and 𝑀𝑒 is assigned the tag ''old''.
(1.2) For all 𝑀𝑒 ∈ ℳ𝑒, if 𝒩(𝑀𝑒) ≠ ∅ and 𝑀𝑒 is a first-cycled state, the profit of 𝑀𝑒 is initialized to |ℳ𝑒| × |𝑇| and 𝑀𝑒 is assigned the tag ''old''.
(1.3) If the conditions in (1.1) and (1.2) are not satisfied, the profit of 𝑀𝑒 is initialized to −1 and 𝑀𝑒 is assigned no tag.

(2) A set ℳ𝑒′ is initialized by ℳ𝑒′ = ℳ𝑒. Given a state 𝑀𝑒 ∈ ℳ𝑒′, if the tags of all 𝑀𝑒′ ∈ 𝒩(𝑀𝑒) are ''old'', the profit of 𝑀𝑒 needs to be re-computed. To this end, let us first reset the profits of all 𝑀𝑒′ ∈ 𝒩(𝑀𝑒). Let 𝑀𝑒 = (𝑀𝑎, 𝑀𝑏). For all 𝑒 ∈ 𝐸 ∪ {𝜀}, the sets of one-step reachable states from 𝑀𝑎 in 𝐺𝑟1 and from 𝑀𝑏 in 𝐺𝑟2 by firing a candidate transition 𝑡 ∈ 𝒯(𝑀𝑒, 𝑀𝑒′) with 𝓁(𝑡) = 𝑒 are denoted by 𝒩𝑎(𝑀𝑒, 𝑒) and 𝒩𝑏(𝑀𝑒, 𝑒), respectively.
(2.1) For any 𝑀𝑒′ ∈ 𝒩(𝑀𝑒), if 𝒯(𝑀𝑒, 𝑀𝑒′) = ∅, reset 𝒫(𝑀𝑒′) = −1.
(2.2) If |𝒩𝑎(𝑀𝑒, 𝑒)| > |𝒩𝑏(𝑀𝑒, 𝑒)|, select |𝒩𝑎(𝑀𝑒, 𝑒)| − |𝒩𝑏(𝑀𝑒, 𝑒)| states from 𝒩(𝑀𝑒) ∩ (𝒩𝑎(𝑀𝑒, 𝑒) × ℳ2) such that the profit of

where the increment associated with moving from 𝑀𝑒 to 𝑀𝑒′ = (𝑀𝑎′, 𝑀𝑏′) is |𝒯(𝑀𝑒, 𝑀𝑒′) ∩ 𝒯(𝑀𝑎, 𝑀𝑎′)|. The tag of 𝑀𝑒 is reset to ''old'' and the state 𝑀𝑒 is removed from ℳ𝑒′. This procedure runs iteratively until there is no state left in ℳ𝑒′.

Definition 8 is introduced to select the states from 𝐺𝑒 for a controller. As seen later, a state 𝑀𝑒 ∈ ℳ𝑒 with 𝒫(𝑀𝑒) = −1 will be forbidden by a controller. The computation of a profit function is implemented by Algorithm 4. Lines 1–7, 8–15, and 16–26 achieve steps (1), (2.1), and (2.2)–(2.4), respectively.

Example 3. Let us consider again the extended RG 𝐺𝑒 in Fig. 3. We first perform step (1) of the computation of the profit function of the states in 𝐺𝑒. Since 𝒩(𝑀𝑒4) = ∅, the profit of 𝑀𝑒4 is initialized to zero. Noting that 𝑀𝑒0 is a first-cycled state and |ℳ𝑒| × |𝑇| = 20, the profit of 𝑀𝑒0 is initialized to 20. The profits of the other states in 𝐺𝑒 are initialized to −1. For the states 𝑀𝑒2 and 𝑀𝑒4, since 𝒯(𝑀𝑒2, 𝑀𝑒4) = ∅, the profit of 𝑀𝑒4 is reset to −1 due to step (2.1) of the computation. Since the increment from 𝑀𝑒3 to 𝑀𝑒0 is 1, the profit of 𝑀𝑒3 is equal to 21 by Eq. (3).

Consider the state 𝑀𝑒2 = (𝑀2, 𝑀2′) and the label 𝜀: we have 𝒩𝑎(𝑀2, 𝜀) = {𝑀3} and 𝒩𝑏(𝑀2′, 𝜀) = {𝑀3′}. By step (2.4) of the computation, we do not have to reset the profit of 𝑀𝑒3. Since the increment from 𝑀𝑒2 to 𝑀𝑒3 is 1, we find 𝒫(𝑀𝑒2) = 𝒫(𝑀𝑒3) + 1 = 22 by Eq. (3). Similarly, we obtain 𝒫(𝑀𝑒1) = 𝒫(𝑀𝑒2) + 1 = 23 and reset 𝒫(𝑀𝑒0) = 𝒫(𝑀𝑒1) + 1 = 24. ■

Definition 9. [Controller] Given an extended RG 𝐺𝑒 = (ℳ𝑒, 𝑇𝑒, 𝛥𝑒, 𝑀𝑒0) obtained from two RGs 𝐺𝑟1 = (ℳ1, 𝑇, 𝛥1, 𝑀0) and 𝐺𝑟2 = (ℳ2, 𝑇, 𝛥2, 𝑀0′) with the profit function 𝒫, a controller 𝐺𝑐 = (ℳ𝑐, 𝑇𝑐, 𝛥𝑐, 𝑀𝑐0) is a finite state automaton such that the following statements hold:

1. 𝛥𝑐 is the transition relation, defined as:
Theorem 2. The controller 𝐺𝑐 obtained by Algorithm 5 is maximally permissive for the enforcement of 0-state differential privacy.

Proof. To show that 𝐺𝑐 is maximally permissive for the enforcement of 0-state differential privacy, we need to prove that there is no controller whose state set and transition relation are larger than those of 𝐺𝑐.

Given 𝐺𝑐 = (ℳ𝑐, 𝑇𝑐, 𝛥𝑐, 𝑀𝑐0) derived from an extended RG 𝐺𝑒 = (ℳ𝑒, 𝑇𝑒, 𝛥𝑒, 𝑀𝑒0) with the profit function 𝒫, for all 𝑀𝑒 = (𝑀𝑎, 𝑀𝑏) ∈ ℳ𝑒 and all 𝑒 ∈ 𝐸 ∪ {𝜀}, by resetting the profits of all one-step reachable states of 𝑀𝑒, the numbers of one-step reachable states from 𝑀𝑎 in 𝐺𝑟1 and from 𝑀𝑏 in 𝐺𝑟2 by firing a candidate transition 𝑡 with 𝓁(𝑡) = 𝑒 are the same. In this way, the controlled PLPNs 𝐺𝑐(𝑀0) and 𝐺𝑐(𝑀0′) can reach the maximum number of states while satisfying 0-state differential privacy.

For all 𝑀𝑒 = (𝑀𝑎, 𝑀𝑏) and all 𝑀𝑒′ = (𝑀𝑎′, 𝑀𝑏′) ∈ 𝒩(𝑀𝑒), the set 𝒯(𝑀𝑒, 𝑀𝑒′) denotes all candidate transitions from 𝑀𝑒 to 𝑀𝑒′. By computing the candidate transitions between a state and its one-step reachable states in 𝐺𝑒, the number of transitions from a state to its one-step reachable states in 𝐺𝑐 is maximal while the firing probabilities of any label from 𝑀𝑎 yielding 𝑀𝑎′ and from 𝑀𝑏 yielding 𝑀𝑏′ are equal. Since the selection of states in 𝐺𝑒 as the states of 𝐺𝑐 follows the profit function, and the profit of a state 𝑀𝑒 is in fact the maximum number of transitions from 𝑀𝑒 to any state in 𝐺𝑒, the controlled PLPNs 𝐺𝑐(𝑀0) and 𝐺𝑐(𝑀0′) contain the maximum number of transition relations satisfying 0-state differential privacy. ■

Example 4. Consider again the extended RG 𝐺𝑒 in Fig. 3 with the profit function 𝒫. A controller can be constructed for the two 2-adjacent initial states 𝑀0 and 𝑀0′. The initial state 𝑀𝑐0 of 𝐺𝑐 is that of 𝐺𝑒, i.e., 𝑀𝑐0 = 𝑀𝑒0. By 𝒩(𝑀𝑒0) = {𝑀𝑒1}, 𝒫(𝑀𝑒1) ≥ 0, and 𝒯(𝑀𝑒0, 𝑀𝑒1) = {𝑡1}, we obtain 𝑀𝑐1 = 𝑀𝑒1 and (𝑀𝑐0, {𝑡1}, 𝑀𝑐1) ∈ 𝛥𝑐. Similarly, 𝑀𝑐2 = 𝑀𝑒2 and (𝑀𝑐1, {𝑡2}, 𝑀𝑐2) ∈ 𝛥𝑐 hold. For 𝑀𝑐2, by 𝒩(𝑀𝑒2) = {𝑀𝑒3, 𝑀𝑒4}, 𝒫(𝑀𝑒3) ≥ 0, 𝒯(𝑀𝑒2, 𝑀𝑒3) = {𝑡3}, and 𝒫(𝑀𝑒4) < 0, 𝑀𝑐3 = 𝑀𝑒3 holds and 𝑀𝑒4 is not a state of 𝐺𝑐. It holds that (𝑀𝑐2, {𝑡3}, 𝑀𝑐3) ∈ 𝛥𝑐. Furthermore, we obtain (𝑀𝑐3, {𝑡4}, 𝑀𝑐0) ∈ 𝛥𝑐. The controller 𝐺𝑐 is shown in Fig. 4. ■

An example of a hospital management system is provided to illustrate the proposed method. Based on the public database of medical examinations for diseases recorded in a hospital, an external observer can obtain the probability distributions of the observed behavior of patients with different diseases performing examinations in the hospital. An attacker (a malicious observer) may obtain the location information of a large number of patients in the surgical building by invading mobile base stations. Due to limited detection accuracy, the attacker only knows which area the patients have gone to but does not know the specific initial department. Patients in similar initial locations are considered to depart from the same initial department, meaning that these patients have the same disease information. The disease information of patients varies depending on the location of the initial department. For patients with similar initial locations, the attacker knows that these patients are from the same initial department, but does not know what their initial department is.

An attacker can obtain the probability distribution of the observed behavior of patients from the same initial department by observing the subsequent examinations performed by these patients. The attacker may infer the initial department by comparing the observed probability distribution with the previously obtained probability distributions, that is, by obtaining the disease information of the patients. In this way, the attacker can achieve the purpose of recommending drugs to certain patients. The disease information of patients is not intended to be disclosed to the public.

The proposed method introduces differential privacy into hospital management systems to protect the disease information of patients. For patients from two different initial departments, if the probability distributions of the observed behavior of the patients performing examinations in a hospital are similar, an attacker is unlikely to infer the initial department of the patients. Therefore, the proposed method protects the privacy of patients.

A PLPN is used to simulate the behavior of patients performing examinations in a hospital, whose structure 𝐺2𝑠 is shown in Fig. 5. It is assumed that an external attacker fully knows the structure of the hospital management system, but only partially observes the event occurrences in it. For 𝐺2𝑠, the labels 𝑎, 𝑏, 𝑐, 𝑑, 𝑒, and 𝑓 of the observable transitions 𝑡1–𝑡4, 𝑡7–𝑡11, 𝑡13–𝑡17, 𝑡19–𝑡23, and 𝑡26 represent CT, X-ray, ultrasound scan, magnetic resonance, electrocardiograph, and
Fig. 7. Probability distributions of observations generated from 𝑀0 and 𝑀0′ bounded by 𝑘 = 10 in 𝐺2(𝑀0) and 𝐺2(𝑀0′).
Fig. 10. Probability distributions of observations generated from 𝑀0 and 𝑀0′ bounded by 𝑘 = 10 in 𝐺2𝑐(𝑀0) and 𝐺2𝑐(𝑀0′).
References

Ding, Z., Zhou, Y., & Zhou, M. (2018). Modeling self-adaptive software systems by fuzzy rules and Petri nets. IEEE Transactions on Fuzzy Systems, 26(2), 967–984. http://dx.doi.org/10.1109/TFUZZ.2017.2700286.
Dwork, C. (2008). Differential privacy: A survey of results. In Proc. 5th international conference on theory and applications of models of computation (pp. 1–19). http://dx.doi.org/10.1007/978-3-540-79228-4_1.
Dwork, C., & Roth, A. (2013). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–406. http://dx.doi.org/10.1561/0400000042.
Fiore, D., & Russo, G. (2019). Resilient consensus for multi-agent systems subject to differential privacy requirements. Automatica, 106, 18–26. http://dx.doi.org/10.1016/j.automatica.2019.04.029.
Gu, Z., & Zhang, G. (2023). Trajectory data publication based on differential privacy. International Journal of Information Security and Privacy, 17(1). http://dx.doi.org/10.4018/IJISP.315593.
Hassan, M., Rehmani, M., & Chen, J. (2020). Differential privacy techniques for cyber physical systems: A survey. IEEE Communications Surveys & Tutorials, 22(1), 746–789. http://dx.doi.org/10.1109/COMST.2019.2944748.
Hu, Y., & Cao, S. (2023). Asynchronous diagnosability enforcement in discrete event systems based on supervisory control. IEEE Sensors Journal, 23(9), 10071–10079. http://dx.doi.org/10.1109/JSEN.2023.3259524.
Jiang, H., Pei, J., Yu, D., Yu, J., Gong, B., & Cheng, X. (2023). Applications of differential privacy in social network analysis: A survey. IEEE Transactions on Knowledge and Data Engineering, 35(1), 108–127. http://dx.doi.org/10.1109/TKDE.2021.3073062.
Jones, A., Leahy, K., & Hale, M. (2019). Towards differential privacy for symbolic systems. In Proc. American control conference (pp. 372–377).
Li, D., Wu, J., Le, J., Liao, X., & Xiang, T. (2023). A novel privacy-preserving location-based services search scheme in outsourced cloud. IEEE Transactions on Cloud Computing, 11(1), 457–469. http://dx.doi.org/10.1109/TCC.2021.3098420.
Ma, Z., Li, Z., & Giua, A. (2020). Marking estimation in a class of time labeled Petri nets. IEEE Transactions on Automatic Control, 65(2), 493–506. http://dx.doi.org/10.1109/TAC.2019.2907413.
McSherry, F. (2010). Privacy integrated queries: An extensible platform for privacy-preserving data analysis. Communications of the ACM, 53(9), 89–97. http://dx.doi.org/10.1145/1810891.1810916.
Soria-Comas, J., Domingo-Ferrer, J., Sanchez, D., & Megias, D. (2017). Individual differential privacy: A utility-preserving formulation of differential privacy guarantees. IEEE Transactions on Information Forensics and Security, 12(6), 1418–1429. http://dx.doi.org/10.1109/TIFS.2017.2663337.
Sweeney, L. (2002). K-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(5), 557–570. http://dx.doi.org/10.1142/S0218488502001648.
Tong, Y., Li, Z., Seatzu, C., & Giua, A. (2017a). Decidability of opacity verification problems in labeled Petri net systems. Automatica, 80, 48–53. http://dx.doi.org/10.1016/j.automatica.2017.01.013.
Tong, Y., Li, Z., Seatzu, C., & Giua, A. (2017b). Verification of state-based opacity using Petri nets. IEEE Transactions on Automatic Control, 62(6), 2823–2837. http://dx.doi.org/10.1109/TAC.2016.2620429.
Varghese, F., & Sasikala, P. (2023). A detailed review based on secure data transmission using cryptography and steganography. Wireless Personal Communications, 129(4), 2291–2318. http://dx.doi.org/10.1007/s11277-023-10183-z.
Wang, X., Lu, F., Zhou, M., & Zeng, Q. (2022). A synergy-effect-incorporated fuzzy Petri net modeling paradigm with application in risk assessment. Expert Systems with Applications, 199. http://dx.doi.org/10.1016/j.eswa.2022.117037.
Yang, J., Deng, W., Qiu, D., & Jiang, C. (2020). Opacity of networked discrete event systems. Information Sciences, 543, 328–344. http://dx.doi.org/10.1016/j.ins.2020.07.017.
Yang, B., & Li, H. (2018). A novel dynamic timed fuzzy Petri nets modeling method with applications to industrial processes. Expert Systems with Applications, 97, 276–289. http://dx.doi.org/10.1016/j.eswa.2017.12.027.
Yin, C., Xi, J., Sun, R., & Wang, J. (2018). Location privacy protection based on differential privacy strategy for big data in industrial Internet of Things. IEEE Transactions on Industrial Informatics, 14(8), 3628–3636. http://dx.doi.org/10.1109/TII.2017.2773646.
Yu, Y., Liu, G., & Hu, W. (2022). Security tracking control for discrete-time stochastic systems subject to cyber attacks. ISA Transactions, 127, 133–145. http://dx.doi.org/10.1016/j.isatra.2022.02.001.
Zhu, G., Li, Z., & Wu, N. (2018). Model-based fault identification of discrete event systems using partially observed Petri nets. Automatica, 96, 201–212. http://dx.doi.org/10.1016/j.automatica.2018.06.039.
Zhu, T., Li, G., Zhou, W., & Yu, P. (2017). Differentially private data publishing and analysis: A survey. IEEE Transactions on Knowledge and Data Engineering, 29(8), 1619–1638. http://dx.doi.org/10.1109/TKDE.2017.2697856.