Article

Information Theoretic Security for Broadcasting of Two Encrypted Sources under Side-Channel Attacks

Department of Computer and Network Engineering, University of Electro-Communications, Tokyo 182-8585, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 2019 IEEE International Symposium on Information Theory (ISIT): B. Santoso and Y. Oohama, “Secure Broadcasting of Two Encrypted Sources under Side-Channel Attacks”, ISIT 2019.
Current address: 1-5-1 Chofugaoka, Tokyo 182-8585, Japan.
Submission received: 8 June 2019 / Revised: 2 August 2019 / Accepted: 6 August 2019 / Published: 9 August 2019
(This article belongs to the Special Issue Information-Theoretic Security II)

Abstract

In this paper, we propose a theoretical framework to analyze the secure communication problem for broadcasting two encrypted sources in the presence of an adversary that launches side-channel attacks. The adversary is not only allowed to eavesdrop on the ciphertexts in the public communication channel, but is also allowed to gather additional information on the secret keys via side-channels, i.e., physical phenomena leaked by the encryption devices during the encryption process, such as fluctuations of power consumption, heat, or electromagnetic radiation generated by the encryption devices. Based on our framework, we propose a countermeasure against such an adversary using the post-encryption-compression (PEC) paradigm, in the case of one-time-pad encryption. We implement the PEC paradigm using affine encoders constructed from linear encoders, and derive explicit sufficient conditions to attain exponential decay of the information leakage as the block lengths of the encrypted sources become large. One interesting feature of the proposed countermeasure is that its performance is independent of the type of side information leaked by the encryption devices.

1. Introduction

In recent years, it has become very common for one person to hold multiple wireless communication devices and broadcast messages through them. In order to ensure secrecy, it is standard practice to encrypt the data before broadcasting them into the public communication channel. The usual security problem considered in such a system of broadcasting encrypted sources is secrecy against an adversary that eavesdrops on the ciphertexts sent via the public communication channel. However, Kocher et al. [1,2] have shown that an adversary may also learn “side” information about the secret keys from a “side-channel”, i.e., measurements of physical phenomena that occur in the physical devices where the encryption procedures are implemented. Such an adversary is called a side-channel adversary. Examples of the physical phenomena exploited by side-channel adversaries are fluctuations of running time [1], fluctuations of power consumption [2], and electromagnetic (EM) radiation [3]. In this paper, we focus on a specific scenario where an adversary is not only eavesdropping on the public communication channel but is also launching side-channel attacks on multiple communication devices owned by a sender. This kind of side-channel attack is feasible in the real world when the sender's devices are located in the same area, so that the adversary can capture side information from the devices directly.

1.1. Modelling Side-Channel Attacks

The adversarial/security model we use in this paper and its relation to a real-world example are shown in Figure 1. Basically, we adapt the approach of [4] to modeling the side-channel, where the side-channel is modelled as a rate-constrained noiseless channel.
We describe our model more formally as follows. Consider two sources $X_1$ and $X_2$, each encrypted in a different encryption device using secret keys $K_1$ and $K_2$, respectively, resulting in ciphertexts $C_1$ and $C_2$, respectively. The ciphertexts $C_1$ and $C_2$ are sent by the sender to multiple receivers through multiple public communication channels. The adversary $\mathcal{A}$ is allowed to obtain: (1) the ciphertexts $C_1$ and $C_2$ from the public communication channels, and (2) “noisy” digital data $Z$ generated by the probe or measurement device from the physical phenomena leaked by all encryption devices of the sender. The measurement device may be a simple analog-to-digital converter that converts the analog data representing the physical information leaked by the devices into the “noisy” digital data $Z$. In our model, we represent the measurement process as a communication channel $W$. The adversary $\mathcal{A}$ is equipped with a side-channel encoding device $\varphi_{\mathcal{A}}$ which encodes and processes $Z$ into the binary data $M_{\mathcal{A}}$. Finally, combining $C_1$, $C_2$, and $M_{\mathcal{A}}$, $\mathcal{A}$ attempts to derive information on the sources $X_1$ and $X_2$.
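To make the model concrete, the following toy sketch simulates one round of this setting, assuming binary alphabets and a binary symmetric measurement channel that flips each bit of $K_1^n$ with probability $\rho_A$; the alphabet, the channel, and all parameter names are our illustrative assumptions, not the paper's.

```python
import random

def simulate_round(n=8, rho_A=0.1, seed=1):
    """One round of the model: two sources encrypted with one-time pads,
    while the measurement channel W leaks a noisy copy of key K1^n."""
    rng = random.Random(seed)
    x1 = [rng.randint(0, 1) for _ in range(n)]   # source X1^n
    x2 = [rng.randint(0, 1) for _ in range(n)]   # source X2^n
    k1 = [rng.randint(0, 1) for _ in range(n)]   # key K1^n (uniform)
    k2 = [rng.randint(0, 1) for _ in range(n)]   # key K2^n (uniform)
    # one-time-pad ciphertexts sent over the public channels
    c1 = [a ^ b for a, b in zip(x1, k1)]
    c2 = [a ^ b for a, b in zip(x2, k2)]
    # side channel: W modeled here as a binary symmetric channel that
    # flips each key bit of K1 with probability rho_A
    z = [b ^ (1 if rng.random() < rho_A else 0) for b in k1]
    return c1, c2, z

# the adversary's view of one round: (C1, C2, Z)
c1, c2, z = simulate_round()
print(c1, c2, z)
```

The adversary then compresses $Z$ into $M_{\mathcal{A}}$ at a constrained rate; that step is formalized in Section 2.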

1.2. Our Results and Methodology in Brief

We show that we can strengthen the secrecy/security of Shannon ciphers implemented on multiple physical devices of a sender in a broadcasting system, against an adversary who collects ciphertexts and launches side-channel attacks, by a simple method of reencoding the ciphertexts before releasing them into the public communication channels. This method is based on the post-encryption-compression (PEC) paradigm. We prove that, in the case that all encryption devices implement one-time-pad encryption, we can strengthen the secrecy/security using appropriate affine encoders $\varphi_1$ and $\varphi_2$ which transform the original ciphertexts $C_1$ and $C_2$ into reencoded ciphertexts $\tilde{C}_1$ and $\tilde{C}_2$.
More formally, we prove that, for any distribution of the secret keys $(K_1,K_2)$ and any measurement device (used to convert the physical information from a side-channel into the noisy large-alphabet data $Z$), we can derive an achievable rate region for $(R_1,R_2,R_{\mathcal{A}})$, where $R_1$ and $R_2$ are the encoding rates of $\varphi_1$ and $\varphi_2$, respectively, and $R_{\mathcal{A}}$ is the encoding rate of the adversary's encoding device $\varphi_{\mathcal{A}}$. More precisely, if we reencode $C_1$ and $C_2$ into $\tilde{C}_1$ and $\tilde{C}_2$ using $\varphi_1$ and $\varphi_2$ with encoding rates $R_1$ and $R_2$, respectively, such that $R_1$ and $R_2$ are inside the achievable region, then we can attain reliability and security in the following sense:
  • anyone with the secret keys $K_1$ and $K_2$ can construct appropriate decoders that decrypt and decode the reencoded ciphertexts $\tilde{C}_1$ and $\tilde{C}_2$ into the original sources $X_1$ and $X_2$ with exponentially decaying error probability, and
  • the amount of information on the sources $X_1$ and $X_2$ gained by any adversary $\mathcal{A}$ which collects the reencoded ciphertexts $\tilde{C}_1$, $\tilde{C}_2$ and the encoded side-channel information $M_{\mathcal{A}}$ decays exponentially to zero, as long as the side-channel encoding device $\varphi_{\mathcal{A}}$ encodes $Z$ into $M_{\mathcal{A}}$ with a rate $R_{\mathcal{A}}$ inside the achievable rate region.
Taking advantage of the homomorphic property of the one-time pad and affine encoding, we separate the theoretical analyses of reliability and security so that we can deal with each issue independently. For the reliability analysis, similar to the analyses in [4,5,6,7], we mainly obtain our result by adapting the result of Csiszár [8] on universal coding using linear codes. Our main theorem on security is based on the technique developed in [4], which is actually a combination of two other techniques. One is a technique developed by Oohama in [9] for deriving approximation error exponents for the intrinsic randomness problem in a framework of distributed random number extraction. (This technique is also used in the security analyses in Santoso and Oohama [6,10].) The other is a technique proposed by Oohama [11] to establish the exponential strong converse theorem for the one-helper source coding problem. (This technique is used in the security analysis for side-channel attacks on the Shannon cipher system.)
In addition, since we model the side-channel as a rate-constrained noiseless channel, all theoretical results in this paper are independent of the type of side-channel information the adversary collects from the encryption devices. This means that the countermeasure we propose in this paper can be applied against any type of side-channel attack launched by the adversary, e.g., timing attacks, electromagnetic radiation or power analysis, and so on.

1.3. Related Works

The use of PEC for communication systems can be traced back to the work by Johnson et al. [12]. However, their main focus is the issue of reliability, and they only provide weak secrecy, whereas, in this paper, we provide security based on strong secrecy [13,14].
Several theoretical models analyzing the security of cryptographic systems against side-channel attacks have been proposed in the literature. However, most of the existing works are applicable only to specific characteristics of the leaked physical information. For example, Brier et al. [15] and Coron et al. [16] propose statistical models for side-channel attacks using information from power consumption and running time, whereas Agrawal et al. [3] propose a statistical model for side-channel attacks using electromagnetic (EM) radiation. More general models for side-channel attacks are proposed by Köpf et al. [17] and Backes et al. [18], but these are heavily dependent upon implementation on certain specific devices. Micali et al. [19] propose a very general security model to capture side-channel attacks, but they do not offer any hint of how to build a concrete countermeasure against them. One of the closest existing models to ours is the general framework for analyzing side-channel attacks proposed by Standaert et al. [20]. However, the authors of [20] propose a countermeasure against side-channel attacks that is different from ours, i.e., noise insertion in the implementation. It should be noted that the noise insertion countermeasure proposed by [20] depends on the characteristics of the leaked physical information. Another model that is similar to ours, in the sense that it is independent of the type of leaked physical information, is proposed by Chérisey et al. [21,22]. However, the main aim of [21,22] is only to establish the mathematical link between the success probability of a side-channel adversary and mutual information, and no countermeasure is proposed.

1.4. Organization of This Paper

This paper is structured as follows. In Section 2, we introduce the basic notations and definitions used throughout this paper, and we also describe the formal formulations of our model and the security problem. In Section 3, we explain the idea and the formulation of our proposed solution. In Section 4, we state our main theorem on the reliability and security of our solution. In Section 5, we show the proof of our main theorem. In Section 6, we discuss an alternative formulation of our model and problem. In Section 7, we compare the current results with our previous works. We present our conclusions in Section 9. The proofs of other related propositions, lemmas, and theorems are given in the appendix.

2. Problem Formulation

2.1. Preliminaries

In this subsection, we introduce the basic notations and conventions used in this paper.
Random Source of Information and Key: For each $i=1,2$, let $X_i$ be a random variable taking values in a finite set $\mathcal{X}_i$. For each $i=1,2$, let $\{X_{i,t}\}_{t=1}^{\infty}$ be two stationary discrete memoryless sources (DMS) such that, for each $t=1,2,\ldots$, $X_{i,t}$ takes values in the finite set $\mathcal{X}_i$ and has the same distribution as that of $X_i$, denoted by $p_{X_i}=\{p_{X_i}(x_i)\}_{x_i\in\mathcal{X}_i}$. The stationary DMS $\{X_{i,t}\}_{t=1}^{\infty}$ are specified with $p_{X_i}$.
We next define the two keys used in the two cryptosystems. Let $(K_1,K_2)$ be a pair of correlated random variables taking values in the finite set $\mathcal{X}_1\times\mathcal{X}_2$. Let $\{(K_{1,t},K_{2,t})\}_{t=1}^{\infty}$ be a stationary discrete memoryless source such that, for each $t=1,2,\ldots$, $(K_{1,t},K_{2,t})$ takes values in $\mathcal{X}_1\times\mathcal{X}_2$ and has the same distribution as that of $(K_1,K_2)$, denoted by
$$p_{K_1K_2}=\{p_{K_1K_2}(k_1,k_2)\}_{(k_1,k_2)\in\mathcal{X}_1\times\mathcal{X}_2}.$$
The stationary DMS $\{(K_{1,t},K_{2,t})\}_{t=1}^{\infty}$ is specified with $p_{K_1K_2}$.
Random Variables and Sequences: We write a sequence of random variables of length $n$ from an information source as $X_i^n := X_{i,1}X_{i,2}\cdots X_{i,n}$, $i=1,2$. Similarly, strings of length $n$ are written as $x_i^n := x_{i,1}x_{i,2}\cdots x_{i,n}\in\mathcal{X}_i^n$. For $(x_1^n,x_2^n)\in\mathcal{X}_1^n\times\mathcal{X}_2^n$, $p_{X_1^nX_2^n}(x_1^n,x_2^n)$ stands for the probability of the occurrence of $(x_1^n,x_2^n)$. When the information source is memoryless and specified with $p_{X_1X_2}$, the following equation holds:
$$p_{X_1^nX_2^n}(x_1^n,x_2^n)=\prod_{t=1}^{n}p_{X_1X_2}(x_{1,t},x_{2,t}).$$
In this case, we write $p_{X_1^nX_2^n}(x_1^n,x_2^n)$ as $p_{X_1X_2}^n(x_1^n,x_2^n)$. Similar notations are used for other random variables and sequences.
Conventions and Notations: Without loss of generality, throughout this paper, we assume that $\mathcal{X}_1$ and $\mathcal{X}_2$ are finite fields. The notation $\oplus$ denotes the field addition operation, while $\ominus$ denotes the field subtraction operation, i.e., $a\ominus b = a\oplus(-b)$ for any elements $a,b$ of the same finite field. All discussions and theorems in this paper still hold even if $\mathcal{X}_1$ and $\mathcal{X}_2$ are different finite fields. However, for the sake of simplicity, we use the same notation for field addition and subtraction for both $\mathcal{X}_1$ and $\mathcal{X}_2$. Throughout this paper, all logarithms are taken to the natural base.

2.2. Basic System Description

In this subsection, we explain the basic system setting and the basic adversarial model considered in this paper. First, let the information sources and the keys be generated independently by three different parties $S_{\mathrm{gen},1}$, $S_{\mathrm{gen},2}$, and $K_{\mathrm{gen}}$, respectively. In our setting, we assume the following:
  • The random keys $K_1^n$ and $K_2^n$ are generated by $K_{\mathrm{gen}}$ from the uniform distribution. $K_1^n$ and $K_2^n$ may be correlated.
  • The sources $X_1^n$ and $X_2^n$ are generated by $S_{\mathrm{gen},1}$ and $S_{\mathrm{gen},2}$, respectively. They are independent of the keys.
Next, let the two random sources $X_1^n$ and $X_2^n$ from $S_{\mathrm{gen},1}$ and $S_{\mathrm{gen},2}$ be sent to two separate nodes $L_1$ and $L_2$, respectively. In addition, let the two random keys $K_1^n$ and $K_2^n$ from $K_{\mathrm{gen}}$ also be sent separately to $L_1$ and $L_2$. Further settings of our system are described as follows; these are also shown in Figure 2.
  • Separate Sources Processing: For each $i=1,2$, at the node $L_i$, $X_i^n$ is encrypted with the key $K_i^n$ using the encryption function $\mathrm{Enc}_i$. The ciphertext $C_i^n$ of $X_i^n$ is given by $C_i^n := \mathrm{Enc}_i(X_i^n) = X_i^n \oplus K_i^n$.
  • Transmission: The ciphertexts $C_1^n$ and $C_2^n$ are sent to the information processing centers $D_1$ and $D_2$, respectively, through two public communication channels. Meanwhile, the keys $K_1^n$ and $K_2^n$ are sent to $D_1$ and $D_2$, respectively, through two private communication channels.
  • Sink Nodes Processing: For each $i=1,2$, at $D_i$, we decrypt the ciphertext $C_i^n$ using the key $K_i^n$ through the corresponding decryption procedure $\mathrm{Dec}_i$ defined by $\mathrm{Dec}_i(C_i^n) = C_i^n \ominus K_i^n$. It is obvious that we can correctly reproduce the source output $X_i^n$ from $C_i^n$ and $K_i^n$ by the decryption function $\mathrm{Dec}_i$.
Side-Channel Attacks by Eavesdropper Adversary: An (eavesdropper) adversary $\mathcal{A}$ eavesdrops on the public communication channels in the system. The adversary $\mathcal{A}$ also uses side information obtained by side-channel attacks. Let $\mathcal{Z}$ be a finite set and let $W:\mathcal{X}_1\times\mathcal{X}_2\to\mathcal{Z}$ be a noisy channel. We consider the discrete memoryless channel specified with $W$. Let $Z^n\in\mathcal{Z}^n$ be the random variable obtained as the channel output by connecting $(K_1^n,K_2^n)\in\mathcal{X}_1^n\times\mathcal{X}_2^n$ to the input of the channel. We write the conditional distribution of $Z^n$ given $(K_1^n,K_2^n)$ as
$$W^n=\{W^n(z^n|k_1^n,k_2^n)\}_{(k_1^n,k_2^n,z^n)\in\mathcal{X}_1^n\times\mathcal{X}_2^n\times\mathcal{Z}^n}.$$
Since the channel is memoryless, we have
$$W^n(z^n|k_1^n,k_2^n)=\prod_{t=1}^{n}W(z_t|k_{1,t},k_{2,t}).$$
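The memoryless product structure can be checked numerically. The sketch below assumes a toy binary channel $W$ (our choice for illustration, not a channel from the paper) and verifies that the product form $W^n(\cdot|k_1^n,k_2^n)$ is a probability distribution over $\mathcal{Z}^n$:

```python
from itertools import product

# Toy memoryless channel W(z|k1,k2): a BSC with crossover 0.2 driven by k1 ^ k2
# (an assumed example channel for illustration only).
def W(z, k1, k2, p=0.2):
    return 1 - p if z == (k1 ^ k2) else p

def W_n(z_seq, k1_seq, k2_seq):
    """W^n(z^n | k1^n, k2^n) as the product of per-letter probabilities."""
    out = 1.0
    for z, k1, k2 in zip(z_seq, k1_seq, k2_seq):
        out *= W(z, k1, k2)
    return out

# sanity check: for fixed inputs, W^n sums to 1 over all output sequences z^n
k1_seq, k2_seq = [0, 1, 1], [1, 1, 0]
total = sum(W_n(z, k1_seq, k2_seq) for z in product([0, 1], repeat=3))
print(round(total, 12))  # 1.0
```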
For the above output $Z^n$ of $W^n$ for the input $(K_1^n,K_2^n)$, we assume the following:
  • The two random pairs $(X_1,X_2)$ and $(K_1,K_2)$ and the random variable $Z$ satisfy: $(X_1,X_2)$ is independent of $(K_1,K_2,Z)$, which implies that $(X_1^n,X_2^n)$ is independent of $(K_1^n,K_2^n,Z^n)$.
  • By side-channel attacks, the adversary A can access Z n .
We next formulate the side information the adversary $\mathcal{A}$ obtains by side-channel attacks. For each $n=1,2,\ldots$, let $\varphi_{\mathcal{A}}^{(n)}:\mathcal{Z}^n\to\mathcal{M}_{\mathcal{A}}^{(n)}$ be an encoder function. Set $\varphi_{\mathcal{A}}:=\{\varphi_{\mathcal{A}}^{(n)}\}_{n=1,2,\ldots}$. Let
$$R_{\mathcal{A}}^{(n)}:=\frac{1}{n}\log\|\varphi_{\mathcal{A}}\|=\frac{1}{n}\log|\mathcal{M}_{\mathcal{A}}^{(n)}|$$
be the rate of the encoder function $\varphi_{\mathcal{A}}^{(n)}$. For $R_{\mathcal{A}}>0$, we set
$$\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}}):=\{\varphi_{\mathcal{A}}^{(n)}: R_{\mathcal{A}}^{(n)}\le R_{\mathcal{A}}\}.$$
Regarding the encoded side information the adversary $\mathcal{A}$ obtains, we assume the following:
  • The adversary $\mathcal{A}$, having accessed $Z^n$, obtains the encoded additional information $\varphi_{\mathcal{A}}^{(n)}(Z^n)$. For each $n=1,2,\ldots$, the adversary $\mathcal{A}$ can design $\varphi_{\mathcal{A}}^{(n)}$.
  • The sequence $\{R_{\mathcal{A}}^{(n)}\}_{n=1}^{\infty}$ must be upper bounded by a prescribed value. In other words, the adversary $\mathcal{A}$ must use $\varphi_{\mathcal{A}}^{(n)}$ such that, for some $R_{\mathcal{A}}$ and for any sufficiently large $n$, $\varphi_{\mathcal{A}}^{(n)}\in\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}})$.
As a countermeasure against the side-channel attacks, we consider a system of broadcast encryption with post-encryption coding. We call this system Sys. The illustration of Sys is shown in Figure 3.
  • Encoding at Source node $L_i$, $i=1,2$: For each $i=1,2$, we first use $\varphi_i^{(n)}$ to encode the ciphertext $C_i^n=X_i^n\oplus K_i^n$. A formal definition of $\varphi_i^{(n)}$ is $\varphi_i^{(n)}:\mathcal{X}_i^n\to\mathcal{X}_i^{m_i}$. Let $\tilde{C}_i^{m_i}=\varphi_i^{(n)}(C_i^n)$. Instead of sending $C_i^n$, we send $\tilde{C}_i^{m_i}$ to the public communication channel.
  • Decoding at Sink Nodes $D_i$, $i=1,2$: For each $i=1,2$, $D_i$ receives $\tilde{C}_i^{m_i}$ from a public communication channel. Using the common key $K_i^n$ and the decoder function $\Psi_i^{(n)}:\mathcal{X}_i^{m_i}\times\mathcal{X}_i^n\to\mathcal{X}_i^n$, $D_i$ outputs an estimate $\hat{X}_i^n=\Psi_i^{(n)}(\tilde{C}_i^{m_i},K_i^n)$ of $X_i^n$.
On Reliability and Security: From the description of our system above, the decoding process is successful if $\hat{X}_i^n=X_i^n$ holds. Combining this and Equation (5), it is clear that the decoding error probabilities $p_{e,i}$, $i=1,2$, are as follows:
$$p_{e,i}=p_e(\varphi_i^{(n)},\Psi_i^{(n)}|p_{X_i}^n):=\Pr[\Psi_i^{(n)}(\varphi_i^{(n)}(X_i^n))\neq X_i^n].$$
Set $M_{\mathcal{A}}^{(n)}=\varphi_{\mathcal{A}}^{(n)}(Z^n)$. The information leakage $\Delta^{(n)}$ on $(X_1^n,X_2^n)$ from $(\tilde{C}_1^{m_1},\tilde{C}_2^{m_2},M_{\mathcal{A}}^{(n)})$ is measured by the mutual information between $(X_1^n,X_2^n)$ and $(\tilde{C}_1^{m_1},\tilde{C}_2^{m_2},M_{\mathcal{A}}^{(n)})$. This quantity is formally defined by
$$\Delta^{(n)}=\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{ZK_1K_2}^n):=I(X_1^nX_2^n;\tilde{C}_1^{m_1},\tilde{C}_2^{m_2},M_{\mathcal{A}}^{(n)}).$$
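For intuition about this leakage measure, the toy computation below evaluates $I(X;C)$ exactly for a single-letter one-time pad without side information (a simplified special case of our own choosing, not the paper's full quantity): a uniform key yields zero leakage, while a biased key leaks.

```python
import math

def H(probs):
    """Shannon entropy in nats of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def leakage(px1, pk1):
    """Exact I(X; C) in nats for a one-bit one-time pad C = X xor K,
    with P[X=1] = px1 and P[K=1] = pk1 (toy special case of Delta)."""
    joint = {}
    for x in (0, 1):
        for k in (0, 1):
            c = x ^ k
            px = px1 if x else 1 - px1
            pk = pk1 if k else 1 - pk1
            joint[(x, c)] = joint.get((x, c), 0.0) + px * pk
    pX = [sum(v for (x, _), v in joint.items() if x == i) for i in (0, 1)]
    pC = [sum(v for (_, c), v in joint.items() if c == i) for i in (0, 1)]
    return H(pX) + H(pC) - H(list(joint.values()))

print(leakage(0.3, 0.5))       # uniform key: leakage ~ 0 (perfect secrecy)
print(leakage(0.3, 0.2) > 0)   # biased key: strictly positive leakage
```

The full measure $\Delta^{(n)}$ additionally conditions on the adversary's compressed side information $M_{\mathcal{A}}^{(n)}$, which is what the main theorem controls.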
Reliable and Secure Framework:
Definition 1.
A pair $(R_1,R_2)$ is achievable under $R_{\mathcal{A}}>0$ for the system Sys if there exist two sequences $\{(\varphi_i^{(n)},\Psi_i^{(n)})\}_{n\ge1}$, $i=1,2$, such that $\forall\epsilon>0$, $\exists n_0=n_0(\epsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$, we have for $i=1,2$:
$$\frac{1}{n}\log|\mathcal{X}_i^{m_i}|=\frac{m_i}{n}\log|\mathcal{X}_i|\le R_i,\quad p_e(\varphi_i^{(n)},\Psi_i^{(n)}|p_{X_i}^n)\le\epsilon,$$
and for any eavesdropper $\mathcal{A}$ with $\varphi_{\mathcal{A}}$ satisfying $\varphi_{\mathcal{A}}^{(n)}\in\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}})$, we have
$$\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{ZK_1K_2}^n)\le\epsilon.$$
Definition 2 (Reliable and Secure Rate Region).
Let $\mathcal{R}_{\mathrm{Sys}}(p_{X_1X_2},p_{ZK_1K_2})$ denote the set of all $(R_{\mathcal{A}},R_1,R_2)$ such that $(R_1,R_2)$ is achievable under $R_{\mathcal{A}}$. We call $\mathcal{R}_{\mathrm{Sys}}(p_{X_1X_2},p_{ZK_1K_2})$ the reliable and secure rate region.
Definition 3.
A five-tuple $(R_1,R_2,E_1,E_2,F)$ is achievable under $R_{\mathcal{A}}>0$ for the system Sys if there exists a sequence $\{(\varphi_i^{(n)},\Psi_i^{(n)})\}_{n\ge1}$, $i=1,2$, such that $\forall\epsilon>0$, $\exists n_0=n_0(\epsilon)\in\mathbb{N}_0$, $\forall n\ge n_0$, we have for $i=1,2$:
$$\frac{1}{n}\log|\mathcal{X}_i^{m_i}|=\frac{m_i}{n}\log|\mathcal{X}_i|\le R_i,\quad p_e(\varphi_i^{(n)},\Psi_i^{(n)}|p_{X_i}^n)\le e^{-n(E_i-\epsilon)},$$
and for any eavesdropper $\mathcal{A}$ with $\varphi_{\mathcal{A}}$ satisfying $\varphi_{\mathcal{A}}^{(n)}\in\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}})$, we have
$$\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{ZK_1K_2}^n)\le e^{-n(F-\epsilon)}.$$
Definition 4 (Rate, Reliability, and Security Region).
Let $\mathcal{D}_{\mathrm{Sys}}(p_{X_1X_2},p_{K_1K_2},W)$ denote the set of all $(R_{\mathcal{A}},R_1,R_2,E_1,E_2,F)$ such that $(R_1,R_2,E_1,E_2,F)$ is achievable under $R_{\mathcal{A}}$. We call $\mathcal{D}_{\mathrm{Sys}}(p_{X_1X_2},p_{K_1K_2},W)$ the rate, reliability, and security region.

3. Proposed Idea: Affine Encoder as Privacy Amplifier

For each $n=1,2,\ldots$, let $\phi_i^{(n)}:\mathcal{X}_i^n\to\mathcal{X}_i^{m_i}$ be a linear mapping. We define the mapping $\phi_i^{(n)}$ by
$$\phi_i^{(n)}(x_i^n)=x_i^nA_i\quad\text{for }x_i^n\in\mathcal{X}_i^n,$$
where $A_i$ is a matrix with $n$ rows and $m_i$ columns whose entries are from $\mathcal{X}_i$. We fix $b_i^{m_i}\in\mathcal{X}_i^{m_i}$. Define the mapping $\varphi_i^{(n)}:\mathcal{X}_i^n\to\mathcal{X}_i^{m_i}$ by
$$\varphi_i^{(n)}(k_i^n):=\phi_i^{(n)}(k_i^n)\oplus b_i^{m_i}=k_i^nA_i\oplus b_i^{m_i}\quad\text{for }k_i^n\in\mathcal{X}_i^n.$$
The mapping $\varphi_i^{(n)}$ is called the affine mapping induced by the linear mapping $\phi_i^{(n)}$ and the constant vector $b_i^{m_i}\in\mathcal{X}_i^{m_i}$. By the definition of $\varphi_i^{(n)}$, the following affine structure holds:
$$\varphi_i^{(n)}(x_i^n\oplus k_i^n)=(x_i^n\oplus k_i^n)A_i\oplus b_i^{m_i}=x_i^nA_i\oplus(k_i^nA_i\oplus b_i^{m_i})=\phi_i^{(n)}(x_i^n)\oplus\varphi_i^{(n)}(k_i^n)\quad\text{for }x_i^n,k_i^n\in\mathcal{X}_i^n.$$
Next, let $\psi_i^{(n)}:\mathcal{X}_i^{m_i}\to\mathcal{X}_i^n$ be the corresponding decoder for $\phi_i^{(n)}$. Note that $\psi_i^{(n)}$ does not have a linear structure in general.
Description of Proposed Procedure: We describe the procedure of our privacy amplified system as follows:
  • Encoding at Source node $L_i$, $i=1,2$: First, we use $\varphi_i^{(n)}$ to encode the ciphertext $C_i^n=X_i^n\oplus K_i^n$. Let $\tilde{C}_i^{m_i}=\varphi_i^{(n)}(C_i^n)$. Then, instead of sending $C_i^n$, we send $\tilde{C}_i^{m_i}$ to the public communication channel. By the affine structure (3) of the encoder, we have
    $$\tilde{C}_i^{m_i}=\varphi_i^{(n)}(X_i^n\oplus K_i^n)=\phi_i^{(n)}(X_i^n)\oplus\varphi_i^{(n)}(K_i^n)=\tilde{X}_i^{m_i}\oplus\tilde{K}_i^{m_i},$$
    where we set $\tilde{X}_i^{m_i}:=\phi_i^{(n)}(X_i^n)$, $\tilde{K}_i^{m_i}:=\varphi_i^{(n)}(K_i^n)$.
  • Decoding at Sink Node $D_i$, $i=1,2$: First, using the affine encoder $\varphi_i^{(n)}$, $D_i$ encodes the key $K_i^n$ received through the private channel into $\tilde{K}_i^{m_i}=\varphi_i^{(n)}(K_i^n)$. Receiving $\tilde{C}_i^{m_i}$ from the public communication channel, $D_i$ computes $\tilde{X}_i^{m_i}$ in the following way. From (4), the decoder $D_i$ can obtain $\tilde{X}_i^{m_i}=\phi_i^{(n)}(X_i^n)$ by subtracting $\tilde{K}_i^{m_i}=\varphi_i^{(n)}(K_i^n)$ from $\tilde{C}_i^{m_i}$. Finally, $D_i$ outputs $\hat{X}_i^n$ by applying the decoder $\psi_i^{(n)}$ to $\tilde{X}_i^{m_i}$ as follows:
    $$\hat{X}_i^n=\psi_i^{(n)}(\tilde{X}_i^{m_i})=\psi_i^{(n)}(\phi_i^{(n)}(X_i^n)).$$
Our privacy amplified system described above is illustrated in Figure 4.
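The affine structure and the decoding steps can be checked mechanically. The sketch below works over GF(2), taking $A$ as the identity matrix so that the decoder can invert $\phi$ exactly; this invertible square choice is purely our illustration, since the actual scheme uses a compressing $A_i$ ($m_i < n$) with a non-trivial decoder $\psi_i^{(n)}$.

```python
import random

n = 8
rng = random.Random(0)

# Illustrative setup over GF(2): A is the identity matrix (trivially invertible);
# the real scheme uses a compressing n x m_i matrix instead.
A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
b = [rng.randint(0, 1) for _ in range(n)]

def mat_vec(x, M):
    """Row vector times matrix over GF(2)."""
    return [sum(x[i] * M[i][j] for i in range(len(x))) % 2 for j in range(len(M[0]))]

def phi(x):      # linear map  phi(x) = x A
    return mat_vec(x, A)

def varphi(k):   # affine map  varphi(k) = k A + b
    return [a ^ c for a, c in zip(mat_vec(k, A), b)]

x = [rng.randint(0, 1) for _ in range(n)]
k = [rng.randint(0, 1) for _ in range(n)]

# affine structure: varphi(x + k) = phi(x) + varphi(k)  (all ops over GF(2))
lhs = varphi([a ^ c for a, c in zip(x, k)])
rhs = [a ^ c for a, c in zip(phi(x), varphi(k))]
assert lhs == rhs

# decoding: subtract varphi(k) from the re-encoded ciphertext, then invert phi
c_tilde = varphi([a ^ c for a, c in zip(x, k)])
x_tilde = [a ^ c for a, c in zip(c_tilde, varphi(k))]
assert x_tilde == phi(x)   # with A = I, phi(x) = x, so x is recovered exactly
print("round trip OK:", x_tilde == x)
```

Note that over GF(2) subtraction coincides with addition (XOR), which is why the same `^` operation serves as both $\oplus$ and $\ominus$ here.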

4. Main Results

In this section, we state our main results. To describe them, we define several functions and sets. Let $U$ be an auxiliary random variable taking values in a finite set $\mathcal{U}$. We assume that the joint distribution of $(U,Z,K_1,K_2)$ is
$$p_{UZK_1K_2}(u,z,k_1,k_2)=p_U(u)\,p_{Z|U}(z|u)\,p_{K_1K_2|Z}(k_1,k_2|z).$$
The above condition is equivalent to the Markov chain $U\to Z\to(K_1,K_2)$. In the following arguments, for convenience of description, we use the notations
$$R_3:=R_1+R_2,\quad \mathcal{X}_3:=\mathcal{X}_1\times\mathcal{X}_2,\quad k_3:=(k_1,k_2),\quad K_3:=(K_1,K_2).$$
For each $i=1,2,3$, we simply write $p_i=p_{UZK_i}$. Specifically, for $i=3$, we have $p_3=p_{UZK_1K_2}=p$. Define the three sets of probability distributions, for $i=1,2,3$:
$$\mathcal{P}(p_{ZK_i}):=\{p_{UZK_i}: |\mathcal{U}|\le|\mathcal{Z}|+1,\ U\to Z\to K_i\}.$$
For $i=1,2,3$, define
$$\mathcal{R}_i(p_i):=\{(R_{\mathcal{A}},R_i): R_{\mathcal{A}},R_i\ge0,\ R_{\mathcal{A}}\ge I(Z;U),\ R_i\ge H(K_i|U)\},$$
$$\mathcal{R}_i(p_{ZK_i}):=\bigcup_{p_i\in\mathcal{P}(p_{ZK_i})}\mathcal{R}_i(p_i).$$
The two regions $\mathcal{R}_i(p_{ZK_i})$, $i=1,2$, have the same form as the admissible rate region in the one-helper source coding problem posed and investigated by Ahlswede and Körner [23]. We can show that the regions $\mathcal{R}_i(p_{ZK_i})$, $i=1,2$, and $\mathcal{R}_3(p_{ZK_1K_2})$ satisfy the following property.
Property 1.
(a) 
The region $\mathcal{R}_i(p_{ZK_i})$, $i=1,2$, is a closed convex subset of $\mathbb{R}_+^2$. The region $\mathcal{R}_3(p_{ZK_1K_2})$ is a closed convex subset of $\mathbb{R}_+^3$.
(b) 
The bound $|\mathcal{U}|\le|\mathcal{Z}|+1$ is sufficient to describe $\mathcal{R}_i(p_{ZK_i})$, $i=1,2,3$.
We define several quantities to state our main result. Let $i\in\{1,2\}$. We first define a function related to an exponential upper bound of $p_e(\phi_i^{(n)},\psi_i^{(n)}|p_{X_i}^n)$. Let $\bar{X}_i$ be an arbitrary random variable over $\mathcal{X}_i$ with probability distribution $p_{\bar{X}_i}$. Let $\mathcal{P}(\mathcal{X}_i)$ denote the set of all probability distributions on $\mathcal{X}_i$. For $R_i\ge0$ and $p_{X_i}\in\mathcal{P}(\mathcal{X}_i)$, we define the following function:
$$E(R_i|p_{X_i}):=\min_{p_{\bar{X}_i}\in\mathcal{P}(\mathcal{X}_i)}\left\{[R_i-H(\bar{X}_i)]^{+}+D(p_{\bar{X}_i}\|p_{X_i})\right\}.$$
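For a binary source, $E(R_i|p_{X_i})$ can be evaluated by brute force over candidate distributions $p_{\bar{X}_i}$. The sketch below is our illustration (logs in nats, grid search instead of exact optimization); it shows that the exponent is essentially zero below the source entropy and strictly positive above it.

```python
import math

def H(q):
    """Binary entropy in nats."""
    return 0.0 if q in (0.0, 1.0) else -q * math.log(q) - (1 - q) * math.log(1 - q)

def D(q, p):
    """Binary KL divergence D(q || p) in nats."""
    total = 0.0
    for a, b in ((q, p), (1 - q, 1 - p)):
        if a > 0:
            total += a * math.log(a / b)
    return total

def E(R, p, grid=10001):
    """Grid-search evaluation of E(R|p) = min_q { [R - H(q)]^+ + D(q||p) }."""
    best = float("inf")
    for i in range(1, grid):
        q = i / grid
        best = min(best, max(R - H(q), 0.0) + D(q, p))
    return best

p = 0.11                 # H(p) is about 0.35 nats
print(E(0.2, p))         # R < H(p): exponent ~ 0 (rate too low for reliability)
print(E(0.6, p) > 0)     # R > H(p): strictly positive error exponent
```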
We next define a function related to an exponential upper bound of $\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{ZK_1K_2}^n)$. For each $i=1,2,3$, we define three sets of probability distributions on $\mathcal{U}\times\mathcal{Z}\times\mathcal{X}_i$ by
$$\tilde{\mathcal{P}}(p_{ZK_i}):=\{p=p_{UZK_i}: |\mathcal{U}|\le|\mathcal{Z}|,\ U\to Z\to K_i\}.$$
Furthermore, for each $i=1,2,3$, we define three sets of probability distributions on $\mathcal{U}\times\mathcal{Z}\times\mathcal{X}_i$ by
$$\mathcal{Q}(p_{K_i|Z}):=\{q_i=q_{UZK_i}: q_{K_iZ|U}=p_{K_iZ|U}\ \text{for some}\ p_i\in\tilde{\mathcal{P}}(p_{ZK_i})\}.$$
For each $i=1,2,3$, for $(\mu,\alpha)\in[0,1]^2$, and for $q_i=q_{UZK_i}\in\mathcal{Q}(p_{K_i|Z})$, define
$$\omega_{q_i|p_Z}^{(\mu,\alpha)}(z,k_i|u):=\bar{\alpha}\log\frac{q_Z(z)}{p_Z(z)}+\alpha\left(\mu\log\frac{q_{Z|U}(z|u)}{p_Z(z)}+\bar{\mu}\log\frac{1}{q_{K_i|U}(k_i|u)}\right),$$
$$\Omega^{(\mu,\alpha)}(q_i|p_Z):=-\log\mathrm{E}_q\!\left[\exp\left\{-\omega_{q_i|p_Z}^{(\mu,\alpha)}(Z,K_i|U)\right\}\right],$$
$$\Omega^{(\mu,\alpha)}(p_{ZK_i}):=\min_{q_i\in\mathcal{Q}(p_{K_i|Z})}\Omega^{(\mu,\alpha)}(q_i|p_Z),$$
$$F^{(\mu,\alpha)}(\mu R_{\mathcal{A}}+\bar{\mu}R_i|p_{ZK_i}):=\frac{\Omega^{(\mu,\alpha)}(p_{ZK_i})-\alpha(\mu R_{\mathcal{A}}+\bar{\mu}R_i)}{2+\alpha\bar{\mu}},$$
$$F(R_{\mathcal{A}},R_i|p_{ZK_i}):=\sup_{(\mu,\alpha)\in[0,1]^2}F^{(\mu,\alpha)}(\mu R_{\mathcal{A}}+\bar{\mu}R_i|p_{ZK_i}).$$
In [11] (extended version), Oohama proved several properties of $F(R_{\mathcal{A}},R_i|p_{ZK_i})$, $i=1,2,3$. According to [11] (extended version), we have the following property.
Property 2.
For any $i=1,2,3$ and any $\tau\in(0,(1/2)\rho(p_{ZK_i}))$, the condition $(R_{\mathcal{A}},R_i+\tau)\notin\mathcal{R}_i(p_{ZK_i})$ implies
$$F(R_{\mathcal{A}},R_i|p_{ZK_i})>\frac{\rho(p_{ZK_i})}{4}\cdot g^2\!\left(\frac{\tau}{\rho(p_{ZK_i})}\right)>0,$$
where $\rho(p_{ZK_i})$, $i=1,2,3$, are some quantities depending on $p_{ZK_i}$, and $g$ is the inverse function of $\vartheta(a):=a+(5/4)a^2$, $a\ge0$.
Let us define
$$F_{\min}(R_{\mathcal{A}},R_1,R_2|p_{ZK_1K_2}):=\min_{i=1,2,3}F(R_{\mathcal{A}},R_i|p_{ZK_i}).$$
Our main result is as follows.
Theorem 1.
For any $R_{\mathcal{A}},R_1,R_2>0$ and any $p_{ZK_1K_2}$, there exist two sequences of mappings $\{(\varphi_i^{(n)},\psi_i^{(n)})\}_{n=1}^{\infty}$, $i=1,2$, such that, for any $p_{X_i}$, $i=1,2$, and any $n\ge(R_1+R_2)^{-1}$, we have
$$\frac{1}{n}\log|\mathcal{X}_i^{m_i}|=\frac{m_i}{n}\log|\mathcal{X}_i|\le R_i,\quad p_e(\phi_i^{(n)},\psi_i^{(n)}|p_{X_i}^n)\le e^{-n[E(R_i|p_{X_i})-\delta_{i,n}]},\quad i=1,2,$$
and, for any eavesdropper $\mathcal{A}$ with $\varphi_{\mathcal{A}}$ satisfying $\varphi_{\mathcal{A}}^{(n)}\in\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}})$, we have
$$\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{K_1K_2}^n,W^n)\le e^{-n[F_{\min}(R_{\mathcal{A}},R_1,R_2|p_{ZK_1K_2})-\delta_{3,n}]},$$
where $\delta_{i,n}$, $i=1,2,3$, are defined by
$$\delta_{i,n}:=\frac{1}{n}\log\left[e(n+1)^{2|\mathcal{X}_i|}\left(1+(n+1)^{|\mathcal{X}_1|}+(n+1)^{|\mathcal{X}_2|}\right)\right]\quad\text{for }i=1,2,$$
$$\delta_{3,n}:=\frac{1}{n}\log\left[15n(R_1+R_2)\left(1+(n+1)^{|\mathcal{X}_1|}+(n+1)^{|\mathcal{X}_2|}\right)\right].$$
Note that, for $i=1,2,3$, $\delta_{i,n}\to0$ as $n\to\infty$.
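The vanishing of $\delta_{i,n}$ is easy to check numerically. The sketch below evaluates both expressions for binary alphabets ($|\mathcal{X}_1|=|\mathcal{X}_2|=2$) and a total rate $R_1+R_2=1$; these parameter choices are ours, made only for illustration.

```python
import math

def delta_12(n, alphabet_size=2):
    """delta_{i,n} for i = 1, 2 with |X_1| = |X_2| = alphabet_size."""
    inner = math.e * (n + 1) ** (2 * alphabet_size) \
        * (1 + 2 * (n + 1) ** alphabet_size)
    return math.log(inner) / n

def delta_3(n, R_sum=1.0, alphabet_size=2):
    """delta_{3,n} with R_1 + R_2 = R_sum."""
    inner = 15 * n * R_sum * (1 + 2 * (n + 1) ** alphabet_size)
    return math.log(inner) / n

# both sequences are polylogarithmic over n, hence vanish as n grows
for n in (10, 100, 1000, 10000):
    print(n, round(delta_12(n), 4), round(delta_3(n), 4))
```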
Details of the proof of Theorem 1 are given in Section 5.
The functions $E(R_i|p_{X_i})$ and $F_{\min}(R_{\mathcal{A}},R_1,R_2|p_{ZK_1K_2})$ take positive values if $(R_{\mathcal{A}},R_1,R_2)$ belongs to the set
$$\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2}):=\{R_1>H(X_1)\}\cap\{R_2>H(X_2)\}\cap\bigcap_{i=1,2,3}\mathcal{R}_i^{c}(p_{ZK_i}).$$
Thus, by Theorem 1, under $(R_{\mathcal{A}},R_1,R_2)\in\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})$, we have the following:
  • On the reliability: for $i=1,2$, $p_e(\phi_i^{(n)},\psi_i^{(n)}|p_{X_i}^n)$ goes to zero exponentially as $n$ tends to infinity, and its exponent is lower bounded by the function $E(R_i|p_{X_i})$.
  • On the security: for any $\varphi_{\mathcal{A}}$ satisfying $\varphi_{\mathcal{A}}^{(n)}\in\mathcal{F}_{\mathcal{A}}^{(n)}(R_{\mathcal{A}})$, the information leakage $\Delta^{(n)}(\varphi_1^{(n)},\varphi_2^{(n)},\varphi_{\mathcal{A}}^{(n)}|p_{X_1X_2}^n,p_{ZK_1K_2}^n)$ on $X_1^n,X_2^n$ goes to zero exponentially as $n$ tends to infinity, and its exponent is lower bounded by the function $F_{\min}(R_{\mathcal{A}},R_1,R_2|p_{ZK_1K_2})$.
  • For each $i=1,2$, any code $(\phi_i^{(n)},\psi_i^{(n)})$ that attains the exponent function $E(R_i|p_{X_i})$ is a universal code that depends only on $R_i$, not on the value of the distribution $p_{X_i}$.
Define
$$\mathcal{D}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2}):=\{(R_{\mathcal{A}},R_1,R_2,E(R_1|p_{X_1}),E(R_2|p_{X_2}),F_{\min}(R_{\mathcal{A}},R_1,R_2|p_{ZK_1K_2})): (R_{\mathcal{A}},R_1,R_2)\in\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})\}.$$
From Theorem 1, we obtain the following corollary:
Corollary 1.
$$\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})\subseteq\mathcal{R}_{\mathrm{Sys}}(p_{X_1X_2},p_{ZK_1K_2}),\quad \mathcal{D}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})\subseteq\mathcal{D}_{\mathrm{Sys}}(p_{X_1X_2},p_{ZK_1K_2}).$$
Remark 1.
Note that, from the definitions of the sets $\mathcal{P}(p_{ZK_i})$ and $\mathcal{R}_i(p_{ZK_i})$, it is easy to see that the set $\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})$ is the intersection of the outer regions of all possible adversarial encodings of $\mathcal{A}$ (where each encoding is represented by one auxiliary variable $U$) within rate $R_{\mathcal{A}}$. Moreover, since we use the strong converse theorem developed in [11] instead of the weak converse, we can guarantee that, in $\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})$, not only the adversarial decoding success probability but also the information leakage decays to zero at an exponential rate.
Remark 2.
Thanks to the separation between the reliability and security analyses, the security-related results in this paper still hold even in the case where the sources are correlated. Moreover, our proposed countermeasure can strengthen the secrecy even when the marginal distribution of each key $K_i$, i.e., $p_{K_i}$ ($i=1,2$), is not uniform.

Examples of Extremal Cases

In the remaining part of this section, we give two simple examples of $\mathcal{R}_{\mathrm{Sys}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})$. These correspond to extremal cases of the correlation of $(K_1,K_2,Z)$. In both examples, we assume that $\mathcal{X}_1=\mathcal{X}_2=\{0,1\}$ and $p_{X_1}(1)=s_1$, $p_{X_2}(1)=s_2$. We further assume that $p_{K_1K_2}$ is the binary symmetric distribution given by
$$p_{K_1K_2}(k_1,k_2)=(1/2)\left(\bar{\rho}\,\overline{(k_1\oplus k_2)}+\rho\,(k_1\oplus k_2)\right)\quad\text{for }(k_1,k_2)\in\{0,1\}^2,$$
where $\rho\in[0,0.5]$ is a parameter indicating the correlation level of $(K_1,K_2)$.
Example 1.
We consider the case where $W=p_{Z|K_1K_2}$ is given by
$$W(z|k_1,k_2)=W(z|k_1)=\bar{\rho}_A\,\overline{(k_1\oplus z)}+\rho_A\,(k_1\oplus z)\quad\text{for }(k_1,k_2,z)\in\{0,1\}^3.$$
In this case, we have the Markov chain $K_2\to K_1\to Z$. This corresponds to the case where the adversary $\mathcal{A}$ attacks only the node $L_1$. Let $N_A$ be a binary random variable with $p_{N_A}(1)=\rho_A$. We assume that $N_A$ is independent of $(X_1,X_2)$ and $(K_1,K_2)$. Using $N_A$, $Z$ can be written as $Z=K_1\oplus N_A$. The inner bound for this example, denoted by $\mathcal{R}_{\mathrm{Sys,ex1}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})$, is the following:
$$\mathcal{R}_{\mathrm{Sys,ex1}}^{(\mathrm{in})}(p_{X_1X_2},p_{ZK_1K_2})=\{(R_{\mathcal{A}},R_1,R_2): 0\le R_{\mathcal{A}}\le\log2-h(\theta),\ h(s_1)<R_1<h(\rho_A\circledast\theta),\ h(s_2)<R_2<h(\rho\circledast\rho_A\circledast\theta),\ R_1+R_2<h(\rho)+h(\rho_A\circledast\theta)\ \text{for some }\theta\in[0,1]\},$$
where $h(\cdot)$ denotes the binary entropy function and $a\circledast b:=a\bar{b}+\bar{a}b$.
One can easily compute R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) based on the solution for the problem of lossless source coding with helper, which is explained in [24]. The computation of R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) is given in Appendix A.
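The membership test for the region R Sys , ex 1 ( in ) above can be sketched numerically. The following Python fragment is an illustrative aid rather than part of the formal development: the function names and the grid search over the auxiliary parameter θ are our own choices, rates are in nats, and conv implements the binary convolution a ∗ b = a ( 1 − b ) + ( 1 − a ) b .

```python
import numpy as np

def h(p):
    # binary entropy in nats
    return 0.0 if p in (0.0, 1.0) else float(-p * np.log(p) - (1 - p) * np.log(1 - p))

def conv(a, b):
    # binary convolution a * b = a(1 - b) + (1 - a)b
    return a * (1 - b) + (1 - a) * b

def in_region_ex1(RA, R1, R2, s1, s2, rho, rhoA, grid=2001):
    # membership test: search for one theta in [0, 1/2] satisfying all constraints
    for theta in np.linspace(0.0, 0.5, grid):
        if (0 <= RA <= np.log(2) - h(theta)
                and h(s1) < R1 < h(conv(rhoA, theta))
                and h(s2) < R2 < h(conv(conv(rho, rhoA), theta))
                and R1 + R2 < h(rho) + h(conv(rhoA, theta))):
            return True
    return False
```

For instance, with s 1 = s 2 = 0.1 , ρ = 0.2 , ρ A = 0.1 , this sketch admits ( R A , R 1 , R 2 ) = ( 0.1 , 0.5 , 0.5 ) , while any R 1 below h ( s 1 ) is rejected for every θ.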
Example 2.
We consider the case of ρ = 0.5 . In this case, K 1 and K 2 are independent, and there is no information leakage if R A = 0 . We assume that W = p Z | K 1 K 2 is given by
W ( z | k 1 , k 2 ) = ρ A ¯ ( k 1 ⊕ k 2 ⊕ z ) ¯ + ρ A ( k 1 ⊕ k 2 ⊕ z ) for ( k 1 , k 2 , z ) ∈ { 0 , 1 } 3 .
Let N A be the same random variable as in the previous example. Using N A , Z can be written as Z = K 1 ⊕ K 2 ⊕ N A . The inner bound in this example, denoted by R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) , is the following:
R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) = { ( R A , R 1 , R 2 ) : 0 ≤ R A ≤ log 2 − h ( θ ) , h ( s i ) < R i < log 2 , i = 1 , 2 , R 1 + R 2 < log 2 + h ( ρ A ∗ θ ) for some θ ∈ [ 0 , 1 ] } .
Similar to Example 1, one can also easily compute R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) based on the solution for the problem of lossless source coding with helper, which is explained in [24]. Computation of R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) is given in Appendix B.
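A similar numerical membership test applies to R Sys , ex 2 ( in ) . The sketch below (again illustrative only; naming and grid resolution are arbitrary, rates in nats) specializes the search to the independent-key case:

```python
import numpy as np

def h(p):
    # binary entropy in nats
    return 0.0 if p in (0.0, 1.0) else float(-p * np.log(p) - (1 - p) * np.log(1 - p))

def in_region_ex2(RA, R1, R2, s1, s2, rhoA, grid=2001):
    log2 = float(np.log(2))
    for theta in np.linspace(0.0, 0.5, grid):
        # mix is the binary convolution rhoA * theta used in the sum-rate bound
        mix = rhoA * (1 - theta) + (1 - rhoA) * theta
        if (0 <= RA <= log2 - h(theta)
                and h(s1) < R1 < log2
                and h(s2) < R2 < log2
                and R1 + R2 < log2 + h(mix)):
            return True
    return False
```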
For the above two examples, the cross-sections of the regions R Sys , exi ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) , i = 1 , 2 , cut by the plane { R A = log 2 − h ( θ ) } are shown in Figure 5.

5. Proofs of the Main Results

In this section, we prove Theorem 1.

5.1. Types of Sequences and Their Properties

In this subsection, we prepare basic results on types. These results are basic tools for our analysis of several bounds related to the error probability of decoding and to security.
Definition 5.
For each i = 1 , 2 and for any n-sequence x i n = x i , 1 x i , 2 ⋯ x i , n ∈ X i n , n ( x i | x i n ) denotes the number of t such that x i , t = x i . The relative frequencies { n ( x i | x i n ) / n } x i ∈ X i of the components of x i n are called the type of x i n , denoted by P x i n . The set that consists of all the types on X i is denoted by P n ( X i ) . Let X ¯ i denote an arbitrary random variable whose distribution P X ¯ i belongs to P n ( X i ) . For p X ¯ i ∈ P n ( X i ) , set
T X ¯ i n : = { x i n : P x i n = p X ¯ i } .
For sets of types and joint types, the following lemma holds. For the details of the proof, see Csiszár and Körner [25].
Lemma 1.
(a) 
| P n ( X i ) | ≤ ( n + 1 ) | X i | .
(b) 
For P X ¯ i P n ( X i ) ,
( n + 1 ) − | X i | e n H ( X ¯ i ) ≤ | T X ¯ i n | ≤ e n H ( X ¯ i ) .
(c) 
For x i n T X ¯ i n ,
p X i n ( x i n ) = e − n [ H ( X ¯ i ) + D ( p X ¯ i | | p X i ) ] .
By Lemma 1 parts (b) and (c), we immediately obtain the following lemma:
Lemma 2.
For p X ¯ i P n ( X i ) ,
p X i n ( T X ¯ i n ) ≤ e − n D ( p X ¯ i | | p X i ) .
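For the binary alphabet, the bounds of Lemma 1 part (b) and Lemma 2 can be checked directly, since a type class is counted by a binomial coefficient. The following Python sketch is an informal sanity check; the block length, type, and source bias are arbitrary choices:

```python
import math

def H(p):
    # entropy of Bernoulli(p), in nats
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

def D(p, q):
    # KL divergence D(Bernoulli(p) || Bernoulli(q)), in nats
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(p, q) + term(1 - p, 1 - q)

n, k, q = 20, 6, 0.25        # block length, number of ones, source bias
p_bar = k / n                # the type under consideration
size = math.comb(n, k)       # |T^n| for this binary type
# Lemma 1(b): (n+1)^{-|X|} e^{nH} <= |T^n| <= e^{nH}, with |X| = 2
assert (n + 1) ** (-2) * math.exp(n * H(p_bar)) <= size <= math.exp(n * H(p_bar))
# Lemma 2: probability of the whole type class is at most e^{-n D(type || source)}
prob = size * q ** k * (1 - q) ** (n - k)
assert prob <= math.exp(-n * D(p_bar, q))
```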

5.2. Upper Bounds on Reliability and Security

In this subsection, we evaluate upper bounds of p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) , i = 1 , 2 , and Δ n ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) . For p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) , we derive an upper bound that can be characterized with a quantity depending on ( ϕ i ( n ) , ψ i ( n ) ) and the type P x i n of sequences x i n ∈ X i n . We first evaluate p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) , i = 1 , 2 . For x i n ∈ X i n and p X ¯ i ∈ P n ( X i ) , we define the following functions:
Ξ x i n ( ϕ i ( n ) , ψ i ( n ) ) : = 1 if ψ i ( n ) ( ϕ i ( n ) ( x i n ) ) ≠ x i n , 0 otherwise , Ξ X ¯ i ( ϕ i ( n ) , ψ i ( n ) ) : = ( 1 / | T X ¯ i n | ) ∑ x i n ∈ T X ¯ i n Ξ x i n ( ϕ i ( n ) , ψ i ( n ) ) .
Then, we have the following lemma.
Lemma 3.
In the proposed system, for i = 1 , 2 and for any pair of ( ϕ i ( n ) , ψ i ( n ) ) , we have
p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) ≤ ∑ p X ¯ i ∈ P n ( X i ) Ξ X ¯ i ( ϕ i ( n ) , ψ i ( n ) ) e − n D ( p X ¯ i | | p X i ) .
Proof of this lemma is found in [26]. We omit the proof.
We next discuss upper bounds of
Δ n ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) = I ( C ˜ 1 m 1 C ˜ 2 m 2 , M A ( n ) ; X 1 n X 2 n ) .
On an upper bound of I ( C ˜ 1 m 1 C ˜ 2 m 2 , M A ( n ) ; X 1 n X 2 n ) , we have the following lemma:
Lemma 4.
I ( C ˜ 1 m 1 C ˜ 2 m 2 , M A ( n ) ; X 1 n X 2 n ) ≤ D ( p K ˜ 1 m 1 K ˜ 2 m 2 | M A ( n ) | | p V 1 m 1 V 2 m 2 | p M A ( n ) ) ,
where p V 1 m 1 V 2 m 2 represents the uniform distribution over X 1 m 1 × X 2 m 2 .
We can prove Lemma 4 using a method similar to the one shown in [4]. The detailed proof is given in Appendix C.

5.3. Random Coding Arguments

We construct a pair of affine encoders ( φ 1 ( n ) , φ 2 ( n ) ) using the random coding method. For the two decoders ψ i ( n ) , i = 1 , 2 , we propose the minimum entropy decoder used in Csiszár [8] and Oohama and Han [27].
Random Construction of Affine Encoders: For each i = 1 , 2 , we first choose m i such that
m i : = ⌊ n R i / log | X i | ⌋ ,
where ⌊ a ⌋ stands for the integer part of a . It is obvious that, for i = 1 , 2 ,
R i − ( 1 / n ) log | X i | ≤ ( m i / n ) log | X i | ≤ R i .
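As a small numeric illustration of this choice of m i (a toy check; the values of n , R i , and the alphabet size below are arbitrary), the resulting coding rate never exceeds R i and falls short of it by less than ( 1 / n ) log | X i | :

```python
import math

def m_choice(n, R, alphabet_size):
    # m_i := floor(n R_i / log|X_i|), so that (m_i/n) log|X_i| <= R_i
    return math.floor(n * R / math.log(alphabet_size))

n, R, q = 100, 0.5, 2        # block length, target rate (nats), binary alphabet
m = m_choice(n, R, q)
rate = (m / n) * math.log(q)
assert rate <= R                       # never exceeds the target rate
assert rate > R - math.log(q) / n      # within (1/n) log|X| of the target
```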
By the Definition (2) of ϕ i ( n ) , we have that, for x i n X i n ,
ϕ i ( n ) ( x i n ) = x i n A i ,
where A i is a matrix with n rows and m i columns. By the definition (2) of φ i ( n ) , we have that, for k i n X i n ,
φ i ( n ) ( k i n ) = k i n A i + b i m i ,
where, for each i = 1 , 2 , b i m i is a vector of dimension m i . The entries of A i and b i m i are elements of the field from which X i is taken. These entries are selected at random, independently of each other, and with uniform distribution. The randomly constructed linear encoder ϕ i ( n ) and affine encoder φ i ( n ) have three properties shown in the following lemma.
Lemma 5 (Properties of Linear/Affine Encoders).
For each i = 1 , 2 , we have the following:
(a) 
For any x i n , v i n X i n with x i n v i n , we have
Pr [ ϕ i ( n ) ( x i n ) = ϕ i ( n ) ( v i n ) ] = Pr [ ( x i n − v i n ) A i = 0 m i ] = | X i | − m i .
(b) 
For any s i n ∈ X i n and for any s ˜ i m i ∈ X i m i , we have
Pr [ φ i ( n ) ( s i n ) = s ˜ i m i ] = Pr [ s i n A i + b i m i = s ˜ i m i ] = | X i | − m i .
(c) 
For any s i n , t i n X i n with s i n t i n , and for any s ˜ i m i X i m i , we have
Pr [ φ i ( n ) ( s i n ) = φ i ( n ) ( t i n ) = s ˜ i m i ] = Pr [ s i n A i + b i m i = t i n A i + b i m i = s ˜ i m i ] = | X i | − 2 m i .
Proof of this lemma is found in [26]. We omit the proof.
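Property (b) of Lemma 5 can be confirmed exhaustively for a toy binary case. The following Python sketch (illustrative only; the dimensions n = 3 , m = 2 and the particular input and target are arbitrary) enumerates all pairs of matrices A i and vectors b i m i over GF(2):

```python
import itertools
import numpy as np

n, m = 3, 2   # toy block length and compressed length over GF(2)

def affine(A, b, x):
    # phi(x) = x A + b over GF(2); A is n x m, b has length m
    return tuple((np.array(x) @ A + b) % 2)

s = (1, 0, 1)
target = (0, 1)
count = total = 0
# exhaust all (A, b): 2^(n m) matrices times 2^m offset vectors
for bits in itertools.product([0, 1], repeat=n * m + m):
    A = np.array(bits[:n * m]).reshape(n, m)
    b = np.array(bits[n * m:])
    total += 1
    if affine(A, b, s) == target:
        count += 1
# Lemma 5(b): Pr[phi(s) = target] = |X|^{-m} = 2^{-2} = 0.25
assert count / total == 0.25
```

For each of the 2^{nm} matrices, exactly one offset b maps s to the target, which is why the ratio is exactly | X | − m .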
We next define the decoder function ψ i ( n ) : X i m i X i n , i = 1 , 2 . To this end, we define the following quantities.
Definition 6.
For x i n X i n , we denote the entropy calculated from the type P x i n by H ( x i n ) . In other words, for a type P X ¯ i P n ( X i ) such that P X ¯ i = P x i n , we define H ( x i n ) = H ( X ¯ i ) .
Minimum Entropy Decoder: For each i = 1 , 2 , and for ϕ i ( n ) ( x i n ) = x ˜ i m i , we define the decoder function ψ i ( n ) : X i m i X i n as follows:
ψ i ( n ) ( x ˜ i m i ) : = x ^ i n if ϕ i ( n ) ( x ^ i n ) = x ˜ i m i and H ( x ^ i n ) < H ( x ˇ i n ) for all x ˇ i n such that ϕ i ( n ) ( x ˇ i n ) = x ˜ i m i and x ˇ i n ≠ x ^ i n ; arbitrary if there is no such x ^ i n ∈ X i n .
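For very small block lengths, the minimum entropy decoder can be implemented by brute force. The sketch below is illustrative only: the 4 × 3 matrix A over GF(2) and the input are arbitrary, and ties in empirical entropy would be broken by enumeration order.

```python
import itertools
import math
import numpy as np

def emp_entropy(x):
    # H(x^n): entropy of the type (empirical distribution) of x, in nats
    p = sum(x) / len(x)
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

def min_entropy_decode(A, y):
    # among all x^n in the coset {x : x A = y (mod 2)},
    # return the one with smallest empirical entropy H(x^n)
    n = A.shape[0]
    coset = [x for x in itertools.product([0, 1], repeat=n)
             if tuple(np.array(x) @ A % 2) == tuple(y)]
    return min(coset, key=emp_entropy)

# toy 4x3 linear encoder over GF(2)
A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
x = (1, 0, 0, 0)                     # low-entropy source word
y = np.array(x) @ A % 2              # compressed word phi(x)
assert min_entropy_decode(A, tuple(y)) == x
```

Here the coset of y contains only x and one higher-entropy word, so the decoder recovers x uniquely.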
Error Probability Bound: In the following arguments, we let E [ · ] denote expectations based on the random choice of the affine encoders φ i ( n ) , i = 1 , 2 . For i = 1 , 2 , define
Π X ¯ i ( R i ) : = e − n [ R i − H ( X ¯ i ) ] + .
Then, we have the following lemma.
Lemma 6.
For each i = 1 , 2 , for any n and for any P X ¯ i P n ( X i ) ,
E [ Ξ X ¯ i ( ϕ i ( n ) , ψ i ( n ) ) ] ≤ e ( n + 1 ) | X i | Π X ¯ i ( R i ) .
Proof of this lemma is found in [26]. We omit the proof.
Estimation of Approximation Error: Define
Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) : = ∑ ( a , k 1 n , k 2 n ) ∈ M A ( n ) × X 1 n × X 2 n p M A ( n ) K 1 n K 2 n ( a , k 1 n , k 2 n ) × log [ 1 + ( e n R 1 − 1 ) p K 1 n | M A ( n ) ( k 1 n | a ) + ( e n R 2 − 1 ) p K 2 n | M A ( n ) ( k 2 n | a ) + ( e n R 1 − 1 ) ( e n R 2 − 1 ) p K 1 n K 2 n | M A ( n ) ( k 1 n , k 2 n | a ) ] .
Then, we have the following lemma.
Lemma 7.
For i = 1 , 2 and for any n , m i satisfying ( m i / n ) log | X i | ≤ R i , we have
E [ D ( p K ˜ 1 m 1 K ˜ 2 m 2 | M A ( n ) | | p V 1 m 1 V 2 m 2 | p M A ( n ) ) ] ≤ Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) .
Proof of this lemma is given in Appendix D. From the bound (20) in Lemma 7, we know that the quantity Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) serves as an upper bound of the ensemble average of the conditional divergence D ( p K ˜ 1 m 1 K ˜ 2 m 2 | M A ( n ) | | p V 1 m 1 V 2 m 2 | p M A ( n ) ) .
From Lemmas 4 and 7, we have the following corollary.
Corollary 2.
E Δ n ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) .
Existence of Good Code { ( φ i ( n ) , ψ i ( n ) ) } i = 1 , 2 :
From Lemma 6 and Corollary 2, we have the following lemma stating the existence of a universal code { ( φ i ( n ) , ψ i ( n ) ) } i = 1 , 2 .
Lemma 8.
There exists at least one deterministic code { ( φ i ( n ) , ψ i ( n ) ) } i = 1 , 2 satisfying ( m i / n ) log | X i | ≤ R i , i = 1 , 2 , such that, for i = 1 , 2 and for any p X ¯ i ∈ P n ( X i ) ,
Ξ X ¯ i ( ϕ i ( n ) , ψ i ( n ) ) ≤ e ( n + 1 ) | X i | × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } Π X ¯ i ( R i ) .
Furthermore, for any φ A ( n ) F A ( n ) ( R A ) , we have
Δ n ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) ≤ { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) .
Basically, we can prove Lemma 8 in the same way as the similar lemma shown in [4]. The detailed proof is given in Appendix E.
Proposition 1.
For any R A , R 1 , R 2 > 0 , and any p Z K 1 K 2 , there exist two sequences of mappings { ( φ i ( n ) , ψ i ( n ) ) } n = 1 , i = 1 , 2 such that, for i = 1 , 2 and for any p X i P ( X i ) , we have
( 1 / n ) log | X i m i | = ( m i / n ) log | X i | ≤ R i , p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) ≤ e ( n + 1 ) 2 | X i | × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } e − n E ( R i | p X i )
and, for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) , we have
Δ ( n ) ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) ≤ { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } × Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) .
Proof. 
By Lemma 8, there exists ( φ i ( n ) , ψ i ( n ) ) , i = 1 , 2 , satisfying ( m i / n ) log | X i | R i , such that for i = 1 , 2 and for any p X ¯ i P n ( X i ) ,
Ξ X ¯ i ( ϕ i ( n ) , ψ i ( n ) ) e ( n + 1 ) | X i | × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } Π X ¯ ( R i ) .
Furthermore, for any φ A ( n ) F A ( n ) ( R A ) ,
Δ n ( φ 1 ( n ) , φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } × Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) .
The bound (22) in Proposition 1 has already been proved in (24). Hence, it suffices to prove the bound (21) in Proposition 1 to complete the proof. On an upper bound of p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) , i = 1 , 2 , we have the following chain of inequalities:
p e ( ϕ i ( n ) , ψ i ( n ) | p X i n ) ≤ ( a ) e ( n + 1 ) | X i | × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } × ∑ p X ¯ i ∈ P n ( X i ) Π X ¯ i ( R i ) e − n D ( p X ¯ i | | p X i ) ≤ e ( n + 1 ) | X i | { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } | P n ( X i ) | e − n E ( R i | p X i ) ≤ ( b ) e ( n + 1 ) 2 | X i | { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } × e − n E ( R i | p X i ) .
Step (a) follows from Lemma 3 and (23). Step (b) follows from Lemma 1 part (a). ☐

5.4. Explicit Upper Bound of Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n )

In this subsection, we derive an explicit upper bound of Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) , which holds for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) . Define
0 : = p M A ( n ) Z n K 1 n K 2 n R 1 1 n log 1 p K 1 n | M A ( n ) ( K 1 n | M A ( n ) ) η or R 2 1 n log 1 p K 2 n | M A ( n ) ( K 2 n | M A ( n ) ) η 2 or R 1 + R 2 1 n log 1 p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) η 3 .
For i = 1 , 2 , define
i : = p M A ( n ) Z n K i n { R i 1 n log 1 p K i n | M A ( n ) ( K i n | M A ( n ) ) η i .
Furthermore, define
3 : = p M A ( n ) Z n K 1 n K 2 n { R 1 + R 2 1 n log 1 p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) η 3 .
By definition, it is obvious that
0 i = 1 3 i .
We have the following lemma.
Lemma 9.
For any η i > 0 , i = 1 , 2 , 3 and for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) , we have the following:
(26) Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) n ( R 1 + R 2 ) 0 + i = 1 3 e n η i (27) n ( R 1 + R 2 ) i = 1 3 i + i = 1 3 e n η i .
Specifically, if n [ R 1 + R 2 ] ≥ 1 , we have
( n [ R 1 + R 2 ] ) 1 Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) i = 1 3 ( i + e n η i ) .
Proof. 
By (25), it suffices to show (26) to prove Lemma 9. We set
A R 1 , R 2 ( K 1 n , K 2 n | M A ( n ) ) : = ( e n R 1 − 1 ) p K 1 n | M A ( n ) ( K 1 n | M A ( n ) ) + ( e n R 2 − 1 ) p K 2 n | M A ( n ) ( K 2 n | M A ( n ) ) + ( e n R 1 − 1 ) ( e n R 2 − 1 ) p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) .
Then, we have
Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) = E log 1 + A R 1 , R 2 ( K 1 n , K 2 n | M A ( n ) ) .
We further observe the following:
R 1 < ( 1 / n ) log [ 1 / p K 1 n | M A ( n ) ( K 1 n | M A ( n ) ) ] − η 1 , R 2 < ( 1 / n ) log [ 1 / p K 2 n | M A ( n ) ( K 2 n | M A ( n ) ) ] − η 2 , and R 1 + R 2 < ( 1 / n ) log [ 1 / p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) ] − η 3 together imply A R 1 , R 2 ( K 1 n , K 2 n | M A ( n ) ) < ∑ i = 1 3 e − n η i , and hence ( a ) log [ 1 + A R 1 , R 2 ( K 1 n , K 2 n | M A ( n ) ) ] ≤ ∑ i = 1 3 e − n η i .
Step (a) follows from log ( 1 + a ) ≤ a . We also note that
log { 1 + ( e n R 1 − 1 ) p K 1 n | M A ( n ) ( K 1 n | M A ( n ) ) + ( e n R 2 − 1 ) p K 2 n | M A ( n ) ( K 2 n | M A ( n ) ) + ( e n R 1 − 1 ) ( e n R 2 − 1 ) × p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) } ≤ log [ e n R 1 e n R 2 ] = n ( R 1 + R 2 ) .
From (29)–(31), we have the bound (26). ☐
On upper bound of i , i = 1 , 2 , 3 , we have the following lemma:
Lemma 10.
For any η i > 0 , i = 1 , 2 , 3 , and for any eavesdropper A with φ A satisfying φ A ( n ) ∈ F A ( n ) ( R A ) , we have, for each i = 1 , 2 , that i ≤ ˜ i , where
˜ i : = p M A ( n ) Z n K i n 0 1 n log q ^ i , M A ( n ) Z n K i n ( M A ( n ) , Z n , K i n ) p M A ( n ) Z n K n ( M A ( n ) , Z n , K i n ) η i , ( a ) 0 1 n log Q i , Z n ( Z n ) p Z n ( Z n ) η i , ( b ) R A 1 n log Q i , Z n | M A ( n ) ( Z n | M A ( n ) ) p Z n ( Z n ) η i , ( c ) R i 1 n log 1 Q i , K i n | M A ( n ) ( K i n | M A ( n ) ) η i + 3 e n η i
and, for i = 3 , that 3 ≤ ˜ 3 , where
˜ 3 : = p M A ( n ) Z n K 1 n K 2 n 0 1 n log q ^ 3 , M A ( n ) Z n K 1 n K 2 n ( M A ( n ) , Z n , K 1 n , K 2 n ) p M A ( n ) Z n K 1 n K 2 n ( M A ( n ) , Z n , K 1 n K 2 n ) η 3 , ( a ) 0 1 n log Q 3 , Z n ( Z n ) p Z n ( Z n ) η 3 , ( b ) R A 1 n log Q ˜ 3 , Z n | M A ( n ) ( Z n | M A ( n ) ) p Z n ( Z n ) η 3 , ( c ) R 1 + R 2 1 n log 1 p K 1 n K 2 n | M A ( n ) ( K 1 n , K 2 n | M A ( n ) ) η 3 + 3 e n η 3 .
The probability distributions appearing in the three inequalities (a), (b), and (c) on the right-hand sides of (32) and (33) can be selected arbitrarily. In (a) of (32), we can choose any probability distribution q ^ i , M A ( n ) Z n K i n on M A ( n ) × Z n × X i n ; in (b), any distribution Q i , Z n on Z n ; and in (c), any stochastic matrix Q ˜ i , Z n | M A ( n ) : M A ( n ) → Z n . Similarly, in (a) of (33), we can choose any probability distribution q ^ 3 , M A ( n ) Z n K 1 n K 2 n on M A ( n ) × Z n × X 1 n × X 2 n ; in (b), any distribution Q 3 , Z n on Z n ; and in (c), any stochastic matrix Q ˜ 3 , Z n | M A ( n ) : M A ( n ) → Z n .
The above lemma is the same as Lemma 10 in our previous work [26]. Since its proof is given in [26], we omit it in the present paper. We have the following proposition.
Proposition 2.
For any φ A ( n ) ∈ F A ( n ) ( R A ) and any n with n [ R 1 + R 2 ] ≥ 1 , we have
( n [ R 1 + R 2 ] ) − 1 Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) ≤ 15 e − n F min ( R A , R 1 , R 2 | p Z K 1 K 2 ) .
Proof: 
By Lemmas 9 and 10, we have, for any φ A ( n ) ∈ F A ( n ) ( R A ) and any n with n [ R 1 + R 2 ] ≥ 1 ,
( n [ R 1 + R 2 ] ) 1 Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) i = 1 3 ( ˜ i + e n η i ) .
The quantities ˜ i + e − n η i , i = 1 , 2 , 3 , are the same as the upper bound on the correct probability of decoding for the one-helper source coding problem in Lemma 1 in Oohama [11] (extended version). In a manner similar to the derivation of the exponential upper bound of the correct probability of decoding for the one-helper source coding problem, we can prove that, for any φ A ( n ) ∈ F A ( n ) ( R A ) , there exist η i , i = 1 , 2 , 3 , such that, for i = 1 , 2 , 3 , we have
˜ i + e n η i 5 e n F ( R A , R i | p Z K i ) .
From (35) and (36), we have that for any φ A ( n ) F A ( n ) ( R A ) and any n [ R 1 + R 2 ] 1 ,
( n [ R 1 + R 2 ] ) − 1 Θ ( R 1 , R 2 , φ A ( n ) | p Z K 1 K 2 n ) ≤ 5 ∑ i = 1 3 e − n F ( R A , R i | p Z K i ) ≤ 15 e − n F min ( R A , R 1 , R 2 | p Z K 1 K 2 ) ,
completing the proof. □

6. Alternative Formulation

Here, we show an alternative way to formulate the main problem considered in this paper. Originally, we considered the problem of reliable and secure broadcast communication in the presence of a side-channel adversary in the case where the sender uses one-time-pad encryption. We can also formulate it in a slightly more general way as follows.
Consider the problem of reliable and secure broadcast communication in the presence of a side-channel adversary in the case where the sender uses an encoding scheme Φ i ( n ) at node L i , where Φ i ( n ) encodes X i n and K i n into C ˜ i m i for i = 1 , 2 . We denote the system resulting from this alternative formulation by AltSys . We illustrate AltSys in Figure 6.

6.1. Explanation on Sys and AltSys and Their Comparison

First, recall the “communication” channel W , which is present in both systems Sys and AltSys . The channel W represents the process of transforming the analog raw physical data from the side-channel into raw digital data which can later be processed further by the side-channel adversary A .
In the broadcasting encryption system with post-encryption coding Sys shown in Figure 3, the main problem we aim to solve is how to strengthen the secrecy of broadcasting encrypted sources against the side-channel adversary A , where the encryption function is one-time-pad encryption. In Sys , since the encryption has been explicitly described as one-time-pad encryption from the beginning, we always treat W as an immediate consequence of the side-channel attacks launched on the one-time-pad encryption processes.
In the broadcasting system AltSys from our alternative formulation, shown in Figure 6, the problem we consider is slightly different from the one in Sys . In AltSys , the problem we aim to solve is whether we can find or construct good encoding schemes that guarantee reliability and security against the side-channel adversary A . In AltSys , the properties of W are fixed first, and we then seek good encoding schemes under the constraints imposed by the properties of W .

6.2. Reliability and Security of Alternative Formulation

We can define the reliability and security of AltSys in the same manner as in Section 2.2, as follows.
Defining Reliability and Security: From the description of AltSys shown in Figure 6, the decoding process is successful if X ^ i n = X i n holds. The decoding error probabilities p e , i , i = 1 , 2 , are defined as follows:
p e , i = p e ( Φ i ( n ) , Ψ i ( n ) | p X i n , p K i n ) : = Pr [ Ψ i ( n ) ( Φ i ( n ) ( X i n , K i n ) ) X i n ] .
Recall that X i and K i are assumed to be independent. Let us set M A ( n ) = φ A ( n ) ( Z n ) . The information leakage Δ ( n ) on ( X 1 n , X 2 n ) from ( C ˜ 1 m 1 , C ˜ 2 m 2 , M A ( n ) ) is measured by the mutual information between ( X 1 n , X 2 n ) and ( C ˜ 1 m 1 , C ˜ 2 m 2 , M A ( n ) ) . We can formally define this quantity by
Δ ( n ) = Δ ( n ) ( Φ 1 ( n ) , Φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) : = I ( X 1 n X 2 n ; C ˜ 1 m 1 , C ˜ 2 m 2 , M A ( n ) ) .
Definition 7.
A pair ( R 1 , R 2 ) is achievable under R A > 0 for the system AltSys if there exist two sequences { ( Φ i ( n ) , Ψ i ( n ) ) } n ≥ 1 , i = 1 , 2 , such that ∀ ϵ > 0 , ∃ n 0 = n 0 ( ϵ ) ∈ N 0 , ∀ n ≥ n 0 , we have for i = 1 , 2 ,
( 1 / n ) log | X i m i | = ( m i / n ) log | X i | ≤ R i , p e ( Φ i ( n ) , Ψ i ( n ) | p X i n , p K i n ) ≤ ϵ ,
and for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) , we have
Δ ( n ) ( Φ 1 ( n ) , Φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) ≤ ϵ .
Definition 8 (Reliable and Secure Rate Region).
Let R AltSys ( p X 1 X 2 , p Z K 1 K 2 ) denote the set of all ( R A , R ) such that R is achievable under R A . We call R AltSys ( p X 1 X 2 , p Z K 1 K 2 ) the reliable and secure rate region.
Definition 9.
A five tuple ( R 1 , R 2 , E 1 , E 2 , F ) is achievable under R A > 0 for the system AltSys if there exists a sequence { ( Φ i ( n ) , Ψ i ( n ) ) } n ≥ 1 , i = 1 , 2 , such that ∀ ϵ > 0 , ∃ n 0 = n 0 ( ϵ ) ∈ N 0 , ∀ n ≥ n 0 , we have for i = 1 , 2 ,
( 1 / n ) log | X i m i | = ( m i / n ) log | X i | ≤ R i , p e ( Φ i ( n ) , Ψ i ( n ) | p X i n , p K i n ) ≤ e − n ( E i − ϵ ) ,
and for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) , we have
Δ ( n ) ( Φ 1 ( n ) , Φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p Z K 1 K 2 n ) ≤ e − n ( F − ϵ ) .
Definition 10 (Rate, Reliability, and Security Region).
Let D AltSys ( p X 1 X 2 , p K 1 K 2 , W ) denote the set of all ( R A , R , E , F ) such that ( R 1 , R 2 , E 1 , E 2 , F ) is achievable under R A . We call D AltSys ( p X 1 X 2 , p K 1 K 2 , W ) the rate, reliability, and security region.
Theoretical Results on the Reliability and Security of the Broadcasting System from the Alternative Formulation: In order to provide a solution to the problem from our alternative formulation, it is sufficient to show the existence of encoders and decoders { ( Φ i ( n ) , Ψ i ( n ) ) } , i = 1 , 2 , which can guarantee reliability and security in the presence of a side-channel adversary. Based on the approach and theoretical results shown in Section 4 on proving the reliability and security of the broadcast system where the sender sends encrypted sources using one-time-pad encryption, it is easy to see that we can achieve reliability and security for the broadcasting system from the alternative formulation of the problem (Figure 6), such that the decoding error probabilities p e , i ( i = 1 , 2 ) and the information leakage Δ ( n ) decay to zero at exponential rates, by specifying Φ i ( n ) and Ψ i ( n ) , i = 1 , 2 , as follows:
Φ i ( n ) ( X i n , K i n ) : = φ i ( n ) ( EncOTP i ( n ) ( X i n , K i n ) ) for   i = 1 , 2 , Ψ i ( n ) ( C ˜ i m i , K i n ) : = ψ i ( n ) ( DecOTP i ( n ) ( C ˜ i m i , φ i ( n ) ( K i n ) ) ) for   i = 1 , 2 ,
where:
  • EncOTP i ( n ) : X i n × X i n → X i n is the one-time-pad encryption function defined as EncOTP i ( n ) ( a , b ) : = a ⊕ b for ( a , b ) ∈ X i n × X i n ,
  • φ i ( n ) : X i n → X i m i is an affine encoder constructed based on a linear encoder ϕ i ( n ) : X i n → X i m i as shown in Section 5.3,
  • DecOTP i ( n ) : X i m i × X i m i → X i m i is the one-time-pad decryption function defined as DecOTP i ( n ) ( a , b ) : = a ⊖ b for ( a , b ) ∈ X i m i × X i m i ,
  • ψ i ( n ) : X i m i → X i n is the decoder function for the linear encoder ϕ i ( n ) which is associated with the affine encoder φ i ( n ) (see Section 5.3 for the detailed construction).
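As a toy end-to-end check of the composition in Equation (37) over GF(2) (an informal sketch, not the construction at cryptographic scale; the names Phi and Psi , the 4 × 3 matrix, and the particular source and key blocks are our own illustrative choices), note that subtracting the encoded key share reduces decoding to the linear-encoder problem of Section 5.3:

```python
import itertools
import math
import numpy as np

n, m = 4, 3
# fixed toy affine encoder phi(c) = c A + b over GF(2)
A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
b = np.array([1, 0, 1])

def emp_entropy(x):
    p = sum(x) / len(x)
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

def Phi(x, k):
    # Phi(X^n, K^n) = phi(EncOTP(X^n, K^n)) = (x xor k) A + b  (mod 2)
    return (np.bitwise_xor(x, k) @ A + b) % 2

def Psi(ct, k):
    # Psi(C~^m, K^n) = psi(DecOTP(C~^m, phi(K^n))): subtract the key share,
    # then minimum-entropy decode the linear part x -> x A
    y = (ct - (k @ A + b)) % 2          # equals x A (mod 2)
    coset = [x for x in itertools.product([0, 1], repeat=n)
             if tuple(np.array(x) @ A % 2) == tuple(y)]
    return tuple(min(coset, key=emp_entropy))

x = np.array([1, 0, 0, 0])              # low-entropy source block
k = np.array([0, 1, 1, 0])              # one-time-pad key block
assert Psi(Phi(x, k), k) == tuple(x)
```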
It is easy to see that Theorem 1 actually shows the achievability of reliability and security for the broadcasting system in the presence of a side-channel adversary under the specification of Φ i ( n ) and Ψ i ( n ) , i = 1 , 2 , stated in Equation (37). Hence, the following theorem automatically holds.
Theorem 2.
For any R A , R 1 , R 2 > 0 and any p Z K 1 K 2 , there exist two sequences of mappings { ( Φ i ( n ) , Ψ i ( n ) ) } n = 1 ∞ , i = 1 , 2 , such that, for any p X i and p K i for i = 1 , 2 , and any n with n ( R 1 + R 2 ) ≥ 1 , we have
( 1 / n ) log | X i m i | = ( m i / n ) log | X i | ≤ R i , p e ( Φ i ( n ) , Ψ i ( n ) | p X i n , p K i n ) ≤ e − n [ E ( R i | p X i ) − δ i , n ] , i = 1 , 2
and for any eavesdropper A with φ A satisfying φ A ( n ) F A ( n ) ( R A ) , we have
Δ ( n ) ( Φ 1 ( n ) , Φ 2 ( n ) , φ A ( n ) | p X 1 X 2 n , p K 1 K 2 n , W n ) ≤ e − n [ F min ( R A , R 1 , R 2 | p Z K 1 K 2 ) − δ 3 , n ] ,
where δ i , n , i = 1 , 2 , 3 are defined by
δ i , n : = ( 1 / n ) log [ e ( n + 1 ) 2 | X i | × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } ] for i = 1 , 2 , δ 3 , n : = ( 1 / n ) log [ 15 n ( R 1 + R 2 ) × { 1 + ( n + 1 ) | X 1 | + ( n + 1 ) | X 2 | } ] .
Note that, for i = 1 , 2 , 3 , δ i , n → 0 as n → ∞ .
It is easy to see that the proof of Theorem 1 explained in Section 5 is also the proof of Theorem 2. Note that the functions E ( R i | p X i ) and F min ( R A , R 1 , R 2 | p Z K 1 K 2 ) take positive values if ( R A , R 1 , R 2 ) belongs to the set
R AltSys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) : = { R 1 > H ( X 1 ) } ∩ { R 2 > H ( X 2 ) } ∩ ⋂ i = 1 , 2 , 3 R i c ( p Z K i ) .
Then, define the following:
D AltSys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) : = { ( R A , R 1 , R 2 , E ( R 1 | p X 1 ) , E ( R 2 | p X 2 ) , F min ( R A , R 1 , R 2 | p Z K 1 K 2 ) ) : ( R A , R 1 , R 2 ) ∈ R AltSys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) } .
Hence, we have the following corollary.
Corollary 3.
R AltSys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) ⊆ R AltSys ( p X 1 X 2 , p Z K 1 K 2 ) , D AltSys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) ⊆ D AltSys ( p X 1 X 2 , p Z K 1 K 2 ) .

7. Comparison to Previous Results

Table 1 shows the comparison between the result in this paper and previously published research results that use the PEC paradigm for amplifying the secrecy of a system.

8. Discussion on the Outer-Bounds of Rate Regions and Open Problems

In this paper, we have shown the inner bounds of R Sys (resp. R AltSys ). Although we have not touched on the issue of the outer bound of R Sys (resp. R AltSys ) in this paper, one may find hints for deriving the outer bounds in Yamamoto [28]. However, it should be remarked that, in this paper, we are dealing with the side-channel adversary model, which is different from the wiretap model in Yamamoto [28]. In order to apply the method in Yamamoto [28] to find the outer bound of R Sys (resp. R AltSys ), one may need to extend the method so that it can handle the rate constraint introduced by the side-channel adversary. We leave the outer bounds of R Sys and R AltSys as open problems.
Furthermore, in contrast to the case of R Sys (resp. R AltSys ), where we found hints in Yamamoto [28], we are not able to find any hints in the literature on determining the outer bound of D Sys (resp. D AltSys ). We also leave the outer bounds of D Sys and D AltSys as open problems.

9. Conclusions

In this paper, we have proposed a new model for analyzing the reliability and the security of broadcasting encrypted sources in the case of one-time-pad encryption, in the presence of an adversary that is not only eavesdropping the public communication channel to obtain ciphertexts but is also obtaining some physical information leaked by multiple devices owned by the sender while performing the encryption. We have also presented a countermeasure against such an adversary by utilizing affine encoders with certain properties. The main distinguishing feature of our countermeasure is that its performance is independent from the characteristics or the types of physical information leaked from the devices exploited by the adversary.

Author Contributions

Both B.S. and Y.O. contributed to the writing of the original draft of this paper. Other contributions of B.S. include (but are not limited to): the conceptualization of the research goals and aims, the validation of the results, the visualization/presentation of the work, and the review and editing. Other contributions of Y.O. include (but are not limited to): the conceptualization of the ideas, research goals, and aims, the formal analysis, and the supervision.

Funding

This research was funded by Japan Society for the Promotion of Science (JSPS) Kiban (B) 18H01438 and Japan Society for the Promotion of Science (JSPS) Kiban (C) 18K11292.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Computation of R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 )

In this appendix, we compute the region R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) . Since H ( X i ) = h ( s i ) , i = 1 , 2 , we have
R Sys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) = { R 1 > h ( s 1 ) } ∩ { R 2 > h ( s 2 ) } ∩ ⋂ i = 1 , 2 , 3 R i c ( p Z K i ) .
We compute R ( p Z K 1 ) , R ( p Z K 2 ) , and R ( p Z K 1 K 2 ) explicitly. Then, we obtain the form of the region R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) given by (13). We first compute R ( p Z K i ) , i = 1 , 2 . Let N ˜ A be a binary random variable with p N ˜ A ( 1 ) = ρ A . We assume that N ˜ A is independent from Z. Let N be a binary random variable with p N ( 1 ) = ρ . We assume that N is independent from ( Z , N ˜ A ) . Using N ˜ A and N, K i , i = 1 , 2 can be written as
K 1 = Z ⊕ N ˜ A , K 2 = Z ⊕ N ˜ A ⊕ N = K 1 ⊕ N .
Then, by Example 10.2 (p. 265 in [24]), we have
R ( p Z K 1 ) = { ( R A , R 1 ) : R A ≥ log 2 − h ( θ ) , R 1 ≥ h ( ρ A ∗ θ ) for some θ ∈ [ 0 , 1 ] } ,
R ( p Z K 2 ) = { ( R A , R 2 ) : R A ≥ log 2 − h ( θ ) , R 2 ≥ h ( ρ ∗ ρ A ∗ θ ) for some θ ∈ [ 0 , 1 ] } .
We next compute R ( p Z K 1 K 2 ) . Note that
H ( K 1 K 2 | U ) = H ( K 1 | U ) + H ( K 2 | K 1 , U ) = ( a ) H ( K 1 | U ) + H ( K 2 | K 1 ) = H ( K 1 | U ) + H ( N ) = H ( K 1 | U ) + h ( ρ ) .
Step (a) follows from the Markov chain U ↔ K 1 ↔ K 2 . From (A4) and Example 10.2 (p. 265 in [24]), we have
R ( p Z K 1 K 2 ) = { ( R A , R 1 , R 2 ) : R A ≥ log 2 − h ( θ ) , R 1 + R 2 ≥ h ( ρ ) + h ( ρ A ∗ θ ) for some θ ∈ [ 0 , 1 ] } .
From (A1)–(A3) and (A5), we have the form of the region R Sys , ex 1 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) given by (13).

Appendix B. Computation of R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 )

In this appendix, we compute the region R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) . Since H ( X i ) = h ( s i ) , i = 1 , 2 , we have
R Sys ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) = { R 1 > h ( s 1 ) } ∩ { R 2 > h ( s 2 ) } ∩ ⋂ i = 1 , 2 , 3 R i c ( p Z K i ) .
We compute R ( p Z K 1 ) , R ( p Z K 2 ) , and R ( p Z K 1 K 2 ) explicitly. Then, we obtain the form of the region R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) given by (14). We first compute R ( p Z K i ) , i = 1 , 2 . We can easily verify that, for each i = 1 , 2 , K i is independent from Z . Then, for each i = 1 , 2 , we have
R ( p Z K i ) = { ( R A , R i ) : R A ≥ log 2 − h ( θ ) , R i ≥ log 2 for some θ ∈ [ 0 , 1 ] } .
We next compute R ( p Z K 1 K 2 ) . To this end, we prove the following lemma.
Lemma A1.
For Example 2, we have
H ( K 1 K 2 | U ) = H ( K 2 ) + H ( K 1 ⊕ K 2 | U ) = log 2 + H ( K 1 ⊕ K 2 | U ) .
Proof. 
Note that
H ( K 1 K 2 | U ) = H ( K 1 ⊕ K 2 | U ) + H ( K 2 | K 1 ⊕ K 2 , U ) = H ( K 1 ⊕ K 2 | U ) + H ( K 2 ) − I ( K 2 ; K 1 ⊕ K 2 , U ) .
On the upper bound of I ( K 2 ; K 1 K 2 , U ) , we have the following chain of inequalities:
I ( K 2 ; K 1 ⊕ K 2 , U ) ≤ I ( K 2 ; K 1 ⊕ K 2 , U , Z ) = I ( K 2 ; K 1 ⊕ K 2 , Z ) + I ( K 2 ; U | K 1 ⊕ K 2 , Z ) ≤ I ( K 2 ; K 1 ⊕ K 2 , Z ) + I ( K 1 ⊕ K 2 , K 2 ; U | Z ) = ( a ) I ( K 2 ; N , N A ) + I ( K 1 , K 2 ; U | Z ) = ( b ) 0 .
Step (a) follows from K 2 = K 1 ⊕ N and Z = K 1 ⊕ K 2 ⊕ N A . Step (b) follows from the Markov chain U ↔ Z ↔ ( K 1 , K 2 ) . From (A8) and (A9), we have Lemma A1. ☐
Let N ^ A be a binary random variable with p N ^ A ( 1 ) = ρ A . We assume that N ^ A is independent from Z . Using N ^ A , K 1 ⊕ K 2 can be written as
K 1 ⊕ K 2 = Z ⊕ N ^ A .
From Lemma A1, (A10), and Example 10.2 (p. 265 in [24]), we have
R ( p Z K 1 K 2 ) = { ( R A , R 1 , R 2 ) : R A ≥ log 2 − h ( θ ) , R 1 + R 2 ≥ log 2 + h ( ρ A ∗ θ ) for some θ ∈ [ 0 , 1 ] } .
From (A6), (A7), and (A11), we have the form of the region R Sys , ex 2 ( in ) ( p X 1 X 2 , p Z K 1 K 2 ) given by (14).

Appendix C. Proof of Lemma 4

We have the following chain of inequalities:
I(C̃_1^{m_1} C̃_2^{m_2}, M_A^{(n)}; X_1^n X_2^n) =(a) I(C̃_1^{m_1} C̃_2^{m_2}; X_1^n X_2^n | M_A^{(n)}) ≤ log(|X_1^{m_1}||X_2^{m_2}|) − H(C̃_1^{m_1} C̃_2^{m_2} | X_1^n X_2^n, M_A^{(n)}) =(b) log(|X_1^{m_1}||X_2^{m_2}|) − H(K̃_1^{m_1} K̃_2^{m_2} | X_1^n X_2^n, M_A^{(n)}) =(c) log(|X_1^{m_1}||X_2^{m_2}|) − H(K̃_1^{m_1} K̃_2^{m_2} | M_A^{(n)}) = D(p_{K̃_1^{m_1} K̃_2^{m_2} | M_A^{(n)}} ∥ p_{V_1^{m_1} V_2^{m_2}} | p_{M_A^{(n)}}).

Step (a) follows from the independence of (X_1^n, X_2^n) and M_A^{(n)}. Step (b) follows from the fact that, for i = 1, 2, C̃_i^{m_i} = K̃_i^{m_i} ⊕ X̃_i^{m_i} and X̃_i^{m_i} = ϕ_i^{(n)}(X_i^n). Step (c) follows from the independence of (K̃_1^{m_1}, K̃_2^{m_2}, M_A^{(n)}) and (X_1^n, X_2^n).
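The final equality above is the standard identity D(p ∥ uniform) = log|support| − H(p), applied conditionally on M_A^{(n)} with V_1^{m_1} V_2^{m_2} uniform. A minimal numeric check of that identity (illustrative only; the distribution below is arbitrary):

```python
import math

def entropy(p):
    """Shannon entropy in nats of a distribution given as a list of probabilities."""
    return -sum(q * math.log(q) for q in p if q > 0)

def kl_to_uniform(p):
    """D(p || uniform) over the same support, in nats."""
    n = len(p)
    return sum(q * math.log(q * n) for q in p if q > 0)

p = [0.5, 0.25, 0.125, 0.125]
lhs = math.log(len(p)) - entropy(p)   # log|support| - H(p)
rhs = kl_to_uniform(p)                # D(p || uniform)
print(lhs, rhs)  # the two values agree
```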

Appendix D. Proof of Lemma 7

In this appendix, we prove Lemma 7. This lemma immediately follows from the following lemma:
Lemma A2.
For i = 1, 2 and for any n, m_i satisfying (m_i/n) log |X_i| ≤ R_i, we have

E[D(p_{K̃_1^{m_1} K̃_2^{m_2} | M_A^{(n)}} ∥ p_{V_1^{m_1} V_2^{m_2}} | p_{M_A^{(n)}})] ≤ Σ_{(a, k_1^n, k_2^n) ∈ M_A^{(n)} × X_1^n × X_2^n} p_{M_A^{(n)} K_1^n K_2^n}(a, k_1^n, k_2^n) × log[1 + (|X_1^{m_1}| − 1) p_{K_1^n | M_A^{(n)}}(k_1^n | a) + (|X_2^{m_2}| − 1) p_{K_2^n | M_A^{(n)}}(k_2^n | a) + (|X_1^{m_1}| − 1)(|X_2^{m_2}| − 1) p_{K_1^n K_2^n | M_A^{(n)}}(k_1^n, k_2^n | a)].    (A12)

In fact, from |X_i^{m_i}| ≤ e^{nR_i} and (A12) in Lemma A2, we have the bound (20) in Lemma 7. In this appendix, we prove Lemma A2. In the following arguments, we use the following simplified notations:

k_i^n, K_i^n ∈ X_i^n are written as k_i, K_i ∈ K_i;
k̃_i^{m_i}, K̃_i^{m_i} ∈ X_i^{m_i} are written as l_i, L_i ∈ L_i;
the encoder φ_i^{(n)} : X_i^n → X_i^{m_i} is written as φ_i : K_i → L_i, with φ_i^{(n)}(k_i^n) = k_i^n A_i + b_i^{m_i} written as φ_i(k_i) = k_i A_i + b_i;
V_i^{m_i} ∈ X_i^{m_i} is written as V_i ∈ L_i;
M_A^{(n)} ∈ M_A^{(n)} is written as M ∈ M.
We define
χ_{l,l′} = 1 if l = l′, and 0 if l ≠ l′.
Then, the conditional distribution of the random pair (L_1, L_2) given M = a ∈ M is

p_{L_1L_2|M}(l_1, l_2 | a) = Σ_{(k_1, k_2) ∈ K_1 × K_2} p_{K_1K_2|M}(k_1, k_2 | a) χ_{φ_1(k_1), l_1} χ_{φ_2(k_2), l_2} for (l_1, l_2) ∈ L_1 × L_2.
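The indicator sum above is simply a push-forward of the key distribution through the affine encoders. A toy computation over GF(2) (the encoders and the distribution here are hypothetical examples chosen for illustration, not the paper's construction):

```python
import itertools

def affine(k, A, b):
    """phi(k) = k*A + b over GF(2); k: bit tuple (len n), A: n x m bit matrix, b: bit tuple (len m)."""
    n, m = len(A), len(A[0])
    return tuple((sum(k[i] & A[i][j] for i in range(n)) + b[j]) % 2 for j in range(m))

def induced_dist(p_k1k2, phi1, phi2):
    """p_{L1 L2}(l1, l2) = sum over (k1, k2) of p(k1, k2) * 1{phi1(k1)=l1} * 1{phi2(k2)=l2}."""
    out = {}
    for (k1, k2), p in p_k1k2.items():
        key = (phi1(k1), phi2(k2))
        out[key] = out.get(key, 0.0) + p
    return out

# Toy example: two 2-bit keys, each compressed to 1 bit by a (hypothetical) affine encoder.
keys = list(itertools.product([0, 1], repeat=2))
p_k1k2 = {(k1, k2): 1 / 16 for k1 in keys for k2 in keys}  # uniform joint distribution
A, b = [[1], [1]], (0,)                                     # phi(k) = first bit XOR second bit
phi = lambda k: affine(k, A, b)
p_l = induced_dist(p_k1k2, phi, phi)
print(p_l)  # a distribution on {0,1} x {0,1}; masses sum to 1
```

Since the keys are uniform here, the induced pair (L_1, L_2) is uniform as well; for non-uniform keys the same routine reproduces the indicator-sum formula directly.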
Set
Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)} := χ_{φ_1(k_1), l_1} χ_{φ_2(k_2), l_2} × log[|L_1||L_2| Σ_{(k′_1, k′_2) ∈ K_1 × K_2} p_{K_1K_2|M}(k′_1, k′_2 | a) χ_{φ_1(k′_1), l_1} χ_{φ_2(k′_2), l_2}].
Then, the conditional divergence between p_{L_1L_2|M} and p_{V_1V_2} given M is

D(p_{L_1L_2|M} ∥ p_{V_1V_2} | p_M) = Σ_{(a, k_1, k_2) ∈ M × K_1 × K_2} Σ_{(l_1, l_2) ∈ L_1 × L_2} p_{MK_1K_2}(a, k_1, k_2) Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}.
The quantity Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)} has the following form:

Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)} = χ_{φ_1(k_1), l_1} χ_{φ_2(k_2), l_2} × log[|L_1||L_2| ( p_{K_1K_2|M}(k_1, k_2 | a) χ_{φ_1(k_1), l_1} χ_{φ_2(k_2), l_2} + Σ_{k′_2 ∈ {k_2}^c} p_{K_1K_2|M}(k_1, k′_2 | a) χ_{φ_1(k_1), l_1} χ_{φ_2(k′_2), l_2} + Σ_{k′_1 ∈ {k_1}^c} p_{K_1K_2|M}(k′_1, k_2 | a) χ_{φ_1(k′_1), l_1} χ_{φ_2(k_2), l_2} + Σ_{(k′_1, k′_2) ∈ {k_1}^c × {k_2}^c} p_{K_1K_2|M}(k′_1, k′_2 | a) χ_{φ_1(k′_1), l_1} χ_{φ_2(k′_2), l_2} )].    (A14)

The above form is useful for computing E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}].
Proof of Lemma A2.
Taking expectations on both sides of (A14) with respect to the random choice of the entries of the matrix A_i and the vector b_i representing the affine encoder φ_i, we have

E[D(p_{L_1L_2|M} ∥ p_{V_1V_2} | p_M)] = Σ_{(a, k_1, k_2) ∈ M × K_1 × K_2} Σ_{(l_1, l_2) ∈ L_1 × L_2} p_{MK_1K_2}(a, k_1, k_2) E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}].    (A15)
To compute the expectation E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}], we introduce an expectation operator useful for the computation. Let E_{φ_1(k_1)=l_{k_1}, φ_2(k_2)=l_{k_2}}[·] denote expectation with respect to the conditional probability measure Pr[· | φ_1(k_1)=l_{k_1}, φ_2(k_2)=l_{k_2}]. Using this operator, the quantity E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}] can be written as

E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}] = Σ_{(l_{k_1}, l_{k_2}) ∈ L_1 × L_2} Pr[φ_1(k_1)=l_{k_1}, φ_2(k_2)=l_{k_2}] × E_{φ_1(k_1)=l_{k_1}, φ_2(k_2)=l_{k_2}}[Υ_{(l_{k_1}, l_1), (l_{k_2}, l_2)}].    (A16)
Note that the factor χ_{l_{k_1}, l_1} χ_{l_{k_2}, l_2} in Υ_{(l_{k_1}, l_1), (l_{k_2}, l_2)} satisfies

χ_{l_{k_1}, l_1} χ_{l_{k_2}, l_2} = 1 if (l_{k_1}, l_{k_2}) = (l_1, l_2), and 0 otherwise,    (A17)

so that only the term with (l_{k_1}, l_{k_2}) = (l_1, l_2) survives in (A16).
From (A16) and (A17), we have

E[Υ_{(φ_1(k_1), l_1), (φ_2(k_2), l_2)}] = Pr[φ_1(k_1)=l_1, φ_2(k_2)=l_2] × E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}] = (1/(|L_1||L_2|)) E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}].    (A18)
Using (A14), the expectation E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}] can be written as

E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}] = E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[ log[|L_1||L_2| ( p_{K_1K_2|M}(k_1, k_2 | a) + Σ_{k′_2 ∈ {k_2}^c} p_{K_1K_2|M}(k_1, k′_2 | a) χ_{φ_2(k′_2), l_2} + Σ_{k′_1 ∈ {k_1}^c} p_{K_1K_2|M}(k′_1, k_2 | a) χ_{φ_1(k′_1), l_1} + Σ_{(k′_1, k′_2) ∈ {k_1}^c × {k_2}^c} p_{K_1K_2|M}(k′_1, k′_2 | a) χ_{φ_1(k′_1), l_1} χ_{φ_2(k′_2), l_2} )] ].    (A19)
Applying Jensen's inequality to the right-hand side of (A19), we obtain the following upper bound on E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}]:

E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}] ≤ log[|L_1||L_2| ( p_{K_1K_2|M}(k_1, k_2 | a) + Σ_{k′_2 ∈ {k_2}^c} p_{K_1K_2|M}(k_1, k′_2 | a) E_2 + Σ_{k′_1 ∈ {k_1}^c} p_{K_1K_2|M}(k′_1, k_2 | a) E_1 + Σ_{(k′_1, k′_2) ∈ {k_1}^c × {k_2}^c} p_{K_1K_2|M}(k′_1, k′_2 | a) E_{12} )],    (A20)
where we set

E_1 := E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[χ_{φ_1(k′_1), l_1}], E_2 := E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[χ_{φ_2(k′_2), l_2}], E_{12} := E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[χ_{φ_1(k′_1), l_1} χ_{φ_2(k′_2), l_2}].
Computing E_1, we have

E_1 = Pr[φ_1(k′_1)=l_1 | φ_1(k_1)=l_1, φ_2(k_2)=l_2] =(a) Pr[φ_1(k′_1)=l_1 | φ_1(k_1)=l_1] =(b) 1/|L_1|.    (A21)

Step (a) follows from the fact that the random constructions of φ_1 and φ_2 are independent. Step (b) follows from Lemma 5 parts (b) and (c). In a similar manner, we compute E_2 to obtain
E_2 = 1/|L_2|.    (A22)
We further compute E_{12} to obtain

E_{12} = Pr[φ_1(k′_1)=l_1, φ_2(k′_2)=l_2 | φ_1(k_1)=l_1, φ_2(k_2)=l_2] =(a) Pr[φ_1(k′_1)=l_1 | φ_1(k_1)=l_1] × Pr[φ_2(k′_2)=l_2 | φ_2(k_2)=l_2] =(b) 1/(|L_1||L_2|).    (A23)

Step (a) follows from the fact that the random constructions of φ_1 and φ_2 are independent. Step (b) follows from Lemma 5 parts (b) and (c). From (A20)–(A23), we have
E_{φ_1(k_1)=l_1, φ_2(k_2)=l_2}[Υ_{(l_1, l_1), (l_2, l_2)}] ≤ log[|L_1||L_2| ( p_{K_1K_2|M}(k_1, k_2 | a) + Σ_{k′_2 ∈ {k_2}^c} p_{K_1K_2|M}(k_1, k′_2 | a) (1/|L_2|) + Σ_{k′_1 ∈ {k_1}^c} p_{K_1K_2|M}(k′_1, k_2 | a) (1/|L_1|) + Σ_{(k′_1, k′_2) ∈ {k_1}^c × {k_2}^c} p_{K_1K_2|M}(k′_1, k′_2 | a) (1/(|L_1||L_2|)) )] = log[1 + (|L_1| − 1) p_{K_1|M}(k_1 | a) + (|L_2| − 1) p_{K_2|M}(k_2 | a) + (|L_1| − 1)(|L_2| − 1) p_{K_1K_2|M}(k_1, k_2 | a)].    (A24)
From (A15), (A18), and (A24), we have the bound (A12) in Lemma A2.
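Steps (b) in the computations of E_1, E_2, and E_{12} rest on the pairwise uniformity of a uniformly random affine map: for k′ ≠ k, Pr[φ(k′) = l | φ(k) = l] = 1/|L|. This can be verified exhaustively for a toy binary example (a sketch of the property, not the paper's Lemma 5 itself):

```python
import itertools
from fractions import Fraction

def phi(k, A, b):
    """Affine map over GF(2): phi(k) = k*A + b, with k a bit tuple, A a column vector, b a bit."""
    return (sum(ki & ai for ki, ai in zip(k, A)) + b) % 2

k, k_prime, l = (1, 0), (0, 1), 0
total = hits = 0
# Enumerate every affine encoder GF(2)^2 -> GF(2): all column vectors A and all offsets b.
for A in itertools.product([0, 1], repeat=2):
    for b in [0, 1]:
        if phi(k, A, b) == l:              # condition on phi(k) = l
            total += 1
            hits += phi(k_prime, A, b) == l
print(Fraction(hits, total))  # 1/2, i.e., 1/|L|
```

The underlying reason is that φ(k) − φ(k′) = (k ⊕ k′)A, and a nonzero vector times a uniformly random matrix is uniform over GF(2)^m, independently of the conditioning value of φ(k).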

Appendix E. Proof of Lemma 8

We have the following chain of inequalities:

E[ Δ_n(φ_1^{(n)}, φ_2^{(n)}, φ_A^{(n)} | p_{X_1X_2}^n, p_{ZK_1K_2}^n) / Θ(R_1, R_2, φ_A^{(n)} | p_{ZK_1K_2}^n) + Σ_{i=1,2} Σ_{p_{X̄_i} ∈ P_n(X_i)} Ξ_{X̄_i}(ϕ_i^{(n)}, ψ_i^{(n)}) / (e(n+1)^{|X_i|} Π_{X̄_i}(R_i)) ] = E[ Δ_n(φ_1^{(n)}, φ_2^{(n)}, φ_A^{(n)} | p_{X_1X_2}^n, p_{ZK_1K_2}^n) ] / Θ(R_1, R_2, φ_A^{(n)} | p_{ZK_1K_2}^n) + Σ_{i=1,2} Σ_{p_{X̄_i} ∈ P_n(X_i)} E[ Ξ_{X̄_i}(ϕ_i^{(n)}, ψ_i^{(n)}) ] / (e(n+1)^{|X_i|} Π_{X̄_i}(R_i)) ≤(a) 1 + Σ_{i=1,2} Σ_{p_{X̄_i} ∈ P_n(X_i)} 1 ≤(b) 1 + Σ_{i=1,2} (n+1)^{|X_i|}.

Step (a) follows from Lemma 6 and Corollary 2. Step (b) follows from Lemma 1 part (a). Hence, there exists at least one deterministic code {(φ_i^{(n)}, ψ_i^{(n)})}_{i=1,2} such that
Δ_n(φ_1^{(n)}, φ_2^{(n)}, φ_A^{(n)} | p_{X_1X_2}^n, p_{ZK_1K_2}^n) / Θ(R_1, R_2, φ_A^{(n)} | p_{ZK_1K_2}^n) + Σ_{i=1,2} Σ_{p_{X̄_i} ∈ P_n(X_i)} Ξ_{X̄_i}(ϕ_i^{(n)}, ψ_i^{(n)}) / (e(n+1)^{|X_i|} Π_{X̄_i}(R_i)) ≤ 1 + Σ_{i=1,2} (n+1)^{|X_i|},
from which we have that, for i = 1, 2 and for any p_{X̄_i} ∈ P_n(X_i),

Ξ_{X̄_i}(ϕ_i^{(n)}, ψ_i^{(n)}) / (e(n+1)^{|X_i|} Π_{X̄_i}(R_i)) ≤ 1 + Σ_{j=1,2} (n+1)^{|X_j|}.
Furthermore, we have that, for any φ_A^{(n)} ∈ F_A^{(n)}(R_A),

Δ_n(φ_1^{(n)}, φ_2^{(n)}, φ_A^{(n)} | p_{X_1X_2}^n, p_{ZK_1K_2}^n) / Θ(R_1, R_2, φ_A^{(n)} | p_{ZK_1K_2}^n) ≤ 1 + Σ_{j=1,2} (n+1)^{|X_j|},
completing the proof.
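The existence step above ("there exists at least one deterministic code") is the usual random-coding selection argument: a nonnegative quantity cannot everywhere exceed its average, so some realization of the random code meets the expectation bound. A toy illustration (with a hypothetical figure of merit f over a hypothetical code family):

```python
import random

random.seed(0)
# Selection argument: if E[f(Phi)] <= c over a random choice of code Phi,
# then at least one realization phi satisfies f(phi) <= c.
codes = range(16)                                   # a (hypothetical) finite family of codes
f = {phi: random.uniform(0, 2) for phi in codes}    # some nonnegative figure of merit
c = sum(f.values()) / len(f)                        # E[f(Phi)] under the uniform choice
best = min(f, key=f.get)
print(f[best] <= c)  # True: the minimizer always meets the average bound
```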

References

  1. Kocher, P.C. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 18–22 August 1996; Volume 1109, pp. 104–113.
  2. Kocher, P.C.; Jaffe, J.; Jun, B. Differential Power Analysis. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 15–19 August 1999; Volume 1666, pp. 388–397.
  3. Agrawal, D.; Archambeault, B.; Rao, J.R.; Rohatgi, P. The EM Side-Channel(s). In Cryptographic Hardware and Embedded Systems-CHES 2002; Kaliski, B.S., Koç, Ç.K., Paar, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 29–45.
  4. Santoso, B.; Oohama, Y. Information Theoretic Security for Shannon Cipher System under Side-Channel Attacks. Entropy 2019, 21, 469.
  5. Santoso, B.; Oohama, Y. Secrecy Amplification of Distributed Encrypted Sources with Correlated Keys using Post-Encryption-Compression. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3042–3056.
  6. Santoso, B.; Oohama, Y. Privacy amplification of distributed encrypted sources with correlated keys. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 958–962.
  7. Oohama, Y.; Santoso, B. Information theoretical analysis of side-channel attacks to the Shannon cipher system. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 581–585.
  8. Csiszár, I. Linear Codes for Sources and Source Networks: Error Exponents, Universal Coding. IEEE Trans. Inf. Theory 1982, 28, 585–592.
  9. Oohama, Y. Intrinsic Randomness Problem in the Framework of Slepian-Wolf Separate Coding System. IEICE Trans. Fundam. 2007, 90, 1406–1417.
  10. Santoso, B.; Oohama, Y. Post Encryption Compression with Affine Encoders for Secrecy Amplification in Distributed Source Encryption with Correlated Keys. In Proceedings of the 2018 International Symposium on Information Theory and Its Applications (ISITA), Singapore, 28–31 October 2018; pp. 769–773.
  11. Oohama, Y. Exponent function for one helper source coding problem at rates outside the rate region. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 1575–1579. Available online: https://arxiv.org/pdf/1504.05891.pdf (accessed on 17 January 2019).
  12. Johnson, M.; Ishwar, P.; Prabhakaran, V.; Schonberg, D.; Ramchandran, K. On compressing encrypted data. IEEE Trans. Signal Process. 2004, 52, 2992–3006.
  13. Maurer, U.; Wolf, S. Unconditionally Secure Key Agreement and The Intrinsic Conditional Information. IEEE Trans. Inf. Theory 1999, 45, 499–514.
  14. Maurer, U.; Wolf, S. Information-Theoretic Key Agreement: From Weak to Strong Secrecy for Free. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Bruges, Belgium, 14–18 May 2000; Volume 1807, pp. 351–368.
  15. Brier, E.; Clavier, C.; Olivier, F. Correlation Power Analysis with a Leakage Model. In Cryptographic Hardware and Embedded Systems-CHES 2004; Joye, M., Quisquater, J.J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 16–29.
  16. Coron, J.; Naccache, D.; Kocher, P.C. Statistics and secret leakage. ACM Trans. Embed. Comput. Syst. 2004, 3, 492–508.
  17. Köpf, B.; Basin, D.A. An information-theoretic model for adaptive side-channel attacks. In Proceedings of the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 29 October–2 November 2007; pp. 286–296.
  18. Backes, M.; Köpf, B. Formally Bounding the Side-Channel Leakage in Unknown-Message Attacks. In Proceedings of the European Symposium on Research in Computer Security, Málaga, Spain, 6–8 October 2008; Volume 5283, pp. 517–532.
  19. Micali, S.; Reyzin, L. Physically Observable Cryptography (Extended Abstract). In Proceedings of the Theory of Cryptography Conference, Cambridge, MA, USA, 19–21 February 2004; Volume 2951, pp. 278–296.
  20. Standaert, F.; Malkin, T.; Yung, M. A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cologne, Germany, 26–30 April 2009; Volume 5479, pp. 443–461.
  21. de Chérisey, E.; Guilley, S.; Rioul, O.; Piantanida, P. An Information-Theoretic Model for Side-Channel Attacks in Embedded Hardware. In Proceedings of the 2019 IEEE International Symposium on Information Theory, Paris, France, 7–12 July 2019.
  22. de Chérisey, E.; Guilley, S.; Rioul, O.; Piantanida, P. Best Information is Most Successful: Mutual Information and Success Rate in Side-Channel Analysis. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2019, 2019, 49–79.
  23. Ahlswede, R.; Körner, J. Source Coding with Side Information and A Converse for The Degraded Broadcast Channel. IEEE Trans. Inf. Theory 1975, 21, 629–637.
  24. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
  25. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011.
  26. Oohama, Y.; Santoso, B. Information Theoretic Security for Side-Channel Attacks to The Shannon Cipher System. Preprint. 2018. Available online: https://arxiv.org/pdf/1801.02563.pdf (accessed on 18 January 2019).
  27. Oohama, Y.; Han, T.S. Universal coding for the Slepian-Wolf data compression system and the strong converse theorem. IEEE Trans. Inf. Theory 1994, 40, 1908–1919.
  28. Yamamoto, H. Coding theorems for Shannon's cipher system with correlated source outputs, and common information. IEEE Trans. Inf. Theory 1994, 40, 85–95.
Figure 1. Side-channel attacks in a broadcasting system.
Figure 2. Side-channel attacks to the two Shannon cipher systems.
Figure 3. Sys: a system of broadcast encryption with post-encryption coding.
Figure 4. Our proposed countermeasure: affine encoders as privacy amplifiers.
Figure 5. Shape of the regions R_{Sys,exi}^{(in)}(p_{X_1X_2}, p_{ZK_1K_2}) ∩ {R_A = 1 − h(θ)}, i = 1, 2.
Figure 6. Broadcasting system AltSys from alternative formulation.
Table 1. Comparison of research on application of PEC for secrecy amplification.

|                       | Network System                                  | Side-Channel Adversary | Correlated Keys |
| Previous work 1 [5,6] | Distributed Encryption (2 senders, 2 receivers) | No                     | Yes             |
| Previous work 2 [4,7] | Two Terminals (1 sender, 1 receiver)            | Yes                    | No              |
| This paper            | Broadcast Encryption (1 sender, 2 receivers)    | Yes                    | Yes             |

Santoso, B.; Oohama, Y. Information Theoretic Security for Broadcasting of Two Encrypted Sources under Side-Channel Attacks. Entropy 2019, 21, 781. https://doi.org/10.3390/e21080781
