Lecture 3
1 Topic Covered
• Definition of Multi-Party Computation (MPC)
For t-security, the intuition is that an adversary should not learn more from participating in
the protocol than it could have gotten from the secret inputs of the corrupted parties xS and
the final output f (x1 , . . . , xn ). This intuition is formalized by saying that the distributions
of the view of the adversary on the protocol should be equivalent to what could be produced
by a simulator given only xS and f (x1 , . . . , xn ).
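One way to write this condition down (the exact quantifier placement is our reading of the informal statement above; ViewS denotes the adversary's view, as used later in these notes, and Sim denotes the simulator):

∀S ⊆ [n] with |S| ≤ t, ∃ Sim such that ∀x1 , . . . , xn :
ViewS (π(x1 , . . . , xn )) ≡ Sim(xS , f (x1 , . . . , xn )).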
We note that for t-security, we consider the honest-but-curious model. In this model,
the adversarial parties are trusted to run the protocol as specified but may be 'curious',
i.e., try to learn information about the other parties' inputs. We let this model be the default for
t-security in this lecture but later in the course we will also consider a stronger model where
adversarial parties can deviate arbitrarily from the protocol execution.
We now show that, when we have an honest majority (t < n/2), then for every function f
there exists a t-secure MPC protocol for f .
Theorem 1 ∀t, n such that t < n/2, ∀f , ∃ a t-secure MPC π for f .
We show this by constructing an MPC protocol π for any arbitrary function f using Shamir
secret sharing in the next section.
2.1 Construction
Recall that with Shamir secret sharing, the secret is the constant term of a random
polynomial of degree ≤ t, and Lagrange interpolation enables recovery of the polynomial
given any t + 1 points. To assist in our construction, we choose to represent f as an
arithmetic circuit consisting of addition and multiplication gates. Any circuit can be
represented in this manner, and any polynomial-time computation can be represented by a
polynomial-size arithmetic circuit.
Each input wire is associated with a party i ∈ [n]; note that one party's input may
consist of several field elements that go on different input wires.
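To make the Shamir sharing and Lagrange interpolation we just recalled concrete, here is a minimal Python sketch over a prime field (the prime P and the function names share/reconstruct are our own choices for illustration, not from the lecture):

import random

P = 2**61 - 1  # a prime; all arithmetic below is modulo P

def share(secret, t, n):
    # Random polynomial of degree <= t with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    poly = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]  # the shares p(1), ..., p(n)

def reconstruct(points):
    # Lagrange interpolation at x = 0 from at least t + 1 points (i, p(i)).
    secret = 0
    for i, yi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (0 - j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^(P-2) = den^(-1) mod P
    return secret

shares = share(42, t=2, n=5)          # degree-2 polynomial, 5 shares
assert reconstruct(shares[:3]) == 42  # any t + 1 = 3 shares recover the secret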
We construct π as follows.
1. Each party begins by using t-threshold Shamir secret sharing to share each of its
input values ("each" because, as noted above, a party's input need not be a single
field element). That is, for each input u, it creates a secret sharing of u as
[u] = (p(1), . . . , p(n))
where p is a random polynomial of degree ≤ t and p(0) = u. It sends the share
ui = p(i) to party i.
2. Once each party has a secret share of each of the inputs, the parties proceed to compute
secret shares for the values on each internal wire of the circuit. For some gate within
the circuit, assume the parties have already computed secret sharings of the inputs to
that gate: [u] = (u1 , . . . , un ) and [v] = (v1 , . . . , vn ). Their goal is to compute a secret
sharing [w] = (w1 , . . . , wn ) of the output of the gate. We consider the following three
cases:
(a) Addition
[Figure: an addition gate with input wires [u], [v] and output wire [w]]
In this case we can set [w] := [u] + [v] = (u1 + v1 , . . . , un + vn ). In other words, each
party individually adds up its shares of u and v to get a corresponding share of w. Note
that [u] = (p(1), . . . , p(n)) and [v] = (q(1), . . . , q(n)) for some polynomials p, q of degree
≤ t such that p(0) = u and q(0) = v. Therefore [w] = ((p + q)(1), . . . , (p + q)(n)), where
p + q is a polynomial of degree ≤ t with (p + q)(0) = u + v, so [w] is a valid secret
sharing of w = u + v.
(b) Multiplication by a public constant c
[Figure: a constant-multiplication gate with input wire [u] and output wire [w]]
In this case each party sets wi := c · ui , i.e., [w] := c · [u] = (c · u1 , . . . , c · un ). Since
c · p is a polynomial of degree ≤ t with (c · p)(0) = c · u, this is a valid secret sharing
of w = c · u.
(c) Multiplication
[Figure: a multiplication gate with input wires [u], [v] and output wire [w]]
For multiplication, things become more tricky. The parties can compute the local
products u1 · v1 , . . . , un · vn , which are evaluations of p · q with (p · q)(0) = u · v.
However, p · q has degree up to 2t, so these products do not by themselves form a valid
degree-≤ t sharing; instead, each party re-shares its product with a fresh polynomial
of degree ≤ t and the parties locally recombine the resulting "shares of shares" (the
values [zi ] referred to below) to obtain a degree-≤ t sharing [w] of w = u · v. (A small
code sketch of this share arithmetic appears after the protocol description.)
3. Finally, each party broadcasts its secret share of each output wire [y], which allows
all the parties to recover the outputs.
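To complement the sketch above, here is how the local, per-gate share arithmetic from step 2 might look in code (again our own illustration, under the same conventions: shares are (party index, value) pairs over the prime field):

P = 2**61 - 1  # same prime field as in the sketch above

def add_gate(u_shares, v_shares):
    # [w] = [u] + [v]: each party adds its own two shares.
    # The results lie on p + q, which still has degree <= t.
    return [(i, (u + v) % P) for (i, u), (_, v) in zip(u_shares, v_shares)]

def const_mult_gate(c, u_shares):
    # [w] = c * [u]: each party scales its own share.
    # The results lie on c * p, which still has degree <= t.
    return [(i, (c * u) % P) for (i, u) in u_shares]

def local_products(u_shares, v_shares):
    # Pointwise products lie on p * q, whose degree can be as large as 2t, so
    # this is NOT yet a valid degree-<= t sharing: the parties still have to
    # re-share these products and recombine the resulting "shares of shares".
    return [(i, (u * v) % P) for (i, u), (_, v) in zip(u_shares, v_shares)]

Combined with the share/reconstruct sketch above, one can check, for example, that reconstruct(add_gate(share(u, t, n), share(v, t, n))) equals (u + v) % P.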
We have just constructed a protocol to compute an arbitrary f represented as an arith-
metic circuit. We claim this is secure.
Note that ViewS (π(x1 , . . . , xn )) consists of the values xS , rS , [u]S , [zi ]S , [y], where xS are the
inputs of the adversarial parties, rS is the randomness of the adversarial parties, for each input
of an honest party [u]S denotes the shares seen by the adversarial parties, [zi ]S are the shares
of shares seen by the adversarial parties during the computation of each multiplication
gate, and [y] are the shares for each output wire of all n parties.
The simulator is given xS , so simulating the input of the colluding parties from the
view is trivial: the simulator simply uses the given xS . We note that the distributions of
secret shares [u]S and [zi ]S are uniformly random by the t-out-of-n security of Shamir secret
sharing, so rS , [u]S , and [zi ]S can be simulated by choosing values at random.
For [y], we cannot choose these values uniformly at random since they depend on other
values in the protocol. However, the simulator can compute the shares [y]S of the parties
in S from the other sampled values by executing the protocol on their behalf. Recalling
that |S| = t, computing [y]S gives us t points of the polynomial. The simulator is given the
output y = f (x1 , . . . , xn ), which gives us an additional point on the polynomial. With these
t + 1 points, we can compute the polynomial using Lagrange interpolation, and compute
[y]S̄ , which are the remaining secret shares of [y]. Therefore, since we are able to build a
simulator that produces a distribution equivalent to an adversary’s view of the protocol,
the protocol is t-secure.
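Concretely, writing λk for the Lagrange basis polynomials over the evaluation points {0} ∪ S (our notation, not from the lecture), the simulator can fill in each honest party's output share as, for every j ∉ S,

yj = y · λ0 (j) + Σi∈S yi · λi (j),   where   λk (x) = ∏k′∈{0}∪S, k′≠k (x − k′ )/(k − k′ ),

since the degree-≤ t output polynomial is uniquely determined by the t + 1 points (0, y) and {(i, yi )}i∈S .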
Theorem 3 For the AND function with n = 2 parties, there is no perfectly t-secure MPC
protocol.
Proof: Consider two parties Alice and Bob with inputs xa and xb , respectively. They
communicate back and forth to compute xa ∧ xb . Let T be the transcript of the protocol.
We say that T is consistent with a particular input value (e.g., xa = 0) if there is some
randomness that the corresponding party (here Alice) could have used with that input that
would have made it send the messages it sends in T .
If Alice's bit is xa = 1, then T can only be consistent with one of xb = 0 and xb = 1. This
follows from correctness: since Alice needs to output a different bit depending on whether
xb = 0 or xb = 1, the transcript T cannot be consistent with both options. This by itself is
not a contradiction.
If Alice's bit is xa = 0, then T has to be consistent with both xb = 0 and xb = 1. This
follows from security: Alice should not learn Bob's bit, and if the transcript were only
consistent with one of the two options, a computationally unbounded Alice could figure out
which one.
But the above two conditions imply that Bob can learn Alice's bit when xb = 0. In
particular, at the end of the protocol Bob checks whether T is consistent with only one of
xb = 0, xb = 1 or with both of them. By the above, the former happens exactly when xa = 1
and the latter exactly when xa = 0, so this check completely reveals Alice's bit, which Bob
should not have learned in a secure MPC when his input is xb = 0.
Note that the above argument does not lead to an efficient attack and hence this only
precludes perfectly secure MPC protocols. Later on in the course we will see how to get
security against computationally bounded adversaries.
We can extend this impossibility result to show that for any n there is no perfectly
secure MPC protocol for the n-party AND function defined as f (x1 , . . . , xn ) = x1 ∧ · · · ∧ xn
when the collusion size is t ≥ n/2. How? If there were one, we could convert it into a
2-party protocol for the AND function which is secure against 1 adversarial party. We
can let Alice act as the first n/2 parties in the protocol each of which uses Alice’s input
x1 = x2 = · · · = xn/2 = xa and Bob act as the remaining n/2 parties each of which uses Bob's
input xn/2+1 = · · · = xn = xb . Note that an adversarial Alice or Bob sees the view of n/2
parties in the n-party protocol but this preserves security if the n-party protocol is secure
against n/2-size collusions. Since we already saw that it is not possible to get 2-party MPC
for the AND function, it is not possible to get an n-party MPC for the AND function with
collusion size t ≥ n/2.
3 Statistical Distance
Recall the definition of perfect security:
Definition 1 An encryption scheme has perfect security if for all m0 , m1 ,
Enc(K, m0 ) ≡ Enc(K, m1 )
♦
Although perfect security is great if we can get it, we already saw that it requires the
key to be at least as large as the message. We will begin looking at ways to relax the
requirements of perfect security. The first attempt at relaxing the requirement relies on the
notion of statistical distance.
Definition 2 For random variables X, Y with support Z the statistical distance SD of
X, Y is:
SD(X, Y ) = (1/2) Σz∈Z |Pr[X = z] − Pr[Y = z]| .
♦
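As a small illustration of this definition (the function name and the example distributions below are our own), the statistical distance of two explicitly given distributions can be computed directly:

def statistical_distance(X, Y):
    # X, Y: dicts mapping each point z of the support to its probability.
    support = set(X) | set(Y)
    return 0.5 * sum(abs(X.get(z, 0.0) - Y.get(z, 0.0)) for z in support)

X = {0: 0.5, 1: 0.5}       # a fair bit
Y = {0: 0.75, 1: 0.25}     # a biased bit
print(statistical_distance(X, Y))  # 0.25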
Let us draw a picture of what X, Y might look like if we "sort" the support by the
difference between the probabilities assigned to each point by X vs. Y .
[Figure: the distributions of X and Y plotted over the sorted support z; A is the region
where X lies above Y , B is the region where Y lies above X, and C is the region under
both curves.]
Letting |A| be the area of A, we observe that |A| + |C| = 1, |B| + |C| = 1, and therefore
|A| = |B|. This shows that
SD(X, Y ) = (|A| + |B|)/2 = |A| = |B|.
To get intuition for this definition, imagine we were trying to distinguish between X and
Y . That is, we want to find the set D that maximizes |Pr[X ∈ D] − Pr[Y ∈ D]|. This is
achieved by setting D = A or D = B (clear from the picture). Therefore we get the
equivalent definition
SD(X, Y ) = maxD⊆Z |Pr[X ∈ D] − Pr[Y ∈ D]| .
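As a quick sanity check of this equivalent definition (our own code; it brute-forces over all subsets D of a small support):

from itertools import chain, combinations

X = {0: 0.5, 1: 0.3, 2: 0.2}
Y = {0: 0.2, 1: 0.3, 2: 0.5}
support = sorted(set(X) | set(Y))

# SD via the original definition.
sd = 0.5 * sum(abs(X[z] - Y[z]) for z in support)

# SD via the equivalent definition: maximize |Pr[X in D] - Pr[Y in D]| over all D.
all_D = chain.from_iterable(combinations(support, r) for r in range(len(support) + 1))
best = max(abs(sum(X[z] for z in D) - sum(Y[z] for z in D)) for D in all_D)

print(sd, best)  # both equal 0.3 (up to floating point)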