Non-Malleable Multi-Party Computation: Abstract
Fuchun Lin
1 Introduction
In secure Multi-Party Computation (MPC), a set of players wish to evaluate
a function f on their private inputs without revealing information about their
private inputs beyond what is contained in the output. The function f is publicly
known to all players and is assumed to be an arithmetic circuit C over some
finite field. The task can be trivially accomplished given a trusted party: every
player gives his/her input to the trusted party, and the trusted party does the
computation and returns the result. The study of secure MPC is about replacing
the trusted party with a protocol that works exactly the same as the trusted
party, despite possible active/passive attacks from a bounded number of players.
Both correctness and privacy against corruption of parties are formulated as a
simulation-based notion, in an entangled way, that involves a real world and an
ideal world (in the ideal world there is a trusted party). Security is formulated
as the existence of an efficient simulator that, through oracle calls to the trusted
party in the ideal world (where it may substitute the inputs of the corrupt parties),
simulates a set of views of the corrupt parties that have the same distribution
as the views they would get if they were to run the protocol in the real world,
where the corrupt parties may deviate from the protocol.
MPC was introduced in the work of Yao [Yao82]. Feasibility results on MPC in the
computational setting were obtained by Yao [Yao82] and Goldreich et
al. [GMW87], where the adversary can corrupt all but one player but is assumed
to have bounded computational resources. Our focus in this work is information-
theoretic security. Feasibility results in the information-theoretic setting were shown
for up to fewer than 1/3 corrupt parties assuming secure point-to-point commu-
nication channels [BGW88,CCD88], and for up to fewer than 1/2 corrupt parties
assuming secure point-to-point communication channels and a broadcast channel
[RB89,Bea89]. A construction of secure MPC with a slightly weaker correctness
notion called security with abort, for up to fewer than 1/2 corrupt parties under the
sole assumption of secure point-to-point communication channels, was given that
has communication complexity asymptotically the same as the best MPC protocols
for passive adversaries [GIP+14].
The above feasibility results were proved in the theoretical security (black-box
security) model, where every cryptographic primitive is an abstract object with
ideal functionality. In real life, every cryptographic algorithm is ultimately im-
plemented on a physical device that affects, and is affected by, the environment
around it. Security models that take this fact into account are called imple-
mentation security models. The real-life adversaries studied in implementation
security models can be divided into two groups: leakage adversaries (passive at-
tack) and tampering adversaries (active attack). Implementation security models
against leakage attacks for general purpose MPC protocols were first studied in
[BGJ+13,BGJK12]. The adversary considered in [BGJ+13] can corrupt an arbi-
trary subset of parties and, in addition, can learn arbitrary auxiliary informa-
tion on the entire states of all honest parties (including their inputs and random
coins), in an adaptive manner, throughout the protocol execution. The above
standard notion of secure computation is then impossible (the adversary can simply
leak the private input of an honest party), and a weaker notion called leakage
tolerance was shown to be achievable, which guarantees that for any amount of
information the real world adversary is able to (adaptively) acquire throughout
the protocol, the same amount of auxiliary information is given to the ideal
world simulator. In contrast to [BGJ+13], [BGJK12] constructed MPC proto-
cols that achieve standard ideal world security (where no leakage is allowed in
the ideal world) against real world adversaries that may leak repeatedly from
the secret state of each honest player separately, assuming a one-time leak-free
preprocessing phase, and assuming the number of parties is large enough. In-
tuitively, the one-time leak-free preprocessing phase is exploited by the honest
parties to secret share their private inputs (private inputs are erased once the
shares are stored), and with the independent leakage assumption, it is possible
to prevent the adversary from obtaining information about the private inputs.
The recent result on Leakage-Resilient MPC (LR-MPC) [BDIR18] belongs to
the latter category.
The next object closer to providing a solution is the currently active research
area of Non-Malleable Secret Sharing (NM-SS) [GK18] (see Definition 4). Se-
cret sharing, introduced independently by Blakley [Bla79] and Shamir [Sha79],
is a major tool in MPC constructions (cf. [CDN15]) and threshold cryptogra-
phy [Des94]. The goal in secret sharing is to encode a secret s into a number of
shares Sh_1, ..., Sh_n that are distributed among a set [n] = {1, ..., n} of parties
such that, for a privacy threshold t, any set R ⊆ [n] with |R| > t can reconstruct
the secret, while any set A ⊂ [n] with |A| ≤ t learns no information about the
secret. For special purpose protocols such as threshold signature schemes, com-
pilers based on NM-SS that transform a threshold signature scheme secure in the
black-box model into one that is resilient against independent tampering of all parties
were constructed in [ADN+19,FV19]. Given the ubiquitous presence of secret
sharing in the construction of general purpose MPC protocols, one would expect
a more prominent role of NM-SS in securing MPC protocols against tampering.
One bottleneck here is that NM-SS was proposed as the opposite of secret shar-
ing with homomorphic properties, which is exactly the kind of secret sharing
used in MPC protocols.
The object closest to providing a general solution is the following. Similar
to generalising block codes to interactive codes, NMC was recently generalised
to Interactive NMC (INMC) [FGJ+19]. It was shown that the interactive setting
allows INMC to be constructed for many powerful tampering models that would
have been impossible in the non-interactive setting. For example, they considered a
Bounded State Tampering (BST) model, where an adversary can keep a state
that stores information about past messages and use it in tampering with the
current message (the current message is given to the adversary in full), and
showed that a rather strong security notion called protocol non-malleability is achiev-
able. However, the two parties executing the protocol are both honest and the
tampering is in an outsider model. Encoding a secure two-party computation
protocol using an INMC will not provide any protection, as the adversary in
two-party computation executes the protocol as an insider. We will return to
this in Related works.
Our contributions. We propose a tamper-tolerant notion for general pur-
pose MPC protocols, as an analogue of the leakage-tolerant notion studied
in [BGJ+13]: we allow the ideal world adversary to tamper
with the output of the computation, just as the leakage ideal
world adversary in [BGJ+13] gets to leak the same amount of information as the real
world adversary. On the other hand, in order to define a useful tamper-tolerant
notion, we insist that the ideal world adversary should only be allowed to harm-
lessly tamper with the output of the computation, in the sense of [DPW18].
Our motivating example. We continue with the signature scheme motivating
example for NMC and take it to the setting of a threshold signature scheme, where
the secret signing key remains in distributed form held by n servers all
through the signing process. Moreover, consider a natural situation where the
secret signing key itself is to be generated on the spot, from private values that
are held by honest but mutually distrusting clients (for instance, computing a
session key from global private keys). A standard solution that kills two birds
with one stone is to have the servers and the clients run an MPC protocol in
the so-called client-server model, where the clients provide private inputs and
the servers compute. If the MPC protocol employed is based on a secret sharing
scheme, the servers naturally hold the distributed form of the secret signing key
upon the completion of circuit evaluation, as each server's last message to be
delivered for output reconstruction.¹ The robustness of the MPC protocol prevents
a black-box adversary that corrupts a bounded number of servers (servers do not
have inputs and hence cannot do input substitution) from changing the value of the
secret signing key being computed. But if a real-life adversary exploits flaws in
the implementation of the MPC protocol and is capable of inflicting a bit-flip
in the computed signing key, the same attack of [BDL01] can be used to factor
the underlying RSA modulus. This application scenario motivates the study of
non-malleable MPC, where a tampered protocol should not compute an output
related to the private inputs of honest parties.
In order to achieve the non-malleability described in the motivating example,
we must adopt at least some minimal restrictions, as otherwise a tampering
adversary can trivially make the output depend on the private inputs of honest
parties if we were to allow the adversary access to everything, following [BGJ+13].
But instead of assuming a harm-free phase as done in [BGJK12] (which seems too much
to ask for in real life), we begin with MPC protocols in the honest majority setting
and go for an adversarial channel tampering model (cf. [FGJ+19]), where the
adversary can tamper with all the messages traveling between parties, but has to
follow certain patterns that define a structured type of tampering. We explicitly
model the execution of such an MPC protocol under corruption and a type F
of adversarial channel tampering (see the beginning of Section 3).
Definition [NM-MPC] (informal summary of Definition 6) An MPC protocol
evaluating a circuit is non-malleable if for any active adversary corrupting a
bounded number of parties (and any choice of a tampering function from the tam-
pering class F), there exists a simulator that takes the adversary's corruption
strategy (and tampering function) as inputs and simulates, in the ideal world
where there is an incorruptible trusted party evaluating the circuit, the real
execution of the protocol under corruption (and tampering): the simulator is
allowed to modify the incorruptible trusted party's output for honest parties, if
the real world adversary deviates from the protocol.

¹ The client-server model where the output of the computation remains in secret-shared
form is common practice and in particular useful (see [MR18] and references therein).
Machine learning is widely used to produce models that classify images, authenticate
biometric information, recommend products, choose which ads to show, identify
fraudulent transactions, etc. Internet companies regularly collect users' online
activities and browsing behaviour to train more accurate recommender systems. The
healthcare sector envisions a future where patients' clinical and genomic data can be
used to produce new diagnostic models. There are efforts to share security incidents
and threat data to improve future attack prediction. In all of the above application sce-
narios, the data being classified or used for training is often sensitive and may come
from multiple sources with different privacy requirements. MPC is an important tool
for supporting machine learning on private data held by mutually distrusting entities,
and the end product of data training is stored in distributed form for privacy.
Non-malleability as defined above is a relaxed notion of security for MPC
protocols, which suggests an interesting way of decoupling the two entangled
properties (correctness and privacy) that define secure computation. Intuitively,
we no longer insist on correctness (we allow the ideal world adversary to tamper
with the output if the real world adversary deviates from the protocol), but pre-
serve the privacy notion in its strongest possible form (the view of the adversary
is simulated without information beyond the corrupted parties' inputs and what
is implied by an output). The rationale behind this relaxation is that, since the
correctness of secure computation in its strongest possible form still allows a ma-
licious adversary to substitute wrong inputs on behalf of the corrupted parties,
leading to an unintended output, the importance of insisting on preventing the
adversary from further tampering with the unintended output may be overrated,
at least in some applications. For the two example NM-MPC constructions, we
in fact achieve a stronger security that only allows the modified output for honest
parties to either remain the same as the incorruptible trusted party
outputted it or follow a distribution determined by the adversary (the prob-
ability of remaining the same and the fixed distribution are both independent
of the incorruptible trusted party's output). This makes them sufficient for the
application in Our motivating example.
We begin by showing that with this relaxation of security (privacy against a
malicious adversary), information-theoretic honest majority MPC protocols can
be constructed even when their secure point-to-point channels are tampered
with. Bounded State Tampering is a well-established adversarial tampering model
since [CM97]. We follow the recent formulation in [FGJ+19]. The adversary
keeps a state of bounded size (at most s bits) storing information about past
messages that can be used in the tampering of the current message. For a 2-round
protocol, f = (f1, f2) ∈ F^s_BST can produce m̃1 = g1(m1) and m̃2 = f2(m1, m2) =
g2(m2, h1(m1)), depending on h1(m1) and m2, where the range of h1 is {0, 1}^s.
Our first result, of a feasibility nature, is that once the amount of information about
past messages the adversary is allowed to “memorise” is limited, namely F = F^s_BST, NM-
MPC can be constructed even in the joint-tampering model, where an adversary
can jointly tamper with messages intended for different receivers. We emphasize
that the adversary has full control over the current messages and there is no
information-theoretic approach to secure the channel against such an adversary
(not even to detect its presence).
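To make the bounded-state model concrete, the following minimal Python sketch shows the shape of an F^s_BST adversary on a two-round protocol; the functions g1, g2 and h1 are illustrative placeholders for this sketch, not a construction from the paper.

    # Minimal sketch of s-bounded-state (BST) channel tampering on a 2-round
    # protocol: h1 compresses the first message into an s-bit state, while the
    # adversary has the current message in full (g1, g2 may overwrite it).
    def bst_tamper(m1: bytes, m2: bytes, s: int):
        def h1(m: bytes) -> int:
            # s-bit state about m1 (an arbitrary illustrative compression)
            return sum(m) % (1 << s)

        def g1(m: bytes) -> bytes:
            # full access to the current message m1; e.g. flip its first bit
            return bytes([m[0] ^ 1]) + m[1:] if m else m

        def g2(m: bytes, state: int) -> bytes:
            # full access to m2, plus only the s-bit state about m1
            return m[::-1] if state % 2 else m

        state = h1(m1)          # the adversary "memorises" at most s bits of m1
        return g1(m1), g2(m2, state)

    # Example: tampering with two toy messages under a 3-bit state bound.
    print(bst_tamper(b"hello", b"world", s=3))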
Theorem (informal summary of Theorem 1) For a channel tampering class F over
which a pair of independent keys can be generated by two communicants, there
is a compiler that turns a secure MPC protocol into an NM-MPC protocol in the
joint tampering model, with the communication complexity increased by a factor of
(3/ρ + 1), where ρ is the rate at which such independent keys can be generated.
We will discuss the notion of independent keys further in the technical overview.
Substituting in a known construction for generating such independent keys
against F^s_BST-tampering [FGJ+19], we obtain NM-MPC with respect to cor-
ruption of parties and F^s_BST-tampering of secure channels in the joint tampering
model. In particular, the level of non-malleability achieved in this construction is
only marginally weaker than the secure-with-abort notion in [GIP+14] (see Def-
inition 1), where the adversary is allowed to individually decide, after learning
its own outputs, whether each honest party receives its correct output from the
functionality or a special ⊥ message which the party outputs. The level of non-
malleability achieved above further relaxes the secure-with-abort notion and allows
each honest party to receive its correct output with a probability p and output
⊥ with probability 1 − p (“secure with probabilistic abort”). On the other hand,
the shortcoming of the above approach is the excessive cost (the generated key
has length a close-to-zero fraction of the total communication). The known con-
struction for generating the independent keys against F^s_BST-tampering is based
on a split-state non-malleable extractor [CG17], whose efficient construction is a
notoriously hard bottleneck problem in the NMC and NM-SS literature [Li18].
Motivated by finding more efficient approaches that circumvent NM-KE, we
consider relying on the secret sharing scheme implementing the protocol. With-
out generating a long key to mask the transmitted messages, we cannot hope
to defeat a joint tampering adversary. We fall back to the independent tampering
model and also consider a weakened bounded state tampering class F^s_weakBST, where
the state size bound applies to the current message as well as all past messages.
For a 2-round protocol, f = (f1, f2) ∈ F^s_weakBST can produce m̃1 = g1(h1(m1)),
depending on h1(m1) only, and m̃2 = f2(m1, m2) = g2(h2(m2, h1(m1))), depend-
ing on h2(m2, h1(m1)) only, where the ranges of h1, h2 are {0, 1}^s. Though much
weaker than F^s_BST-tampering, F^s_weakBST-tampering still allows the adversary
to selectively overwrite the whole current message (again, it is impossible to detect
the adversary's presence).
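For contrast with the previous sketch, here is a minimal sketch of the weaker F^s_weakBST adversary, where even the current message is only accessed through an s-bit digest; again all function names are illustrative.

    # Minimal sketch of weak-BST tampering on a 2-round protocol: the s-bit
    # bound now also applies to the *current* message, so each tampered message
    # depends only on s bits of everything the adversary has seen.
    def weak_bst_tamper(m1: bytes, m2: bytes, s: int):
        digest = lambda m: sum(m) % (1 << s)       # s-bit view of a message

        def g1(d1: int) -> bytes:
            # the tampered m1 is a function of the s-bit digest only, so its
            # range has at most 2^s values (selective overwriting)
            return bytes([d1])

        def g2(d2: int) -> bytes:
            return bytes([d2])

        d1 = digest(m1)                            # h1(m1)
        d2 = (digest(m2) + d1) % (1 << s)          # h2(m2, h1(m1))
        return g1(d1), g2(d2)

    print(weak_bst_tamper(b"hello", b"world", s=2))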
Theorem (informal summary of Theorem 2) For constant integers s, θ, and a big
enough prime number p, there exists an MPC protocol that non-malleably computes
an arithmetic circuit over Fp against an active adversary corrupting at most θ
servers and F^s_weakBST-tampering of the secure channels in the independent model.
The protocol has the same asymptotic communication complexity as the best passively
secure MPC.
It is fair to say that the combination of F^s_weakBST-tampering and the indepen-
dent tampering model defines a rather contrived adversary for which it is difficult to find
real-life applications. The more interesting message carried by this construction
is the fact that the weaker notion of security (privacy against an active adversary) for
MPC is efficiently achievable without assuming secure point-to-point channels.
Using the technical framework initiated in [BDIR18], we are able to construct
NM-SS that provides a flexible choice of parameters between two extremes
(parameters of NM-SS directly translate into those of NM-MPC). One extreme
is the maximum state case, where for every share the adversary stores all infor-
mation except one bit (essentially the adversary can arbitrarily tamper with the
share, independently). It is still possible to obtain NM-SS, as the non-malleability
error vanishes exponentially fast in the number t − θ of uncorrupted shares in a
reconstruction set of size t + 1. The other extreme is the minimum state case,
where for every share the adversary stores one bit of information. In this case, for
a 10-bit prime p (log p = 10), choosing n = p − 1 and t = 300 (approximately n/3)
allows for ε = 2^{−50} against up to θ = 175 fully tampered shares.
Allowing non-explicit Monte-Carlo constructions and further restricting to a
yet smaller class of tampering functions (the information stored in the state is
obtained by reading physical bits only, called physical-bit s-BST), we are able to use
the more recent results of [MPSW21,MNP+21] to achieve another dimension of
extreme parameters: the reconstruction threshold can be set as low as t + 1 = 2
(privacy threshold t = 1), and considering varying numbers of parties n = 10, 100,
and 1000, a non-malleability error as small as 2^{−50} against physical-bit 1-
bounded state tampering can be achieved with success probability 1 − 2^{−50}
(over choosing the evaluation places), using a prime number p with more than
λ = 430, 4800, and 62000 bits, respectively.
The tampering counterpart is more difficult to capture both conceptually and
quantitatively. We propose a tamper-tolerant notion motivated by relaxing the
correctness of secure computation in a privacy-preserving way. Conceptually, we
draw inspiration from the non-malleability notion in tamper-resilient cryptog-
raphy and merge it with the simulation-based formulation in the MPC literature.
This generalizes the idea of harmless tampering, which is tampering defined
independently of any information that would breach privacy, to secure compu-
tation. Quantitatively, the “amount” of tampering allowed in the real world does
not translate into the “amount” of tampering in the ideal world (as a symbol-by-
symbol analogy to [BGJ+13] would suggest), but into the amount of overhead
(e.g. communication complexity) required for turning the harmful real world
tampering into harmless ideal world tampering.
The secure-with-abort MPC protocols in [GIP+14] represent the maximum
corruption (up to 1/2), minimum assumptions (secure point-to-point channels
only) and highest efficiency (asymptotically the same as the best passively secure MPC)
in the honest majority setting. The high efficiency is the result of an unusual
two-step approach to active security: first construct an intermediate
protocol that computes an additively corruptible functionality/circuit (in fact,
most of the celebrated passively secure MPC constructions suffice), and then
apply the intermediate protocol to compute an encoded version of the func-
tionality/circuit, instead of the functionality/circuit itself. Intuitively, the first
step, through protocol design and the secure channels assumption, reduces a
full-fledged malicious adversary to an additive adversary, who is then efficiently
(with negligible overhead) defeated in the second step through a novel circuit encod-
ing technique and conventional input encoding against additive attacks. We study
a weakening of security (privacy against a malicious adversary) for MPC protocols
that is defined by allowing the output of the incorruptible functionality/circuit
to be corruptible, and we mainly show constructions that remove the secure point-
to-point channel assumption. To put this new notion of security in the right
context, we note here that malicious privacy in the honest majority setting may not
be easy to achieve (much harder than semi-honest privacy), even assuming se-
cure point-to-point channels. The first step of the above construction does not
guarantee malicious privacy, since the functionality itself is corruptible, not only
the output of the functionality. Even after the circuit encoding (before input
encoding) in the second step above, the input to the functionality is still cor-
ruptible, rendering it not maliciously private. Achieving malicious privacy using
the above construction takes the same amount of effort as achieving security
with abort. Finally, our conceptually very simple approach to privacy against a
malicious adversary using the MPC-in-the-head framework can be interpreted as an
example of circuit encoding such that running the encoded circuit using a semi-
honestly secure two-party protocol yields privacy against a malicious adversary.
The study of Interactive Non-Malleable Codes (INMC) [FGJ+19] considers
encoding two-party protocols to achieve a strong non-malleability notion
called protocol non-malleability against an outsider tampering adversary (an in-
stance of outsider adversarial tampering is defined by a set of restrictions that
distinguish it from insider tampering, where one party is corrupted and
fully controlled by the adversary). In [FGJ+19], three adversarial channel tamper-
ing classes were studied: bounded state tampering, unbalanced split-state tampering and frag-
mented sliding window tampering. The descriptions of the latter
two tampering models depend on the round number of the protocols to be en-
coded, which makes them unsuitable for MPC (the round number of a general
purpose MPC protocol may depend on the depth of the circuit to be computed).
We first show that INMC against bounded state tampering can be used to con-
struct general purpose NM-MPC in the honest majority setting (at least three
parties). We then show that NM-MPC in the honest majority setting can be
used to construct a two-party protocol through the MPC-in-the-head paradigm.
Combining these two results, we have a correct way of obtaining non-malleability
against an insider adversary in the two-party setting (as opposed to the naive direct
encoding approach).
adversary has full writing capability (in the sense of completely overwriting the
messages) and can in effect cut off the communication. To make matters worse, there is
no public discussion channel, as usually assumed in the latter model [Mau92,Mau93],
for information reconciliation that results in a shared imperfect secret (correct-
ness) and privacy amplification that generates a secret key (privacy) using a
randomness extractor. Intuitively, the channel controlled by a BST adversary
is too weak a resource for establishing secure communication. A novel idea of
generating independent keys that are sufficient for establishing non-malleable
communication was proposed in [FGJ+19], and such a protocol was termed Non-
Malleable Key Exchange (NM-KE) (see Definition 9 for an exact definition). It
was shown that NM-KE can be constructed by having both communicants apply
a 2-split-state non-malleable extractor [CG17] to their correlated randomness
mentioned above. Intuitively, independent keys do not need the information
reconciliation (correctness) and can be generated from the privacy amplification
(non-malleability is malicious privacy).
The remaining technicality for this idea to work in the MPC setting (especially
in the joint tampering model) is to carefully analyze the type of information
available to a BST adversary at the tampering of a current message. Note
that each pair of parties independently runs an NM-KE protocol, and
observe that the messages exchanged are fresh uniform messages, independent
of previous messages both within one protocol and across multiple protocols.
In the independent tampering model, we observe that a Non-Malleable Secret
Sharing (NM-SS) scheme (see Definition 4) with n shares has, in particular, an implicit
n-split-state non-malleable extractor [CG17] in its reconstruction function, which
guarantees that (roughly speaking) for any n independently chosen tampering
functions applied to the n shares, reconstructing from the tampered shares is
independent of reconstructing from the n clean shares. This suggests that one
could skip the independent key generation step (and the corresponding OTP
followed by MAC) and simply rely on the reconstruction function of the NM-SS
to provide non-malleability. Unfortunately, it is trivial to show that such NM-
SS implies that its reconstruction function is not a linear function (define a
tampering that adds a constant share vector for the secret 1, share by share
independently; the linear reconstruction function then returns a related secret). But
for a secret sharing scheme to be useful in constructing general purpose MPC
protocols, linearity is the least property required to enable privacy-preserving
evaluation of secret values. For the efficient protocols in the honest majority
setting, a stronger multiplication property is required for the underlying secret
sharing scheme, which allows the reconstruction of the product of two secrets
from the share-wise multiplication of their share vectors (this effectively requires that
the reconstruction of each secret take fewer than n/2 shares). These facts make
relying on NM-SS for constructing general purpose NM-MPC protocols highly
technical, though efficiency-wise highly attractive.
The obstacles discussed above are the main reasons why the NM-MPC proto-
cols constructed in this approach can only tolerate a rather limited BST variant.
Firstly, to overcome the impossibility of NM-SS with a linear reconstruction func-
tion, we restrict the type of functions the adversary is allowed to tamper
each share with, resulting in the F^s_weakBST-tampering class. Secondly, we are able to
show that, without the need to further restrict the tampering adversary, there
are linear NM-SS schemes against the above restricted share tampering that at the same
time have the (strong) multiplication property.
2 Preliminary
The statistical distance of two random variables (their corresponding distribu-
tions) is defined as follows. For X, Y ← Ω,

    SD(X; Y) = (1/2) Σ_{ω∈Ω} |Pr(X = ω) − Pr(Y = ω)|.

We say X and Y are ε-close (denoted X ∼_ε Y) if SD(X; Y) ≤ ε.
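For concreteness, a direct Python computation of this statistical distance for finite distributions (given as probability dictionaries) looks as follows.

    # Statistical distance SD(X; Y) of two finite distributions, computed
    # directly from the definition above.
    from typing import Dict, Hashable

    def statistical_distance(px: Dict[Hashable, float], py: Dict[Hashable, float]) -> float:
        support = set(px) | set(py)
        return 0.5 * sum(abs(px.get(w, 0.0) - py.get(w, 0.0)) for w in support)

    # Example: two biased bits are 0.1-close.
    print(statistical_distance({0: 0.5, 1: 0.5}, {0: 0.6, 1: 0.4}))  # 0.1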
A secure computation task is defined by a function specifying the desired
mapping from the inputs to the final output. We consider arithmetic circuits over
some finite field and will identify a circuit C with the functionality it computes.
Formalizing the real world computation. We use the so-called client-server
model refinement of MPC protocols in this work. The inputs are provided by
the Clients {C_1, ..., C_m}, each client C_i holding an input x_i, and the computation
is carried out by the Servers {S_1, ..., S_n}, who do not have inputs. An n-server
m-client protocol Π over a finite field F proceeds in rounds, where for each round
j in the circuit evaluation phase, the protocol's description contains n next message
functions nextMSG_i^j for i = 1, ..., n that can be represented as arithmetic circuits
over F. The next message function nextMSG_i^j of server S_i for the j-th round gets
as input all the messages that S_i received until (not including) the j-th round and
S_i's local randomness, and outputs S_i's messages in the j-th round. The view
of S_i during an execution of a protocol Π on inputs x, denoted by view_i^Π(x),
contains the random input R_i and all the messages received from the clients
and other servers. For every S = {i_1, ..., i_t} ⊂ [n], we denote view_S^Π(x) =
(view_{i_1}^Π(x), ..., view_{i_t}^Π(x)). The input sharing phase and output reconstruction
phase happen before and after the circuit evaluation phase, respectively. For
every C ⊂ [m], we denote the view of the clients in C by view_C^Π(x). Let out_C^Π(x)
denote the output of the clients in C. The real world adversary Adv corrupts a
set S of servers and a set C of clients, which means the adversary acts on the
corrupted parties' behalf in the protocol. The execution of Π in the presence of
Adv who corrupts S ∪ C is characterized by a random variable
    Real_{Π,Adv,S∪C}(x) = (view_{S∪C}^{Π,Adv}(x), out_{C̄}^{Π,Adv}(x)),

where the superscript Adv highlights the presence of Adv. The first component
is the adversary's view, which can be divided into the truncated views of S ∪ C and
the last communication round messages from honest servers to corrupted clients:
view_{S∪C}^{Π,Adv}(x) = (truncview_{S∪C}^{Π,Adv}(x), lastmview_{S̄→C}^{Π,Adv}(x)). The second component is
the output of the honest clients, appending which is crucial for a unified formulation
of privacy and correctness.
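The following schematic Python sketch mirrors the round structure just described (next-message functions fed by the accumulated view and local randomness); the nextMSG interface and the toy instantiation are hypothetical, purely to fix ideas.

    # Schematic sketch of the round-based client-server execution: in round j
    # each server applies a next-message function to what it has received so
    # far plus its local randomness, and the receivers' views accumulate the
    # delivered messages.
    import random

    def run_round(j, servers, views, randomness, next_msg):
        outboxes = {}
        for i in servers:
            # nextMSG_i^j sees only messages received before round j and R_i
            outboxes[i] = next_msg(i, j, views[i], randomness[i])
        for i, msgs in outboxes.items():
            for receiver, m in msgs.items():
                views[receiver].append((j, i, m))   # receiver's view grows
        return views

    # Toy instantiation: each server broadcasts the size of its current view.
    servers = [1, 2, 3]
    views = {i: [] for i in servers}
    randomness = {i: random.random() for i in servers}
    toy_next_msg = lambda i, j, view, r: {i2: len(view) for i2 in servers if i2 != i}
    for j in range(1, 3):
        views = run_round(j, servers, views, randomness, toy_next_msg)
    print(views[1])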
Formalizing the ideal world computation. There is an incorruptible trusted
party in the ideal world, who evaluates the circuit C on the m inputs provided by
the clients and provides the outputs. The ideal world adversary Sim corrupts a
set S ∪ C of parties, which means that Sim gets to substitute the inputs of the
corrupted clients in C before they are given to the trusted party, and simulates
views for S ∪ C. The ideal world computation is characterized by a random
variable

    Ideal^{abort}_{C,Sim,S∪C}(x) = (view_{S∪C}^{C,Sim}(x), out_{C̄}^{C,Sim}(x)),

where the superscript abort indicates that here Sim is allowed to individually
decide, after learning its own outputs, whether each honest party receives its
correct output from the functionality or a special ⊥ message which the party
outputs.
Comparing real/ideal worlds. The combination of privacy and correctness is
captured by requiring that, given any real world adversary Adv corrupting S ∪ C,
there exists an ideal world simulator Sim (also corrupting S ∪ C) that simulates Adv in the
sense that the two random variables Real_{Π,Adv,S∪C}(x) and Ideal^{abort}_{C,Sim,S∪C}(x) are
indistinguishable.
Definition 1. Let C be an m-input functionality and let Π be an m-client n-
server protocol. We say that Π (t, ε)-securely computes C if for every probabilistic
adversary Adv in the real world controlling a set S ⊂ [n] of servers such that
|S| ≤ t and a set C ⊂ [m], there exists a probabilistic simulator Sim in the ideal
world such that for every input x, it holds that

    SD(Real_{Π,Adv,S∪C}(x); Ideal^{abort}_{C,Sim,S∪C}(x)) ≤ ε.
– Privacy. For any set A ⊂ [n] such that |A| ≤ t and for any s, s′ ∈ F, it holds
  that

      Share_{n,t}(s)_A ∼ Share_{n,t}(s′)_A.
When the parameters are clear from the context, we simply write (Share, Recover).
Linear secret sharing schemes are closely related to linear codes.
Definition 3. A subset C ⊂ F^n is an [n, k, d]-linear code over a finite field F if C
is a subspace of F^n of dimension k such that for all c ∈ C\{0}, the Hamming
weight w_H(c) ≥ d (i.e., the minimum Hamming distance between two elements
of the code is at least d). A code is called Maximum Distance Separable (MDS)
if d = n − k + 1.
For R ⊂ [n], let C_R ⊂ F^{|R|} be the projection of the code C on R:

    C_R := {c_R | c ∈ C}.

It can be shown that if C is an [n, k, n − k + 1]-linear MDS code, then for any
R ⊂ [n] with |R| ≥ k, the projection C_R is an [|R|, k, |R| − k + 1]-linear MDS
code.
An [n, k, d]-linear code C over a finite field F can be represented by its generator
matrix G ∈ F^{k×n}, whose rows form a basis of C. Let R ← F^t be a uniform t-
tuple. The sharing algorithm of Shamir's secret sharing scheme can be described
as follows:

    Share(s) = (s, R) · V,

where V is the (t+1) × n matrix with rows (1, 1, ..., 1), (a_1, a_2, ..., a_n), ..., (a_1^t, a_2^t, ..., a_n^t),
and a_1, ..., a_n are distinct non-zero elements of F. It can be seen that the
support of the random variable Share(0) is an [n, t, n − t + 1]-linear MDS code.
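A short Python sketch of the sharing algorithm written exactly as this matrix product; the parameters p, t, n below are illustrative.

    # Shamir sharing as the matrix product above: the share vector is the row
    # vector (s, R) times the (t+1) x n Vandermonde-type matrix V over F_p.
    import random

    p, t, n = 2**13 - 1, 2, 5            # a small prime field, threshold t, n shares

    def shamir_share(s: int) -> list:
        R = [random.randrange(p) for _ in range(t)]        # uniform t-tuple
        coeffs = [s] + R                                   # row vector (s, R)
        alphas = list(range(1, n + 1))                     # distinct non-zero a_1..a_n
        # share_i = sum_j coeffs[j] * a_i^j  (column i of V)
        return [sum(c * pow(a, j, p) for j, c in enumerate(coeffs)) % p for a in alphas]

    print(shamir_share(42))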
Definition 4 ([GK18]). A secret sharing scheme (Share, Recover) with n shares
and privacy threshold t is non-malleable with respect to a class F of tampering
functions (F-NM for short) if for any secret s ∈ S, any f ∈ F and any R ⊂ [n]
of size |R| = t + 1, there is a distribution D_{f,R} over the set S ∪ {⊥} ∪ {same∗},
determined solely by f and R, such that the following real tampering experiment
Tamper_s^{f,R} and the simulation Patch(s, D_{f,R}) using D_{f,R} are distinguishable with
at most a negligible advantage ε:

    Tamper_s^{f,R} ∼_ε Patch(s, D_{f,R}),

where the real tampering experiment Tamper_s^{f,R} and the simulation Patch(s, D_{f,R})
are defined as follows.
– The real tampering experiment is a random variable with randomness coming from
  the randomised sharing algorithm Share:

      Tamper_s^{f,R} = { v ← Share(s);  ṽ = f(v);  s̃ = Recover(ṽ_R, R);  output s̃ }.

– The simulation is a random variable defined from the distribution D_{f,R}:

      Patch(s, D_{f,R}) = { s̃ ← D_{f,R};  output s if s̃ = same∗, and output s̃ otherwise }.
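The two experiments can be rendered schematically as follows; Share, Recover and the distribution D_{f,R} are abstract callables supplied by the caller, so this only fixes the structure of Definition 4, not any concrete scheme.

    # Schematic rendering of the real tampering experiment Tamper_s^{f,R} and
    # the simulated experiment Patch(s, D_{f,R}) from Definition 4.
    def tamper_experiment(s, share, recover, f, R):
        v = share(s)                                      # v <- Share(s)
        v_tilde = [f_i(v_i) for f_i, v_i in zip(f, v)]    # apply the tampering
        return recover([v_tilde[i] for i in R], R)        # s~ = Recover(v~_R, R)

    def patch(s, D):
        s_tilde = D()                                     # s~ <- D_{f,R}
        return s if s_tilde == "same*" else s_tilde

    # Non-malleability asks that, for every f and R, the two experiments above
    # are statistically close for some D that does not depend on the secret s.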
of servers and a set C of clients in an MPC protocol Π computing a circuit C as
a sequence of channel tampering functions as follows:

    f_{Π(C),S∪C} := ({f_{Ch(i,i′)}}_{i,i′∈S̄, i<i′}; {f_{Ch(i,n+i′)}}_{i∈S̄, i′∈C̄}).    (1)

To summarize, the real world computation is the same as in the standard model where
there is no tampering (we recycle some of the notations from Definition 1 and
indicate the changes), except that the views of honest parties are subject to a
tampering denoted by f_{Π(C),S∪C}. The global view in the real world contains the view
of the corrupted parties and the output of the honest clients. The real world computation
is characterized by a random variable

    Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x) = (view_{S∪C}^{Π,Adv∗}(x), out_{C̄}^{Π,Adv∗}(x)),

where the superscript Adv∗ highlights the presence of both Adv and f_{Π(C),S∪C}.
Formalizing the ideal world computation. There is an incorruptible trusted
party in the ideal world, who evaluates the circuit C on the m inputs provided by
the clients and provides the outputs. The ideal world adversary Sim corrupts a
set S ∪ C of parties, which means that Sim gets to substitute the inputs of the
corrupted clients in C before they are given to the trusted party, and simulates
views for S ∪ C. Moreover, Sim is allowed to choose a function f and apply it
to the output of the honest clients. We clarify at this point that the application of
f to the output happens after the circuit evaluation (without interfering with
the working of the incorruptible trusted party). The ideal world computation is
characterized by a random variable

    Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x) = (view_{S∪C}^{C,Sim}(x), f(out_{C̄}^{C,Sim}(x))),

where the superscript f ← D_{f_{Π(C),S∪C}} indicates that here Sim is allowed to individ-
ually modify, after learning its own outputs, the output that each honest party
receives.
Comparing real/ideal worlds. The privacy (decoupled from correctness) against a
malicious adversary is captured by requiring that, given any real world adversary
Adv corrupting S ∪ C and tampering with f_{Π(C),S∪C}, there exists an ideal world
simulator Sim (also corrupting S ∪ C) that simulates the adversary, in the sense that the two ran-
dom variables Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x) and Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x) are indistinguishable.
Definition 6 (NM-MPC). Let C be an m-input functionality and let Π be an
m-client n-server protocol that securely computes C when all parties follow the
protocol specifications. We say that Π (t, F, ε)-non-malleably computes C if for
every probabilistic adversary Adv in the real world controlling a set S ⊂ [n] of
servers such that |S| ≤ t and a set C ⊂ [m] of clients, as well as imposing a se-
quence f_{Π(C),S∪C} of F-tampering functions, there exists a probabilistic simulator
Sim in the ideal world such that the following holds for a tuple of distributions
{D_i}_{i∈C̄} with each D_i supported on {⊥, same∗} ∪ F:

    SD(Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x); Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x)) ≤ ε,    (2)

where the distribution D_{f_{Π(C),S∪C}} satisfies

    Patch(y, {D_i}_{i∈C̄}) = { f ← D_{f_{Π(C),S∪C}};  output f(y) }.

Specifically, the protocol is a detection NM-MPC if the tuple of distributions {D_i}_{i∈C̄}
is supported on {⊥, same∗}^{|C̄|}.
We only require the protocol to securely compute C when all parties follow
the protocol specifications in Definition 6, which is the weakest form of useful-
ness that successfully rules out vacuous private protocols (e.g. a protocol that simply
ignores all parties and outputs a constant is private against a malicious adver-
sary although it does not compute anything). Another natural way to define
NM-MPC, with a stronger form of usefulness, is to further require (2) to hold for
f_{Π(C),S∪C} = Id (no tampering of secure point-to-point channels) with {D_i}_{i∈C̄}
solely supported on some ∆ ∈ {⊥, same∗}^{|C̄|} when there is deviation from the pro-
tocol specifications. Secure (with abort) MPC protocols in Definition 1 satisfy
this stronger form of usefulness.
{g_j : F^u → F^u | |Range(g_j)| ≤ 2^s},
Definition 8. The class F^s_{SS,θ} of independent tampering functions for a secret
sharing scheme over F^u, induced by a θ-bounded (corrupting at most θ servers)
adversary of an MPC protocol with F^s_weakBST channel tampering access, is defined as
follows. Any function f ∈ F^s_{SS,θ} is written as

    f : (F^u)^n → (F^u)^n,  f = (f_1, ..., f_n),

where at most θ components f_i : F^u → F^u are arbitrary functions and the remaining
n − θ components f_i : F^u → F^u satisfy

    |f_i(F^u)| ≤ 2^s.
2. If the adversary chooses a function f ∈ F that does not alter any message
   (we say the adversary is passive in this case), then

       Pr[K_A = K_B ∧ K_A ≠ ⊥ ∧ K_B ≠ ⊥] = 1.

3. If the adversary chooses a function f ∈ F that alters messages (we say the
   adversary is active in this case), then there exists a probability p_f such that

       Pr[K_A = K_B ∧ K_A ≠ ⊥ ∧ K_B ≠ ⊥] = p_f,

   and (when K_A and K_B are not equal) at least one of the following must hold:

       SD((K_A, view_Adv^f, K_B); (purify(K_A), view_Adv^f, K_B)) ≤ ε;
       SD((K_A, view_Adv^f, K_B); (K_A, view_Adv^f, purify(K_B))) ≤ ε.
The rate of a non-malleable key exchange is the ratio k/|Trans^Π|, where Trans^Π de-
notes the transcript of the protocol in the case that no abort occurs. A rate
0 < ρ < 1 is achievable by NM-KE with respect to F if there exist ε-non-
malleable key exchange protocols with respect to F with rate approaching ρ and
ε going to zero as the transcript size grows.
The high level idea of the construction in Theorem 1 is to encode the com-
munication between each pair of parties using an INMC with respect to bounded
state tampering. In order to argue non-malleability in the joint tampering model,
which is a natural extension as we move from interactive coding to multi-party
coding, we need a careful analysis of how the auxiliary information available at
the tampering of one message can affect the execution.
Theorem 1. Let Π be an MPC protocol that (t, ε)-securely computes a circuit C with
r rounds and a communication complexity of Σ bits. Let Π_NM-KE be the r_NM-KE-
round ε-non-malleable key exchange protocol with respect to F^s_BST-channel tam-
pering with rate ρ_NM-KE [FGJ+19]. Let MAC : {0, 1}^{2λ} × {0, 1}^λ → {0, 1}^λ be a
2. Protocol evaluation phase.
   – If the j-th round is not a round of an input sharing gate for Π,
     (a) When it is an honest server S_i's turn to send messages, the party
         invokes the next-message function nextMSG_i^j of Π to compute its messages for the
         j-th round. S_i appends the local randomness R_i^j and the message m_{i,i}^j to the
         party's own view view_i^Π.
     (b) Next, for every receiver S_{i′} of S_i in the j-th round of Π, the party S_i
         computes the one-time pad encryption as well as the authentication tag

             c_{i,i′}^j = m_{i,i′}^j ⊕ OTPkey_{i,i′}^j   and   t_{i,i′}^j = MAC(MACkey_{i,i′}^j, c_{i,i′}^j),

         and finally appends m_{i,i′}^j to the party's view view_{i′}^Π. If the protocol is
         executed under tampering, then the MAC key of server S_{i′} may be
         different from the MAC key of server S_i, or the messages received by
         server S_{i′} may no longer be equal to (c_{i,i′}^j, t_{i,i′}^j), and the verification
         may fail, causing the server S_{i′} to abort. (A minimal sketch of this
         encrypt-then-authenticate hardening is given after the protocol description.)
   – If the j-th round is a round of an input sharing or output reconstruction
     gate for Π, do the following.
     (a) When it is a client C_i's (considered as the (n + i)-th party) turn to
         send messages, the party invokes nextMSG_{n+i}^j of Π to compute its messages for the
         j-th round.
     (b) Next, for every receiver S_{i′} of C_i in the j-th round of Π, the party C_i
         computes the one-time pad encryption as well as the authentication tag

             c_{n+i,i′}^j = m_{n+i,i′}^j ⊕ OTPkey_{i′,n+i}^j   and   t_{n+i,i′}^j = MAC(MACkey_{i′,n+i}^j, c_{n+i,i′}^j).
     (c) When every client that sends messages in the j-th round of Π com-
         pletes the transmission, each honest server S_{i′} verifies the tag of
         the received messages (c_{n+i,i′}^j, t_{n+i,i′}^j) from each client C_i using the
         corresponding authentication key MACkey_{i′,n+i}^j and then decrypts

             m_{n+i,i′}^j = c_{n+i,i′}^j ⊕ OTPkey_{i′,n+i}^j   if   Vf(MACkey_{i′,n+i}^j, c_{n+i,i′}^j, t_{n+i,i′}^j) = 1,

         and otherwise aborts.
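A minimal Python sketch of the per-message hardening used above: one-time-pad the message and authenticate the ciphertext with a one-time MAC key. The polynomial-evaluation MAC below is a standard information-theoretic choice used only for illustration; the compiler itself merely assumes some one-time MAC.

    # One-time-pad encryption plus one-time MAC of a single message, and the
    # receiver's verify-then-decrypt step; a tampered ciphertext is rejected.
    P = 2**61 - 1                                   # field for the MAC (assumed)

    def mac(key, c):                                # key = (a, b), one-time use
        a, b = key
        return (a * c + b) % P

    def send(m, otp_key, mac_key):
        c = m ^ otp_key                             # one-time pad
        return c, mac(mac_key, c)

    def receive(c, tag, otp_key, mac_key):
        if mac(mac_key, c) != tag:                  # verification fails => abort
            return None
        return c ^ otp_key                          # decrypt

    c, t = send(0x1234, otp_key=0xBEEF, mac_key=(5, 7))
    assert receive(c, t, 0xBEEF, (5, 7)) == 0x1234
    assert receive(c ^ 1, t, 0xBEEF, (5, 7)) is None    # a bit-flip is caught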
The fixed message topology agreed on by all parties beforehand allows
the simulation to match the real protocol execution. Note that it is possible that
some honest parties may not abort if only their last messages are tampered with.
Without loss of generality, we describe the simulation of the honest parties' output
for a single-output C. If an honest party aborts before completing his/her role in the
full MPC protocol execution, then when he/she would be the sender in a round after he/she
aborts, the parties expecting his/her messages will abort. In this way, more and
more parties abort until the party (the reconstructor) who is supposed to compute
the final output aborts. This leads to a ⊥ output of the NM-MPC simulator.
It remains to consider the case when the last message (the sender has no further
role in the execution after the current round) of an honest party is tampered
with. In honest majority MPC protocols, these rounds are collectively called
the “last message round”, where each party sends a share to the reconstructor,
who reconstructs the final output value. If tampering happens when an honest
party is sending his/her share to the reconstructor, he/she will not abort. But
according to the hardened protocol, the reconstructor verifies the last message of
each party before using a subset of them for reconstruction, and hence would catch
such tampering and abort. To summarize, the MPC simulator will output ⊥ with a
probability computed from the tampering functions, or a value from the trusted
party. The concrete simulation algorithm is given below, followed by its analysis.
In the first step, the non-malleability simulator reads the sequence f_{Π(C),S∪C} of chan-
nel tampering functions for honest parties, extracts the tampering functions
for the rounds corresponding to Π_KE, and samples the reaction of the key exchange
phase. If all keys are correctly generated, it moves on to the next step; otherwise it outputs
⊥. Let p_1 be the probability that Sim continues. In the second step, the non-malleability
simulator extracts the tampering functions for the remaining rounds and ap-
plies them to uniform messages. In this step, it computes the probability p_2 that
all tampering functions fix all messages simultaneously. This defines, up to this point,
the distribution D_{f_{Π(C),S∪C}}, which is supported on {⊥, same∗}. In the third step, the non-
malleability simulator invokes the simulator of Π on the adversary strategy Adv
to obtain either ⊥ or an output provided by the trusted party. In the first step,
the tampering functions receive messages of other parallel executions of Π_NM as
auxiliary information in the joint tampering model. Note that, in particular, the
Π_NM considered here sends independent uniform strings each round and there is
no dependence across parallel executions. This auxiliary information does not
affect the security of the generated keys. In the second step, since all messages
are masked by the keys generated in the first step, independence of the auxiliary
information in the joint tampering model reduces to the security of the keys. In the
third step, if the real world protocol did not abort up to this point, then with over-
whelming probability the adversary did not tamper with the secure point-to-point
channels and the analysis reduces to the corruption-only adversary case.
Theorem 2. Let c_s = 2^s sin(π/2^s) / (p sin(π/p)) < 1 (when 2^s < p). Let C : F_p × ... ×
F_p → F_p be an m-input functionality. There is an m-client n-server MPC pro-
tocol Π_NM (based on a secret sharing scheme (Share, Recover) over F_p^u) that
(θ, F^s_weakBST, O(m, n − θ) · u^2 · 2^s · c_s^{n/2−θ−1} + ε)-non-malleably computes C in
the independent tampering model. Moreover, the protocol Π_NM has the same asymptotic
communication complexity as the best passively secure MPC.
When P = (P_1, ..., P_k) is s-bounded, we simply write (in the case when a
partition contains strictly fewer than 2^s subsets, we pad with empty sets at the back)

    P = (P_1^1 ∪ ... ∪ P_1^{2^s}, ..., P_k^1 ∪ ... ∪ P_k^{2^s}),

where for each i ∈ [k], the subsets P_i^1, ..., P_i^{2^s} are disjoint and P_i^1 ∪ ... ∪ P_i^{2^s} = F_p.
In this way, an element y = (y_1, ..., y_k) in the range of f = (f_1, ..., f_k) is labeled
by a tuple (j_1, ..., j_k) ∈ [2^s]^k, and f(x) = y means (x_1 ∈ P_1^{j_1}) ∧ ... ∧ (x_k ∈ P_k^{j_k}).
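The correspondence between an s-bounded coordinate tampering function and a partition of F_p into at most 2^s pre-image sets can be made explicit in a few lines of Python (toy parameters):

    # A single-coordinate function f_i with |Range(f_i)| <= 2^s partitions F_p
    # into at most 2^s pre-image sets; the tampered value is determined by
    # which part the input lands in.
    from collections import defaultdict

    p, s = 11, 2
    f_i = lambda x: x % (1 << s)          # an s-bounded function: range size 2^s

    partition = defaultdict(set)
    for x in range(p):
        partition[f_i(x)].add(x)          # P_i^j = pre-image of the j-th range value

    print(dict(partition))                # at most 2^s parts covering F_p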
Using the connection between linear secret sharing schemes and linear MDS
codes (see Section 2), the problem is reduced to the study of the distribution of
the random variable f(C), where C ← C is sampled uniformly from a linear MDS
code C. The Fourier analysis approach transforms the probability of f(C) taking
a certain value into a sum concerning products of Fourier coefficients, and then,
through bounding the Fourier coefficients, analyses the probability distribution
of f(C). The probability of f(C) taking the value labeled by (j_1, ..., j_k) is com-
puted by summing, over the codeword space C, the probability of C = (c_1, ..., c_k)
and (c_1 ∈ P_1^{j_1}) ∧ ... ∧ (c_k ∈ P_k^{j_k}), which is either 0 or 1/|C|. Using the Poisson Sum-
mation Formula, the sum over the linear space C is transformed into a sum over
the dual space C^⊥ concerning products of Fourier coefficients. Consider another
random variable f(U), where U ← F_p^k is sampled uniformly from the full space
F_p^k. The probability of f(U) taking the same value labeled by (j_1, ..., j_k) (a sum
over F_p^k) is transformed using the Poisson Summation Formula into a sum over
the dual space (F_p^k)^⊥ = {0}, which is the zero space consisting of the all-0 vector.
This means that the difference of the probabilities of the random variables f(C)
and f(U) taking the same value labeled by (j_1, ..., j_k) is expressed as a sum over
C^⊥ \ (F_p^k)^⊥ = C^⊥\{0} of products concerning Fourier coefficients (the quantity
inside the | · | below), and hence the statistical distance between f(C) and f(U)
is expressed as follows:

    SD(f(C); f(U)) = (1/2) Σ_{(j_1,...,j_k)∈[2^s]^k} | Σ_{(α_1,...,α_k)∈C^⊥\{0}} Π_{i∈[k]} 1̂_{P_i^{j_i}}(α_i) |.    (4)
It is shown (see [BDIR21, Lemma 4.16]) that the quantity inside the (·) in (4)
can be bounded as follows:

    Σ_{j_i∈[2^s]} |1̂_{P_i^{j_i}}(α_i)| ≤ c_s, if α_i ≠ 0;
    Σ_{j_i∈[2^s]} |1̂_{P_i^{j_i}}(α_i)| = 1,  if α_i = 0,

where the constant c_s is defined and bounded as follows (see [BDIR21, Lemma
4.10]):

    c_s = 2^s sin(π/2^s) / (p sin(π/p)) ≤ 1 − 2^{−2s},   1 ≤ s ≤ log p − 1.    (5)
Now, for a given s, by choosing a big enough dimension for C, one can
make the sum over C^⊥\{0} of the products concerning Fourier coefficients (each
product in (4) is upper bounded by c_s^{d^⊥}, where d^⊥ denotes the minimum distance
of C^⊥) smaller than a given error parameter. There is an undesir-
able dependence on the cardinality |C^⊥\{0}| of the dual code space (|C^⊥\{0}|
increases as p increases), which can be removed (through more sophisticated
analysis), yielding the bound in Lemma 1.
Definition 11. Let MDS[k, k−1, 2]_p be an MDS code over alphabet F_p with code
parameters [k, k−1, 2]. Let C ← MDS[k, k−1, 2]_p denote a random codeword of
MDS[k, k−1, 2]_p chosen uniformly from the codebook. Let U ← F_p^k be the random
variable uniformly distributed over F_p^k. We say that C ← MDS[k, k−1, 2]_p is
ε-indistinguishable from uniform by s-bounded partitions if for any s-bounded

    P = (P_1^1 ∪ ... ∪ P_1^{2^s}, ..., P_k^1 ∪ ... ∪ P_k^{2^s}),

    (1/2) Σ_{(j_1,...,j_k)∈[2^s]^k} | Π_{i=1}^k Pr[C_i ∈ P_i^{j_i}] − Π_{i=1}^k Pr[U_i ∈ P_i^{j_i}] | ≤ ε.
Lemma 1 ([BDIR21], Theorem 4.6 with n = k and t = k−1). Let c_s = 2^s sin(π/2^s) / (p sin(π/p)) <
1 (when 2^s < p). The random variable C ← MDS[k, k−1, 2]_p is ε-indistinguishable
from uniform by s-bounded partitions for ε = (1/2) · 2^s · (c_s + 2^{−2s−1})^{k−2}.
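As a sanity check of Lemma 1 on a toy instance, the following Python script takes the sum-zero ("parity-check") code, which is one example of an MDS[k, k−1, 2]_p code, enumerates it exhaustively, and compares the exact statistical distance under an s-bounded partition with the stated bound. All parameters are illustrative.

    # Exhaustive check of the Lemma 1 bound for the sum-zero [k, k-1, 2] code.
    import itertools
    import math
    from collections import Counter

    p, k, s = 11, 4, 1
    # one s-bounded partition per coordinate (2^s = 2 parts each): low vs high half
    parts = [[set(range(p // 2)), set(range(p // 2, p))] for _ in range(k)]

    code_counts, total = Counter(), 0
    for c in itertools.product(range(p), repeat=k - 1):
        cw = c + ((-sum(c)) % p,)                 # complete to a sum-zero codeword
        label = tuple(0 if cw[i] in parts[i][0] else 1 for i in range(k))
        code_counts[label] += 1
        total += 1

    sd = 0.0
    for label in itertools.product(range(2), repeat=k):
        p_code = code_counts.get(label, 0) / total
        p_unif = math.prod(len(parts[i][label[i]]) / p for i in range(k))
        sd += abs(p_code - p_unif)
    sd *= 0.5

    c_s = (2**s * math.sin(math.pi / 2**s)) / (p * math.sin(math.pi / p))
    bound = 0.5 * 2**s * (c_s + 2 ** (-2 * s - 1)) ** (k - 2)
    print(f"exact SD = {sd:.4f}   Lemma 1 bound = {bound:.4f}")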
exactly what we exploit to prove that Shamir's secret sharing scheme with re-
construction threshold below n/2 can be an NM-SS.

Theorem 3. Let c_s = 2^s sin(π/2^s) / (p sin(π/p)) < 1 (when 2^s < p). Shamir's secret sharing
scheme over F_p with privacy threshold t is F^s_{SS,θ}-NM with error ε = (1/2) · 2^s · (c_s +
2^{−2s−1})^{t−θ−1}.
Proof. We first prove the theorem for the special case of θ = 0 using Lemma 1
and then show a reduction of the θ > 0 case to an instance of the θ = 0 case
with a shortened code length.

Assume θ = 0. Let f = (f_1, ..., f_n) ∈ F^s_{SS,θ} be a secret sharing tampering
function. In particular, let Range(f_i) = {c̃_i^1, ..., c̃_i^{2^s}}, i = 1, ..., n. In the case
when |Range(f_i)| < 2^s, we pad with values not in Range(f_i) and let the pre-image
sets of the padded values be empty. Let P_i = (P_i^1, ..., P_i^{2^s}) be defined by the f_i
pre-image sets of c̃_i^1, ..., c̃_i^{2^s}. By definition, we have

where the distributions of U_{i_1} + Δs_{i_1}, ..., U_{i_{t+1}} + Δs_{i_{t+1}} are in fact independent of
the secret s. On the other hand, the real tampering experiment is

    Tamper_s^{f,R} = { v + Δs ← Share(0) + Δs;  ṽ_R = (f_{i_1}(v_{i_1} + Δs_{i_1}), ..., f_{i_{t+1}}(v_{i_{t+1}} + Δs_{i_{t+1}}));
                       s̃ = Recover(ṽ_R, R);  output s̃ }.

We then have

    SD(Tamper_s^{f,R}; D_{f,R})
      ≤ (1/2) Σ_{(j_{i_1},...,j_{i_{t+1}})∈[2^s]^{t+1}} | Π_{k=1}^{t+1} Pr[C_k ∈ P_{i_k}^{j_{i_k}}] − Π_{k=1}^{t+1} Pr[U_{i_k} ∈ P_{i_k}^{j_{i_k}}] |
      ≤ (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−1},
where the first inequality is due to the fact that we are bounding the statistical
distance between the outputs of Recover(·, R) by that of its inputs.

In the case when θ > 0, let Θ ⊂ [n] with |Θ| = θ denote the set of shares that
are arbitrarily tampered with by f. A simple observation to begin with is that
if R ∩ Θ = ∅, the arguments above for the θ = 0 case still go through without any
modification. It is only when R ∩ Θ ≠ ∅ that we need adjustments. The tamperings
at the shares in R ∩ Θ are not s-bounded. We will exclude these shares from the
analysis and construct a new instance where we can apply Lemma 1. It suffices
to bound the statistical distance for the worst-case scenario Θ ⊂ R.

We have used the fact that Share(0)_R has the same distribution as C ←
MDS[t+1, t, 2]_p. Now, conditioned on Share(0)_Θ = w for a constant vector
w ∈ F_p^θ, the distribution satisfies

    C_{[t+1]\Θ} | (C_Θ = w)  ≡  C′ + c^w_{[t+1]\Θ},   where C′ ← MDS[t+1−θ, t−θ, 2]_p.
One extreme is the maximum state bound case, s = ⌊log p − 1⌋. The above bound
means that once we fix s = ⌊log p − 1⌋, the non-malleability error
vanishes exponentially fast in t − θ, which implies that non-malleability is possible
even for state bounds close to log p − 1.
Another extreme, in the opposite direction, is the minimum state bound case, s = 1.
We want to know how small we can choose t and p while still having a reasonable
indistinguishability error ε. We substitute concrete values and estimate that for
a 10-bit prime p (log p = 10), choosing n = p − 1 and t = 300 (approximately n/3)
allows for ε = 2^{−50} against up to θ = 125 fully tampered shares.
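The concrete numbers above come from plugging parameters into the Theorem 3 bound; the following short Python snippet evaluates ε = (1/2)·2^s·(c_s + 2^{−2s−1})^{t−θ−1} for a few values of θ, so the reader can reproduce this kind of back-of-the-envelope estimate. The parameter choices below are assumptions for illustration.

    # Evaluate the Theorem 3 error bound for chosen (p, s, t, theta).
    import math

    def nm_error(p: int, s: int, t: int, theta: int) -> float:
        c_s = (2**s * math.sin(math.pi / 2**s)) / (p * math.sin(math.pi / p))
        return 0.5 * 2**s * (c_s + 2 ** (-2 * s - 1)) ** (t - theta - 1)

    p, s, t = 1021, 1, 300                     # 1021 is a 10-bit prime
    for theta in (125, 150, 175):
        print(theta, math.log2(nm_error(p, s, t, theta)))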
random [n, k, n − k + 1]_p punctured Reed-Solomon code over the finite field F_p admits
an exponentially small bound with exponentially small failure probability (see
Lemma 2 below).
An undesirable consequence of the Monte-Carlo nature of these constructions
is that we cannot directly combine them with the technique we used in the previous
subsection that exploits the difference between LR-SS and NM-SS. If we
were to apply the results of [MPSW21] in combination with the technique in the
previous subsection, we would have the guarantee that for each reconstruction
set, the non-malleability error is exponentially small except with exponentially
small probability. But NM-SS requires the non-malleability error to be negligible for
all reconstruction sets simultaneously. A naive union bound argument seems
insufficient for keeping the success probability close to 1 when the number of
reconstruction sets is large, which is typically the case when k = poly(log p).
We therefore only discuss the implication of the results concerning the physical-bit
attack functions, for which k can be chosen as small as the constant 2.
Lemma 2 ([MNP+21], Corollary 3). Let 0 < δ < ln 2 be an arbitrary constant. Let
RS[n, k; X]_p denote a random [n, k] punctured Reed-Solomon code over a finite field
F_p of prime order with evaluation places X ← (F_p^∗)^n. There exists a (slightly)
super-linear function P(·, ·) such that the following holds. For any block length
n ∈ N, code dimension 2 ≤ k ∈ N, physical-bit state bound s ∈ N, and indistin-
guishability error parameter ε = 2^{−κ}, there exists λ_0 = P(sn/k, κ/k) such that if
the number of bits λ needed to represent the order of the prime field F_p satisfies
λ > λ_0, then C ← RS[n, k; X]_p is ε-indistinguishable from uniform by physical-
bit s-bounded partitions with probability (over the randomness of choosing the
evaluation places X ← (F_p^∗)^n) at least 1 − exp(−δ · (κ − 1)λ).
In particular, a function P(sn/k, κ/k) = δ′ · ((sn/k)^2 + (κ/k) · log(κ/k) + κ/k), for an
appropriate universal positive constant δ′, suffices.
techniques that are applicable to most MPC protocols, we refer interested
readers to their work.
– Inner protocol. The action of the abstract server S_i above is in fact emulated
  by Alice and Bob running an inner protocol to compute the functionality G_i^j.
  Alice has

      (Sh_A(σ_i^{j−1}), μ_{i↔A}^{[j−1]}; μ_{i←A}^j; Sh_A(u_{i←1}^{j−1}), ..., Sh_A(u_{i←n}^{j−1})).

  Bob has

      (Sh_B(σ_i^{j−1}), μ_{i↔B}^{[j−1]}; μ_{i←B}^j; Sh_B(u_{i←1}^{j−1}), ..., Sh_B(u_{i←n}^{j−1})).

  The functionality G_i^j takes inputs from Alice and Bob, and proceeds by
  first reconstructing σ_i^{j−1} and u_{i←1}^{j−1}, ..., u_{i←n}^{j−1}, and then evaluating a circuit
  defined by nextMSG_i^j. The functionality divides each of the private values
  (σ_i^j and u_{i→1}^j, ..., u_{i→n}^j) into two shares and outputs them to Alice and Bob.
  The inner protocol π^OT is an OT-hybrid two-party computation protocol
  that computes the collection {G_i^j}_{i,j} (a schematic sketch of G_i^j is given below).
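A schematic Python sketch of the functionality G_i^j just described, using additive 2-out-of-2 sharing purely for illustration; the actual sharing scheme and nextMSG circuits are those of the compiled protocol and are abstracted away here.

    # Schematic G_i^j: recombine the emulated server's state and incoming
    # messages from Alice's and Bob's shares, apply nextMSG_i^j, then re-share
    # the new state and outgoing messages back to Alice and Bob.
    import random

    p = 2**13 - 1

    def reshare(x):                       # additive 2-out-of-2 sharing (assumed)
        a = random.randrange(p)
        return a, (x - a) % p

    def G_ij(alice_in, bob_in, next_msg):
        state = (alice_in["state"] + bob_in["state"]) % p
        incoming = [(a + b) % p for a, b in zip(alice_in["msgs"], bob_in["msgs"])]
        new_state, outgoing = next_msg(state, incoming)
        out_a = {"state": None, "msgs": []}
        out_b = {"state": None, "msgs": []}
        out_a["state"], out_b["state"] = reshare(new_state)
        for m in outgoing:
            sa, sb = reshare(m)
            out_a["msgs"].append(sa)
            out_b["msgs"].append(sb)
        return out_a, out_b

    # Toy nextMSG: new state = state + sum of inputs, broadcast the old state.
    toy_next = lambda st, ms: ((st + sum(ms)) % p, [st, st])
    a_out, b_out = G_ij({"state": 3, "msgs": [1, 2]}, {"state": 4, "msgs": [5, 6]}, toy_next)
    print((a_out["state"] + b_out["state"]) % p)      # reconstructs the new state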
An oblivious watch list technique was then used to build a compiler out of Π and
π^OT such that if Π is secure against an active adversary in the honest majority setting
and π^OT is private against a semi-honest adversary, then the compiled protocol
is secure against an active adversary. Intuitively, the semi-honest π^OT guarantees
private communication between the virtual servers and the oblivious watch list
technique enforces authenticated communication between the virtual servers,
which collectively reduces the security of the compiled protocol to the security
of Π with secure point-to-point channels.
We propose a general (abstract) framework for constructing non-malleable
two-party computation with/without tampering of the assumed ideal function-
ality OT.
Theorem 4. Let T be a tampering class for the OT functionality. Let F_T be the
tampering class for the communication channels between virtual servers induced
by executing π^OT under T-tampered OT. Let Π be an NM-MPC with respect to
F_T. There is a compiler for Π and semi-honest π^OT that gives a general purpose
non-malleable two-party computation protocol with respect to T-tampering of OT.
The proof of Theorem 4 is straightforward and is omitted. Interesting tamper-
ing classes studied in the implementation security literature are those capturing
powerful tampering adversaries against whom it is impossible to recover the
ideal functionality. The types of imperfectness studied in the line of works on
OT combiners [HKN+05] and more generally OT extractors [IKOS09] may serve
as examples of weak tampering adversaries (they are weak tampering because it
is possible to extract ideal OT from the tampered version). Here we briefly dis-
cuss the special case of T = ∅ (no tampering, OT is an ideal functionality). In
this special case, non-malleability becomes privacy against a malicious adversary
in the OT-hybrid model. We argue that Theorem 4 provides a conceptually very
simple way of achieving privacy against a malicious adversary for general purpose
two-party computation. The compiler can skip the oblivious watch list construc-
tion and naively compile Π and π^OT, which amounts to Alice and Bob running
semi-honest π^OT. Intuitively, the emulation of the NM-MPC protocol Π (with
respect to F_∅-tampering) serves as an encoding of the circuit C (in the sense of
[GIP+14]) such that semi-honestly evaluating the encoded circuit is private against
a malicious adversary. Note that we do not have an explicit construction of an NM-MPC
Π (with respect to F_∅-tampering) computing a circuit C, and it is not clear if
this approach yields efficient protocols.
5 Conclusion
We extended the non-malleability notion in tamper-resilient storage to tamper-resilient computation and defined Non-Malleable Multi-Party Computation (NM-MPC) using the standard ideal/real world formulation: the ideal world adversary is allowed to tamper with the output of the trusted party, yielding a way to relax correctness of secure computation in a privacy-preserving way. For MPC protocols in the honest majority setting, where efficient constructions with full security assuming secure point-to-point channels are well understood and no security is known without the assumption, we showed that non-malleability is achievable when the assumed channels are severely tampered with. For MPC protocols in the no honest majority setting, where weak secure computation notions play important roles, we discussed the implications of NM-MPC in the honest majority setting for two-party computation, via the MPC-in-the-head paradigm.
Acknowledgement
The author would like to thank Yuval Ishai for suggesting a technical strengthening in Definition 6, while a preliminary version of this paper was presented at ICCC 2022 (https://www.bilibili.com/video/BV1br4y1x7Qu/?spm_id_from=333.788.recommend_more_video.1).
References
ADN+ 19. Divesh Aggarwal, Ivan Damgård, Jesper Buus Nielsen, Maciej Obremski,
Erick Purwanto, João L. Ribeiro, and Mark Simkin. Stronger leakage-
resilient and non-malleable secret-sharing schemes for general access struc-
tures. In Advances in Cryptology - CRYPTO, pages 510–539, 2019.
AGM+ 15a. Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, and
Manoj Prabhakaran. Explicit non-malleable codes against bit-wise tam-
pering and permutations. In Advances in Cryptology - CRYPTO 2015,
pages 538–557, 2015.
AGM+ 15b. Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, and
Manoj Prabhakaran. A rate-optimizing compiler for non-malleable codes
against bit-wise tampering and permutations. In Theory of Cryptography
Conference, TCC 2015, pages 375–397, 2015.
AL17. Gilad Asharov and Yehuda Lindell. A full proof of the BGW protocol for
perfectly secure multiparty computation. J. Cryptol., 30(1):58–151, 2017.
BDIR18. Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, and Tal Rabin. On
the local leakage resilience of linear secret sharing schemes. In Advances
in Cryptology - CRYPTO 2018, pages 531–561, 2018.
BDIR21. Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, and Tal Rabin. On
the local leakage resilience of linear secret sharing schemes. J. Cryptol.,
34(2):10, 2021.
BDL01. Dan Boneh, Richard A. DeMillo, and Richard J. Lipton. On the impor-
tance of eliminating errors in cryptographic computations. J. Cryptol.,
14(2):101–119, 2001.
Bea89. Donald Beaver. Multiparty protocols tolerating half faulty processors. In
Advances in Cryptology - CRYPTO ’89, volume 435, pages 560–572, 1989.
BGJ+ 13. Elette Boyle, Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, and Amit
Sahai. Secure computation against adaptive auxiliary information. In
Advances in Cryptology - CRYPTO, pages 316–334, 2013.
BGJK12. Elette Boyle, Shafi Goldwasser, Abhishek Jain, and Yael Tauman Kalai.
Multiparty computation secure against continual memory leakage. In Sym-
posium on Theory of Computing Conference, STOC, pages 1235–1254,
2012.
BGW88. Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness the-
orems for non-cryptographic fault-tolerant distributed computation (ex-
tended abstract). In Proceedings of the 20th Annual ACM Symposium on
Theory of Computing, pages 1–10, 1988.
Bla79. George R. Blakley. Safeguarding cryptographic keys. In Proceedings of the
1979 AFIPS National Computer Conference, pages 313–317, 1979.
CCD88. David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty uncondi-
tionally secure protocols (extended abstract). In Symposium on Theory of
Computing, STOC, pages 11–19, 1988.
CDF+ 08. Ronald Cramer, Yevgeniy Dodis, Serge Fehr, Carles Padró, and Daniel
Wichs. Detection of algebraic manipulation with applications to robust
secret sharing and fuzzy extractors. In Advances in Cryptology - EURO-
CRYPT, volume 4965, pages 471–488, 2008.
CDN15. Ronald Cramer, Ivan Damgård, and Jesper Buus Nielsen. Secure Multi-
party Computation and Secret Sharing. Cambridge University Press, 2015.
CG16. Mahdi Cheraghchi and Venkatesan Guruswami. Capacity of non-malleable
codes. IEEE Trans. Information Theory, 62(3):1097–1118, 2016.
CG17. Mahdi Cheraghchi and Venkatesan Guruswami. Non-malleable coding
against bit-wise and split-state tampering. J. Cryptology, 30(1):191–241,
2017.
CM97. Christian Cachin and Ueli M. Maurer. Unconditional security against
memory-bounded adversaries. In Advances in Cryptology - CRYPTO ’97,
pages 292–306, 1997.
Des94. Yvo Desmedt. Threshold cryptography. Eur. Trans. Telecommun., 5:449–458, 1994.
DH76. Whitfield Diffie and Martin E Hellman. New directions in cryptography.
IEEE Transactions on Information Theory, 22(6):644–654, 1976.
DKRS06. Yevgeniy Dodis, Jonathan Katz, Leonid Reyzin, and Adam Smith. Robust
fuzzy extractors and authenticated key agreement from close secrets. In
Advances in Cryptology-CRYPTO 2006, pages 232–250. Springer, 2006.
DN07. Ivan Damgård and Jesper Buus Nielsen. Scalable and unconditionally
secure multiparty computation. In Advances in Cryptology - CRYPTO
2007, volume 4622, pages 572–590, 2007.
DPW18. Stefan Dziembowski, Krzysztof Pietrzak, and Daniel Wichs. Non-
malleable codes. J. ACM, 65(4):20:1–20:32, 2018.
FGJ+ 19. Nils Fleischhacker, Vipul Goyal, Abhishek Jain, Anat Paskin-Cherniavsky,
and Slava Radune. Interactive non-malleable codes. In Theory of Cryp-
tography TCC, pages 233–263, 2019.
FV19. Antonio Faonio and Daniele Venturi. Non-malleable secret sharing in the
computational setting: Adaptive tampering, noisy-leakage resilience, and
improved rate. In Advances in Cryptology - CRYPTO, pages 448–479,
2019.
GIP+ 14. Daniel Genkin, Yuval Ishai, Manoj Prabhakaran, Amit Sahai, and Eran
Tromer. Circuits resilient to additive attacks with applications to secure
computation. In Symposium on Theory of Computing, STOC, pages 495–
504, 2014.
GK18. Vipul Goyal and Ashutosh Kumar. Non-malleable secret sharing. In ACM
SIGACT Symposium on Theory of Computing, STOC 2018, pages 685–
698, 2018.
GMW87. Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental
game or A completeness theorem for protocols with honest majority. In
Symposium on Theory of Computing STOC, pages 218–229, 1987.
GPR16. Vipul Goyal, Omkant Pandey, and Silas Richelson. Textbook non-
malleable commitments. In ACM SIGACT Symposium on Theory of Com-
puting, STOC, pages 1128–1141, 2016.
HKN+ 05. Danny Harnik, Joe Kilian, Moni Naor, Omer Reingold, and Alon Rosen.
On robust combiners for oblivious transfer and other primitives. In Ad-
vances in Cryptology - EUROCRYPT, pages 96–113, 2005.
IKOS09. Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Ex-
tracting correlations. In Symposium on Foundations of Computer Science,
FOCS, pages 261–270, 2009.
IPS08. Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. Founding cryptography
on oblivious transfer - efficiently. In Advances in Cryptology - CRYPTO,
pages 572–591, 2008.
IPS09. Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. Secure arithmetic com-
putation with no honest majority. In Theory of Cryptography, TCC, pages
294–314, 2009.
Li18. Xin Li. Pseudorandom correlation breakers, independence preserving
mergers and their applications. Electronic Colloquium on Computational
Complexity (ECCC), 25:28, 2018.
Mau92. Ueli M. Maurer. Protocols for secret key agreement by public discussion
based on common information. In Advances in Cryptology - CRYPTO ’92,
volume 740, pages 461–470, 1992.
Mau93. Ueli M Maurer. Secret key agreement by public discussion from common
information. IEEE Transactions on Information Theory, 39(3):733–742,
1993.
MNP+ 21. Hemanta K. Maji, Hai H. Nguyen, Anat Paskin-Cherniavsky, Tom Suad,
and Mingyuan Wang. Leakage-resilience of the Shamir secret-sharing
scheme against physical-bit leakages. In Advances in Cryptology - EU-
ROCRYPT, pages 344–374, 2021.
MPSW21. Hemanta K. Maji, Anat Paskin-Cherniavsky, Tom Suad, and Mingyuan
Wang. Constructing locally leakage-resilient linear secret-sharing schemes.
In Advances in Cryptology - CRYPTO, pages 779–808, 2021.
MR18. Payman Mohassel and Peter Rindal. ABY3: A mixed protocol framework
for machine learning. In Proceedings of the 2018 ACM SIGSAC Conference
on Computer and Communications Security, CCS, pages 35–52, 2018.
RB89. Tal Rabin and Michael Ben-Or. Verifiable secret sharing and multiparty
protocols with honest majority (extended abstract). In Symposium on
Theory of Computing, STOC, pages 73–85, 1989.
Sha79. Adi Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
TX21. Ivan Tjuawinata and Chaoping Xing. Leakage-resilient secret sharing with
constant share size. 2021.
Yao82. Andrew Chi-Chih Yao. Protocols for secure computations (extended ab-
stract). In Foundations of Computer Science, FOCS, pages 160–164, 1982.
Supplementary Materials
A Execution of MPC protocol under tampering
An MPC protocol Π for evaluating an arithmetic circuit C evaluates different types of gates in a topological order determined by the design of the protocol and the circuit structure. Each gate typically contains several rounds of communication, and in each round a party transmits one message to other parties (here the term round is to be distinguished from its usual usage in the honest majority MPC literature, where multiple senders are allowed to send messages simultaneously). Here we enforce a sequential order for the rounds from the beginning of the protocol till the end of the protocol and assume every party knows about this message topology. Most gates are evaluated by the n servers, except the input sharing gates, where the m clients also take part. We first formally describe the processing of one round evaluated by the n servers under tampering. Assume it is server S_i's turn to activate his/her next message function nextMSG_i^j in the jth round. The function nextMSG_i^j is evaluated to compute the next messages for server S_i to be sent to the other servers,

  (m_{i,1}^j, . . . , m_{i,i−1}^j, m_{i,i}^j, m_{i,i+1}^j, . . . , m_{i,n}^j) := nextMSG_i^j(view_i^Π, R_i^j),

where R_i^j is the local randomness of the server S_i generated in the jth round and view_i^Π denotes the collection of the view of S_i up to this point. The tampering function f_i^j for the currently transmitted messages of server S_i is applied and results in

  (m̃_{i,1}^j, . . . , m̃_{i,i−1}^j, m̃_{i,i+1}^j, . . . , m̃_{i,n}^j) := f_i^j(previousMSG, (m_{i,1}^j, . . . , m_{i,i−1}^j, m_{i,i+1}^j, . . . , m_{i,n}^j)),
where the local randomness R_i^j of the server S_i and the message m_{i,i}^j that S_i keeps on his/her own are not tampered with. In the input sharing phase of an MPC protocol, the m clients provide the input in the form of secret shares to the servers. We model the input sharing phase to be interactive and multiple-round, as in the rest of the protocol. The communication only happens between the clients and the servers (the servers do not communicate with each other and the clients do not communicate with each other). We still denote the tampering function for the messages transmitted by the server S_i by f_i^j, although the messages of S_i in this case consist of m components intended for the m clients. Let the tampering function for the messages transmitted by the client C_i be denoted by f_{n+i}^j (we consider C_i as the (n + i)th party). The messages transmitted by the client C_i consist of n components intended for the n servers.
In general, the tampering of an MPC protocol with a total of r rounds can be described by the following sequence of tampering functions:

  { {f_i^j}_{i∈[n]},      if the jth round is not a round of an input sharing gate;
    {f_i^j}_{i∈[n+m]},    if the jth round is a round of an input sharing gate }  for j = 1, . . . , r.
The execution of an MPC protocol under corruption means that there is a set S of t corrupt servers whose views the adversary can read in real time, and the adversary can make these servers modify their next message functions in a coordinated way. When analysing the combined influence, through controlling the corrupt servers and through tampering with the transmitted messages, we sometimes assume that the adversary does not tamper with the channels of the corrupt servers. This is without loss of generality, because the adversary can always make the corrupt servers further modify their next message functions in order to achieve what it could have additionally achieved through tampering with the messages the corrupt servers receive/send. In the sequel, we only need to consider the tampering of the communication between honest parties,

  { {f_i^j}_{i∈S̄},                     if the jth round is not a round of an input sharing gate;
    {f_i^j}_{i∈S̄∪{n+1,...,n+m}},       if the jth round is a round of an input sharing gate }  for j = 1, . . . , r,
where each f_i^j is described by its per-channel components, m̃_{i,i′}^j := f_{i,i′}^j(previousMSG_{i,i′}, m_{i,i′}^j), and previousMSG_{i,i′} denotes the subset of previous messages that are transmitted between the server S_i and the server S_{i′}. We collect the tampering functions that are applied to the messages transmitted between the server S_i and the server S_{i′}, and define a tampering function f_{Ch(i,i′)} for the channel Ch(i, i′):

  f_{Ch(i,i′)} = { (f_{i,i′}^j, f_{i′,i}^j) : the jth round is not a round of an input sharing gate }.
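Purely as an illustration of the information flow described in this appendix (and not of any concrete MPC protocol), the following sketch runs a single round for one server: the next-message function and the tampering function are placeholders, and the tampering function is given the previous transcript exactly as f_i^j is above, while the channels of corrupt servers are left untouched.

from typing import Callable, List, Tuple

Transcript = List[Tuple[int, int, int]]   # (sender, receiver, message) triples

def run_round(i: int,
              view_i: List[int],
              randomness_i: int,            # R_i^j, never tampered with
              next_msg: Callable[[List[int], int], List[int]],
              previous_msgs: Transcript,    # previousMSG
              tamper: Callable[[Transcript, List[int]], List[int]],
              honest: bool) -> List[int]:
    # one round of server S_i: compute the outgoing messages, then apply f_i^j
    outgoing = next_msg(view_i, randomness_i)      # (m_{i,1}^j, ..., m_{i,n}^j) without m_{i,i}^j
    if honest:
        outgoing = tamper(previous_msgs, outgoing) # f_i^j(previousMSG, outgoing)
    # channels of corrupt servers are left untampered (without loss of generality)
    return outgoing

# example with three servers: S_0 sends sum(view) + R to the two others, and the
# tampering function adds an offset to the message destined for server 1
delivered = run_round(i=0, view_i=[3, 4], randomness_i=5,
                      next_msg=lambda view, r: [sum(view) + r] * 2,
                      previous_msgs=[],
                      tamper=lambda prev, msgs: [msgs[0] + 1] + msgs[1:],
                      honest=True)
print("messages delivered by S_0 this round:", delivered)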
B Linear-based protocol with illustration
Definition 12 ([GIP+ 14]). Let n = 2t + 1 and let (Share_{n,t}, Recover_{n,t}) be a redundant dense linear secret sharing scheme. An n-server m-client protocol Π for computing a single-output m-client circuit C : F^{u_1} × · · · × F^{u_m} → F^u is said to be linear-based with respect to (Share_{n,t}, Recover_{n,t}) if Π has the following structure, with linear protocols (defined immediately afterwards) as internal components.
1. Setup phase. During this phase all servers participate in some linear protocol Π_setup that gets no auxiliary inputs. At the end of this phase every server S_i holds a vector of shares setupSh_i^{g^k} for every multiplication gate g^k ∈ G^Π_mult in C.
2. Randomness generation phase. During this phase all servers participate in some linear protocol Π_random that gets no auxiliary inputs. At the end of this phase every server S_i holds a share randSh_i^{g^k} for every randomness gate g^k ∈ G^Π_rand in C.
3. Input sharing phase. Π processes every input gate g^k ∈ G^Π_input in C belonging to a client as follows. The client shares its input x for g^k using (Share, Recover) and then sends each server S_i its corresponding share Sh_i^{g^k}.
4. Circuit evaluation phase. Π computes C in stages. During the k-th stage in an honest execution, the k-th gate, g^k, inside C is evaluated (in some topological order) and at the end of the stage the servers hold a sharing of the output of g^k with a distribution induced by Share. The evaluation of each gate is done as follows.
   (a) If g^k is an addition gate with inputs g^a and g^b, Π evaluates g^k by having each server S_i sum its shares corresponding to the outputs of g^a and g^b. Similarly, for a subtraction gate, S_i subtracts its shares corresponding to the outputs of g^a and g^b. There is no communication during these rounds. Each server S_i holds a share Sh_i^{g^k} of the output of gate g^k.
   (b) If g^k is a multiplication gate with inputs g^a and g^b, Π evaluates g^k using some n-party linear protocol Π_mult such that the main inputs of the i-th server S_i to Π_mult are its shares Sh_i^{g^a} and Sh_i^{g^b} corresponding to the outputs of g^a and g^b. The auxiliary input of S_i to Π_mult is setupSh_i^{g^k}, which is the result of the setup phase associated with g^k. Each server S_i holds a share Sh_i^{g^k} of the output of gate g^k.
5. The protocol finishes before the output gate g^out in C is processed and each server S_i holds a share Sh_i^{g^out} of the output of gate g^out.
The notion of linear protocols is an abstraction of the internal components
Πsetup , Πrandom and Πmult of linear-based MPC protocols.
Definition 13 ([GIP+ 14]). An n-party protocol Π is said to be a linear protocol over some finite field F if Π has the following properties.
1. Inputs. The input of every server Si is a vector of field elements from F.
Moreover, Si ’s inputs can be divided into two distinct types, the main inputs
and auxiliary inputs.
2. Messages. Recall that each message in Π is a vector of field elements from
F. We require that every message m of Π, sent by some server Si , belongs
to one of the following categories:
(a) m is some fixed arbitrary function of Si ’s main inputs (and is indepen-
dent of its auxiliary inputs).
(b) every entry mj of m is generated as some fixed linear combination of
Si ’s auxiliary inputs and elements of previous messages received by Si .
3. The output of every party Si is a linear function of its incoming messages.
Let (Share, Recover) be the Shamir secret sharing scheme. The semi-honest DN [DN07] n-server m-client protocol Π for computing a single-output m-client circuit C : F^{u_1} × · · · × F^{u_m} → F^u, where n = 2t + 1, is given as follows (double-random(·) and random(·) are defined immediately afterwards).
1. Setup phase. During this phase all servers participate in the linear protocol Π_setup = double-random(ℓ_mult) in order to generate the randomness needed for the evaluation of multiplication gates during the protocol, where ℓ_mult denotes the number of multiplication gates. At the end of this phase every server S_i holds a vector of shares setupSh_i^{g^k} = (r_i^{g^k}, R_i^{g^k}) for every multiplication gate g^k ∈ G^Π_mult in C.
2. Randomness generation phase. During this phase all servers participate in the linear protocol Π_random = random(ℓ_rand) in order to generate the shares corresponding to the outputs of the randomness gates inside C, where ℓ_rand denotes the number of randomness gates. At the end of this phase every server S_i holds a share randSh_i^{g^k} = r_i^{g^k} for every randomness gate g^k ∈ G^Π_rand in C.
3. Input sharing phase. Π processes every input gate g^k ∈ G^Π_input in C belonging to a client as follows. The client shares its input x for g^k using (Share, Recover) and then sends each server S_i its corresponding share Sh_i^{g^k}.
4. Circuit evaluation phase. Π computes C in stages. During the k-th stage in an honest execution, the k-th gate, g^k, inside C is evaluated (in some topological order) and at the end of the stage the servers hold a sharing of the output of g^k with a distribution induced by Share. The evaluation of each gate is done as follows.
   (a) If g^k is an addition gate with inputs g^a and g^b, Π evaluates g^k by having each server S_i sum its shares corresponding to the outputs of g^a and g^b. Similarly, for a subtraction gate, S_i subtracts its shares corresponding to the outputs of g^a and g^b. There is no communication during these rounds. Each server S_i holds a share Sh_i^{g^k} of the output of gate g^k.
   (b) If g^k is a multiplication gate with inputs g^a and g^b, Π evaluates g^k using the following n-party linear protocol Π_mult. The main inputs of the i-th server S_i to Π_mult are its shares Sh_i^{g^a} and Sh_i^{g^b} corresponding to the outputs of g^a and g^b. The auxiliary input of S_i to Π_mult is setupSh_i^{g^k} = (r_i^{g^k}, R_i^{g^k}), which is the result of the setup phase associated with g^k. Server S_i then does the following.
      i. Compute Sh_i = Sh_i^{g^a} · Sh_i^{g^b} + R_i^{g^k} and send Sh_i to S_1.
      ii. S_1, upon receiving the shares (Sh_1, . . . , Sh_n) from all the servers, computes D = Recover_[n](Sh_1, . . . , Sh_n) and sends D to all the servers.
      iii. Each server S_i, upon receiving a value D from S_1, computes Sh_i^{g^k} = D − r_i^{g^k}.
      Note that Share(D) = (D, . . . , D) when the random polynomial chosen by the share algorithm has zero coefficients for all positive degree terms. Each server S_i then holds a share Sh_i^{g^k} of the output of gate g^k.
5. The protocol finishes before the output gate g^out in C is processed and each server S_i holds a share Sh_i^{g^out} of the output of gate g^out.
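To make steps i–iii concrete, the following self-contained sketch runs the multiplication sub-protocol over a toy prime field. The double-random pair (r_i^{g^k}, R_i^{g^k}) is produced by a dealer here purely for illustration; in the protocol above it is the output of the linear setup protocol double-random(ℓ_mult).

import random

P = 2_147_483_647          # prime modulus of the field F_P
t = 2                      # privacy threshold
n = 2 * t + 1              # number of servers, evaluation points 1..n

def shamir_share(secret, degree):
    # Shamir sharing of `secret` with a random polynomial of the given degree
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at 0 using all n shares (Recover_[n])
    secret = 0
    for j, y_j in enumerate(shares, start=1):
        num, den = 1, 1
        for m in range(1, n + 1):
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        secret = (secret + y_j * num * pow(den, P - 2, P)) % P
    return secret

def double_random():
    # one (r^{g_k}, R^{g_k}) pair: degree-t and degree-2t sharings of the same r
    r = random.randrange(P)
    return shamir_share(r, t), shamir_share(r, 2 * t)

def dn_multiply(shares_a, shares_b):
    r_shares, R_shares = double_random()                      # setup phase output
    masked = [(shares_a[i] * shares_b[i] + R_shares[i]) % P   # step i: send Sh_i to S_1
              for i in range(n)]
    D = recover(masked)                                       # step ii: S_1 recovers and broadcasts D
    return [(D - r_shares[i]) % P for i in range(n)]          # step iii: Sh_i^{g_k} = D - r_i^{g_k}

# multiply two shared secrets and check that the result is a sharing of a*b
a, b = 123456, 789012
shares_ab = dn_multiply(shamir_share(a, t), shamir_share(b, t))
print("DN multiplication recovers a*b:", recover(shares_ab) == (a * b) % P)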
Definition 14. Let F be a finite field. Let M ∈ Fr×c be a matrix with the number
r of rows bigger than the number c of columns. The matrix M is said to be super
invertible if any sub-matrix formed by c rows of M is invertible.
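A standard example of a super invertible matrix is a rectangular Vandermonde matrix with distinct evaluation points: every sub-matrix formed by c of its rows is again a square Vandermonde matrix and hence invertible. The brute-force check below verifies this for toy parameters; the field and dimensions are arbitrary.

from itertools import combinations

P = 101
r, c = 6, 3
M = [[pow(x, j, P) for j in range(c)] for x in range(1, r + 1)]   # row x is (1, x, x^2)

def det_mod_p(rows):
    # determinant over F_P via Gaussian elimination with modular inverses
    a = [row[:] for row in rows]
    size = len(a)
    det = 1
    for i in range(size):
        pivot = next((k for k in range(i, size) if a[k][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det = det * a[i][i] % P
        inv = pow(a[i][i], P - 2, P)
        for k in range(i + 1, size):
            factor = a[k][i] * inv % P
            a[k] = [(a[k][j] - factor * a[i][j]) % P for j in range(size)]
    return det % P

assert all(det_mod_p([M[i] for i in rows]) != 0
           for rows in combinations(range(r), c))
print("every", c, "rows of this", r, "x", c, "Vandermonde matrix form an invertible matrix")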
We write C : F^{u_1} × · · · × F^{u_m} → F^u to indicate that C is an arithmetic circuit over F with m inputs and one single output. We denote by |C| the number of gates in C. For an input x ∈ F^{u_1} × · · · × F^{u_m} we denote by C(x) the result of evaluating C on x if C is deterministic, and the resulting distribution if C is randomised.
An additive attack A on a deterministic or randomised circuit C assigns an element of F^u to each of its internal wires as well as to each of its outputs. We denote by A^{u,v} the attack A restricted to the wire (u, v). For every wire (u, v) in C, the value A^{u,v} is added to the output of u before it enters the inputs of v. Similarly, we denote by A^out the restriction of A to the outputs of C, and the value A^out is added to the outputs of C. For simplicity, we assume u_1 = · · · = u_m = u.
Definition 16 (Additively corruptible version of a circuit). Let C : F^{u_1} × · · · × F^{u_m} → F^u be a circuit containing w wires. The additively corruptible version of C is the functionality f̃_C that takes an additional input from the adversary, A, which indicates an additive corruption for every wire of C. For all (x, A), f̃_C(x, A) outputs the result of the additively corrupted C, as specified by the additive attack A, when invoked on the inputs x.
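As a toy rendering of Definition 16, the routine below evaluates a two-gate circuit while adding adversarially chosen offsets to an internal wire and to the output; the circuit, the field size and the attack values are arbitrary and only show where the values A^{u,v} and A^out enter.

P = 101   # toy field F_P

def eval_with_additive_attack(x1, x2, A):
    # C(x1, x2) = (x1 + x2) * x2 under an additive attack A on its wires
    w1 = (x1 + x2 + A.get("add->mult", 0)) % P   # offset on the wire from the addition gate
    w2 = (w1 * x2) % P                           # output of the multiplication gate
    return (w2 + A.get("out", 0)) % P            # A^out is added to the circuit output

print("honest evaluation:   ", eval_with_additive_attack(3, 4, {}))
print("additively corrupted:", eval_with_additive_attack(3, 4, {"add->mult": 5, "out": 1}))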
Definition 17 ([GIP+ 14]). A randomised circuit Ĉ : F^{u_1} × · · · × F^{u_m} → F^u is an ε-secure implementation of a function C : F^{u_1} × · · · × F^{u_m} → F^u against additive attacks if the following holds:
– Completeness. For all x ∈ F^{u_1} × · · · × F^{u_m}, it holds that Pr[Ĉ(x) = C(x)] = 1.
– Additive-attack security. For any circuit C̃ obtained by subjecting Ĉ to an additive attack, there exist a ∈ F^{u_1} × · · · × F^{u_m} and a distribution A over F^u such that for any x ∈ F^{u_1} × · · · × F^{u_m}, it holds that

  SD( C̃(x); C(x + a) + A ) ≤ ε.

The definition naturally extends to the case when the functionality computed by C is randomised.
Lemma 3 ([GIP+ 14]). For any finite field F and arithmetic circuit C : F^{u_1} × · · · × F^{u_m} → F^u, there exists a randomised circuit Ĉ : F^{u_1} × · · · × F^{u_m} → F^u of size O(|C|) such that Ĉ is an O(|C|/|F|)-secure implementation of C against additive attacks.
The following private version of AMD codes [CDF+ 08] is due to [GIP+ 14].
Definition 18. A (u, u′, ε)-AMD code is a pair of circuits (AMDEnc, AMDDec), where AMDEnc : F^u → F^{u′} is randomised and AMDDec : F^{u′} → F × F^u is deterministic, such that the following properties hold:
– Perfect completeness. For all x ∈ F^u, it holds that Pr[AMDDec(AMDEnc(x)) = (0, x)] = 1.
– Additive robustness. For any ∆ ∈ F^{u′}, ∆ ≠ 0^{u′}, and for any x ∈ F^u, it holds that

  Pr[AMDDec(AMDEnc(x) + ∆) ∉ ERR] ≤ ε,

where ERR = F∗ × F^u denotes detection of error.
Moreover, (AMDEnc, AMDDec) is called a private (u, u′, ε)-AMD code if for any ∆ ∈ F^{u′}, ∆ ≠ 0^{u′}, any y ∈ F∗ × F^u and any x_0, x_1 ∈ F^u, it holds that

  Pr[AMDDec(AMDEnc(x_0) + ∆) = y] = Pr[AMDDec(AMDEnc(x_1) + ∆) = y].
Private AMD codes can be constructed from plain AMD codes (AMDEnc, AMDDec) by modifying the decoder as follows. On input a codeword c:
1. Compute (b, z) = AMDDec(c).
2. Output (0, z) + b · r, where r is generated uniformly from F × F^u.
The above trick, together with the known asymptotically optimal constructions of AMD codes [DKRS06,CDF+ 08], immediately yields the following.
Lemma 4. For any positive integers u and σ, there exists a pair of circuits (AMDEnc, AMDDec) such that for any finite field F it holds that (AMDEnc, AMDDec) is a private (u, O(u + σ), 1/|F|^σ)-AMD code. Moreover, the size of AMDEnc and AMDDec is O(u + σ).
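For reference, the sketch below implements the classical AMD code of [CDF+ 08], with f(x, r) = r^{d+2} + Σ_i x_i r^i and detection error roughly (d + 1)/|F|, together with the privacy-adding decoder modification described above. The parameter choices are illustrative, and this is not the rate-optimised code promised by Lemma 4.

import random

P = 2_147_483_647    # prime field; we also need P not to divide d + 2

def amd_encode(x):
    # encode x in F_P^d as (x, r, f(x, r)) with f(x, r) = r^{d+2} + sum_i x_i r^i
    d = len(x)
    r = random.randrange(P)
    tag = (pow(r, d + 2, P) + sum(xi * pow(r, i + 1, P) for i, xi in enumerate(x))) % P
    return list(x), r, tag

def amd_decode(codeword):
    # return (b, x): b = 0 signals "no manipulation detected", b != 0 signals ERR
    x, r, tag = codeword
    d = len(x)
    expected = (pow(r, d + 2, P) + sum(xi * pow(r, i + 1, P) for i, xi in enumerate(x))) % P
    return (tag - expected) % P, list(x)

def private_amd_decode(codeword):
    # the decoder modification from the text: output (0, z) + b * r with a fresh
    # uniform r from F x F^u, so that a detected error reveals nothing about z
    b, z = amd_decode(codeword)
    fresh = [random.randrange(P) for _ in range(len(z) + 1)]
    out = [0] + z
    return [(out[i] + b * fresh[i]) % P for i in range(len(out))]

# honest decoding returns flag 0; an additive offset on the message part is
# detected except with probability about d / P (over the encoder's choice of r)
msg = [10, 20, 30]
codeword = amd_encode(msg)
assert amd_decode(codeword) == (0, msg)
x, r, tag = codeword
tampered = ([(x[0] + 1) % P] + x[1:], r, tag)
print("tampering detected:", amd_decode(tampered)[0] != 0)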
The construction of [GIP+ 14] (see Construction 1) starts with private MPC protocols and strengthens them for protection against the deviation of an active Adv from the protocol. Intuitively, the linear-based protocols privately evaluate a circuit by sharing the inputs using the same linear secret sharing scheme (with independent randomness each time the sharing algorithm is invoked) and, with the help of the homomorphic property of the secret sharing scheme, having the n servers operate on the secrets through operating on the shares. The homomorphic property of the underlying secret sharing also plays a crucial role in analysing what form of influence the deviation of a malicious adversary has over the execution of these protocols, which are designed for privacy only. Our focus is the role of the assumption of secure point-to-point communication channels. A secure communication channel is both private and authenticated.
The authenticity of the communication channel guarantees that among the n
shares for a secret held by a client or an honest server, n − t shares are correctly
received by honest servers, who, according to the definition of honest servers,
will follow the protocol and correctly operate on these correct shares. Roughly
speaking, since the n − t shares out of the total n shares contain full information
about the shared secret (honest majority), the execution of the protocol will
not deviate from its course “too much”, no matter how the corrupt servers may
deviate from the protocol. In particular, the authenticity guarantee allows for
an analysis that interprets the difference between what corrupt servers ought
to do and what they actually do as a blind additive offset for the circuit. More
concretely, the analysis basically “ignores” the shares of the corrupt servers S and only considers the shares in S̄ (the honest servers). By doing this, the influence of
Adv is limited to the gates that require communication among servers (usually
the input gate and multiplication gate), where corrupt servers can influence the S̄
part through sending “wrong” messages to the honest servers S̄. For each corrupt
server, the difference between the n−t “wrong” shares and n−t “correct shares”
(if the server were to follow the protocol) determines an offset in the secret.
The privacy of the communication channel guarantees that only the t shares
of S are seen by the adversary. According to the privacy of the secret sharing
scheme, these t shares do not contain any information about the secret, and
hence have a distribution independent of the other n − t shares. This allows one
to claim that the offset described above is only blindly chosen. Finally, the fact that the offset is additive follows from the homomorphic property we mentioned earlier, which is necessary for the protocol even when there is no adversary.
Upon interpreting the influence of a deviation of an active Adv as a set of blind additive offsets added to the wires of the circuit being evaluated, one can seek a circuit protection solution to achieve robustness of MPC protocols. This solution takes two steps (see Construction 1, the circuit Π(C) construction). The first
step is to protect the inputs and the output of the circuit against additive attack
using an AMD code. In particular, the encoding of the input and decoding of the
output are done by clients locally while the decoding of the input (before com-
puting) and encoding of the output (before outputting the result to the receiver
client) are processed collectively by the n servers through privately evaluating
some augmenting gates of the circuit. The second step is to compile the augmented circuit from the previous step into a protected version such that any blind
additive corruption of the wires is turned into additive attacks at the input and
output only (see Definition 17).
D Proof of Theorem 2
To combine the NM-SS over prime fields with other building blocks, we need an F_p-linear F^s_{SS,θ}-NM-SS over an extension field F_{p^u}.
Theorem 5. Let p be a prime number and s < log p. There is an F_p-linear F^s_{SS,θ}-NM-SS over F_{p^u} with privacy threshold t and non-malleability error ε = (u/2) · (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−θ−1}, where c_s = (2^s sin(π/2^s)) / (p sin(π/p)).
Proof. Let (Share, Recover) be Shamir's secret sharing scheme over F_p with privacy threshold t for n players. We construct (u-Share, u-Recover) over F_{p^u}.
The output of the sharing algorithm is then
Proof (Theorem 2). The construction closely follows the secure with abort MPC
construction of [GIP+ 14]. We introduce a virtual output extractor to simplify the
proof (this also captures the type of applications where the output remains in
distributed form as shares of the underlying secret sharing scheme).
The fact that Π (t, ε)-securely computes C with abort was shown in [GIP+ 14].
In the sequel, we prove non-malleability of Π against channel tampering using
the non-malleability of the underlying secret sharing.
We start with an observation that greatly simplifies the construction of the non-malleability simulator (the simulation of the honest party's output) for Π. This observation is in fact a consequence of the non-malleability of the secret sharing and the definition of F^s_{SS,θ}. It can be shown, using an argument similar to [CG16, Theorem 5.3], that if a secret sharing scheme is non-malleable with respect to F^s_{SS,θ}, then the tampering experiment Tamper^{f,R}_s of the secret sharing scheme is strictly independent of the secret s, for any tampering function f ∈ F^s_{SS,θ} and any reconstruction set R. This is slightly stronger than the plain NM-SS definition, where the tampering experiment Tamper^{f,R}_s is allowed to depend on the secret s if its simulation D_{f,R} outputs the symbol same∗ with non-zero probability.
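In other words, the guarantee used below can be written as

  Tamper^{f,R}_{s_0} ≈ Tamper^{f,R}_{s_1}   for all secrets s_0, s_1, every f ∈ F^s_{SS,θ} and every reconstruction set R,

up to the non-malleability error, whereas plain non-malleability only requires each Tamper^{f,R}_s to be close to a simulated distribution D_{f,R} that may output same∗.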
Let G^{Π(C)} be the set of gates in the circuit for computing C according to Π that require secure point-to-point channels in processing them. With the above observation, it follows that once the shares of a secret are tampered with using a function from F^s_{SS,θ}, non-malleability of (NMShare, NMRecst) guarantees that the information about the original secret is destroyed, in the sense that, given a reconstruction set R, the new secret contained in the tampered shares corresponding to R can be simulated without knowing the original secret. This in particular means that one does not need to keep track of the output values (in general they are random variables, with randomness from the randomised sharing algorithm NMShare) of the erroneously evaluated intermediate gates, since they will not affect the output of g^out ∈ Π(C) if there is a gate g^k ∈ G^{Π(C)} lying between g^out and the intermediate gates. Our non-malleability simulator for Π works in reverse topological order, starting from the output gate g^out ∈ Π(C). If the output gate g^out ∈ G^{Π(C)}, then S̃h_R^{g^out}, which determines the value (distribution) of the output of the virtual output extractor, can be directly simulated using the non-malleability simulator of (NMShare, NMRecst) and the deviation of the corrupt servers. If g^out ∉ G^{Π(C)}, we need to look at the gates in G^{Π(C)} that are evaluated before the output gate g^out and do not have gates in G^{Π(C)} lying between g^out and them. Let us denote these gates by G^{Π(C),last} and the vector of shares corresponding to these gates, for server S_i, i ∈ R, by S̃h_i^{G^{Π(C),last}}. The shares S̃h_R^{g^out} can then be simulated by computing, for each server S_i in R, a share S̃h_i^{g^out} from S̃h_i^{G^{Π(C),last}}. Indeed, for servers in R\S, the share S̃h_i^{g^out} is computed from S̃h_i^{G^{Π(C),last}} according to how Adv would deviate from the protocol. Depending on the specific construction of Π_Priv, the set G^{Π(C),last} can be different. We next explicitly describe our non-malleability simulator for Π.
1. Simulating the shares S̃h_R^{g^out} of the output gate g^out:
   – g^out ∈ G^{Π(C)}. Directly simulate:
     (a) Read from f^{S̄,G^{Π(C)}} the subset {f^{j,g^out} | j ∈ S̄} of tampering functions corresponding to the honest servers S̄ for the output gate g^out and, for each j ∈ S̄, call the non-malleability simulator of (NMShare, NMRecst) with tampering function f^{j,g^out} and reconstruction set R:
        i. s̃ ← D_{f^{j,g^out},R};
        ii. (S̃h_1^{j,g^out}, . . . , S̃h_n^{j,g^out}) ← NMShare(s̃);
        iii. output S̃h_{R\S}^{j,g^out}.
     (b) Read from Adv the messages that it instructs the corrupt servers S to send to the honest servers in R\S, and the final share for the output gate if the corrupt server is in R ∩ S. For each j ∈ S:
        i. output S̃h_{R\S}^{j,g^out};
        ii. if j ∈ S ∩ R, output S̃h_j^{g^out}.
     (c) For each j ∈ R\S, compute according to the protocol for the gate g^out the share S̃h_j^{g^out} from the received messages (S̃h_j^{1,g^out}, . . . , S̃h_j^{n,g^out}) obtained in steps (a) and (b).
   – g^out ∉ G^{Π(C)}. Find G^{Π(C),last} according to the construction of Π_Priv and simulate the following.
     • Π_Priv without setup phase (e.g. [BGW88]): the gates in G^{Π(C)} that require communication to process are the multiplication gates and the input gates.
       (a) Simulating the shares S̃h_R^{g^k} for each g^k ∈ G^{Π(C),last}.
           ∗ If g^k ∈ G^{Π(C),last} is an input gate, proceed as in the “Directly simulate” steps described for the g^out ∈ G^{Π(C)} special case above, with the simplification that the non-malleability simulator of (NMShare, NMRecst) is only called once and item (c) is empty.
           ∗ If g^k ∈ G^{Π(C),last} is a multiplication gate, proceed exactly as in the “Directly simulate” steps described for the g^out ∈ G^{Π(C)} special case above.
       (b) Computing the shares S̃h_R^{g^out} of the output gate g^out.
           For each j ∈ R\S, compute according to the protocol the share S̃h_j^{g^out} from the messages received in the previous steps. Read from Adv the final share S̃h_j^{g^out} of a corrupt server j for the output gate if the corrupt server is in R ∩ S.
     • Π_Priv with setup phase (e.g. [DN07]): the gates in G^{Π(C)} that require communication to process are the setup gates and the input gates. Moreover, the input gates lie between the output gate and the setup gates.
       (a) Simulating the shares S̃h_R^{g^k} for each g^k ∈ G^{Π(C),last}.
           Here g^k ∈ G^{Π(C),last} is always an input gate; proceed as in the “Directly simulate” steps described for the g^out ∈ G^{Π(C)} special case above, with the simplification that the non-malleability simulator of (NMShare, NMRecst) is only called once and item (c) is empty.
       (b) Computing the shares S̃h_R^{g^out} of the output gate g^out.
           For each j ∈ R\S, compute according to the protocol the share S̃h_j^{g^out} from the messages received in the previous steps. Read from Adv the final share S̃h_j^{g^out} of a corrupt server j for the output gate if the corrupt server is in R ∩ S.
2. Virtual output extractor:
   (a) On input a reconstruction set R ⊂ [n] of the underlying secret sharing scheme of Π_Priv, reconstruct the secret (b, z′) from S̃h_R^{g^out}.
   (b) If b ≠ 0, then output abort.
   (c) Otherwise, compute (b′, y) ← AMDDec(z′).
   (d) If b′ ≠ 0, then output abort.
   (e) Otherwise, output y.
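For concreteness, the virtual output extractor can be read as the small routine below, where reconstruct stands for the recovery algorithm of the underlying secret sharing of Π_Priv and amd_decode for AMDDec; both are placeholders to be instantiated with the schemes used by Π.

def virtual_output_extractor(shares_R, reconstruct, amd_decode):
    # (a) reconstruct the secret (b, z') from the shares for the reconstruction set R
    b, z_prime = reconstruct(shares_R)
    # (b) the flag of the outer encoding signals tampering
    if b != 0:
        return "abort"
    # (c)-(d) AMD-decode the payload and check its flag
    b_prime, y = amd_decode(z_prime)
    if b_prime != 0:
        return "abort"
    # (e) the actual output
    return y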