
Non-Malleable Multi-Party Computation

Fuchun Lin
Department of Electrical and Electronic Engineering, Imperial College London, UK.
E-mail: [email protected]

Abstract. We study a tamper-tolerant implementation security notion for general purpose Multi-Party Computation (MPC) protocols, as an analogue of the leakage-tolerant notion in the MPC literature. An MPC protocol is tamper-tolerant, or more specifically, non-malleable (with respect to a certain type of tampering) if the execution of the protocol under corruption of parties (and tampering of some ideal resource assumed by the protocol) can be simulated by an ideal world adversary who, after the trusted party spits out the output, further decides how the output for honest parties should be tampered with. Intuitively, we relax the correctness of secure computation in a privacy-preserving way, decoupling the two entangled properties that define secure computation. The rationale behind this relaxation is that even the strongest notion of correctness in MPC allows corrupt parties to substitute wrong inputs to the trusted party, so the output may be incorrect anyway; the importance of insisting that the adversary does not further tamper with this incorrect output may therefore be overrated, at least for some applications. Various weak privacy notions against malicious adversaries play an important role in the study of two-party computation, where full security is hard to achieve efficiently.
We begin with the honest majority setting, where efficient constructions
for general purpose MPC protocols with full security are well under-
stood assuming secure point-to-point channels. We then focus on non-
malleability with respect to tampered secure point-to-point channels.
(1) We show achievability of non-malleable MPC against the bounded
state tampering adversary in the joint tampering model through a naive
compiler approach, exploiting a known construction of interactive non-
malleable codes. The construction is currently not efficient and should be
understood as showing feasibility in a rather strong tampering model. (2)
We show efficient constructions of non-malleable MPC protocols against
weaker variants of the bounded state tampering adversary in the independent
tampering model, where the protocols obtained have the same asymptotic
communication complexity as the best MPC protocols against honest-but-curious
adversaries. These are all information-theoretic results and are to be
contrasted with the impossibility of secure MPC when secure point-to-point
channels are compromised.
Though general non-malleable MPC in no honest majority setting is
beyond the scope of this work, we discuss interesting applications of
honest majority non-malleable MPC in the celebrated MPC-in-the-head
paradigm. Other than an abstract result concerning non-malleability, we
also derive, in the standard model where there is no tampering, that strong
(ideal/real world) privacy against a malicious adversary can be achieved
in a conceptually very simple way.

1 Introduction
In secure Multi-Party Computation (MPC), a set of players wish to evaluate
a function f on their private inputs without revealing information about their
private inputs beyond what is contained in the output. The function f is publicly
known to all players and is assumed to be an arithmetic circuit C over some
finite field. The task can be trivially accomplished given a trusted party: every player gives his/her input to the trusted party, and the trusted party does the computation and returns the result. The study of secure MPC is about replacing the trusted party with a protocol that works exactly the same as the trusted party, despite possible active/passive attacks from a bounded number of players. Both correctness and privacy against corruption of parties are formulated as a simulation based notion, in an entangled way, that involves a real world and an ideal world (in the ideal world there is a trusted party). Security is formulated as the existence of an efficient simulator that, through oracle calls to the trusted party in the ideal world (where it may substitute the inputs of the corrupt parties), simulates a set of views of the corrupt parties that have the same distribution as the views they would obtain if they were to run the protocol in the real world, where the corrupt parties may deviate from the protocol.
MPC was introduced in the work of Yao [Yao82]. Feasibility results on MPC in the computational setting were obtained by Yao [Yao82] and Goldreich et al. [GMW87], where the adversary can corrupt all but one player but is assumed to have bounded computational resources. Our focus in this work is information-theoretic security. Feasibility results in the information-theoretic setting were shown for up to less than 1/3 corrupt parties assuming secure point-to-point communication channels [BGW88,CCD88], and for up to less than 1/2 corrupt parties assuming secure point-to-point communication channels and a broadcast channel [RB89,Bea89]. For up to less than 1/2 corrupt parties under the sole assumption of secure point-to-point communication channels, a construction of secure MPC with a slightly weaker correctness notion, called security with abort, was given whose communication complexity is asymptotically the same as that of the best MPC protocols for passive adversaries [GIP+14].
The above feasibility results were proved in the theoretical (black-box) security model, where every cryptographic primitive is an abstract object with ideal functionality. In real life, every cryptographic algorithm is ultimately implemented on a physical device that affects, and is affected by, the environment around it. Security models that take this fact into account are called implementation security models. The real-life adversaries studied in implementation security models can be divided into two groups: leakage adversaries (passive attacks) and tampering adversaries (active attacks). Implementation security models against leakage attacks for general purpose MPC protocols were first studied in
[BGJ+ 13,BGJK12]. The adversary considered in [BGJ+ 13] can corrupt an arbi-
trary subset of parties and, in addition, can learn arbitrary auxiliary informa-
tion on the entire states of all honest parties (including their inputs and random
coins), in an adaptive manner, throughout the protocol execution. Against such an adversary, the above standard notion of secure computation is impossible (the adversary can simply leak the private input of an honest party), and a weaker notion called leakage-tolerance was shown to be achievable, which guarantees that for any amount of
information the real world adversary is able to (adaptively) acquire throughout
the protocol, this same amount of auxiliary information is given to the ideal
world simulator. In contrast to [BGJ+ 13], [BGJK12] constructed MPC proto-
cols that achieve standard ideal world security (where no leakage is allowed in
the ideal world) against real world adversaries that may leak repeatedly from
the secret state of each honest player separately, assuming a one-time leak-free
preprocessing phase, and assuming the number of parties is large enough. In-
tuitively, the one-time leak-free preprocessing phase is exploited by the honest
parties to secret share their private inputs (private inputs are erased once the
shares are stored) and with the independent leakage assumption, it is possible
to prevent the adversary from obtaining information about the private inputs.
The recent result on Leakage-Resilient MPC (LR-MPC) [BDIR18] belongs to
the latter category.

Formulating meaningful security notions for MPC against tampering is more delicate than against leakage. We draw ideas from the tamper-resilient cryptogra-
phy literature. Non-malleability as a coding goal was proposed by Dziembowski,
Pietrzak and Wichs [DPW18], which offers an abstract solution: encoding the
sensitive state using a Non-Malleable Code (NMC) and decoding it before the
state is needed for an execution. An NMC with respect to a class F of tampering functions guarantees that for any tampering function f ∈ F, there is a distribution D_f defined solely by f such that decoding under the influence of f can be simulated by sampling from D_f and interacting with a trusted party who holds the secret state, which in effect turns a possibly harmful tampering f into a harmless one. The terms "harmful" and "harmless" are best illustrated by the following
example that is usually used as a motivating example for NMC. It was shown
in [BDL01] that if the state to protect is a secret signing key of a signature
scheme used in an implementation of RSA based on the Chinese remainder the-
orem, and the application of a tampering function to the storage always results
in flipping a single bit of the secret signing key, then the RSA modulus can be
factored to recover the signing key, through observing an incorrectly generated
signature. Here, the influence that turns an unknown value into a related un-
known value is a harmful influence. On the other hand, turning an unknown
value into a different value without knowing the relation between the two is
a harmless influence. Although special purpose protocols such as non-malleable
commitments [AGM+ 15b,AGM+ 15a,GPR16] were constructed using NMC, it is
not clear whether NMC can be used to secure a general purpose MPC protocol
against tampering.

The next object closer to providing a solution is the currently active research
area of Non-Malleable Secret Sharing (NM-SS) [GK18] (see Definition 4). Se-
cret sharing, introduced independently by Blakley [Bla79] and Shamir [Sha79],
is a major tool in MPC constructions (cf. [CDN15]) and threshold cryptogra-
phy [Des94]. The goal in secret sharing is to encode a secret s into a number of shares Sh_1, ..., Sh_n that are distributed among a set [n] = {1, ..., n} of parties such that, for a privacy threshold t, any set R ⊆ [n] with |R| > t can reconstruct the secret, while any set A ⊂ [n] with |A| ≤ t has no information about the secret. For special purpose protocols such as threshold signature schemes, compilers based on NM-SS that transform a black-box-secure threshold signature scheme into one that is resilient against independent tampering of all parties were constructed in [ADN+19,FV19]. Given the ubiquitous presence of secret sharing in the construction of general purpose MPC protocols, one would expect a more prominent role of NM-SS in securing MPC protocols against tampering. One bottleneck here is that NM-SS was proposed as the opposite of secret sharing with homomorphic properties, which is exactly the kind of secret sharing used in MPC protocols.
The object closest to providing a general solution is the following. Similar
to generalising block codes to interactive codes, NMC was recently generalised
to Interactive NMC (INMC) [FGJ+ 19]. It is shown that the interactive setting
allows INMC to be constructed in many powerful tampering models that would have been impossible in the non-interactive setting. For example, they considered a Bounded State Tampering (BST) model, where an adversary can keep a state that stores information about past messages and use it in tampering with the current message (the current message is given to the adversary in full), and showed that a rather strong security notion called protocol non-malleability is achievable. However, the two parties executing the protocol are both honest and the tampering is in an outsider model. Encoding a secure two-party computation protocol using an INMC will not provide any protection, as the adversary in two-party computation executes the protocol as an insider. We will return to this point in Related works.
Our contributions. We propose a tamper-tolerant notion for general purpose MPC protocols, as an analogue of the leakage-tolerant notion studied in [BGJ+13]: just as the ideal world adversary there gets to leak the same amount of information as the real world adversary, we allow the ideal world adversary to tamper with the output of the computation. On the other hand, in order to define a useful tamper-tolerant notion, we insist that the ideal world adversary should only be allowed to harmlessly tamper with the output of the computation, in the sense of [DPW18].
Our motivating example. We continue with the signature scheme motivating
example for NMC and take it to the setting of a threshold signature scheme, where the secret signing key remains in distributed form, held by n servers, all through the signing process. Moreover, consider a natural situation where the secret signing key itself is to be generated on the spot, from private values that are held by honest but mutually distrusting clients (for instance, computing a session key from global private keys). A standard solution that kills two birds with one stone is to have the servers and the clients run an MPC protocol in the so-called client-server model, where the clients provide private inputs and the servers compute. If the MPC protocol employed is based on a secret sharing scheme, the servers naturally hold the secret signing key in distributed form upon completion of the circuit evaluation, namely as each server's last message to be delivered for output reconstruction.¹ The robustness of the MPC protocol prevents a black-box adversary that corrupts a bounded number of servers (servers do not have inputs and hence cannot do input substitution) from changing the value of the secret signing key being computed. But if a real life adversary exploits flaws in the implementation of the MPC protocol and is capable of inflicting a bit-flip in the computed signing key, the same attack of [BDL01] can be used to factor the underlying RSA modulus. This application scenario motivates the study of non-malleable MPC, where a tampered protocol should not compute an output related to the private inputs of honest parties.
In order to achieve the non-malleability described in the motivating example, we must adopt at least some minimal restrictions; otherwise, a tampering adversary can trivially make the output depend on the private inputs of honest parties if we were to allow the adversary access to everything, following [BGJ+13]. But instead of assuming a harm-free phase as done in [BGJK12] (which seems too much to ask for in real life), we begin with MPC protocols in the honest majority setting and go for an adversarial channel tampering model (cf. [FGJ+19]), where the
adversary can tamper with all the messages traveling between parties, but has to
follow certain patterns that define a structured type of tampering. We explicitly
model the execution of such an MPC protocol under corruption and a type F
of adversarial channel tampering (see the beginning of Section 3).
Definition [NM-MPC] (informal summary of Definition 6) An MPC protocol evaluating a circuit is non-malleable if, for any active adversary corrupting a bounded number of parties (and any choice of a tampering function from the tampering class F), there exists a simulator that takes the adversary's corruption strategy (and tampering function) as input and simulates, in the ideal world where an incorruptible trusted party evaluates the circuit, the real execution of the protocol under corruption (and tampering): the simulator is allowed to modify the incorruptible trusted party's output for honest parties if the real world adversary deviates from the protocol.

¹ The client-server model with the output of the computation remaining in secret-shared form is common practice and in particular useful (see [MR18] and references therein). Machine learning is widely used to produce models that classify images, authenticate biometric information, recommend products, choose which ads to show, and identify fraudulent transactions, etc. Internet companies regularly collect users' online activities and browsing behaviour to train more accurate recommender systems. The healthcare sector envisions a future where patients' clinical and genomic data can be used to produce new diagnostic models. There are efforts to share security incidents and threat data, to improve future attack prediction. In all the above application scenarios, the data being classified or used for training is often sensitive and may come from multiple sources with different privacy requirements. MPC is an important tool for supporting machine learning on private data held by mutually distrusting entities, and the end product of the training is stored in distributed form for privacy.
Non-malleability as defined above is a relaxed notion of security for MPC
protocols which suggests an interesting way of decoupling the two entangled
properties (correctness and privacy) that define secure computation. Intuitively,
we no longer insist on correctness (we allow the ideal world adversary to tamper
with the output if the real world adversary deviates from the protocol), but pre-
serve the privacy notion in its strongest possible form (the view of the adversary
is simulated without information beyond the corrupted parties’ input and what
is implied by an output). The rationale behind this relaxation is that, since the correctness of secure computation in its strongest possible form still allows a malicious adversary to substitute wrong inputs on behalf of the corrupted parties, leading to an unintended output, the importance of insisting on preventing the adversary from further tampering with the unintended output may be overrated, at least in some applications. For the two example NM-MPC constructions, we in fact achieve a stronger security that allows the modified output for honest parties either to remain the same as the incorruptible trusted party outputted it or to follow a distribution determined by the adversary (the probability of remaining the same and the fixed distribution are both independent of the incorruptible trusted party's output). This makes them sufficient for the application in Our motivating example.
We begin by showing that, with this relaxation of security (privacy against malicious adversaries), information-theoretic honest majority MPC protocols can be constructed even when their secure point-to-point channels are tampered with. Bounded State Tampering has been a well-established adversarial tampering model since [CM97]. We follow the recent formulation in [FGJ+19]. The adversary keeps a state of bounded size (at most s bits) storing information about past messages that can be used in the tampering of the current message. For a 2-round protocol, f = (f_1, f_2) ∈ F^s_BST can produce m̃_1 = g_1(m_1) and m̃_2 = f_2(m_1, m_2) = g_2(m_2, h_1(m_1)), depending on h_1(m_1) and m_2, where the range of h_1 is {0,1}^s. Our first result, of a feasibility nature, is that once the amount of information about past messages the adversary is allowed to "memorise" is limited, F = F^s_BST, NM-MPC can be constructed even in the joint tampering model, where an adversary can jointly tamper with messages intended for different receivers. We emphasize that the adversary has full control over the current messages and there is no information-theoretic approach to securing the channel against such an adversary (not even to detecting its presence).
Theorem (informal summary of Theorem 1) For a channel tampering class F over which a pair of independent keys can be generated by two communicants, there is a compiler that turns a secure MPC protocol into an NM-MPC protocol in the joint tampering model, and the communication complexity increases by a factor of (3/ρ + 1), where ρ is the rate at which such independent keys can be generated.
We will discuss the notion of independent keys further in the technical overview. Substituting in a known construction for generating such independent keys against F^s_BST-tampering [FGJ+19], we obtain NM-MPC with respect to corruption of parties and F^s_BST-tampering of secure channels in the joint tampering model. In particular, the level of non-malleability achieved in this construction is only marginally weaker than the secure with abort notion in [GIP+14] (see Definition 1), where the adversary is allowed to individually decide, after learning its own outputs, whether each honest party receives its correct output from the functionality or a special ⊥ message which the party outputs. The level of non-malleability achieved above further relaxes the secure with abort notion and allows each honest party to receive its correct output with a probability p and output ⊥ with probability 1 − p ("secure with probabilistic abort"). On the other hand, the shortcoming of the above approach is the excessive cost (the generated key is of length only a close-to-zero fraction of the total communication). The known construction for generating the independent keys against F^s_BST-tampering is based on a split-state non-malleable extractor [CG17], whose efficient construction is a notoriously hard bottleneck problem in the NMC and NM-SS literature [Li18].
Motivated by finding more efficient approaches that circumvent NM-KE, we
consider relying on the secret sharing scheme implementing the protocol. With-
out generating a long key to mask the transmitted messages, we cannot hope to defeat a joint tampering adversary. We fall back to the independent tampering model and also consider a weakened bounded state tampering class F^s_weakBST, where the state size bound is effective for the current message as well as all past messages. For a 2-round protocol, f = (f_1, f_2) ∈ F^s_weakBST can produce m̃_1 = g_1(h_1(m_1)), depending on h_1(m_1) only, and m̃_2 = f_2(m_1, m_2) = g_2(h_2(m_2, h_1(m_1))), depending on h_2(m_2, h_1(m_1)) only, where the range of h_1, h_2 is {0,1}^s. Though much weaker than F^s_BST-tampering, F^s_weakBST-tampering still allows the adversary to selectively overwrite the whole current message (again, it is impossible to detect the adversary's presence).
Theorem (informal summary of Theorem 2) For constant integers s, θ, and a big enough prime number p, there exists an MPC protocol that non-malleably computes an arithmetic circuit over F_p against an active adversary corrupting at most θ servers and F^s_weakBST-tampering of the secure channels in the independent tampering model. The protocol has the same asymptotic communication complexity as the best passive security MPC.
It is fair to say that the combination of F^s_weakBST-tampering and the independent tampering model defines a rather contrived adversary for which it is difficult to find real-life applications. The more interesting message carried by this construction is the fact that a weaker notion of security (privacy against an active adversary) for MPC is efficiently achievable without assuming secure point-to-point channels. Using the technical framework initiated in [BDIR18], we are able to construct NM-SS that provides a flexible choice of parameters ranging between two extremes (parameters of the NM-SS directly translate into those of the NM-MPC). One extreme is the maximum state case, where for every share the adversary stores all information except one bit (essentially the adversary can arbitrarily tamper with the share, independently). It is still possible to obtain NM-SS, as the non-malleability error vanishes exponentially fast in the number t − θ of uncorrupted shares in a reconstruction set of size t + 1. The other extreme is the minimum state case, where for every share the adversary stores one bit of information. In this case, for a 10-bit prime p (log p = 10), choosing n = p − 1 and t = 300 (approximately n/3) allows for ε = 2^{-50} against up to θ = 175 fully tampered shares.
Allowing non-explicit Monte-Carlo constructions and further restricting to a yet smaller class of tampering functions (the information to store in the state is obtained through reading physical bits only, called physical-bit s-BST), we are able to use the more recent results of [MPSW21,MNP+21] to achieve another dimension of extreme parameters: the reconstruction threshold can be set as low as t + 1 = 2 (privacy threshold t = 1), and considering varying numbers of parties n = 10, 100, and 1000, a non-malleability error as small as 2^{-50} against physical-bit 1-bounded state tampering can be achieved with success probability 1 − 2^{-50} (over choosing the evaluation places), using a prime number p with more than λ = 430, 4800, and 62000 bits, respectively.

As applications of NM-MPC in the honest majority setting, we discuss NM-MPC in the no honest majority setting, making use of the MPC-in-the-head paradigm [IPS08,IPS09]. One reason for this choice is that these protocols are described in the Oblivious Transfer (OT)-hybrid model, where an ideal OT functionality is assumed to which the protocol can make repeated calls as to an incorruptible trusted party. Information-theoretically secure MPC protocols in the no honest majority setting can be efficiently constructed in the OT-hybrid model, similarly to how their honest majority counterparts are efficiently constructed assuming secure point-to-point channels. Another reason is that the MPC-in-the-head framework gives us a quick way to transfer findings from one setting to the other. The MPC-in-the-head framework uses an inner semi-honestly secure two-party protocol π^OT to emulate the execution of an outer MPC protocol Π in the honest majority setting among a large set of virtual servers computing the given circuit.
Theorem 4 Let T be a tampering class for the OT functionality. Let F_T be the tampering class for the communication channels between virtual servers induced by executing π^OT under T-tampered OT. Let Π be an NM-MPC with respect to F_T. There is a compiler for Π and a semi-honest π^OT that gives a general purpose non-malleable two-party computation protocol with respect to T-tampering of OT.
We do not consider explicit tampering models for ideal OT functionality in
this work. When there is no tampering, non-malleability becomes privacy against a malicious adversary. The above result suggests a conceptually very simple way to obtain real/ideal world privacy against a malicious adversary (the strongest form of privacy) for general purpose two-party protocols: construct an NM-MPC protocol Π with respect to F_∅ and use a semi-honestly secure two-party protocol π^OT to emulate it.
Related works and open questions.
The leakage-tolerant notion of [BGJ+13] relaxes the privacy of secure computation in a correctness-preserving way. This notion allows clean modeling of the leakage and a clean quantitative formulation: the amount of leakage given to the ideal world adversary is the same as the amount allowed in the real world. The tampering counterpart is more difficult to capture, both conceptually and quantitatively. We propose a tamper-tolerant notion motivated by relaxing the correctness of secure computation in a privacy-preserving way. Conceptually, we draw inspiration from the non-malleability notion in tamper-resilient cryptography and merge it with the simulation based formulation in the MPC literature. This generalizes the idea of harmless tampering, which is a tampering defined independently of any information that would breach privacy, to secure computation. Quantitatively, the "amount" of tampering allowed in the real world does not translate into the "amount" of tampering in the ideal world (as a symbol-by-symbol analogy to [BGJ+13] would suggest), but into the amount of overhead (e.g. communication complexity) required for turning the harmful real world tampering into harmless ideal world tampering.
The secure with abort MPC protocols in [GIP+ 14] represent the maximum
corruption (up to 1/2), minimum assumptions (secure point-to-point channels
only) and highest efficiency (asymptotically same as best passive security MPC)
in the honest majority setting. The high efficiency is the result of an unusual two-step approach to active security: first constructing an intermediate protocol that computes an additively corruptible functionality/circuit (in fact, most of the celebrated passive security MPC constructions suffice), and then applying the intermediate protocol to compute an encoded version of the functionality/circuit, instead of the functionality/circuit itself. Intuitively, the first step, through protocol design and the secure channels assumption, reduces a full-fledged malicious adversary to an additive adversary, who is then efficiently (with negligible overhead) defeated in the second step through a novel circuit encoding technique and conventional input encoding against additive attacks. We study
a weakening of security (privacy against malicious adversary) for MPC protocols
that is defined by allowing the output of the incorruptible functionality/circuit
to be corruptible and mainly show constructions that remove the secure point-
to-point channel assumption. To put this new notion of security in the right
context, we note here that malicious privacy in the honest majority setting may not be easy to achieve (it is much harder than semi-honest privacy), even assuming secure point-to-point channels. The first step of the above construction does not guarantee malicious privacy, since the functionality itself is corruptible, not only the output of the functionality. Even after the circuit encoding (before input encoding) in the second step above, the input to the functionality is still corruptible, rendering it not maliciously private. Achieving malicious privacy using the above construction takes the same amount of effort as achieving security with abort. Finally, our conceptually very simple approach to privacy against a malicious adversary using the MPC-in-the-head framework can be interpreted as an example of circuit encoding such that running the encoded circuit using a semi-honestly secure two-party protocol yields privacy against a malicious adversary.
The study of Interactive Non-Malleable Codes (INMC) [FGJ+ 19] considers
encoding of two-party protocols for achieving a strong non-malleability notion called protocol non-malleability against an outsider tampering adversary (an instance of outsider adversarial tampering is defined by a set of restrictions that distinguish it from insider tampering, where one party is corrupted and fully controlled by the adversary). In [FGJ+19], three adversarial channel tampering classes: bounded state tampering, unbalanced split-state tampering and frag-
mented sliding window tampering were studied. The descriptions of the latter
two tampering models depend on the round number of the protocols to be en-
coded, which makes them not suitable for MPC (the round number of a general
purpose MPC protocol may depend on the depth of the circuit to be computed).
We first show that INMC against bounded state tampering can be used to con-
struct general purpose NM-MPC in the honest majority setting (at least three
parties). We then show that NM-MPC in the honest majority setting can be
used to construct a two-party protocol through the MPC-in-the-head paradigm.
Combining these two results, we have a correct way of obtaining non-malleability
against an insider adversary in the two-party setting (as opposed to the naive direct
encoding approach).

This preliminary study of NM-MPC leaves behind many interesting open questions. Many cryptographic primitives (e.g. non-malleable commitments) have been studied in the non-synchronizing man-in-the-middle setting; we leave non-malleability of MPC protocols against richer tampering models as future work. The NM-MPC we define corresponds to static (vs. adaptive) adversary MPC, where the adversary decides on a fixed set of parties to corrupt and sticks to that set all through the execution. We do include partial adaptiveness in defining the tampering, for instance in the joint tampering model and the BST functions. The more challenging fully adaptive NM-MPC is open. The non-malleability security discussed in this work is a stand-alone (vs. universally composable) notion, where only a single protocol session runs in isolation. Such a stand-alone notion provides only limited guarantees regarding the security of systems that involve the execution of two or more protocols. Although the absence of a correctness guarantee may limit its applications (the motivating example describes an application in composition of protocols), universally composable NM-MPC is open.
Lastly and most importantly, a systematic study of NM-MPC in no honest
majority setting is the most interesting open area to explore. Given the impor-
tance of weak security notions in this setting in the literature, one would most
likely find non-trivial applications of NM-MPC here.
Technical overview. The ideal functionality of secure point-to-point channel
assumed in the study of honest majority secure MPC is implemented either us-
ing public key cryptography [DH76] under computational hardness assumptions
or using imperfect correlated randomness [Mau92,Mau93] obtained from some
assumed resource, by having the two communicants first agree on a shared key
private from outsiders and then mask their outgoing messages with a One-Time-
Pad (OTP) followed by a Message Authentication Code (MAC) tag to establish
an abstract secure channel. Through exchanging independent uniform messages using the given imperfect channel controlled by a BST adversary, one obtains correlated randomness in the latter model. An important observation here is that this correlated randomness may not be correlated at all, since any BST adversary has full writing capability (in the sense of completely overwriting the messages) and can in effect cut off the communication. To make things worse, there is no public discussion channel, usually assumed in the latter model [Mau92,Mau93], for information reconciliation that results in a shared imperfect secret (correctness) and privacy amplification that generates a secret key (privacy) using a randomness extractor. Intuitively, the channel controlled by a BST adversary is too weak a resource for establishing secure communication. A novel idea of generating independent keys that are sufficient for establishing non-malleable communication was proposed in [FGJ+19], and such a protocol was termed Non-Malleable Key Exchange (NM-KE) (see Definition 9 for an exact definition). It was shown that NM-KE can be constructed by having both communicants apply a 2-split-state non-malleable extractor [CG17] to the correlated randomness mentioned above. Intuitively, independent keys do not need the information reconciliation step (correctness) and can be generated from privacy amplification alone (non-malleability is malicious privacy).
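To make the OTP-followed-by-MAC idiom concrete, the following Python sketch shows one way a single message could be masked and authenticated once a pair of one-time keys is in place. The prime field, the linear one-time MAC and all function names are illustrative choices for exposition only, not the construction analysed in this paper.

import secrets

# Toy parameters: messages are field elements modulo a fixed prime P.
P = 2**127 - 1  # a Mersenne prime, convenient for a toy field

def otp_encrypt(message: int, otp_key: int) -> int:
    # One-time pad over the field: c = m + k mod P.
    return (message + otp_key) % P

def otp_decrypt(cipher: int, otp_key: int) -> int:
    return (cipher - otp_key) % P

def one_time_mac(cipher: int, mac_key) -> int:
    # Information-theoretic one-time MAC over GF(P): tag = a*c + b mod P.
    # A substituted ciphertext is accepted with probability at most 1/P.
    a, b = mac_key
    return (a * cipher + b) % P

def send(message: int, otp_key: int, mac_key):
    c = otp_encrypt(message, otp_key)
    return c, one_time_mac(c, mac_key)

def receive(c: int, tag: int, otp_key: int, mac_key):
    if one_time_mac(c, mac_key) != tag:
        return None                      # verification failed: abort / output ⊥
    return otp_decrypt(c, otp_key)

# Fresh one-time keys are used for every message.
otp_key = secrets.randbelow(P)
mac_key = (secrets.randbelow(P), secrets.randbelow(P))
c, t = send(42, otp_key, mac_key)
assert receive(c, t, otp_key, mac_key) == 42
forged = receive((c + 1) % P, t, otp_key, mac_key)
# forged is None unless the key component a happens to be 0 (probability 1/P)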
The remaining technicality for this idea to work in the MPC setting (especially in the joint tampering model) is to carefully analyze the type of information available to a BST adversary when tampering with a current message. Note that each pair of parties independently run an NM-KE protocol, and observe that the messages exchanged are fresh uniform messages, independent of previous messages both within one protocol and across multiple protocols.
In the independent tampering model, we observe that a Non-Malleable Secret
Sharing (NM-SS) (see Definition 4) with n shares has, in particular, an implicit
n-split-state non-malleable extractor [CG17] in its reconstruction function, which
guarantees that (roughly speaking) for any n independently chosen tampering
functions applied to the n shares, reconstructing from the tampered shares is
independent of reconstructing from the n clean shares. This suggests that one
could skip the independent key generation step (and the corresponding OTP
followed by MAC) and simply rely on the reconstruction function of NM-SS
to provide non-malleability. Unfortunately, it is trivial to show that such an NM-SS cannot have a linear reconstruction function (define a tampering that adds a fixed share vector of the secret 1, share by share independently; a linear reconstruction function then returns a related secret). But for a secret sharing scheme to be useful in constructing general purpose MPC protocols, linearity is the least property required to enable privacy-preserving evaluation of secret values. For the efficient protocols in the honest majority setting, a stronger multiplication property is required of the underlying secret sharing scheme, which allows the reconstruction of the product of two secrets from the share-wise multiplication of their share vectors (effectively requiring that the reconstruction of each secret take fewer than n/2 shares). These make
relying on NM-SS for constructing general purpose NM-MPC protocols highly
technical, though efficiency-wise highly attractive.
The obstacles discussed above are the main reasons that the NM-MPC protocols constructed via this approach can only tolerate a rather limited BST variant. Firstly, to overcome the impossibility of NM-SS with a linear reconstruction function, we restrict the type of functions the adversary is allowed to use to tamper with each share, resulting in the F^s_weakBST-tampering class. Secondly, we are able to show that, without the need to further restrict the tampering adversary, there are linear NM-SS against the above restricted share tampering that at the same time have the (strong) multiplication property.

2 Preliminary
The statistical distance of two random variables (their corresponding distributions) is defined as follows. For X, Y ← Ω,

SD(X; Y) = (1/2) · Σ_{ω∈Ω} |Pr(X = ω) − Pr(Y = ω)|.

We say X and Y are ε-close (denoted X ∼^ε Y) if SD(X; Y) ≤ ε.
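As a quick illustration of the definition, here is a small Python helper (not from the paper) that computes the statistical distance of two finite distributions given as outcome-to-probability dictionaries.

def statistical_distance(px: dict, py: dict) -> float:
    # SD(X; Y) = 1/2 * sum over the joint support of |Pr[X = w] - Pr[Y = w]|.
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(w, 0.0) - py.get(w, 0.0)) for w in support)

# Example: two slightly different bit distributions are 0.1-close.
X = {0: 0.5, 1: 0.5}
Y = {0: 0.6, 1: 0.4}
assert abs(statistical_distance(X, Y) - 0.1) < 1e-12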
A secure computation task is defined by a function specifying the desired
mapping from the inputs to the final output. We consider arithmetic circuits over
some finite field and will identify a circuit C with the functionality it computes.
Formalizing the real world computation. We use the so-called client-server
model refinement of MPC protocols in this work. The inputs are provided by
the Clients {C1 , . . . , Cm }, each client Ci holds an input xi , and the computation
is processed by the Servers {S1 , . . . , Sn }, who do not have inputs. An n-server
m-client protocol Π over a finite field F proceeds in rounds, where in each round j of the circuit evaluation phase, the protocol's description contains n next message functions nextMSG^j_i for i = 1, ..., n that can be represented as arithmetic circuits over F. The next message function nextMSG^j_i of server S_i for the j-th round gets as input all the messages that S_i received until (not including) the j-th round and S_i's local randomness, and outputs S_i's messages in the j-th round. The view of S_i during an execution of a protocol Π on inputs x, denoted by view^Π_i(x), contains the random input R_i and all the messages received from the clients and other servers. For every S = {i_1, ..., i_t} ⊂ [n], we denote view^Π_S(x) = (view^Π_{i_1}(x), ..., view^Π_{i_t}(x)). The input sharing phase and the output reconstruction phase happen before and after the circuit evaluation phase, respectively. For every C ⊂ [m], we denote the view of the clients in C by view^Π_C(x), and let out^Π_C(x) denote the output of the clients in C. The real world adversary Adv corrupts a set S of servers and a set C of clients, which means that the adversary acts on the corrupted parties' behalf in the protocol. The execution of Π in the presence of Adv who corrupts S ∪ C is characterized by a random variable

Real_{Π,Adv,S∪C}(x) = ( view^{Π,Adv}_{S∪C}(x), out^{Π,Adv}_C(x) ),

where the superscript Adv highlights the presence of Adv. The first component is the adversary's view, which can be divided into the truncated views of S ∪ C and the last communication round messages from honest servers to corrupted clients: view^{Π,Adv}_{S∪C}(x) = ( truncview^{Π,Adv}_{S∪C}(x), lastmview^{Π,Adv}_{S→C}(x) ). The second component is the output of the honest clients; appending it is crucial for a unified formulation of privacy and correctness.
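The round-by-round execution just described can be pictured with the schematic Python sketch below; the names (NextMsg, run_evaluation_phase) are invented for illustration, and corruption, tampering, the input sharing phase and the output reconstruction phase are all omitted.

import secrets
from typing import Callable, Dict, List

# A next-message function takes the server's local randomness and its current
# view (all messages received so far) and returns this round's outgoing
# messages, indexed by receiver.
NextMsg = Callable[[bytes, list], Dict[int, bytes]]

def run_evaluation_phase(next_msg: List[List[NextMsg]], n: int, rounds: int):
    randomness = [secrets.token_bytes(32) for _ in range(n)]
    views = [[] for _ in range(n)]            # view_i: messages received by S_i
    for j in range(rounds):
        outgoing = [next_msg[j][i](randomness[i], views[i]) for i in range(n)]
        for sender in range(n):               # synchronous delivery of round j
            for receiver, m in outgoing[sender].items():
                views[receiver].append((j, sender, m))
    return views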
Formalizing the ideal world computation. There is an incorruptible trusted
party in the ideal world, who evaluates the circuit C on m inputs provided by
the clients and provides the outputs. The ideal world adversary Sim corrupts a
set S ∪ C of parties, which means that Sim gets to substitute the inputs of the
corrupted clients in C before they are given to the trusted party, and simulate
views for S ∪ C. The ideal world computation is characterized by a random variable

Ideal^abort_{C,Sim,S∪C}(x) = ( view^{C,Sim}_{S∪C}(x), out^{C,Sim}_C(x) ),

where the superscript abort indicates that here Sim is allowed to individually decide, after learning its own outputs, whether each honest party receives its correct output from the functionality or a special ⊥ message which the party outputs.
Comparing real/ideal world. The combination of privacy and correctness is captured by requiring that given any real world adversary Adv corrupting S ∪ C, there exists an ideal world Sim that (also corrupting S ∪ C) simulates Adv, in the sense that the two random variables Real_{Π,Adv,S∪C}(x) and Ideal^abort_{C,Sim,S∪C}(x) are indistinguishable.
Definition 1. Let C be an m-input functionality and let Π be an m-client n-server protocol. We say that Π (t, ε)-securely computes C if for every probabilistic adversary Adv in the real world controlling a set S ⊂ [n] of servers such that |S| ≤ t and a set C ⊂ [m], there exists a probabilistic simulator Sim in the ideal world such that for every input x, it holds that

SD( Real_{Π,Adv,S∪C}(x); Ideal^abort_{C,Sim,S∪C}(x) ) ≤ ε.

The security defined above is sometimes referred to as security with "selective abort", which does not require honest parties to simultaneously abort. In the
stronger notion of “unanimous abort”, the ideal world adversary needs to decide
whether all honest parties receive their correct output or all of them abort. The
advantage of using the weaker variant is that it supports feasibility results which
only use secure point-to-point channels.
Definition 2. A pair of algorithms (Share_{n,t}, Recover_{n,t}), where Share_{n,t}: F → F^n is randomised, mapping a secret to n shares, and Recover_{n,t} is deterministic, mapping a reconstruction set R ⊂ [n] and the corresponding shares to a secret, is said to be a secret sharing scheme if the following conditions hold for every n, t ∈ N.
– Reconstruction. For any set R ⊂ [n] such that |R| > t and for any s ∈ F, it holds that

Pr[Recover_{n,t}(Share_{n,t}(s)_R, R) = s] = 1,

where the subscript R denotes the projection of a vector to the components in R.
– Privacy. For any set A ⊂ [n] such that |A| ≤ t and for any s, s′ ∈ F, it holds that

Share_{n,t}(s)_A ∼^0 Share_{n,t}(s′)_A,

i.e., the two projections are identically distributed.
When the parameters are clear from the context, we simply write (Share, Recover).
Linear secret sharing schemes are closely related to linear codes.
Definition 3. A subset C ⊂ F^n is an [n, k, d]-linear code over a finite field F if C is a subspace of F^n of dimension k such that for all c ∈ C\{0}, the Hamming weight w_H(c) ≥ d (i.e., the minimum Hamming distance between two elements of the code is at least d). A code is called Maximum Distance Separable (MDS) if d = n − k + 1.
For R ⊂ [n], let C_R ⊂ F^{|R|} be the projection of the code C on R:

C_R := {c_R | c ∈ C}.

It can be shown that if C is an [n, k, n − k + 1]-linear MDS code, then for any R ⊂ [n] with |R| ≥ k, we always have that C_R is an [|R|, k, |R| − k + 1]-linear MDS code.
An [n, k, d]-linear code C over a finite field F can be represented by its generator matrix G ∈ F^{k×n}, whose rows form a basis of C. Let R ← F^t be a uniform t-tuple. The sharing algorithm of Shamir's secret sharing scheme can be described as follows:

Share(s) = (s, R) · V,

where V is the (t+1) × n Vandermonde matrix with rows (1, 1, ..., 1), (a_1, a_2, ..., a_n), ..., (a_1^t, a_2^t, ..., a_n^t), and a_1, ..., a_n are distinct non-zero elements of F. It can be seen that the support of the random variable Share(0) is an [n, t, n − t + 1]-linear MDS code.
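For concreteness, here is a minimal Python sketch of the scheme just described: sharing evaluates a random degree-t polynomial with constant term s at the points a_i = 1, ..., n (the Vandermonde matrix above), and recovery is Lagrange interpolation at 0. The field size is an arbitrary toy choice.

import secrets

P = 2**31 - 1  # toy prime field F_p; any prime larger than n works

def share(s: int, n: int, t: int):
    # Random polynomial of degree at most t with constant term s, evaluated at 1..n.
    coeffs = [s] + [secrets.randbelow(P) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 from any t+1 (or more) shares.
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(1234, n=7, t=2)
assert recover(shares[:3]) == 1234   # any t+1 = 3 shares reconstruct
assert recover(shares[2:6]) == 1234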
Definition 4 ([GK18]). A secret sharing scheme (Share, Recover) with n shares and privacy threshold t is non-malleable with respect to a class F of tampering functions (F-NM for short) if for any secret s ∈ S, any f ∈ F and any R ⊂ [n] of size |R| = t + 1, there is a distribution D_{f,R} over the set S ∪ {⊥} ∪ {same*}, determined solely by f and R, such that the real tampering experiment Tamper^{f,R}_s and the simulation Patch(s, D_{f,R}) using D_{f,R} can be distinguished with advantage at most a negligible ε:

Tamper^{f,R}_s ∼^ε Patch(s, D_{f,R}),

where the real tampering experiment Tamper^{f,R}_s and the simulation Patch(s, D_{f,R}) are defined as follows.
– The real tampering experiment is a random variable with randomness coming from the randomised sharing algorithm Share:

Tamper^{f,R}_s = { v ← Share(s); ṽ = f(v); s̃ = Recover(ṽ_R, R); output s̃ }.

– The simulation is a random variable defined from the distribution D_{f,R}:

Patch(s, D_{f,R}) = { s̃ ← D_{f,R}; output s if s̃ = same*; output s̃ otherwise }.
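The two experiments in Definition 4 can be phrased operationally as in the short Python sketch below; Share, Recover, the tampering function f and a sampler for D_{f,R} are all passed in as black boxes, so the sketch only fixes the order of operations and does not commit to any concrete scheme.

SAME = "same*"   # the special symbol same* of Definition 4

def tamper_experiment(s, f, R, share, recover):
    # Real experiment Tamper_s^{f,R}: share the secret, tamper with the whole
    # share vector, then reconstruct from the tampered shares indexed by R.
    v = share(s)
    v_tilde = f(v)
    return recover([v_tilde[i] for i in R], R)

def patch(s, sample_D_fR):
    # Simulation Patch(s, D_{f,R}): the sampler depends only on f and R,
    # never on the secret s; same* is patched back to s.
    s_tilde = sample_D_fR()
    return s if s_tilde == SAME else s_tilde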
 

An encoding Π′ of an interactive protocol Π between Alice and Bob is defined by two simulators S^A, S^B with black-box access to stateful oracles encapsulating the next-message functions of Alice and Bob, respectively. Let Trans_Π(x, y) denote the function mapping inputs x, y to the transcript of an honest execution of Π between Alice holding input x and Bob holding input y. The protocol Π′ = (S^A, S^B) is an ε-correct encoding of protocol Π if for all inputs x ∈ X, y ∈ Y, protocol Π′ ε-correctly evaluates the functionality (Trans_Π(x, y), Trans_Π(x, y)).

Definition 5 ([FGJ+19]). An encoding Π′ = (S^A, S^B) of an interactive protocol Π between Alice and Bob is ε-protocol-non-malleable for a family F of tampering functions if the following holds: for each tampering function f ∈ F, there exists a distribution D_f over {⊥, same}^2 such that for all x, y, the product distribution of S^A(A, x)'s and S^B(B, y)'s outputs is ε-close to the distribution Patch(Trans_Π(x, y), D_f).

3 NM-MPC in Honest Majority Setting


We follow the standard three-step procedure of defining a security notion for
MPC protocols.
Formalizing the real world computation. We follow the clock-driven execu-
tion modeling of synchronous MPC protocols (c.f. [CDN15, Chapter 4]). It is
then natural to model a synchronous tampering of such protocols, where the
adversary cannot drop or delay messages or desynchronize the parties. We allow the tampering of the messages transmitted by the currently active agent to be possibly related to the messages transmitted in the past. However, we do not allow the tampering of current messages to depend on future messages. In the case where the adversary is allowed to tamper with all the messages transmitted by an agent jointly, using a tampering function that takes all the messages transmitted during an activation as input and outputs the tampered messages, we call it the joint tampering model. In the case where the adversary must tamper with each message independently of the messages transmitted through other channels, we call it the independent tampering model. We formally describe the process of executing an MPC protocol under corruption and tampering in Appendix A. We formalize the tampering of all messages transmitted between two agents (say, a_1 and a_2) as an interactive tampering function f_Ch(a_1,a_2). Without loss of generality, we assume the adversary does not tamper with channels connecting at least one corrupted agent (this does not mean that corruption and tampering are separated). In this way, we define a tampering function for an active adversary corrupting a set S of servers and a set C of clients in an MPC protocol Π computing a circuit C as a sequence of channel tampering functions as follows:

f_{Π(C),S∪C} := ( {f_Ch(i,i′)}_{i,i′ ∈ S̄, i<i′} ; {f_Ch(i,n+i′)}_{i ∈ S̄, i′ ∈ C̄} ).    (1)
To summarize, the real world computation is the same as in the standard model where there is no tampering (we recycle some of the notation from Definition 1 and indicate the changes), except that the views of honest parties are subject to a tampering denoted by f_{Π(C),S∪C}. The global view in the real world contains the view of corrupted parties and the output of honest clients. The real world computation is characterized by a random variable

Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x) = ( view^{Π,Adv}_{S∪C}(x), out^{Π,Adv*}_C(x) ),

where the superscripts highlight the presence of both Adv and f_{Π(C),S∪C}.
Formalizing the ideal world computation. There is an incorruptible trusted
party in the ideal world, who evaluates the circuit C on m inputs provided by
the clients and provides the outputs. The ideal world adversary Sim corrupts a
set S ∪ C of parties, which means that Sim gets to substitute the inputs of the
corrupted clients in C before they are given to the trusted party, and simulate
views for S ∪ C. Moreover, Sim is allowed to choose a function f and apply it
to the output of honest clients. We clarify at this point that the application of
f to the output happens after the circuit evaluation (without interfering with
the working of the incorruptible trusted party). The ideal world computation is
characterized by a random variable

Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x) = ( view^{C,Sim}_{S∪C}(x), f(out^{C,Sim}_C(x)) ),

where the superscript f ← D_{f_{Π(C),S∪C}} indicates that here Sim is allowed to individually modify, after learning its own outputs, the output that each honest party receives.
Comparing real/ideal world. Privacy (decoupled from correctness) against a malicious adversary is captured by requiring that given any real world adversary Adv corrupting S ∪ C and tampering with f_{Π(C),S∪C}, there exists an ideal world Sim that (also corrupting S ∪ C) simulates the adversary, in the sense that the two random variables Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x) and Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x) are indistinguishable.
Definition 6 (NM-MPC). Let C be an m-input functionality and let Π be an m-client n-server protocol that securely computes C when all parties follow the protocol specifications. We say that Π (t, F, ε)-non-malleably computes C if for every probabilistic adversary Adv in the real world controlling a set S ⊂ [n] of servers such that |S| ≤ t and a set C ⊂ [m] of clients, as well as imposing a sequence f_{Π(C),S∪C} of F-tampering functions, there exists a probabilistic simulator Sim in the ideal world such that the following holds for a tuple of distributions {D_i}_{i∈C}, with each D_i supported on {⊥, same*} ∪ F:

SD( Real^{f_{Π(C),S∪C}}_{Π,Adv,S∪C}(x); Ideal^{f←D_{f_{Π(C),S∪C}}}_{C,Sim,S∪C}(x) ) ≤ ε,    (2)

where the distribution D_{f_{Π(C),S∪C}} satisfies

Patch(y, {D_i}_{i∈C}) = { f ← D_{f_{Π(C),S∪C}}; output f(y) }.

As a special case, the protocol is a detection NM-MPC if the tuple of distributions {D_i}_{i∈C} is supported on {⊥, same*}^{|C|}.
We only require the protocol to securely compute C when all parties follow the protocol specifications in Definition 6, which is the weakest form of usefulness that successfully rules out vacuous private protocols (e.g. a protocol that simply ignores all parties and outputs a constant is private against a malicious adversary although it does not compute anything). Another natural way to define NM-MPC, with a stronger form of usefulness, is to further require (2) to hold for f_{Π(C),S∪C} = Id (no tampering of secure point-to-point channels) with {D_i}_{i∈C} solely supported on some ∆ ∈ {⊥, same*}^{|C|} when there is deviation from the protocol specifications. Secure (with abort) MPC protocols in the sense of Definition 1 satisfy this stronger form of usefulness.

The classes F of interactive tampering functions considered in this preliminary study are the Bounded State Tampering (BST) functions defined in [FGJ+19] and their variants.
Definition 7 ([FGJ+19]). Functions of the class F^s_BST of s-bounded state tampering functions for an r-round interactive protocol are defined by an r-tuple of pairs of functions ((g_1, h_1), ..., (g_r, h_r)), where the range of the functions h_i is {0,1}^s. Let m_1, ..., m_i be the messages sent by the participants of the protocol in a partial execution. The tampering function for the i-th message is then defined as

f_i(m_1, ..., m_i) := g_i(m_i, h_{i−1}(m_{i−1}, h_{i−2}(m_{i−2}, ...))).    (3)
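A stateful reading of equation (3): the adversary sees each message in full, but may only carry s bits forward between messages. The small Python class below (names are illustrative) enforces exactly that interface for adversarially chosen functions g_i and h_i.

class BoundedStateTamperer:
    # Schematic s-bounded state tampering (Definition 7): the i-th message is
    # replaced by g_i(m_i, state), where state = h_{i-1}(m_{i-1}, ...) is at
    # most s bits.  The g and h functions are the adversary's choice; this
    # class only enforces the state bound.
    def __init__(self, s_bits: int, g_fns, h_fns):
        self.s_bits, self.g, self.h = s_bits, g_fns, h_fns
        self.state = b""                           # empty state before round 1

    def tamper(self, i: int, m_i: bytes) -> bytes:
        tampered = self.g[i](m_i, self.state)      # full access to current m_i
        self.state = self.h[i](m_i, self.state)    # compress what is remembered
        assert len(self.state) * 8 <= self.s_bits  # range of h_i is {0,1}^s
        return tampered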
For the constructions in Section 3.1, we consider a slightly stronger tampering class called Bounded State Tampering with auxiliary information, F^{s,auxi}_BST. They are BST functions that can be described as follows:

f_i(m_1, ..., m_i) := g_i(auxi_i, m_i, h_{i−1}(auxi_{i−1}, m_{i−1}, h_{i−2}(auxi_{i−2}, m_{i−2}, ...))).    (3')

For the constructions in Section 3.2, we consider a weak Bounded State Tampering class F^s_weakBST, where the state size bound s is effective for the current message as well as past messages. The class F^s_weakBST is strictly weaker than F^s_BST. The tampering access of the F^s_weakBST adversary to a message is bounded by the number of values it is allowed to use to replace the original message. Suppose the message is a vector in the space F^u. The admissible current message tampering functions over F^u are given by

{g_j : F^u → F^u  |  |Range(g_j)| ≤ 2^s},

where Range(g_j) denotes the range of the function g_j.

Definition 8. The class F^s_{SS,θ} of independent tampering functions for a secret sharing scheme over F^u, induced by a θ-bounded (corrupting at most θ servers) adversary of an MPC protocol with F^s_weakBST channel tampering access, is defined as follows. Any function f ∈ F^s_{SS,θ} is written as

f : (F^u)^n → (F^u)^n,    f = (f_1, ..., f_n),

where at most θ components f_i : F^u → F^u are arbitrary functions and the rest of the n − θ components f_i : F^u → F^u should satisfy |f_i(F^u)| ≤ 2^s.
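To see what a range-bounded component of Definition 8 looks like, the sketch below builds one admissible tampering function for a single honest-channel message in F_p^u: the message is read only through an s-bit digest and then overwritten by one of at most 2^s fixed vectors (selective overwriting). The digest (a sum modulo 2^s) and the table of replacement vectors are arbitrary illustrative choices, only intended for small s.

import secrets

def make_range_bounded_tamper(s_bits: int, u: int, p: int):
    # Precompute at most 2^s replacement vectors; the returned function has
    # range of size at most 2^s, as required of the n - θ honest components.
    table = {b: [secrets.randbelow(p) for _ in range(u)]
             for b in range(2 ** s_bits)}
    def tamper(message):
        digest = sum(message) % (2 ** s_bits)   # h: an s-bit function of the message
        return table[digest]                    # g: depends on the digest only
    return tamper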

3.1 Feasibility Results with Strong Tampering


Consider an interactive protocol Π executed by Alice and Bob on a communica-
tion channel that is partially controlled by an adversary Eve, whose capability is
characterised by a set F of tampering functions, from which the adversary can
arbitrarily choose one to tamper with all the messages communicated between
Alice and Bob. At the end of protocol Π, Alice and Bob privately (locally) generate K_A, K_B ∈ {0,1}^k ∪ {⊥} from their respective views. For a function f ∈ F, let view^f_Adv denote the adversary's view when f is chosen to tamper with the execution. Given a string v ∈ {0,1}^k ∪ {⊥}, let purify(v) be ⊥ if v = ⊥, and otherwise replace v ≠ ⊥ by a fresh k-bit random string: purify(v) ← U_k. Intuitively, we want the guarantee that the uniform keys generated by Alice and Bob are either the same or independent of each other, and, in all cases, independent of the view of the tampering adversary.
Definition 9 (implicit in [FGJ+19]). An interactive protocol Π executed by Alice and Bob is an ε-non-malleable key exchange (ε-NM-KE) protocol with respect to F if it satisfies the following properties.
1. The keys K_A, K_B are close to uniform conditioned on the adversary's view:

SD( (K_A, view^f_Adv); (U_k, view^f_Adv) ) ≤ ε;    SD( (K_B, view^f_Adv); (U_k, view^f_Adv) ) ≤ ε.

2. If the adversary chooses a function f ∈ F that does not alter any message (we say the adversary is passive in this case), then

Pr[K_A = K_B ∧ K_A ≠ ⊥ ∧ K_B ≠ ⊥] = 1.

3. If the adversary chooses a function f ∈ F that alters messages (we say the adversary is active in this case), then there exists a probability p_f such that

Pr[K_A = K_B ∧ K_A ≠ ⊥ ∧ K_B ≠ ⊥] = p_f,

and (when K_A and K_B are not equal) at least one of the following must hold:

SD( (K_A, view^f_Adv, K_B); (purify(K_A), view^f_Adv, K_B) ) ≤ ε;
SD( (K_A, view^f_Adv, K_B); (K_A, view^f_Adv, purify(K_B)) ) ≤ ε.
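The purify operation used in these properties has a direct operational reading; a tiny Python version (with BOT standing in for ⊥) is included here only for reference.

import secrets

BOT = None  # stands in for the abort symbol ⊥

def purify(v, k: int):
    # purify(v): keep ⊥ as ⊥, otherwise replace v by a fresh uniform k-bit string.
    return BOT if v is BOT else secrets.token_bytes(k // 8)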

The rate of a non-malleable key exchange is the ratio k/|Trans^Π|, where Trans^Π denotes the transcript of the protocol in the case that no abort occurs. A rate 0 < ρ < 1 is achievable by NM-KE with respect to F if there exist ε-non-malleable key exchange protocols with respect to F with rate approaching ρ and ε going to zero as the transcript size grows.
The high level idea of the construction in Theorem 1 is to encode the com-
munication between each pair of parties using an INMC with respect to bounded
state tampering. In order to argue non-malleability in the joint tampering model,
which is a natural extension as we move from interactive coding to multi-party
coding, we need a careful analysis of how the auxiliary information available at
the tampering of one message can affect the execution.
Theorem 1. Let Π be an MPC protocol that (t, ε)-securely computes a circuit C with r rounds and a communication complexity of Σ bits. Let Π_NM-KE be the r_NM-KE-round ε-non-malleable key exchange protocol with respect to F^s_BST-channel tampering with rate ρ_NM-KE [FGJ+19]. Let MAC : {0,1}^{2λ} × {0,1}^λ → {0,1}^λ be a 2^{-λ}-secure information theoretic message authentication code, where λ is a big enough constant (assume each message transmitted in Π is also of length λ). There is an (r + r_NM-KE + 1)-round MPC protocol Π_NM that (t, F^s_BST, O(ε, 2^{-λ}))-non-malleably computes the circuit C in the joint tampering model, which has communication complexity (3/ρ_NM-KE + 1)Σ. In particular, Π_NM is a detection NM-MPC.
Proof. The NM-MPC protocol ΠNM is described as follows.


1. Key generation phase.
Each server S_i runs a key exchange Π_KE with every other server S_i′ to generate a key key_{i,i′} of length |key_{i,i′}| = r_{i,i′} · 3λ, where r_{i,i′} denotes the number of messages exchanged between the two parties, and splits the key key_{i,i′} into substrings

    key_{i,i′} = MACkey^1_{i,i′} || OTPkey^1_{i,i′} || . . . || MACkey^{r_{i,i′}}_{i,i′} || OTPkey^{r_{i,i′}}_{i,i′},

where |MACkey^j_{i,i′}| = 2λ and |OTPkey^j_{i,i′}| = λ for j = 1, . . . , r_{i,i′}.
Each server S_i runs a key exchange Π_KE with every client C_i′ (considered as the (n + i′)-th party) to generate a key key_{i,n+i′} of length |key_{i,n+i′}| = r_{i,n+i′} · 3λ, where r_{i,n+i′} denotes the number of messages exchanged between the two parties, and splits the key key_{i,n+i′} into substrings

    key_{i,n+i′} = MACkey^1_{i,n+i′} || OTPkey^1_{i,n+i′} || . . . || MACkey^{r_{i,n+i′}}_{i,n+i′} || OTPkey^{r_{i,n+i′}}_{i,n+i′},

where |MACkey^j_{i,n+i′}| = 2λ and |OTPkey^j_{i,n+i′}| = λ for j = 1, . . . , r_{i,n+i′}.


In this phase, all (n−t)(n−t−1+2m)/2 independent copies of ΠKE are run in
parallel. Since different copies of ΠKE use independent randomness, the non-
malleability of the keys against joint tampering adversary reduces to the
non-malleability against the independent tampering adversary. Moreover,
since the tampering class F is round number insensitive, combining the key
exchange phase with the protocol evaluation phase does not affect the non-
malleability of the keys.

2. Protocol evaluation phase.
– If the jth round is not a round of input sharing gate for Π,
(a) When it is an honest server S_i's turn to send messages, the party invokes the next-message function nextMSG^j_i of Π to compute

    (m^j_{i,1}, . . . , m^j_{i,n}) := nextMSG^j_i(R^j_i, view^Π_i).

S_i appends the local randomness R^j_i and the message m^j_{i,i} to the party's own view view^Π_i.
(b) Next, for every receiver S_i′ of S_i in the j-th round of Π, the party S_i computes the one-time pad encryption as well as the authentication tag

    c^j_{i,i′} = m^j_{i,i′} ⊕ OTPkey^{j′}_{i,i′}   and   t^j_{i,i′} = MAC(MACkey^{j′}_{i,i′}, c^j_{i,i′}),

where j′ denotes that this is the j′-th message transmitted between S_i and S_i′. The masked messages together with their corresponding tags are then transmitted in place of the plain messages (see the sketch after this protocol description).
(c) When every server that sends messages in the j-th round of Π completes the transmission, each honest server S_i′ verifies the tag of the received message (c^j_{i,i′}, t^j_{i,i′}) from each server S_i using the corresponding authentication key MACkey^{j′}_{i,i′} and, if

    Vf(MACkey^{j′}_{i,i′}, c^j_{i,i′}, t^j_{i,i′}) = 1,   decrypts   m^j_{i,i′} = c^j_{i,i′} ⊕ OTPkey^{j′}_{i,i′},

and finally appends m^j_{i,i′} to the party's view view^Π_{i′}. If the protocol is executed under tampering, then the MAC key of server S_i′ may be different from the MAC key of server S_i, or the messages received by server S_i′ may no longer be equal to (c^j_{i,i′}, t^j_{i,i′}), and the verification may fail, causing the server S_i′ to abort.
– If the jth round is a round of input sharing or output reconstruction
gate for Π, do the following.
(a) When it is a client C_i's (considered as the (n + i)-th party) turn to send messages, the party invokes nextMSG^j_{n+i} of Π to compute

    (m^j_{n+i,1}, . . . , m^j_{n+i,n}) := nextMSG^j_{n+i}(R^j_{n+i}, view^Π_{n+i}).

(b) Next, for every receiver S_i′ of C_i in the j-th round of Π, the party C_i computes the one-time pad encryption as well as the authentication tag

    c^j_{n+i,i′} = m^j_{n+i,i′} ⊕ OTPkey^{j′}_{i′,n+i}   and   t^j_{n+i,i′} = MAC(MACkey^{j′}_{i′,n+i}, c^j_{n+i,i′}),

where j′ denotes that this is the j′-th message transmitted between S_i′ and C_i. The masked messages together with their corresponding tags are then transmitted instead of the plain messages.

(c) When every client that sends messages in the j-th round of Π completes the transmission, each honest server S_i′ verifies the tag of the received message (c^j_{n+i,i′}, t^j_{n+i,i′}) from each client C_i using the corresponding authentication key MACkey^{j′}_{i′,n+i} and, if

    Vf(MACkey^{j′}_{i′,n+i}, c^j_{n+i,i′}, t^j_{n+i,i′}) = 1,   decrypts   m^j_{n+i,i′} = c^j_{n+i,i′} ⊕ OTPkey^{j′}_{i′,n+i},

and finally appends m^j_{n+i,i′} to the party's view view^Π_{i′}. If the protocol is executed under tampering, then the MAC key of server S_i′ may be different from the MAC key of client C_i, or the messages received by server S_i′ may no longer be equal to (c^j_{n+i,i′}, t^j_{n+i,i′}), and the verification may fail, causing the server S_i′ to abort.
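To make the per-message hardening concrete, the following is a minimal sketch (not part of the protocol specification above) of how a single λ-bit message is wrapped and unwrapped: the sender one-time-pads the message with OTPkey and tags the ciphertext with MACkey, and the receiver verifies the tag before decrypting and aborts on failure. The affine MAC over a large prime field is used here only as a stand-in for the 2^{−λ}-secure information-theoretic MAC assumed in Theorem 1; all parameter names and sizes are illustrative.

    # Minimal sketch of the OTP + MAC wrapping of one transmitted message (illustrative).
    import secrets

    LAMBDA = 64                      # message / OTP length in bits
    Q = 2**127 - 1                   # prime modulus for the one-time affine MAC

    def keygen():
        # per-message key material: MAC key (a, b) and a LAMBDA-bit OTP key,
        # stand-ins for MACkey^{j'} and OTPkey^{j'} of the key generation phase
        return (secrets.randbelow(Q), secrets.randbelow(Q)), secrets.randbits(LAMBDA)

    def send(m, mackey, otpkey):
        a, b = mackey
        c = m ^ otpkey               # one-time pad encryption
        t = (a * c + b) % Q          # authentication tag on the ciphertext
        return c, t

    def receive(c, t, mackey, otpkey):
        a, b = mackey
        if (a * c + b) % Q != t:     # Vf fails: the receiver aborts
            return None
        return c ^ otpkey            # decrypt

    mackey, otpkey = keygen()
    c, t = send(0xdeadbeef, mackey, otpkey)
    assert receive(c, t, mackey, otpkey) == 0xdeadbeef
    assert receive(c ^ 1, t, mackey, otpkey) is None    # a tampered ciphertext is rejected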
We define the non-malleability simulator Sim for ΠNM . We begin with sim-
ulating the corrupted parties’ view. In the key exchange phase, the corrupted
parties receive two uniform messages from each honest party (the length depends
on how many messages are to be communicated between a corrupted party and
the honest party during execution of the underlying MPC protocol). The cor-
rupted parties' view after the key exchange phase is simulated with the help of the simulator of the underlying MPC protocol (before it is hardened for NM) until the point where honest parties abort. Here the NM-MPC simulator first invokes the MPC simulator to obtain the corrupted parties' "plain-text" view and then uses the keys generated in the key exchange phase to encrypt (OTP and MAC) the "plain-text" view. Whenever an honest party aborts before completing his/her role in the protocol execution, the NM-MPC simulator stops calling the MPC simulator (because in the real world, the honest party stops sending messages). We now discuss how to simulate whether or not honest parties abort and, if they do, when exactly each of them aborts. The NM-MPC simulator uses the message topology of the underlying MPC protocol (before it is hardened for NM) to proceed with the simulation. It starts with the first round in the message topology in which we have an honest sender and does the following for each of his/her honest receivers.
In the case a pair of honest parties have different keys (according to the cor-
responding NM-KE simulator), the honest receiver aborts. Here the NM-MPC
simulator invokes several copies of NM-KE simulators, each NM-KE simulator
for one honest receiver, which simulates whether the sender and receiver have
the same key according to the corresponding tampering function provided by
the adversary. In the case when the keys of a pair of honest parties are the same, the NM-MPC simulator decides whether or not the receiver should abort according to whether or not there is tampering at the transmission from the sender to the receiver (the adversary provides this information to the NM-MPC simulator). In this way, following the message topology, the next time an honest party is the sender, if the sender has already aborted in previous rounds, the NM-MPC simulator lets all honest receivers in the current round abort; if the sender has not aborted in previous rounds, the NM-MPC simulator repeats calling the NM-KE simulator (once all pairs of honest parties have been checked, it can skip calling the NM-KE simulator in the future) and the adversary to decide whether or not each receiver should abort. The fixed message topology agreed on by all parties beforehand allows
the simulation to match the real protocol execution. Note that it is possible that
some honest parties may not abort if only their last messages are tampered with.
Without loss of generality, we describe the simulation of an honest party's output for a single-output circuit C. If an honest party aborts before completing his/her role in the full MPC protocol execution, then, when he/she is the sender in a round after he/she aborts, the parties expecting his/her messages will abort. In this way, more and more parties abort until the party (the reconstructor) who is supposed to compute the final output aborts. This leads to a ⊥ output of the NM-MPC simulator.
It remains to consider the case when the last message (the sender has no further
role in the execution after the current round) of an honest party is tampered
with. In the honest majority MPC protocols, these rounds are collectively called
the “last message round”, where each party sends a share to the reconstructor,
who reconstructs the final output value. If tampering happens when an honest
party is sending his/her share to the reconstructor, he/she will not abort. But
according to the hardened protocol the reconstructor verifies the last message of
each party before using a subset of them for reconstruction, hence would catch
a tampering and abort. To summarize, the NM-MPC simulator will output ⊥ with a probability computed from the tampering functions, or a value obtained from the trusted party. The concrete simulation algorithm is given below, followed by its analysis.
In the first step, the non-malleability simulator reads the sequence f_{Π(C),S∪C} of channel tampering functions for honest parties, extracts the tampering functions for the rounds corresponding to Π_KE, and samples the outcome of the key exchange phase. If all keys are correctly generated, it moves on to the next step; otherwise it outputs ⊥. Let p_1 be the probability that Sim continues. In the second step, the non-malleability simulator extracts the tampering functions for the remaining rounds and applies them to uniform messages. In this step, it computes the probability p_2 that all tampering functions fix all messages simultaneously. This defines, up to this point, the distribution D_{f_{Π(C),S∪C}}, which is supported on {⊥, same*}. In the third step, the non-malleability simulator invokes the simulator of Π on the adversary strategy Adv to obtain either ⊥ or an output provided by the trusted party. In the first step, the tampering functions receive messages of other parallel executions of Π_NM as auxiliary information in the joint tampering model. Note that, in particular, the Π_NM considered here sends independent uniform strings in each round and there is no dependence across parallel executions, so this auxiliary information does not affect the security of the generated keys. In the second step, since all messages are masked by the keys generated in the first step, independence of the auxiliary information in the joint tampering model reduces to the security of the keys. In the third step, if the real-world protocol did not abort up to this point, then with overwhelming probability the adversary did not tamper with the secure point-to-point channels, and the analysis reduces to the corruption-only adversary case.

3.2 Optimal Efficiency with Weak Tampering

Theorem 2. Let c_s = (2^s sin(π/2^s)) / (p sin(π/p)) < 1 (when 2^s < p). Let C : F_p × . . . × F_p → F_p be an m-input functionality. There is an m-client n-server MPC protocol Π_NM (based on a secret sharing scheme (Share, Recover) over F^u_p) that (θ, F^s_weakBST, O(m, n − θ) · u^2 · 2^s · c_s^{n/2−θ−1} + ε)-non-malleably computes C in the independent tampering model. Moreover, the protocol Π_NM has the same asymptotic communication complexity as the best passively secure MPC.

The protocol ΠNM in Theorem 2 is a special instantiation of the secure (with


abort) MPC construction proposed in [GIP+ 14] (achieving same asymptotic
communication complexity as best passive security MPC) that adds one more
layer of algebraic structure to the construction for achieving non-malleability against F^s_weakBST-tampering. This algebraic structure comes from the choice of a finite field F_p of large prime order p, which in effect turns the Shamir secret sharing scheme over a field of characteristic p into an NM-SS with respect to the tampering class F^s_{SS,θ} in Definition 8 (this explains the constant c_s in the statement). Since our construction uses the construction of [GIP+14] in a black-box
manner, we will only mention here that the construction is based on a redundant
dense linear secret sharing scheme (see Appendix B for the definition of linear-
based MPC, which captures the typical structure of many information-theoretic
secure MPC protocols such as the BGW protocol [BGW88,AL17] and the DN
protocol [DN07]). For completeness, we include a brief introduction of the full
construction of [GIP+ 14] in Appendix C. The proof of Theorem 2 uses nota-
tions introduced in Appendix C, hence is given in Appendix D. In the rest of
this subsection, we discuss the construction of NM-SS with linear reconstruction
function, which is the core of the technicality behind Theorem 2.

Properties of Shamir’s scheme over large prime fields


With the application to securing general purpose MPC protocols against pas-
sive implementation attacks in mind, the pioneering work [BDIR18] and var-
ious follow-up works [BDIR21,MPSW21,MNP+ 21] analyze a special property
of Shamir’s scheme over large prime fields that makes them Leakage-Resilient
Secret Sharing (LR-SS) schemes. In order to give a unified exposition of these
recent findings for both passive and active implementation security applications
(leakage functions and tampering functions have different output length), we
describe a function f = (f1 , . . . , fn ) (here f can be an s-bit leakage function
or an s-BST function) in an abstract fashion using a sequence of partitions
P = (P1 , . . . , Pn ), where each partition Pi is defined by the pre-image sets of all
elements in the range of fi . We continue with this abstract characterization of
functions and capture the state size bound of BST as follows (here we deliber-
ately use parameter k instead of n to emphasize that it is possible that only a
subset of the n components of f appear in the analysis).
Definition 10. Let P = (P_1, . . . , P_k) be a sequence of partitions of F_p. We say that the partition sequence P is s-bounded if each partition P_i divides F_p into no more than 2^s parts.

When P = (P_1, . . . , P_k) is s-bounded, we simply write (in the case when a partition contains strictly fewer than 2^s subsets, we pad with empty sets at the end)

    P = (P_1^1 ∪ . . . ∪ P_1^{2^s}, . . . , P_k^1 ∪ . . . ∪ P_k^{2^s}),

where for each i ∈ [k] the subsets P_i^1, . . . , P_i^{2^s} are disjoint and P_i^1 ∪ . . . ∪ P_i^{2^s} = F_p. In this way, an element y = (y_1, . . . , y_k) in the range of f = (f_1, . . . , f_k) is labeled by a tuple (j_1, . . . , j_k) ∈ [2^s]^k, and f(x) = y means (x_1 ∈ P_1^{j_1}) ∧ . . . ∧ (x_k ∈ P_k^{j_k}).
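As an illustration of this partition view, the following minimal sketch (not from the paper) builds the partition induced by a single component function f_i with |Range(f_i)| ≤ 2^s over F_p; the concrete function and parameters are arbitrary examples.

    # Minimal sketch: the partition of F_p induced by a bounded-range function f_i.
    from collections import defaultdict

    p, s = 13, 2

    def f_i(x):
        return x % 3                 # example function with |Range(f_i)| = 3 <= 2^s

    partition = defaultdict(set)     # partition[j] plays the role of P_i^j
    for x in range(p):
        partition[f_i(x)].add(x)

    assert len(partition) <= 2**s                               # s-bounded
    assert set().union(*partition.values()) == set(range(p))    # the parts cover F_p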
Using the connection between linear secret sharing schemes and linear MDS codes (see Section 2), the problem is reduced to the study of the distribution of the random variable f(C), where C ← C is sampled uniformly from a linear MDS code C. The Fourier analysis approach transforms the probability of f(C) taking a certain value into a sum concerning products of Fourier coefficients, and then, through bounding the Fourier coefficients, analyses the probability distribution of f(C). The probability of f(C) taking the value labeled by (j_1, . . . , j_k) is computed by summing, over the codeword space C, the probability of C = (c_1, . . . , c_k) and (c_1 ∈ P_1^{j_1}) ∧ . . . ∧ (c_k ∈ P_k^{j_k}), which is either 0 or 1/|C|. Using the Poisson Summation Formula, the sum over the linear space C is transformed into a sum over the dual space C^⊥ concerning products of Fourier coefficients. Consider another random variable f(U), where U ← F_p^k is sampled uniformly from the full space F_p^k. The probability of f(U) taking the same value labeled by (j_1, . . . , j_k) (a sum over F_p^k) is transformed using the Poisson Summation Formula into a sum over the dual space (F_p^k)^⊥ = {0}, which is the zero space consisting of the all-0 vector. This means that the difference of the probabilities of the random variables f(C) and f(U) taking the same value labeled by (j_1, . . . , j_k) is expressed as a sum over C^⊥ \ (F_p^k)^⊥ = C^⊥ \ {0} of products concerning Fourier coefficients (the quantity inside the | · | below), and hence the statistical distance between f(C) and f(U) is expressed as follows.

    SD(f(C); f(U)) = (1/2) · Σ_{(j_1,…,j_k) ∈ [2^s]^k} | Σ_{(α_1,…,α_k) ∈ C^⊥\{0}} Π_{i∈[k]} 1̂_{P_i^{j_i}}(α_i) |.

Applying the triangle inequality and rearranging, we have the upper bound

    SD(f(C); f(U)) ≤ (1/2) · Σ_{(α_1,…,α_k) ∈ C^⊥\{0}} Π_{i∈[k]} ( Σ_{j_i ∈ [2^s]} |1̂_{P_i^{j_i}}(α_i)| ).    (4)

It is shown (see [BDIR21, Lemma 4.16]) that the quantity inside the (·) in (4) can be bounded as follows:

    Σ_{j_i ∈ [2^s]} |1̂_{P_i^{j_i}}(α_i)| ≤ c_s,  if α_i ≠ 0;
    Σ_{j_i ∈ [2^s]} |1̂_{P_i^{j_i}}(α_i)| = 1,   if α_i = 0,

where the constant c_s is defined and bounded as follows (see [BDIR21, Lemma 4.10]):

    c_s = (2^s sin(π/2^s)) / (p sin(π/p)) ≤ 1 − 2^{−2s},   1 ≤ s ≤ log p − 1.    (5)

Now, for a given s, through choosing a big enough dimension of C, one can make the sum over C^⊥ \ {0} of the products concerning Fourier coefficients (each product in (4) is upper bounded by c_s^{d^⊥}, where d^⊥ denotes the minimum distance of C^⊥) smaller than a given error parameter. There is, however, an undesirable dependence on the cardinality |C^⊥ \ {0}| of the dual code space (|C^⊥ \ {0}| increases as p increases), which can be removed (through a more sophisticated analysis), yielding the bound in Lemma 1.

Definition 11. Let MDS[k, k−1, 2]_p be an MDS code over alphabet F_p with code parameter [k, k − 1, 2]. Let C ← MDS[k, k − 1, 2]_p denote a random codeword of MDS[k, k − 1, 2]_p chosen uniformly from the codebook. Let U ← F_p^k be the random variable uniformly distributed over F_p^k. We say that C ← MDS[k, k − 1, 2]_p is ε-indistinguishable from uniform by s-bounded partitions if for any s-bounded

    P = (P_1^1 ∪ . . . ∪ P_1^{2^s}, . . . , P_k^1 ∪ . . . ∪ P_k^{2^s}),

    (1/2) · Σ_{(j_1,…,j_k) ∈ [2^s]^k} | Π_{i=1}^{k} Pr[C_i ∈ P_i^{j_i}] − Π_{i=1}^{k} Pr[U_i ∈ P_i^{j_i}] | ≤ ε.

Lemma 1 ([BDIR21], Theorem 4.6 with n = k and t = k − 1). Let c_s = (2^s sin(π/2^s)) / (p sin(π/p)) < 1 (when 2^s < p). The random variable C ← MDS[k, k−1, 2]_p is ε-indistinguishable from uniform by s-bounded partitions for ε = (1/2) · 2^s · (c_s + 2^{−2s−1})^{k−2}.
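For a rough sense of the quantities involved, the following minimal numeric sketch (not from the paper) evaluates the constant c_s of (5) and the error ε of Lemma 1 for some example parameters; the chosen prime and code length are illustrative only.

    # Minimal numeric sketch of c_s and the Lemma 1 error (illustrative parameters).
    import math

    def c_s(s, p):
        return (2**s * math.sin(math.pi / 2**s)) / (p * math.sin(math.pi / p))

    def lemma1_error(s, p, k):
        return 0.5 * 2**s * (c_s(s, p) + 2.0 ** (-2 * s - 1)) ** (k - 2)

    p = 2**31 - 1                                    # an example large prime
    for s in (1, 2, 8):
        assert c_s(s, p) <= 1 - 2.0 ** (-2 * s)      # the bound stated in (5)
        print(s, c_s(s, p), lemma1_error(s, p, k=200))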

Multiplicative NM-SS by overcoming n/2 threshold barrier


In terms of explicit parameters, the LR-SS schemes obtained in this line of works [BDIR18,BDIR21,MPSW21,MNP+21] can only have reconstruction threshold 0.8675n [MPSW21] and 0.85n [BDIR21] for Shamir's secret sharing scheme. Having a large reconstruction threshold means that they can only be applied to restricted types of MPC protocols. Secure multiplication of two secrets requires a secret sharing reconstruction threshold k < n/2, even against a passive adversary.
The reason why we can overcome the n/2 threshold barrier originates from
the distinction between security notions against passive attacks and active at-
tacks (in this case, LR-SS v.s. NM-SS). Although tampering adversaries are in
general more difficult to handle, there is one aspect in the security requirement
for tampering attacks that could be manipulated in the designer’s favour. Let
f ∈ F be an attack function (be it a leakage function or a tampering function).
The notion of leakage-resilience requires that f(Share(s_0)) ∼ f(Share(s_1)) for any pair of distinct secrets s_0 and s_1. The notion of non-malleability, roughly speaking, requires that Recover_R(f(Share(s_0))) ∼ Recover_R(f(Share(s_1)))², for any reconstruction set R. The aspect that designers can exploit is the fact that
although f (Share(s0 )) and f (Share(s1 )) may have distinguishable distributions
(over the randomness of the sharing algorithm), it is still possible that the trans-
formation RecoverR (·) turns them into indistinguishable distributions. This is
exactly what we exploit to prove that Shamir's secret sharing scheme with reconstruction threshold below n/2 can be an NM-SS.

² This is called strong non-malleability in [DPW18].
s s
Theorem 3. Let cs = 2p sin(π/p)
sin(π/2 )
< 1 (when 2s < p). Shamir’s secret sharing
s
scheme over Fp with privacy threshold t is FSS,θ -NM with error ε = 12 · 2s · (cs +
2−2s−1 )t−θ−1 .

Proof. We first prove the theorem for the special case of θ = 0 using Lemma 1 and then show a reduction of the θ > 0 case to an instance of the θ = 0 case with shortened code length.
Assume θ = 0. Let f = (f_1, . . . , f_n) ∈ F^s_{SS,θ} be a secret sharing tampering function. In particular, let Range(f_i) = {c̃_i^1, . . . , c̃_i^{2^s}}, i = 1, . . . , n. In the case when |Range(f_i)| < 2^s, we pad with values not in Range(f_i) and let the pre-image sets of the padded values be empty. Let P_i = (P_i^1, . . . , P_i^{2^s}) be defined by the pre-image sets under f_i of c̃_i^1, . . . , c̃_i^{2^s}. By definition, we have

    Pr[C_i ∈ P_i^{j_i}] = Pr[f_i(C_i) = c̃_i^{j_i}],  i = 1, . . . , n;
    Pr[U_i ∈ P_i^{j_i}] = Pr[f_i(U_i) = c̃_i^{j_i}],  i = 1, . . . , n.

According to the linearity of Shamir's secret sharing scheme, there is a constant vector ∆_s ∈ F_p^n such that Share(s) = Share(0) + ∆_s. Now, for any reconstruction set R = {i_1, . . . , i_{t+1}}, define the simulation

    D_{f,R} = {  U ← F_p^n;
                 ṽ_R ← ( f_{i_1}(U_{i_1} + ∆s_{i_1}), . . . , f_{i_{t+1}}(U_{i_{t+1}} + ∆s_{i_{t+1}}) );
                 s̃ = Recover(ṽ_R, R);
                 Output s̃  },

where the distributions of U_{i_1} + ∆s_{i_1}, . . . , U_{i_{t+1}} + ∆s_{i_{t+1}} are in fact independent of the secret s. On the other hand, the real tampering experiment is

    Tamper_s^{f,R} = {  v + ∆_s ← Share(0) + ∆_s;
                        ṽ_R = ( f_{i_1}(v_{i_1} + ∆s_{i_1}), . . . , f_{i_{t+1}}(v_{i_{t+1}} + ∆s_{i_{t+1}}) );
                        s̃ = Recover(ṽ_R, R);
                        Output s̃  }.

Firstly, the offset ∆_s that appears in both D_{f,R} and Tamper_s^{f,R} can be dropped without affecting SD(Tamper_s^{f,R}; D_{f,R}). Secondly, Share(0)_R has the same distribution as C ← MDS[t+1, t, 2]_p, where MDS[t+1, t, 2]_p is defined by puncturing the components in [n]\R in the MDS code corresponding to the support of Share(0). Finally, we invoke Lemma 1 to claim

    SD(Tamper_s^{f,R}; D_{f,R})
      ≤ (1/2) · Σ_{(j_{i_1},…,j_{i_{t+1}}) ∈ [2^s]^{t+1}} | Π_{k=1}^{t+1} Pr[C_k ∈ P_{i_k}^{j_{i_k}}] − Π_{k=1}^{t+1} Pr[U_{i_k} ∈ P_{i_k}^{j_{i_k}}] |
      ≤ (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−1},

where the first inequality is due to the fact that we are bounding the statistical
distance between the outputs of Recover(·, R) using that of the inputs.
In the case when θ > 0, let Θ ⊂ [n] with |Θ| = θ denote the set of shares that are arbitrarily tampered with by f. A simple observation to begin with is that if R ∩ Θ = ∅, the arguments above for the θ = 0 case still go through without any modification. It is only when R ∩ Θ ≠ ∅ that we need adjustments. The tampering at the shares in R ∩ Θ is not s-bounded. We will exclude these shares from the analysis and construct a new instance where we can apply Lemma 1. It suffices to bound the statistical distance for the worst-case scenario Θ ⊂ R.
We have used the fact that Share(0)_R has the same distribution as C ← MDS[t + 1, t, 2]_p. Now, conditioned on Share(0)_Θ = w for a constant vector w ∈ F_p^θ, the distribution

    Share(0)_R | (Share(0)_Θ = w)  ≡  C | (C_{Θ′} = w),

where Θ′ denotes the corresponding indices in [t + 1] of Θ in R ⊂ [n]. Using properties of linear MDS codes, the support of the random variable C | (C_{Θ′} = 0^θ) is a sub-code of MDS[t + 1, t, 2]_p of dimension t − θ and minimum distance 2. Since the components in Θ′ of the codewords in this sub-code are all zeros, the components in [t + 1]\Θ′ form a linear code of a shorter length t + 1 − θ, which is also MDS. We then have

    C_{[t+1]\Θ′} | (C_{Θ′} = 0^θ)  ≡  C′,   C′ ← MDS[t + 1 − θ, t − θ, 2]_p.

Finally, the support of C | (C_{Θ′} = w) is a coset of MDS[t + 1, t, 2]_p with respect to the (sub-space) support of C | (C_{Θ′} = 0^θ). There is a codeword c^w of MDS[t + 1, t, 2]_p such that

    C_{[t+1]\Θ′} | (C_{Θ′} = w)  ≡  C′ + c^w_{[t+1]\Θ′},   C′ ← MDS[t + 1 − θ, t − θ, 2]_p.

On the other hand, we trivially have

    U_{R\Θ} | (U_Θ = w)  ≡  U′ + c^w_{[t+1]\Θ′},   U′ ← F_p^{t+1−θ}.

Similar to the θ = 0 case, the offset c^w_{[t+1]\Θ′} can be dropped when computing statistical distance. Invoking Lemma 1 on the code MDS[t + 1 − θ, t − θ, 2]_p, we have

    SD( (Tamper_s^{f,R} | (Share(s)_Θ = w)); (D_{f,R} | (U_Θ = w)) ) ≤ (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−θ−1}.
The claim of the theorem follows from averaging over all w ∈ Fθp .
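The first step of the proof relies on the linearity Share(s) = Share(0) + ∆_s; for Shamir's scheme, the offset ∆_s when the same randomness is used is simply the constant vector (s, . . . , s). The following minimal sketch (not from the paper) checks this for a toy instantiation with illustrative parameters.

    # Minimal sketch of the linearity Share(s) = Share(0) + Delta_s for Shamir's scheme.
    import random

    p, t, n = 1021, 3, 10

    def shamir_share(secret, coeffs):
        # evaluate secret + coeffs[0]*X + ... + coeffs[t-1]*X^t at points 1..n
        poly = [secret] + coeffs
        return [sum(c * pow(i, e, p) for e, c in enumerate(poly)) % p
                for i in range(1, n + 1)]

    s = 7
    coeffs = [random.randrange(p) for _ in range(t)]    # the same sharing randomness
    share_s = shamir_share(s, coeffs)
    share_0 = shamir_share(0, coeffs)
    delta_s = [(a - b) % p for a, b in zip(share_s, share_0)]
    assert delta_s == [s] * n                           # Delta_s = (s, ..., s)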

In order to give some idea of what Theorem 3 implies, we consider a few


extreme parameter settings. Since the constant c_s satisfies c_s ≤ 1 − 2^{−2s}, the error bound in Theorem 3 becomes

    ε = (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−θ−1} ≤ (1/2) · 2^s · (1 − 2^{−2s−1})^{t−θ−1}.

One extreme is the maximum state bound s = ⌊log p − 1⌋. The above bound means that once we fix s = ⌊log p − 1⌋ to a constant, the non-malleability error vanishes exponentially fast in t − θ, which implies that non-malleability is possible even for state bounds close to log p − 1.³

³ Here the field size p and t are not independent, due to the fact that we are using an [n, t + 1, n − t]_p MDS code in the construction.
Another extreme, in the opposite direction, is the minimum state bound s = 1. We want to know how small we can choose t and p while still having a reasonable indistinguishability error ε. We substitute concrete values and estimate that, for a 10-bit prime p (log p = 10), choosing n = p − 1 and t = 300 (approximately n/3) allows for ε = 2^{−50} against up to θ = 125 fully tampered shares.
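The following minimal numeric sketch (not from the paper) recomputes the Theorem 3 bound for roughly these parameters; the prime 1021 is chosen here as one example of a 10-bit prime, and the computed bound comfortably meets the 2^{−50} target.

    # Minimal numeric check of the Theorem 3 error for the parameters discussed above.
    import math

    def c_s(s, p):
        return (2**s * math.sin(math.pi / 2**s)) / (p * math.sin(math.pi / p))

    def theorem3_error(s, p, t, theta):
        base = c_s(s, p) + 2.0 ** (-2 * s - 1)
        return 0.5 * 2**s * base ** (t - theta - 1)

    p = 1021                       # an example 10-bit prime
    s, t, theta = 1, 300, 125
    eps = theorem3_error(s, p, t, theta)
    print(math.log2(eps))          # roughly -68, well below the -50 target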

Obtaining constant threshold or share size via Monte-Carlo


There are application scenarios that require certain parameters of the under-
lying secret sharing scheme to be fixed to a constant, without being affected
by the choice of other parameters. We first discuss the constant reconstruction
threshold scenario to complement the results in previous subsection, where the
non-malleability error crucially relies on a big enough reconstruction threshold.
Intuitively, finding tighter bounds for (4) leads to more flexible choice of pa-
rameters. Note that the bounds proved in [BDIR21] hold for any linear MDS
codes, which characterises the worst case performance of linear MDS codes. It
is natural to ask whether there are some linear MDS codes that provide better
bounds than others. This idea was exploited in [MPSW21,MNP+ 21] to yield
Monte-Carlo constructions of linear secret sharing schemes. Such schemes con-
tain a randomised step in the generation of the sharing algorithm (and cor-
responding recovering algorithm), after which they function the same way as
deterministic schemes where the sharing algorithm and recovering algorithm
are fixed from the beginning. The randomised process of generating the shar-
ing algorithm (and corresponding recovering algorithm) is efficient, but there
is no efficient way to verify whether the generated algorithms do provide the
desired security guarantee. The meaningfulness of such schemes relies on the fact that a random choice successfully leads to working algorithms with overwhelming probability (so one could as well skip the inefficient verification). The results of [MPSW21,MNP+21] are shown in the setting of large alphabet size (n = poly(log p)) and are motivated by providing a better bound for (4) (for the majority of the respective set of linear MDS codes) while minimising the code dimension (reconstruction threshold). Let λ = log p be the security parameter. It is shown in [MPSW21] that if k/n > 1/2, a random linear [n, k, n − k + 1]_p MDS code admits an exponentially (in λ) small bound for (4) with exponentially (in λ) small failure probability. The state-of-the-art results achieved by deterministic constructions only allow k = 0.85n [BDIR21]. In order to prove results for a constant reconstruction threshold k (for example, k = 2), by restricting the attack functions f in (4) to physical-bit functions, where the partition is defined by the binary representation of the prime number p, it is shown in [MNP+21] that a random [n, k, n − k + 1]_p punctured Reed-Solomon code over the finite field F_p admits an exponentially small bound with exponentially small failure probability (see Lemma 2 below).
An undesirable consequence of the Monte-Carlo nature of these constructions is that we cannot directly combine them with the technique used in the previous subsection that exploits the difference between LR-SS and NM-SS. If we were to apply the results of [MPSW21] in combination with the technique of the previous subsection, we would have the guarantee that, for each reconstruction set, the non-malleability error is exponentially small except with exponentially small probability. But NM-SS requires the non-malleability error to be negligible for all reconstruction sets simultaneously. A naive union bound argument seems insufficient for keeping the success probability close to 1 when the number of reconstruction sets is large, which is typically the case when k = poly(log p). We therefore only discuss the implication of the results concerning the physical-bit attack functions, for which k can be chosen as small as the constant 2.

Lemma 2 ([MNP+21], Corollary 3). Let 0 < δ < ln 2 be an arbitrary constant. Let RS[n, k; X]_p denote a random [n, k] punctured Reed-Solomon code over a finite field F_p of prime order with evaluation places X ← (F*_p)^n. There exists a (slightly) super-linear function P(·, ·) such that the following holds. For any block length n ∈ N, code dimension 2 ≤ k ∈ N, physical-bit state bound s ∈ N, and indistinguishability error parameter ε = 2^{−κ}, there exists λ_0 = P(sn/k, κ/k) such that if the number of bits λ needed to represent the order of the prime field F_p satisfies λ > λ_0, then C ← RS[n, k; X]_p is ε-indistinguishable from uniform by physical-bit s-bounded partitions with probability (over the randomness of choosing the evaluation places X ← (F*_p)^n) at least 1 − exp(−δ · (κ − 1)λ).
In particular, a function P(sn/k, κ/k) = δ′ · (sn/k + κ/k) · log²(sn/k + κ/k), for an appropriate universal positive constant δ′, suffices.

Applying Lemma 2 to Theorem 3 instead of Lemma 1 gives NM-SS with


respect to physical-bit bounded state tampering. With this extremely weak tam-
pering (and relaxing from an explicit construction to a Monte-Carlo construc-
tion), the reconstruction threshold of the NM-SS can be an arbitrarily small con-
stant independent of other parameters. Using the estimations from [MNP+21], for the extreme case of privacy threshold t = 1 (equivalently, reconstruction threshold k = 2), and considering varying numbers of parties n = 10, 100, and 1000, a non-malleability error as small as 2^{−50} against physical-bit 1-bounded state tampering can be achieved with success probability 1 − 2^{−50} (over choosing the evaluation places), using a prime number p with more than λ = 430, 4800, and 62000 bits, respectively.
There are MPC protocols that use a linear ramp secret sharing scheme with
constant share size and unbounded number of shares, which can be constructed
from Algebraic Geometric (AG) codes. A ramp secret sharing scheme allows a
gap g = k−t > 1 between the reconstruction threshold k and privacy threshold t.
The Fourier analysis techniques applied to MDS codes were recently extended in [TX21] to provide bounds for (4) with respect to AG codes. Since we focus on techniques that are applicable to most MPC protocols, we refer interested readers to their work.

4 Applications in MPC-in-the-head Paradigm


We first recall the definitions of an outer protocol and an inner protocol in the
MPC-in-the-head paradigm [IPS08].
The goal is a two-party computation protocol (between Alice and Bob) that
computes an algebraic circuit C.

– Outer protocol. The outer protocol is an n-server MPC protocol Π evaluating


C with the following protocol structure. The protocol Π proceeds in rounds
where in each round each server sends messages to the other servers and
updates its state by computing on its current state, and then also incorpo-
rates the messages it receives into its state. More concretely, each server S^i maintains a state

    Σ^i = (σ^i, µ_{i↔A}, µ_{i↔B}),

where µ_{i↔A}, µ_{i↔B} denote the collections of messages between client Alice and server S^i and between client Bob and server S^i, respectively; the rest of the state, σ^i, is called the "non-local" part of the state. The next message function nextMSG^j_i in the j-th round is evaluated by server S^i on its current state and incoming messages, to update its state and compute the next messages u^j_{i→1}, . . . , u^j_{i→n} to be sent to the other servers:

    nextMSG^j_i( σ^{j−1}_i, µ^{[j−1]}_{i↔A}, µ^{[j−1]}_{i↔B}; µ^j_{i←A}, µ^j_{i←B}; u^{j−1}_{i←1}, . . . , u^{j−1}_{i←n} )
        = ( σ^j_i, µ^{[j]}_{i↔A}, µ^{[j]}_{i↔B}; u^j_{i→1}, . . . , u^j_{i→n} ).

– Inner protocol. The action of the abstract server S^i above is in fact emulated by Alice and Bob running an inner protocol to compute the functionality G^j_i. Alice has

    ( Sh_A(σ^{j−1}_i), µ^{[j−1]}_{i↔A}; µ^j_{i←A}; Sh_A(u^{j−1}_{i←1}), . . . , Sh_A(u^{j−1}_{i←n}) ).

Bob has

    ( Sh_B(σ^{j−1}_i), µ^{[j−1]}_{i↔B}; µ^j_{i←B}; Sh_B(u^{j−1}_{i←1}), . . . , Sh_B(u^{j−1}_{i←n}) ).

The functionality G^j_i takes inputs from Alice and Bob, and proceeds by first reconstructing σ^{j−1}_i and u^{j−1}_{i←1}, . . . , u^{j−1}_{i←n}, and then evaluating a circuit defined by nextMSG^j_i. The functionality divides each of the private values (σ^j_i and u^j_{i→1}, . . . , u^j_{i→n}) into two shares and outputs them to Alice and Bob.
The inner protocol π^OT is an OT-hybrid two-party computation protocol that computes the collection {G^j_i}_{i,j}.

An oblivious watch list technique was then used to build a compiler out of Π and π^OT such that if Π is secure against an active adversary in the honest majority setting and π^OT is private against a semi-honest adversary, then the compiled protocol is secure against an active adversary. Intuitively, the semi-honest π^OT guarantees private communication between the virtual servers, and the oblivious watch list technique enforces authenticated communication between the virtual servers, which collectively reduces the security of the compiled protocol to the security of Π with secure point-to-point channels.
We propose a general (abstract) framework of constructing non-malleable
two-party computation with/without tampering of the assumed ideal function-
ality OT.
Theorem 4. Let T be a tampering class for the OT functionality. Let F_T be the tampering class for the communication channels between virtual servers induced by executing π^OT under T-tampered OT. Let Π be an NM-MPC with respect to F_T. There is a compiler for Π and semi-honest π^OT that gives a general purpose non-malleable two-party computation protocol with respect to T-tampering of OT.
The proof of Theorem 4 is straightforward and is omitted. Interesting tamper-
ing classes studied in the implementation security literature are those capturing
powerful tampering adversaries against whom it is impossible to recover the
ideal functionality. The types of imperfectness studied in the line of works on
OT combiners [HKN+ 05] and more generally OT extractors [IKOS09] may serve
as examples of weak tampering adversary (they are weak tampering because it
is possible to extract ideal OT from the tampered version). Here we briefly dis-
cuss the special case of T = ∅ (no tampering, OT is an ideal functionality). In
this special case, non-malleability becomes privacy against malicious adversary
in the OT-hybrid model. We argue that Theorem 4 provides a conceptually very
simple way of achieving privacy against malicious adversary for general purpose
two-party computation. The compiler can skip the oblivious watch list construc-
tion and naively compile Π and π OT , which is amount to Alice and Bob running
semi-honest π OT . Intuitively, the emulation of the NM-MPC protocol Π (with
respect to F∅ -tampering) servers as an encoding of the circuit C (in the sense of
[GIP+ 14]) such that semi-honestly evaluate the encoded circuit is private against
malicious adversary. Note that we do not have explicit construction of NM-MPC
Π (with respect to F∅ -tampering) computing a circuit C and it is not clear if
this approach yields efficient protocols.

5 Conclusion
We extended the non-malleability notion from tamper-resilient storage to tamper-resilient computation and defined Non-Malleable Multi-Party Computation (NM-MPC) using the standard ideal/real world formulation: the ideal world adversary is allowed to tamper with the output of the trusted party, yielding a way to relax the correctness of secure computation in a privacy-preserving way. For MPC protocols in the honest majority setting, where efficient constructions with full security assuming secure point-to-point channels are well understood and no security is known without the assumption, we showed that non-malleability is achievable when the assumed channels are severely tampered with. For MPC protocols in the no-honest-majority setting, where weak secure computation notions play important roles, we discussed the implications of honest-majority NM-MPC for two-party computation, via the MPC-in-the-head paradigm.

Acknowledgement
The author would like to thank Yuval Ishai for suggesting a technical strengthening in Definition 6 when a preliminary version of this paper was presented at ICCC 2022 (https://www.bilibili.com/video/BV1br4y1x7Qu/?spm_id_from=333.788.recommend_more_video.1).

References
ADN+ 19. Divesh Aggarwal, Ivan Damgård, Jesper Buus Nielsen, Maciej Obremski,
Erick Purwanto, João L. Ribeiro, and Mark Simkin. Stronger leakage-
resilient and non-malleable secret-sharing schemes for general access struc-
tures. In Advances in Cryptology - CRYPTO, pages 510–539, 2019.
AGM+ 15a. Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, and
Manoj Prabhakaran. Explicit non-malleable codes against bit-wise tam-
pering and permutations. In Advances in Cryptology - CRYPTO 2015,
pages 538–557, 2015.
AGM+ 15b. Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, and
Manoj Prabhakaran. A rate-optimizing compiler for non-malleable codes
against bit-wise tampering and permutations. In Theory of Cryptography
Conference, TCC 2015, pages 375–397, 2015.
AL17. Gilad Asharov and Yehuda Lindell. A full proof of the BGW protocol for
perfectly secure multiparty computation. J. Cryptol., 30(1):58–151, 2017.
BDIR18. Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, and Tal Rabin. On
the local leakage resilience of linear secret sharing schemes. In Advances
in Cryptology - CRYPTO 2018, pages 531–561, 2018.
BDIR21. Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, and Tal Rabin. On
the local leakage resilience of linear secret sharing schemes. J. Cryptol.,
34(2):10, 2021.
BDL01. Dan Boneh, Richard A. DeMillo, and Richard J. Lipton. On the impor-
tance of eliminating errors in cryptographic computations. J. Cryptol.,
14(2):101–119, 2001.
Bea89. Donald Beaver. Multiparty protocols tolerating half faulty processors. In
Advances in Cryptology - CRYPTO ’89, volume 435, pages 560–572, 1989.
BGJ+ 13. Elette Boyle, Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, and Amit
Sahai. Secure computation against adaptive auxiliary information. In
Advances in Cryptology - CRYPTO, pages 316–334, 2013.
BGJK12. Elette Boyle, Shafi Goldwasser, Abhishek Jain, and Yael Tauman Kalai.
Multiparty computation secure against continual memory leakage. In Sym-
posium on Theory of Computing Conference, STOC, pages 1235–1254,
2012.

BGW88. Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness the-
orems for non-cryptographic fault-tolerant distributed computation (ex-
tended abstract). In Proceedings of the 20th Annual ACM Symposium on
Theory of Computing, pages 1–10, 1988.
Bla79. George R. Blakley. Safeguarding cryptographic keys. In Proceedings of the
1979 AFIPS National Computer Conference, pages 313–317, 1979.
CCD88. David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty uncondi-
tionally secure protocols (extended abstract). In Symposium on Theory of
Computing, STOC, pages 11–19, 1988.
CDF+ 08. Ronald Cramer, Yevgeniy Dodis, Serge Fehr, Carles Padró, and Daniel
Wichs. Detection of algebraic manipulation with applications to robust
secret sharing and fuzzy extractors. In Advances in Cryptology - EURO-
CRYPT, volume 4965, pages 471–488, 2008.
CDN15. Ronald Cramer, Ivan Damgård, and Jesper Buus Nielsen. Secure Multi-
party Computation and Secret Sharing. Cambridge University Press, 2015.
CG16. Mahdi Cheraghchi and Venkatesan Guruswami. Capacity of non-malleable
codes. IEEE Trans. Information Theory, 62(3):1097–1118, 2016.
CG17. Mahdi Cheraghchi and Venkatesan Guruswami. Non-malleable coding
against bit-wise and split-state tampering. J. Cryptology, 30(1):191–241,
2017.
CM97. Christian Cachin and Ueli M. Maurer. Unconditional security against
memory-bounded adversaries. In Advances in Cryptology - CRYPTO ’97,
pages 292–306, 1997.
Des94. Yvo Desmedt. Threshold cryptography. In Eur. Trans. Telecommun.,
volume 5, pages 449–458, 1994.
DH76. Whitfield Diffie and Martin E Hellman. New directions in cryptography.
IEEE Transactions on Information Theory, 22(6):644–654, 1976.
DKRS06. Yevgeniy Dodis, Jonathan Katz, Leonid Reyzin, and Adam Smith. Robust
fuzzy extractors and authenticated key agreement from close secrets. In
Advances in Cryptology-CRYPTO 2006, pages 232–250. Springer, 2006.
DN07. Ivan Damgård and Jesper Buus Nielsen. Scalable and unconditionally
secure multiparty computation. In Advances in Cryptology - CRYPTO
2007, volume 4622, pages 572–590, 2007.
DPW18. Stefan Dziembowski, Krzysztof Pietrzak, and Daniel Wichs. Non-
malleable codes. J. ACM, 65(4):20:1–20:32, 2018.
FGJ+ 19. Nils Fleischhacker, Vipul Goyal, Abhishek Jain, Anat Paskin-Cherniavsky,
and Slava Radune. Interactive non-malleable codes. In Theory of Cryp-
tography TCC, pages 233–263, 2019.
FV19. Antonio Faonio and Daniele Venturi. Non-malleable secret sharing in the
computational setting: Adaptive tampering, noisy-leakage resilience, and
improved rate. In Advances in Cryptology - CRYPTO, pages 448–479,
2019.
GIP+ 14. Daniel Genkin, Yuval Ishai, Manoj Prabhakaran, Amit Sahai, and Eran
Tromer. Circuits resilient to additive attacks with applications to secure
computation. In Symposium on Theory of Computing, STOC, pages 495–
504, 2014.
GK18. Vipul Goyal and Ashutosh Kumar. Non-malleable secret sharing. In ACM
SIGACT Symposium on Theory of Computing, STOC 2018, pages 685–
698, 2018.

GMW87. Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental
game or A completeness theorem for protocols with honest majority. In
Symposium on Theory of Computing STOC, pages 218–229, 1987.
GPR16. Vipul Goyal, Omkant Pandey, and Silas Richelson. Textbook non-
malleable commitments. In ACM SIGACT Symposium on Theory of Com-
puting, STOC, pages 1128–1141, 2016.
HKN+ 05. Danny Harnik, Joe Kilian, Moni Naor, Omer Reingold, and Alon Rosen.
On robust combiners for oblivious transfer and other primitives. In Ad-
vances in Cryptology - EUROCRYPT, pages 96–113, 2005.
IKOS09. Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Extracting correlations. In Symposium on Foundations of Computer Science, FOCS, pages 261–270, 2009.
IPS08. Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. Founding cryptography
on oblivious transfer - efficiently. In Advances in Cryptology - CRYPTO,
pages 572–591, 2008.
IPS09. Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. Secure arithmetic com-
putation with no honest majority. In Theory of Cryptography, TCC, pages
294–314, 2009.
Li18. Xin Li. Pseudorandom correlation breakers, independence preserving
mergers and their applications. Electronic Colloquium on Computational
Complexity (ECCC), 25:28, 2018.
Mau92. Ueli M. Maurer. Protocols for secret key agreement by public discussion
based on common information. In Advances in Cryptology - CRYPTO ’92,
volume 740, pages 461–470, 1992.
Mau93. Ueli M Maurer. Secret key agreement by public discussion from common
information. IEEE Transactions on Information Theory, 39(3):733–742,
1993.
MNP+ 21. Hemanta K. Maji, Hai H. Nguyen, Anat Paskin-Cherniavsky, Tom Suad,
and Mingyuan Wang. Leakage-resilience of the shamir secret-sharing
scheme against physical-bit leakages. In Advances in Cryptology - EU-
ROCRYPT, pages 344–374, 2021.
MPSW21. Hemanta K. Maji, Anat Paskin-Cherniavsky, Tom Suad, and Mingyuan
Wang. Constructing locally leakage-resilient linear secret-sharing schemes.
In Advances in Cryptology - CRYPTO, pages 779–808, 2021.
MR18. Payman Mohassel and Peter Rindal. Aby3 : A mixed protocol framework
for machine learning. In Proceedings of the 2018 ACM SIGSAC Conference
on Computer and Communications Security, CCS, pages 35–52, 2018.
RB89. Tal Rabin and Michael Ben-Or. Verifiable secret sharing and multiparty
protocols with honest majority (extended abstract). In Symposium on
Theory of Computing, STOC, pages 73–85, 1989.
Sha79. Adi Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
TX21. Ivan Tjuawinata and Chaoping Xing. Leakage-resilient secret sharing with
constant share size. 2021.
Yao82. Andrew Chi-Chih Yao. Protocols for secure computations (extended ab-
stract). In Foundations of Computer Science, FOCS, pages 160–164, 1982.

Supplementary Materials

A Execution of MPC protocol under tampering
An MPC protocol Π for evaluating an arithmetic circuit C evaluates different
types of gates in topological order determined by the design of the protocol and
the circuit structure. Each gate typically contains several rounds of communica-
tions and in each round⁴ a party transmits one message to other parties. Here
we enforce a sequential order for the rounds from beginning of the protocol till
the end of the protocol and assume every party knows about this message topol-
ogy. Most gates are evaluated by the n servers, except the input sharing gates,
where the m clients also take part. We first formally describe the processing of
one round evaluated by the n servers under tampering. Assume it is server S_i's turn to activate his/her next message function nextMSG^j_i in the j-th round. The function nextMSG^j_i is evaluated to compute the next messages for server S_i to be sent to the other servers,

    (m^j_{i,1}, . . . , m^j_{i,n}) := nextMSG^j_i(R^j_i, view^Π_i),

where R^j_i is the local randomness of the server S_i generated in the j-th round and view^Π_i denotes the collection of the view of S_i up to this point. The tampering function f^j_i for the currently transmitted messages of server S_i is applied and results in

    (m̃^j_{i,1}, . . . , m̃^j_{i,i−1}, m̃^j_{i,i+1}, . . . , m̃^j_{i,n}) := f^j_i( previousMSG, (m^j_{i,1}, . . . , m^j_{i,i−1}, m^j_{i,i+1}, . . . , m^j_{i,n}) ),

where previousMSG denotes the collection of messages transmitted before the current batch of messages. The n next message functions nextMSG^j_i, i ∈ [n], are activated in a certain order. As the activation token traverses from party to party, followed by the application of the corresponding tampering function, all parties update their views at the end of the j-th round as follows:

    view^Π_i := view^Π_i || R^j_i || (m̃^j_{1,i}, . . . , m̃^j_{i−1,i}, m^j_{i,i}, m̃^j_{i+1,i}, . . . , m̃^j_{n,i}),

where the local randomness R^j_i of the server S_i and the message m^j_{i,i} that S_i keeps on his/her own are not tampered with. In the input sharing phase of an
keeps on his/her own are not tampered with. In the input sharing phase of an
MPC protocol, the m clients provide the input in the form of secret shares to the
servers. We model the input sharing phase to be interactive and multiple-round
as in the rest of the protocol. The communication only happens between the
clients and the servers (the servers do not communicate with each other and
the clients do not communicate with each other). We still denote the tampering
function for the messages transmitted by the server S_i by f^j_i, although the messages of S_i in this case consist of m components intended for the m clients. Let the tampering function for the messages transmitted by the client C_i be denoted by f^j_{n+i} (we consider C_i as the (n + i)-th party). The messages transmitted by the client C_i consist of n components intended for the n servers.

⁴ Here the term round is to be distinguished from its usual usage in the honest majority MPC literature, where multiple senders are allowed to send messages simultaneously.
In general, the tampering of an MPC protocol with a total of r rounds can be described by the following sequence of tampering functions:

    ( {f^j_i}_{i∈[n]},     if the j-th round is not a round of an input sharing gate;
      {f^j_i}_{i∈[n+m]},   if the j-th round is a round of an input sharing gate )   for j = 1, . . . , r.

The execution of an MPC protocol under corruption means that there is a set S of t corrupt servers whose views the adversary can read in real time, and the adversary can make these servers modify their next message functions in a coordinated way. When
analysing the combined influence, through controlling the corrupt servers and
through tampering with the transmitted messages, we sometimes assume that
the adversary does not tamper with the channels of the corrupt servers. This is
without loss of generality because the adversary can always make the corrupt
servers further modify their next message functions in order to achieve what the
adversary could have additionally achieved through tampering with the mes-
sages corrupt servers receive/send. In the sequel, we only need to consider the
tampering of the communication between honest parties:

    ( {f^j_i}_{i∈S̄},                   if the j-th round is not a round of an input sharing gate;
      {f^j_i}_{i∈S̄∪{n+1,...,n+m}},    if the j-th round is a round of an input sharing gate )   for j = 1, . . . , r.

We next formulate these tampering functions into adversarial channels, one


channel for each pair of honest parties. We first define the adversarial channels connecting honest servers. Let f^j_{i,i′} denote the projection of f^j_i on the component corresponding to the message to be sent to the server S_i′. That is,

    m̃^j_{i,i′} = f^j_{i,i′}( previousMSG, (m^j_{i,1}, . . . , m^j_{i,i−1}, m^j_{i,i+1}, . . . , m^j_{i,n}) ).

In the special case of independent tampering, we have

    m̃^j_{i,i′} = f^j_{i,i′}( previousMSG_{{i,i′}}, m^j_{i,i′} ),

where previousMSG_{{i,i′}} denotes the subset of previous messages that are transmitted between the server S_i and the server S_i′. We collect the tampering functions that are applied to the messages transmitted between the server S_i and the server S_i′, and define a tampering function f_Ch(i,i′) for the channel Ch(i, i′):

    f_Ch(i,i′) = { (f^j_{i,i′}, f^j_{i′,i}) : the j-th round is not a round of an input sharing gate },

where some of these message tampering functions could be empty if there is no message transmitted from the server S_i to the server S_i′ or from the server S_i′ to the server S_i. Similarly, we also define the channel tampering functions for the input sharing phase. The channel tampering function for the server S_i and the client C_i′ is

    f_Ch(i,n+i′) = { (f^j_{i,n+i′}, f^j_{n+i′,i}) : the j-th round is a round of an input sharing gate }.
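The following minimal sketch (not from the paper) models an adversarial channel Ch(i, i′) in the independent tampering case: the j′-th message carried by the channel is mapped by the j′-th tampering function, which may also depend on previousMSG_{{i,i′}}. Whether previousMSG contains the original or the tampered transmissions is left implicit above; the sketch assumes the original ones.

    # Minimal model of an independently tampered channel Ch(i, i') (illustrative).
    class TamperedChannel:
        def __init__(self, tamper_fns):
            # tamper_fns[j'] is the tampering function for the j'-th message on Ch(i, i')
            self.tamper_fns = tamper_fns
            self.previous = []        # plays the role of previousMSG_{i,i'}

        def transmit(self, m):
            j = len(self.previous)
            m_tilde = self.tamper_fns[j](self.previous, m) if j < len(self.tamper_fns) else m
            self.previous.append(m)   # assumption: the originally sent messages are recorded
            return m_tilde

    # example: flip the low bit of the second message, leave the others unchanged
    identity = lambda prev, m: m
    ch = TamperedChannel([identity, lambda prev, m: m ^ 1, identity])
    assert [ch.transmit(m) for m in (4, 6, 8)] == [4, 7, 8]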

B Linear-based protocol with illustration
Definition 12 ([GIP+ 14]). Let n = 2t + 1 and let (Sharen,t , Recovern,t ) be a
redundant dense linear secret sharing scheme. An n-server m-client protocol Π for
computing a single-output m-client circuit C : Fu1 × . . . × Fum → Fu is said to be
linear-based with respect to (Sharen,t , Recovern,t ) if Π has the following structure
with linear protocols (defined immediately afterwards) as internal components.
1. Setup phase. During this phase all servers participate in some linear protocol Π_setup that gets no auxiliary inputs. At the end of this phase every server S_i holds a vector of shares setupSh^{g^k}_i for every multiplication gate g^k ∈ G^Π_mult in C.
2. Randomness generation phase. During this phase all servers participate in some linear protocol Π_random that gets no auxiliary inputs. At the end of this phase every server S_i holds a share randSh^{g^k}_i for every randomness gate g^k ∈ G^Π_rand in C.
3. Input sharing phase. Π processes every input gate g^k ∈ G^Π_input in C belonging to a client as follows. The client shares its input x for g^k using (Share, Recover) and then sends each server S_i its corresponding share Sh^{g^k}_i.
4. Circuit evaluation phase. Π computes C in stages. During the k-th stage in an honest execution, the k-th gate, g^k, inside C is evaluated (in some topological order) and at the end of the stage the servers hold a sharing of the output of g^k with a distribution induced by Share. The evaluation of each gate is done as follows.
(a) If g^k is an addition gate with inputs g^a and g^b, Π evaluates g^k by having each server S_i sum its shares corresponding to the outputs of g^a and g^b. Similarly, for a subtraction gate, S_i subtracts its shares corresponding to the outputs of g^a and g^b. There is no communication during these rounds. Each server S_i holds a share Sh^{g^k}_i of the output of gate g^k.
(b) If g^k is a multiplication gate with inputs g^a and g^b, Π evaluates g^k using some n-party linear protocol Π_mult such that the main inputs of the i-th server S_i to Π_mult are its shares Sh^{g^a}_i and Sh^{g^b}_i corresponding to the outputs of g^a and g^b. The auxiliary input of S_i to Π_mult is Sh^{g^k}_i, which is the result of the setup phase associated with g^k. Each server S_i holds a share Sh^{g^k}_i of the output of gate g^k.
5. The protocol finishes before the output gate g^out in C is processed and each server S_i holds a share Sh^{g^out}_i of the output of gate g^out.
The notion of linear protocols is an abstraction of the internal components
Πsetup , Πrandom and Πmult of linear-based MPC protocols.
Definition 13 ([GIP+14]). An n-party protocol Π is said to be a linear protocol over some finite field F if Π has the following properties.
1. Inputs. The input of every server Si is a vector of field elements from F.
Moreover, Si ’s inputs can be divided into two distinct types, the main inputs
and auxiliary inputs.

2. Messages. Recall that each message in Π is a vector of field elements from
F. We require that every message m of Π, sent by some server Si , belongs
to one of the following categories:
(a) m is some fixed arbitrary function of Si ’s main inputs (and is indepen-
dent of its auxiliary inputs).
(b) every entry mj of m is generated as some fixed linear combination of
Si ’s auxiliary inputs and elements of previous messages received by Si .
3. The output of every party Si is a linear function of its incoming messages.
Let (Share, Recover) be the Shamir secret sharing scheme. The semi-honest DN [DN07] n-server m-client protocol Π for computing a single-output m-client circuit C : F^{u_1} × . . . × F^{u_m} → F^u, where n = 2t + 1, is given as follows (double-random(·) and random(·) are defined immediately afterwards).
1. Setup phase. During this phase all servers participate in the linear protocol Π_setup = double-random(ℓ_mult) in order to generate the randomness needed for the evaluation of multiplication gates during the protocol, where ℓ_mult denotes the number of multiplication gates. At the end of this phase every server S_i holds a vector of shares setupSh^{g^k}_i = (r^{g^k}_i, R^{g^k}_i) for every multiplication gate g^k ∈ G^Π_mult in C.
2. Randomness generation phase. During this phase all servers participate in the linear protocol Π_random = random(ℓ_rand) in order to generate the shares corresponding to the outputs of the randomness gates inside C, where ℓ_rand denotes the number of randomness gates. At the end of this phase every server S_i holds a share randSh^{g^k}_i = r^{g^k}_i for every randomness gate g^k ∈ G^Π_rand in C.
3. Input sharing phase. Π processes every input gate g^k ∈ G^Π_input in C belonging to a client as follows. The client shares its input x for g^k using (Share, Recover) and then sends each server S_i its corresponding share Sh^{g^k}_i.
4. Circuit evaluation phase. Π computes C in stages. During the k-th stage in an honest execution, the k-th gate, g^k, inside C is evaluated (in some topological order) and at the end of the stage the servers hold a sharing of the output of g^k with a distribution induced by Share. The evaluation of each gate is done as follows.
(a) If g^k is an addition gate with inputs g^a and g^b, Π evaluates g^k by having each server S_i sum its shares corresponding to the outputs of g^a and g^b. Similarly, for a subtraction gate, S_i subtracts its shares corresponding to the outputs of g^a and g^b. There is no communication during these rounds. Each server S_i holds a share Sh^{g^k}_i of the output of gate g^k.
(b) If g^k is a multiplication gate with inputs g^a and g^b, Π evaluates g^k using the following n-party linear protocol Π_mult. The main inputs of the i-th server S_i to Π_mult are its shares Sh^{g^a}_i and Sh^{g^b}_i corresponding to the outputs of g^a and g^b. The auxiliary input of S_i to Π_mult is setupSh^{g^k}_i = (r^{g^k}_i, R^{g^k}_i), which is the result of the setup phase associated with g^k. Server S_i then does the following.
i. Compute Sh_i = Sh^{g^a}_i · Sh^{g^b}_i + R^{g^k}_i and send Sh_i to S_1.
ii. S_1, upon receiving the shares (Sh_1, . . . , Sh_n) from all the servers, computes D = Recover_[n](Sh_1, . . . , Sh_n) and sends D to all the servers.
iii. Each server S_i, upon receiving the value D from S_1, computes Sh^{g^k}_i = D − r^{g^k}_i.
Note that Share(D) = (D, . . . , D) when the random polynomial chosen by the share algorithm has zero coefficients for all positive-degree terms. Each server S_i then holds a share Sh^{g^k}_i of the output of gate g^k.
5. The protocol finishes before the output gate g^out in C is processed and each server S_i holds a share Sh^{g^out}_i of the output of gate g^out.
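The following minimal Python sketch (not from the paper) runs the multiplication step 4(b) once over a toy Shamir instantiation: the double sharing (r_i, R_i) of a random r, which the setup phase would provide, is generated directly here, and the field and thresholds are illustrative.

    # Minimal sketch of one DN multiplication gate over toy Shamir sharing (illustrative).
    import random

    p = 2**31 - 1              # field modulus (a Mersenne prime)
    t = 2
    n = 2 * t + 1              # n = 2t + 1 servers

    def shamir_share(secret, degree, n):
        # random polynomial of the given degree with constant term `secret`,
        # evaluated at the points 1, ..., n
        coeffs = [secret] + [random.randrange(p) for _ in range(degree)]
        return [sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p
                for i in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at 0 using all n evaluation points 1, ..., n
        xs = list(range(1, len(shares) + 1))
        secret = 0
        for j, xj in enumerate(xs):
            num, den = 1, 1
            for m, xm in enumerate(xs):
                if m != j:
                    num = num * (-xm) % p
                    den = den * (xj - xm) % p
            secret = (secret + shares[j] * num * pow(den, p - 2, p)) % p
        return secret

    x, y = 11, 23
    x_sh = shamir_share(x, t, n)          # shares of the output of g^a
    y_sh = shamir_share(y, t, n)          # shares of the output of g^b
    r = random.randrange(p)
    r_sh = shamir_share(r, t, n)          # r_i  (degree t)
    R_sh = shamir_share(r, 2 * t, n)      # R_i  (degree 2t)

    # step i: each server masks its local product and sends it to S_1
    masked = [(x_sh[i] * y_sh[i] + R_sh[i]) % p for i in range(n)]
    # step ii: S_1 recovers D = x*y + r (degree 2t needs all n = 2t+1 shares)
    D = recover(masked)
    # step iii: each server computes its new degree-t share of x*y
    xy_sh = [(D - r_sh[i]) % p for i in range(n)]
    assert recover(xy_sh) == (x * y) % p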

Definition 14. Let F be a finite field and let M ∈ F^{r×c} be a matrix with fewer
rows than columns (r < c). The matrix M is said to be super-invertible if every
sub-matrix formed by r columns of M is invertible.
Let M ∈ F^{(t+1)×n} be a super-invertible matrix and let (Share, Recover) be
the Shamir secret sharing scheme. In the n-party protocol Π_setup =
double-random(ℓ_mult), on input ℓ_mult each party P_i performs the following
steps ⌈ℓ_mult/(t+1)⌉ times:
1. Generate a uniformly random value s^i ∈ F.
2. Compute (Sh_1^i, . . . , Sh_n^i) ← Share(s^i, t, n).
3. Compute (Sh'_1^i, . . . , Sh'_n^i) ← Share(s^i, 2t, n).
4. Send each party P_j the shares (Sh_j^i, Sh'_j^i).
5. Upon receiving from all the parties the shares (Sh_i^1, . . . , Sh_i^n) and
   (Sh'_i^1, . . . , Sh'_i^n), the party P_i performs the following:
   (a) Compute (r_i^1, . . . , r_i^{t+1}) = M(Sh_i^1, . . . , Sh_i^n).
   (b) Compute (R_i^1, . . . , R_i^{t+1}) = M(Sh'_i^1, . . . , Sh'_i^n).
6. The output of the i-th party P_i is ((r_i^1, R_i^1), . . . , (r_i^{t+1}, R_i^{t+1})).
The n-party protocol Π_random = random(ℓ_rand) is the same as double-random(ℓ_mult)
except that Steps 3 and 5(b) are skipped.
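As an illustration of the randomness-extraction step, the following toy Python
continuation of the earlier sketch (it reuses P, share, recover, t and n, and
stands in a Vandermonde matrix for the super-invertible M) applies M share-wise
to the dealt sharings; by linearity, the outputs are double-sharings of the
entries of M applied to the dealers' secrets.

import random  # toy sketch of one iteration of double-random()

# (t+1) x n Vandermonde matrix, used here as a stand-in super-invertible matrix.
M = [[pow(alpha, k, P) for alpha in range(1, n + 1)] for k in range(t + 1)]

def mat_vec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) % P for r in range(len(M))]

# Dealing: party i picks a random s_i and shares it with degree t and degree 2t.
secrets = [random.randrange(P) for _ in range(n)]
deg_t  = [share(s, t, n) for s in secrets]        # deg_t[i][j]:  P_j's share of s_i
deg_2t = [share(s, 2 * t, n) for s in secrets]    # deg_2t[i][j]: same, degree 2t

# Extraction: party j applies M to the vector of shares it received.
r_out = [mat_vec(M, [deg_t[i][j] for i in range(n)]) for j in range(n)]
R_out = [mat_vec(M, [deg_2t[i][j] for i in range(n)]) for j in range(n)]

# By linearity, (r_out[.][k], R_out[.][k]) is a double-sharing of the k-th entry
# of M * (s^1, ..., s^n); with at most t corrupt dealers, super-invertibility of
# M makes these t+1 extracted values uniform given the adversary's view.
expected = mat_vec(M, secrets)
for k in range(t + 1):
    assert recover([r_out[j][k] for j in range(t + 1)], list(range(1, t + 2))) == expected[k]
    assert recover([R_out[j][k] for j in range(n)], list(range(1, n + 1))) == expected[k]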
C Secure with abort MPC protocol of [GIP+14]
Definition 15. Let F be a finite field and let G = {+, −, ×}, where gates labeled
by +, − and × indicate the addition, subtraction and multiplication over F,
respectively. An arithmetic circuit C over the gate set G and a set of variables
X = {x1 , . . . , xm } is a directed acyclic graph whose vertices are called gates
and whose edges are called wires. Every gate in C of in-degree 0 is labeled by a
variable from X and is referred to as an input gate. Every gate of out-degree 0
is called an output gate. All other gates are labeled by functions from G. In
some cases we also allow in-degree 0 gates labeled by rand and referred to as
randomness gates. A circuit containing rand gates is called a randomized circuit
and a circuit that does not contain rand gates is called a deterministic circuit. We
write C : F^{u_1} × . . . × F^{u_m} → F^u to indicate that C is an arithmetic circuit over F
with m inputs and one single output. We denote by |C| the number of gates in C.
For an input x ∈ F^{u_1} × . . . × F^{u_m} we denote by C(x) the result of evaluating C
on x if C is deterministic and the resulting distribution if C is randomised.
An additive attack A on a deterministic or randomised circuit C assigns an
element of F^u to each of its internal wires as well as to each of its outputs. We
denote by A_{u,v} the attack A restricted to the wire (u, v). For every wire (u, v)
in C, the value A_{u,v} is added to the output of u before it enters the inputs of v.
Similarly, we denote by A_out the restriction of A to the outputs of C, and the
value A_out is added to the outputs of C. For simplicity, we assume u_1 = . . . = u_m = u.
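To make the wire-level attack model concrete, the following toy Python sketch
(ours, for illustration only) evaluates a two-gate circuit C(x_1, x_2) =
(x_1 + x_2) · x_2 under an additive attack that offsets one internal wire and,
optionally, the output wire.

Q = 101  # a toy prime field used only in this illustration

def eval_circuit(x1, x2, A):
    # A maps wire names to additive offsets; a missing key means no tampering.
    w_add = (x1 + x2) % Q                                # output of the + gate
    in_mult = (w_add + A.get(("add", "mult"), 0)) % Q    # wire from + into x is attacked
    w_out = in_mult * x2 % Q                             # output of the x gate
    return (w_out + A.get("out", 0)) % Q                 # output wire is attacked

assert eval_circuit(3, 4, {}) == (3 + 4) * 4 % Q
# Offsetting the internal wire by delta shifts the result by delta * x2:
delta = 5
assert eval_circuit(3, 4, {("add", "mult"): delta}) == ((3 + 4 + delta) * 4) % Q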
Definition 16 (Additively corruptible version of a circuit). Let C : F^{u_1} ×
. . . × F^{u_m} → F^u be a circuit containing w wires. The additively corruptible
version of C is the functionality
    f̃_C : F^{u_1} × . . . × F^{u_m} × F^{wu} → F^u
that takes an additional input from the adversary A which indicates an additive
corruption for every wire of C. For all (x, A), f̃_C(x, A) outputs the result of the
additively corrupted C as specified by the additive attack A when invoked on the
inputs x.
Definition 17 ([GIP+14]). A randomised circuit Ĉ : F^{u_1} × . . . × F^{u_m} → F^u
is an ε-secure implementation of a function C : F^{u_1} × . . . × F^{u_m} → F^u against
additive attacks if the following holds:
– Completeness. For all x ∈ F^{u_1} × . . . × F^{u_m}, it holds that Pr[Ĉ(x) = C(x)] = 1.
– Additive-attack security. For any circuit C̃ obtained by subjecting Ĉ to an
  additive attack, there exist a ∈ F^{u_1} × . . . × F^{u_m} and a distribution A over
  F^u such that for any x ∈ F^{u_1} × . . . × F^{u_m}, it holds that
      SD(C̃(x); C(x + a) + A) ≤ ε.
The definition naturally extends to the case when the functionality computed by
C is randomised.
Lemma 3 ([GIP+14]). For any finite field F and arithmetic circuit C : F^{u_1} ×
. . . × F^{u_m} → F^u, there exists a randomised circuit Ĉ : F^{u_1} × . . . × F^{u_m} → F^u of
size O(|C|) such that Ĉ is an O(|C|/|F|)-secure implementation of C against additive
attacks.
The following private version of AMD codes [CDF+08] is due to [GIP+14].
Definition 18. An (u, u', ε)-AMD code is a pair of circuits (AMDEnc, AMDDec),
where AMDEnc : F^u → F^{u'} is randomised and AMDDec : F^{u'} → F × F^u is
deterministic, such that the following properties hold:
– Perfect completeness. For all x ∈ F^u, it holds that Pr[AMDDec(AMDEnc(x)) =
  (0, x)] = 1.
– Additive robustness. For any ∆ ∈ F^{u'}, ∆ ≠ 0^{u'}, and for any x ∈ F^u, it holds
  that
      Pr[AMDDec(AMDEnc(x) + ∆) ∉ ERR] ≤ ε,
  where ERR = F* × F^u denotes detection of an error.
Moreover, (AMDEnc, AMDDec) is called a private (u, u', ε)-AMD code if for any
∆ ∈ F^{u'}, ∆ ≠ 0^{u'}, any y ∈ F* × F^u and any x_0, x_1 ∈ F^u, it holds that
    Pr[AMDDec(AMDEnc(x_0) + ∆) = y | AMDDec(AMDEnc(x_0) + ∆) ∈ ERR]
    = Pr[AMDDec(AMDEnc(x_1) + ∆) = y | AMDDec(AMDEnc(x_1) + ∆) ∈ ERR].
Private AMD codes can be constructed from a plain AMD code (AMDEnc, AMDDec)
by modifying its decoder as follows.
1. Compute (b, z) ← AMDDec(c).
2. Output (0, z) + b·r, where r is generated uniformly from F × F^u.
The above trick, together with known asymptotically optimal constructions of
AMD codes [DKRS06,CDF+08], immediately yields the following.
Lemma 4. For any positive integers u and σ, there exists a pair of circuits
(AMDEnc, AMDDec) such that for any finite field F, (AMDEnc, AMDDec) is a
private (u, O(u + σ), 1/|F|^σ)-AMD code. Moreover, the size of AMDEnc and
AMDDec is O(u + σ).
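The following Python sketch (ours) illustrates the decoder-modification trick
above. The inner code amd_enc/amd_dec is a toy single-element AMD code chosen
for readability, not the asymptotically optimal construction referenced by
Lemma 4; only the wrapper private_amd_dec corresponds to Steps 1 and 2.

import random  # toy private-AMD decoder wrapper

P = 2**31 - 1

def amd_enc(x):
    # Toy AMD encoding of a single field element x.
    r = random.randrange(P)
    return (x, r, (x * r + pow(r, 3, P)) % P)

def amd_dec(c):
    x, r, tag = c
    b = (tag - (x * r + pow(r, 3, P))) % P   # b != 0 signals tampering
    return b, x

def private_amd_dec(c):
    # Step 1: run the plain decoder.  Step 2: output (0, z) + b * (r0, r1) for a
    # uniformly random (r0, r1), so that on a decoding error the output carries
    # no information about z.
    b, z = amd_dec(c)
    r0, r1 = random.randrange(P), random.randrange(P)
    return ((b * r0) % P, (z + b * r1) % P)

flag, out = private_amd_dec(amd_enc(42))
assert (flag, out) == (0, 42)                # no tampering: decodes correctly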
The construction of [GIP+14] (see Construction 1) starts with private MPC
protocols and strengthens them to protect against the deviation of an active
Adv from the protocol. Intuitively, the linear-based protocols privately evaluate
a circuit by sharing the inputs using the same linear secret sharing scheme
(with fresh independent randomness each time the sharing algorithm is invoked)
and, with the help of the homomorphic property of the secret sharing scheme,
having the n servers operate on the secrets by operating on the shares. The
homomorphic property of the underlying secret sharing also plays a crucial role
in analysing what form of influence the deviation of a malicious adversary has on
the execution of these protocols, which are designed for privacy only. Our focus
is the role of the assumption of secure point-to-point communication channels.
A secure communication channel is both private and authenticated.
The authenticity of the communication channel guarantees that, among the n
shares for a secret held by a client or an honest server, n − t shares are correctly
received by honest servers, who, by the definition of honest servers, will follow
the protocol and correctly operate on these correct shares. Roughly speaking,
since these n − t shares out of the total n shares contain full information
about the shared secret (honest majority), the execution of the protocol will
not deviate from its course “too much”, no matter how the corrupt servers may
deviate from the protocol. In particular, the authenticity guarantee allows for
an analysis that interprets the difference between what corrupt servers ought
to do and what they actually do as a blind additive offset for the circuit. More
concretely, the analysis basically “ignores” the shares of the corrupt servers S and
only considers the shares in S̄ (the honest servers). By doing this, the influence of
Adv is limited to the gates that require communication among servers (usually
the input gates and multiplication gates), where corrupt servers can influence the
S̄ part by sending “wrong” messages to the honest servers S̄. For each corrupt
server, the difference between the n − t “wrong” shares and the n − t “correct”
shares (those it would have sent had it followed the protocol) determines an
offset in the secret.
The privacy of the communication channel guarantees that only the t shares
of S are seen by the adversary. According to the privacy of the secret sharing
scheme, these t shares do not contain any information about the secret, and
hence have a distribution independent of the other n − t shares. This allows one
to claim that the offset described above is chosen blindly. Finally, the fact
that the offset is additive follows from the homomorphic property mentioned
earlier, which is necessary for the protocol even when there is no adversary.
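The additive-offset view can be checked concretely. The Python fragment below
continues the earlier multiplication-gate sketch (it reuses P, inv, recover,
masked, r_deg_t, n, t, a and b from that sketch): a corrupt server that shifts
the value it sends to S_1 by delta shifts the reconstructed D, and hence the
recovered product, by the fixed field element lambda_corrupt · delta,
independently of a, b and the honest servers' randomness.

def lagrange_at_zero(xs):
    # Lagrange coefficients for interpolation at 0 over the points xs.
    lams = []
    for j, xj in enumerate(xs):
        lam = 1
        for m, xm in enumerate(xs):
            if m != j:
                lam = lam * xm % P * inv((xm - xj) % P) % P
        lams.append(lam)
    return lams

corrupt, delta = 0, 7                       # the server at index 0 misbehaves by delta
tampered = list(masked)
tampered[corrupt] = (tampered[corrupt] + delta) % P
D_bad = recover(tampered, list(range(1, n + 1)))

offset = lagrange_at_zero(list(range(1, n + 1)))[corrupt] * delta % P
sh_bad = [(D_bad - r_deg_t[i]) % P for i in range(n)]
# The product is shifted by a fixed additive offset that does not depend on a, b.
assert recover(sh_bad[:t + 1], list(range(1, t + 2))) == (a * b + offset) % P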
Having interpreted the influence of a deviation of an active Adv as a set of blind
additive offsets added to the wires of the circuit being evaluated, one can use a
circuit-protection solution to achieve robustness of MPC protocols. This solution
takes two steps (see Construction 1, Circuit Π(C) construction). The first step is
to protect the inputs and the output of the circuit against additive attacks using
an AMD code. In particular, the encoding of the inputs and decoding of the output
are done locally by the clients, while the decoding of the inputs (before computing)
and encoding of the output (before outputting the result to the receiving client)
are processed collectively by the n servers through privately evaluating some
augmenting gates of the circuit. The second step is to compile the augmented
circuit from the previous step into a protected version such that any blind
additive corruption of the wires is turned into additive attacks on the input and
output only (see Definition 17).
D Proof of Theorem 2
To combine the NM-SS over prime fields with other building blocks, we need an
F_p-linear F^s_{SS,θ}-NM-SS over an extension field F_{p^u}.
Theorem 5. Let p be a prime number and s < log p. There is an F_p-linear
F^s_{SS,θ}-NM-SS over F_{p^u} with privacy threshold t and non-malleability error
ε = (u/2) · (1/2) · 2^s · (c_s + 2^{−2s−1})^{t−θ−1}, where c_s = (2^s sin(π/2^s))/(p sin(π/p)).
Proof. Let (Share, Recover) be Shamir's secret sharing scheme over F_p with privacy
threshold t for n players. We construct (u-Share, u-Recover) over F_p^u (viewing
F_{p^u} as a u-dimensional vector space over F_p).
– u-Share(·): On input a secret (s_1, . . . , s_u) ∈ F_p^u, the sharing algorithm
  generates, using fresh independent randomness,
      Share(s_1) = (Sh_1^1, . . . , Sh_n^1);
      ...
      Share(s_u) = (Sh_1^u, . . . , Sh_n^u).
  The output of the sharing algorithm is then
      u-Share(s_1, . . . , s_u) = ((Sh_1^1 . . . Sh_1^u), . . . , (Sh_n^1 . . . Sh_n^u)).
– u-Recover(·,·): On input a reconstruction set R = {i_1, . . . , i_{t+1}} and the
  shares of the corresponding players, the reconstruction algorithm computes
      Recover((S̃h_{i_1}^1, . . . , S̃h_{i_{t+1}}^1), R) = s̃_1;
      ...
      Recover((S̃h_{i_1}^u, . . . , S̃h_{i_{t+1}}^u), R) = s̃_u.
  The output of the reconstruction algorithm is (s̃_1, . . . , s̃_u).
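A minimal Python sketch of this componentwise construction follows (ours; it
reuses share, recover, P, t and n from the earlier sketch, with plain Shamir
standing in for the base scheme (Share, Recover)): every coordinate of the secret
is shared independently, and the i-th share is the vector of the i-th shares of
all coordinates.

def u_share(secret_vec):
    # One independent base sharing per coordinate of the secret.
    per_coord = [share(s, t, n) for s in secret_vec]
    # The i-th player's share is the vector of its per-coordinate shares.
    return [tuple(per_coord[k][i] for k in range(len(secret_vec)))
            for i in range(n)]

def u_recover(shares_R, points_R):
    # Recover each coordinate from the corresponding components of the shares.
    u = len(shares_R[0])
    return [recover([sh[k] for sh in shares_R], points_R) for k in range(u)]

sec = [3, 1, 4]                                   # a secret in F_P^3
shares = u_share(sec)
assert u_recover(shares[:t + 1], list(range(1, t + 2))) == sec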
The correctness and privacy of (u-Share, u-Recover) follow straightforwardly by
construction, as does the fact that the scheme is F_p-linear. We now show the
non-malleability of the scheme with respect to F^s_{SS,θ}.
We begin with the observation that a tampering function f over F_p^u induces u
random tampering functions over F_p in a natural sense. Let Range(f) =
{c̃_1, . . . , c̃_{2^s}}, where c̃_j = (c̃_{j,1}, . . . , c̃_{j,u}) ∈ F_p^u for j = 1, . . . , 2^s. For each
(say the i-th) of the u components of a vector in F_p^u, f will only map it to at
most 2^s different values (c̃_{j,i}, j = 1, . . . , 2^s). The tricky part is that the
tampering function on the i-th component induced by f can depend on the other
input components. This fortunately does not create a problem for using the
non-malleability of (Share, Recover) to argue the non-malleability of the i-th
component of the secret.
The idea is that since each component of the secret is shared using fresh
independent randomness, of all the u components in one share of
(u-Share, u-Recover), only the i-th component is chosen according to the
randomness that is used for sharing the i-th component of the secret; all other
components in that share are independent of it. We can then interpret this
tampering on the i-th component of a share as a random s-bounded tampering with
randomness coming from a distribution that is independent of the randomness that
generates the i-th component of the share. The non-malleability of
(Share, Recover) asserts that the tampered version s̃_i of the i-th component of
the secret is independent of the original value s_i. A second tricky part is then
to assert that the tampered version s̃_i of the i-th component of the secret is
also independent of the original values of the other components {s_j | j ≠ i} of
the secret. We have seen that the value of the tampered i-th component of a share
can depend on the other components of the share. But the choice of the index j of
c̃_{j,i} induces a partition of F_p^{u−1}. All the possible influence that the
other components of the share have over the tampering of the i-th component of a
share is captured in this induced partition of F_p^{u−1}. Starting from the base
case of u = 2, we can use Lemma 1 to directly claim that this induced partition is
independent of the other components of the secret. Building on this base case of
u = 2, we can argue by mathematical induction that this independence holds for
any u.
Proof (Theorem 2). The construction closely follows the secure-with-abort MPC
construction of [GIP+14]. We introduce a virtual output extractor to simplify the
proof (this also captures the type of application where the output remains in
distributed form, as shares of the underlying secret sharing scheme).
Construction 1. Let C : F^{u_1} × . . . × F^{u_m} → F^u be an m-client circuit. Let Π_Priv
be a linear-based t-private m-client n-party protocol with respect to a redundant
dense secret sharing scheme (NMShare, NMRecst) that is F^s_{SS,θ}-NM with
error parameter ε_NM. Let (AMDEnc, AMDDec) be an (u, u', ε_AMD)-AMD code.
We construct a protocol Π for computing C.
– Circuit Π(C) construction.
  1. Augmented circuit C_AMD for enabling input/output detection.
     Modify C into the randomised m-client circuit C_AMD : F^{u'} × . . . × F^{u'} →
     F × F^{u'} that on inputs (x'_1, . . . , x'_m) performs the following:
     (a) For all 1 ≤ i ≤ m, compute (b_i, x_i) ← AMDDec(x'_i).
     (b) Compute b ← Σ_{i=1}^{m} r_i b_i, where r_i is a random field element.
     (c) Output (b, AMDEnc(C(x_1, . . . , x_m)) + b·r').
  2. Compiled circuit Ĉ_AMD secure against additive corruptions.
     Compile the augmented circuit C_AMD into Ĉ_AMD according to Lemma 3
     to turn any additive circuit corruption into an additive attack on the
     circuit input/output.
– Protocol Π.
  1. Client-side local pre-computation:
     (a) Each client C_i locally computes x'_i ← AMDEnc(x_i).
     (b) Each client C_i sends its encoded input x'_i to the protocol Π_Priv.
  2. Circuit evaluation:
     (a) Run Π_Priv on inputs (x'_1, . . . , x'_m), evaluating Π(C) = Ĉ_AMD.
     (b) Each server S_i exits Π_Priv with a share Sh_i^{g^out} of the output gate of
         Π(C) = Ĉ_AMD.
– Output extractor (virtual).
  1. On input a reconstruction set R ⊂ [n] of the underlying secret sharing
     scheme of Π_Priv, reconstruct the secret (b, z') from Sh_R^{g^out}.
  2. If b ≠ 0, then output abort.
  3. Otherwise, compute (b', y) ← AMDDec(z').
  4. If b' ≠ 0, then output abort.
  5. Otherwise, output y.
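To fix ideas, the following Python sketch (ours) traces the data flow of
Construction 1 for a toy circuit C(x_1, x_2) = x_1 · x_2, reusing amd_enc,
amd_dec, random and P from the earlier AMD sketch; the actual distributed
evaluation of Ĉ_AMD by Π_Priv is abstracted into a plain function call here.

def C(x1, x2):
    return x1 * x2 % P

def C_AMD(c1, c2):
    # Augmented circuit: AMD-decode the inputs, aggregate the error flags with
    # random coefficients, and AMD-encode the output, blinded when b != 0.
    (b1, x1), (b2, x2) = amd_dec(c1), amd_dec(c2)
    b = (random.randrange(P) * b1 + random.randrange(P) * b2) % P
    y_enc = amd_enc(C(x1, x2))
    r_prime = [random.randrange(P) for _ in y_enc]
    return b, tuple((yi + b * ri) % P for yi, ri in zip(y_enc, r_prime))

def output_extractor(b, z_enc):
    # Virtual output extractor: abort on either error flag, otherwise output y.
    if b != 0:
        return "abort"
    b_out, y = amd_dec(z_enc)
    return "abort" if b_out != 0 else y

# Honest run: clients AMD-encode their inputs locally; the servers (here, a
# single in-the-clear call) evaluate C_AMD; the extractor recovers the output.
assert output_extractor(*C_AMD(amd_enc(6), amd_enc(7))) == 42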
The fact that Π (t, ε)-securely computes C with abort was shown in [GIP+14].
In the sequel, we prove the non-malleability of Π against channel tampering using
the non-malleability of the underlying secret sharing scheme.
We start with an observation that greatly simplifies the construction of the
non-malleability simulator (the simulation of the honest party's output) for Π.
This observation is in fact a consequence of the non-malleability of secret
sharing and the definition of F^s_{SS,θ}. It can be shown, using an argument
similar to [CG16, Theorem 5.3], that if a secret sharing scheme is non-malleable
with respect to F^s_{SS,θ}, then the tampering experiment Tamper_s^{f,R} of the
secret sharing scheme is strictly independent of the secret s, for any tampering
function f ∈ F^s_{SS,θ} and any reconstruction set R. This is slightly stronger
than the plain NM-SS definition, where the tampering experiment Tamper_s^{f,R} is
allowed to depend on the secret s if its simulation D_{f,R} outputs the symbol
same* with non-zero probability.
Let G^{Π(C)} be the set of gates in the circuit for computing C according to Π
that require secure point-to-point channels in processing them. With the above
observation, it follows that once the shares of a secret are tampered with using a
function from F^s_{SS,θ}, the non-malleability of (NMShare, NMRecst) guarantees
that the information about the original secret is destroyed, in the sense that
given a reconstruction set R, the new secret contained in the tampered shares
corresponding to R can be simulated without knowing the original secret. This in
particular means that one does not need to keep track of the output values (in
general they are random variables, with randomness from the randomised sharing
algorithm NMShare) of the erroneously evaluated intermediate gates, since they
will not affect the output of g^out ∈ Π(C) if there is a gate g^k ∈ G^{Π(C)}
lying between g^out and the intermediate gates. Our non-malleability simulator for
Π works in reverse topological order, starting from the output gate g^out of
Π(C). If the output gate g^out ∈ G^{Π(C)}, then S̃h_R^{g^out}, which determines
the value (distribution) of the output of the virtual output extractor, can be
directly simulated using the non-malleability simulator of (NMShare, NMRecst) and
the deviation of the corrupt servers. If g^out ∉ G^{Π(C)}, we need to look at the
gates in G^{Π(C)} that are evaluated before the output gate g^out and do not have
gates in G^{Π(C)} lying between g^out and them. Let us denote these gates by
G^{Π(C),last} and the vector of shares corresponding to these gates by
S̃h_i^{G^{Π(C),last}}, for server S_i, i ∈ R. Then S̃h_R^{g^out} can be simulated
by computing for each server S_i in R a share S̃h_i^{g^out} from
S̃h_i^{G^{Π(C),last}}. Indeed, for servers in R\S, since these servers are honest,
we compute S̃h_i^{g^out} from S̃h_i^{G^{Π(C),last}} according to Π(C). For servers
in R ∩ S, since these servers are controlled by Adv, we compute S̃h_i^{g^out} from
S̃h_i^{G^{Π(C),last}} according to how Adv would deviate from the protocol.
Depending on the specific construction of Π_Priv, the set G^{Π(C),last} can be
different. We next explicitly describe our non-malleability simulator for Π.
1. Simulating the shares S̃h_R^{g^out} of the output gate g^out:
   – g^out ∈ G^{Π(C)}. Directly simulate:
     (a) Read from f^{S̄,G^{Π(C)}} the subset {f^{j,g^out} | j ∈ S̄} of tampering
         functions corresponding to the honest servers S̄ for the output gate
         g^out and, for each j ∈ S̄, call the non-malleability simulator of
         (NMShare, NMRecst) with tampering function f^{j,g^out} and
         reconstruction set R:
         i. s̃ ← D_{f^{j,g^out},R};
         ii. (S̃h_1^{j,g^out}, . . . , S̃h_n^{j,g^out}) ← NMShare(s̃);
         iii. output S̃h_{R\S}^{j,g^out}.
     (b) Read from Adv the messages that it instructs the corrupt servers S to
         send to the honest servers in R\S, and the final share for the output
         gate if the corrupt server is in R ∩ S. For each j ∈ S:
         i. output S̃h_{R\S}^{j,g^out};
         ii. if j ∈ S ∩ R, output S̃h_j^{g^out}.
     (c) For each j ∈ R\S, compute according to the protocol for the gate g^out
         the share S̃h_j^{g^out} from the received messages
         (S̃h_j^{1,g^out}, . . . , S̃h_j^{n,g^out}) obtained in steps (a) and (b).
   – g^out ∉ G^{Π(C)}. Find G^{Π(C),last} according to the construction of Π_Priv
     and simulate the following.
     • Π_Priv without setup phase (e.g. [BGW88]): the gates in G^{Π(C)} that
       require communication to process are the multiplication gates and input
       gates.
       (a) Simulating the shares S̃h_R^{g^k} for each g^k ∈ G^{Π(C),last}:
           ∗ If g^k ∈ G^{Π(C),last} is an input gate, proceed as in the “Directly
             simulate” steps described for the case g^out ∈ G^{Π(C)} above, with
             the simplification that the non-malleability simulator of
             (NMShare, NMRecst) is called only once and item (c) is empty.
           ∗ If g^k ∈ G^{Π(C),last} is a multiplication gate, proceed exactly as
             in the “Directly simulate” steps described for the case
             g^out ∈ G^{Π(C)} above.
       (b) Computing the shares S̃h_R^{g^out} of the output gate g^out:
           For each j ∈ R\S, compute according to the protocol the share
           S̃h_j^{g^out} from the messages received in the previous steps. Read
           from Adv the final share S̃h_j^{g^out} of a corrupt server j for the
           output gate if the corrupt server is in R ∩ S.
     • Π_Priv with setup phase (e.g. [DN07]): the gates in G^{Π(C)} that require
       communication to process are the setup gates and input gates. Moreover,
       the input gates lie between the output gate and the setup gates.
       (a) Simulating the shares S̃h_R^{g^k} for each g^k ∈ G^{Π(C),last}:
           Here g^k ∈ G^{Π(C),last} is always an input gate; proceed as in the
           “Directly simulate” steps described for the case g^out ∈ G^{Π(C)}
           above, with the simplification that the non-malleability simulator of
           (NMShare, NMRecst) is called only once and item (c) is empty.
       (b) Computing the shares S̃h_R^{g^out} of the output gate g^out:
           For each j ∈ R\S, compute according to the protocol the share
           S̃h_j^{g^out} from the messages received in the previous steps. Read
           from Adv the final share S̃h_j^{g^out} of a corrupt server j for the
           output gate if the corrupt server is in R ∩ S.
2. Virtual output extractor:
   (a) On input a reconstruction set R ⊂ [n] of the underlying secret sharing
       scheme of Π_Priv, reconstruct the secret (b, z') from S̃h_R^{g^out}.
   (b) If b ≠ 0, then output abort.
   (c) Otherwise, compute (b', y) ← AMDDec(z').
   (d) If b' ≠ 0, then output abort.
   (e) Otherwise, output y.
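For concreteness, the following schematic Python sketch (ours; every primitive is
a stand-in passed as a callable, and none of the names below are defined in the
paper) mirrors the control flow of the “Directly simulate” branch of Step 1:
simulate the secret each honest sender would now be sharing, re-share it, and
combine the results with the adversary's messages exactly as the protocol would.

def simulate_output_gate_shares(R, S, honest, nm_simulator, nm_share,
                                adv_messages, combine):
    # Messages arriving at the honest servers in R (i.e. R \ S).
    received = {j: {} for j in R if j not in S}
    for sender in honest:                       # step (a): honest senders
        s_tilde = nm_simulator(sender)          # stand-in for D_{f^{sender,g_out},R}
        sh = nm_share(s_tilde)                  # fresh sharing of the simulated secret
        for j in received:
            received[j][sender] = sh[j]
    for sender in S:                            # step (b): corrupt senders
        for j in received:
            received[j][sender] = adv_messages(sender, j)
    out_shares = {}
    for j in received:                          # step (c): honest servers in R
        out_shares[j] = combine(j, received[j])
    for j in R:
        if j in S:                              # corrupt servers' final shares
            out_shares[j] = adv_messages(j, "final")
    return out_shares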