1 Introduction

In a two-party coin flipping protocol, introduced by Blum [6], the parties wish to output a common (close to) uniform bit, even though one of the parties may be corrupted and try to bias the output. Slightly more formally, an \(\varepsilon \)-fair coin flipping protocol should satisfy the following two properties: first, when both parties behave honestly (i.e., follow the prescribed protocol), they both output the same uniform bit. Second, in the presence of a corrupted party that may deviate from the protocol arbitrarily, the distribution of the honest party’s output may deviate from the uniform distribution (unbiased bit) by at most \(\varepsilon \). We emphasize that the above notion requires an honest party to always output a bit, regardless of what the corrupted party does, and, in particular, it is not allowed to abort if a cheat is detected.Footnote 1 Coin flipping is a fundamental primitive with numerous applications, and thus lower bounds on coin flipping protocols yield analogous bounds for many basic cryptographic primitives, including other inputless primitives and secure computation of functions that take input (e.g., XOR).

In his seminal work, Cleve [8] showed that, for any efficient two-party r-round coin flipping protocol, there exists an efficient adversarial strategy that biases the output of the honest party by \(\varTheta (1/r)\). The above lower bound on coin flipping protocols was matched for the two-party case by Moran, Naor, and Segev [20], improving over the \(\varTheta (n/\sqrt{r})\)-fairness achieved by the majority protocol of Awerbuch, Blum, Chor, Goldwasser, and Micali [2]. The protocol of [20], however, uses oblivious transfer, whereas the protocol of [2] can be based on any one-way function. An intriguing open question is whether oblivious transfer, or more generally “public-key primitives”, is required for an \(o(1/\sqrt{r})\)-fair coin flip. The question was partially answered in the black-box setting by Dachman-Soled, Lindell, Mahmoody, and Malkin [10] and Dachman-Soled, Mahmoody, and Malkin [11], who showed that restricted types of fully black-box reductions cannot establish \(o(1/\sqrt{r})\)-bias coin flipping protocols from one-way functions. In particular, for constant-round coin flipping protocols, [10] yields that black-box techniques from one-way functions can only guarantee fairness of order \(1/\sqrt{r}\).

1.1 Our Results

Our main result is that constant-round coin flipping protocols with better bias compared to the majority protocol of [2] imply the existence of infinitely-often key-agreement. We recall that infinitely-often key-agreement protocols satisfy correctness (parties agree on a common bit with overwhelming probability), and, for an infinite number of security parameters, no efficient eavesdropper can deduce the output with probability noticeably far from a random guess.Footnote 2

Theorem 1.1

(Main result, informal). For any constant \(r \in {\mathbb {N}}\), the existence of a \(1/(c\,\cdot \,\sqrt{r})\)-fair, r-round coin flipping protocol implies the existence of an infinitely-often key-agreement protocol, for \(c>0\) a universal constant (independent of r).

As in [8, 10, 11], our result extends via a simple reduction to general multi-party coin flipping protocols (with more than two parties) without an honest majority. Our non-black-box reduction makes a novel use of the recent dichotomy for two-party protocols of Haitner et al. [12]. Specifically, assuming that io-key-agreement does not exist and applying Haitner et al.’s dichotomy, we show that a two-party variant of the recent multi-party attack of Beimel et al. [3] yields a \( 1/(c\cdot \sqrt{r}) \)-bias attack.

1.2 Our Technique

Let \(\varPi =(\mathsf {A},{\mathsf {B}})\) be an r-round two-party coin flipping protocol. We show that the nonexistence of key-agreement protocols yields an efficient \(\varTheta (1/\sqrt{r})\)-bias attack on \(\varPi \). We start by describing the \(1/\sqrt{r}\)-bias inefficient attack of Cleve and Impagliazzo [9], and the approach of Beimel et al. [3] towards making this attack efficient. We then explain how to use the recent results by Haitner et al. [12] to obtain an efficient attack (assuming the nonexistence of io-key-agreement protocols).

Cleve and Impagliazzo’s Inefficient Attack. We describe the inefficient \(1/\sqrt{r}\)-bias attack due to Cleve and Impagliazzo [9]. Let \(M_1,\ldots ,M_r\) denote the messages in a random execution of \(\varPi \), and let C denote the (without loss of generality) always common output of the parties in a random honest execution of \(\varPi \). Let \(X_i = {{\mathbf {E}}}\left[ C \mid M_{\le i} \right] \). Namely, \(M_{\le i} = M_1,\ldots ,M_i\) denotes the partial transcript of \(\varPi \) up to and including round i, and \(X_i\) is the expected outcome of the parties in \(\varPi \) given \(M_{\le i}\). It is easy to see that \(X_0,\ldots ,X_r\) is a martingale sequence: \({{\mathbf {E}}}\left[ X_i \mid X_{0},\ldots , X_{i-1}\right] = X_{i-1}\) for every i. Since the parties in an honest execution of \(\varPi \) output a uniform bit, it holds that \(X_0 = {\mathrm {Pr}}\left[ C = 1\right] = 1/2\) and \(X_r\in \{0,1\}\). Cleve and Impagliazzo [9] (see Beimel et al. [3] for an alternative simpler proof) prove that, for such a sequence (omitting absolute values and constant factors),

$$\begin{aligned}&\text{ Gap: }&{\mathrm {Pr}}\left[ \exists i\in [r] :X_i- X_{i-1} \ge 1/\sqrt{r}\right] \ge 1/2. \end{aligned}$$
(1)

Let the \(i^\mathrm{th}\) backup value of party \({\mathsf {P}}\), denoted \(Z_i^{\mathsf {P}}\), be the output of party \({\mathsf {P}}\) if the other party aborts prematurely after the \(i^\mathrm{th}\) message was sent (recall that the honest party must always output a bit, by definition). In particular, \(Z^{\mathsf {P}}_r\) denotes the final output of \({\mathsf {P}}\) (if no abort occurred). We claim that without loss of generality for both \({\mathsf {P}}\in \left\{ \mathsf {A}, {\mathsf {B}}\right\} \) it holds that

Backup values approximate outcome:

$$\begin{aligned}&\quad {\mathrm {Pr}}\left[ \exists i\in [r] :\left| X_{i} - {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid M_{\le i} \right] \right| \ge 1/2\sqrt{r}\right] \le 1/4 . \end{aligned}$$
(2)

To see why, assume Eq. (2) does not hold. Then, the (possibly inefficient) adversary controlling \(\overline{{\mathsf {P}}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \setminus {\mathsf {P}}\) that aborts at the end of round i if \((-1)^{1-z}\cdot (X_{i} - {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid M_{\le i} \right] ) \ge 1/\sqrt{r}\), for suitable \(z\in \{0,1\}\), biases the output of \({\mathsf {P}}\) towards \(1-z\) by \(\varTheta (1/\sqrt{r})\).

Finally, since the coins of the parties are independent conditioned on the transcript (a fundamental fact about protocols), if party \(\mathsf {A} \) sends the \((i+1)\) message then

$$\begin{aligned}&\text{ Independence: }&{{\mathbf {E}}}\left[ Z_{i}^{\mathsf {B}}\mid M_{\le i} \right] = {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {B}}\mid M_{ \le i+1} \right] . \end{aligned}$$
(3)

Combining the above observations yields that without loss of generality:

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i\in [r] :\mathsf {A} \text { sends the } i^\mathrm{th} \text { message} \wedge X_{i} - {{\mathbf {E}}}\left[ Z_{i-1}^{\mathsf {B}}\mid M_{\le i} \right] \ge 1/2\sqrt{r}\right] \ge 1/8. \end{aligned}$$
(4)

Equation (4) yields the following (possibly inefficient) attack for a corrupted party \(\mathsf {A} \) biasing \({\mathsf {B}}\)’s output towards zero: before sending the \(i^\mathrm{th}\) message \(M_i\), party \(\mathsf {A} \) aborts if \( X_{i} - {{\mathbf {E}}}\left[ Z_{i-1}^{\mathsf {B}}\mid M_{\le i} \right] \ge 1/2\sqrt{r}\). By Eq. (4), this attack biases \({\mathsf {B}}\)’s output towards zero by \(\varOmega (1/\sqrt{r})\).
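To make the attack template concrete, the sketch below (our own toy illustration, not code from the paper) runs it against a two-party variant of the majority protocol of [2]: the attacker sees each jointly generated coin before the defender locks it in, and the defender's backup value is the majority of the locked-in coins completed with fresh random coins. The threshold \(1/(4\sqrt{r})\) and all function names are our choices.

```python
import math
import random

def maj_prob(ones, seen, r):
    # Pr[majority of r fair coins is 1 | `ones` ones among the first `seen` coins]
    rem = r - seen
    need = r // 2 + 1 - ones
    return sum(math.comb(rem, k) for k in range(max(need, 0), rem + 1)) / 2 ** rem

def defender_backup(locked, r, rng):
    # backup output: majority of the locked-in coins, completed with fresh random coins
    total = sum(locked) + sum(rng.random() < 0.5 for _ in range(r - len(locked)))
    return int(total > r // 2)

def attacked_execution(r, rng, thresh):
    ones, prev = 0, 0.5
    for i in range(1, r + 1):
        c = rng.random() < 0.5
        ones += c
        cur = maj_prob(ones, i, r)
        if cur - prev >= thresh:  # new coin pushed the outcome up: abort before it is locked in
            locked = [1] * (ones - c) + [0] * (i - 1 - (ones - c))
            return defender_backup(locked, r, rng)
        prev = cur
    return int(ones > r // 2)  # no abort: honest majority output

r, n = 101, 4000
rng = random.Random(1)
avg = sum(attacked_execution(r, rng, 1 / (4 * math.sqrt(r))) for _ in range(n)) / n
print(avg)  # noticeably below 1/2: the attacker biases the defender towards zero
```

In this toy run the measured bias is on the order of a few percent, consistent with the \(\varTheta (1/\sqrt{r})\) prediction for \(r = 101\).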

The clear limitation of the above attack is that, assuming one-way functions exist, the values \(X_i={{\mathbf {E}}}\left[ C \mid M_{\le i} = (m_1,\ldots , m_i) \right] \) and \({{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid M_{\le i} = (m_1,\ldots , m_i) \right] \) might not be efficiently computable as a function of the transcript \((m_1,\ldots ,m_i)\).Footnote 3 Facing this difficulty, Beimel et al. [3] considered the martingale sequence \(X_i = {{\mathbf {E}}}\left[ C \mid Z^{\mathsf {P}}_{\le i} \right] \) (recall that \(Z_i^{\mathsf {P}}\) is the \(i^\mathrm{th}\) backup value of \({\mathsf {P}}\)). It follows that, for constant-round protocols, the value of \(X_i\) is a function of a constant-size string, and thus it is efficiently computable ([3] extended this approach to protocols of super-constant round complexity, see Footnote 4). The price of using the alternative sequence \(X_1,\ldots ,X_r\) is that the independence property (Eq. (3)) might no longer hold. Yet, [3] manage to leverage this approach into an efficient \(\widetilde{\varOmega }(1/\sqrt{r})\)-attack on multi-party protocols. In the following, we show how to use the dichotomy of Haitner et al. [12] to obtain a two-party variant of the attack from [3].

Nonexistence of Key-Agreement Implies an Efficient Attack. Let \(U_p\) denote the Bernoulli random variable taking the value 1 with probability p, and let \(P \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_\rho Q\) denote that P and Q are \(\rho \)-computationally indistinguishable (i.e., an efficient distinguisher cannot tell P from Q with advantage better than \(\rho \)). We use two results by Haitner et al. [12]. The first, given below, holds for any two-party protocol.

Theorem 1.2

(Haitner et al. [12]’s forecaster, informal). Let \(\varDelta = \left( \mathsf {A},{\mathsf {B}}\right) \) be a single-bit output (each party outputs a bit) two-party protocol. Then, for any constant \(\rho >0\), there exists a constant output-length poly-time algorithm (forecaster) \(\mathsf {F}\) mapping transcripts of \(\varDelta \) into (the binary description of) pairs in \([0,1] \times [0,1]\) such that the following holds: let \((X,Y,T)\) be the parties’ outputs and transcript in a random execution of \(\varDelta \), then

  • \((X,T) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_\rho (U_{p^{\mathsf {A}}},T)_{(p^{\mathsf {A}},\cdot ) \leftarrow \mathsf {F} (T)}\), and

  • \((Y,T) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_\rho (U_{p^{{\mathsf {B}}}},T)_{(\cdot , p^{{\mathsf {B}}}) \leftarrow \mathsf {F} (T)}\).

Namely, given the transcript, \(\mathsf {F} \) forecasts the output-distribution for each party in a way that is computationally indistinguishable from (the distribution of) the real output.
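A necessary (and easily testable) consequence of this guarantee is calibration: conditioned on the transcript, the forecasted probability must track the empirical output distribution, since otherwise a distinguisher comparing outputs to forecasts would succeed. The toy script below (entirely our own construction; the protocol, forecaster, and numbers are invented for illustration) checks this consequence:

```python
import random
from collections import defaultdict

# Toy single-bit protocol: the "transcript" is one bit t, and the output
# correlates with it. The forecaster below is hand-made for this toy case.
def run(rng):
    t = int(rng.random() < 0.5)
    out = t if rng.random() < 0.9 else 1 - t
    return t, out

forecast = {0: 0.1, 1: 0.9}  # t -> forecasted Pr[output = 1 | transcript = t]

rng = random.Random(3)
stats = defaultdict(lambda: [0, 0])  # t -> [number of 1-outputs, number of runs]
for _ in range(50000):
    t, out = run(rng)
    stats[t][0] += out
    stats[t][1] += 1

for t in sorted(stats):
    ones, cnt = stats[t]
    print(t, forecast[t], round(ones / cnt, 3))  # empirical rate tracks the forecast
```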

Consider the \((r+1)\)-round protocol \({\widetilde{\varPi }}= (\widetilde{\mathsf {A}},\widetilde{{\mathsf {B}}})\), defined by \(\widetilde{\mathsf {A}}\) sending a random \(i\in [r]\) to \(\widetilde{{\mathsf {B}}}\) as the first message and then the parties interact in a random execution of \(\varPi \) for the first i rounds. At the end of the execution, the parties output their \(i^\mathrm{th}\) backup values \(z_i^{\mathsf {A}}\) and \(z_i^{{\mathsf {B}}}\) and halt. Let \(\mathsf {F} \) be the forecaster for \({\widetilde{\varPi }}\) guaranteed by Theorem 1.2 for \(\rho = 1/r^2\) (note that \(\rho \) is indeed constant). A simple averaging argument yields that

$$\begin{aligned} (Z_i^{{\mathsf {P}}},M_{\le i}) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{1/r} (U_{p^{{\mathsf {P}}}},M_{\le i})_{(p^{\mathsf {A}},p^{{\mathsf {B}}}) \leftarrow \mathsf {F} (M_{\le i})} \end{aligned}$$
(5)

for both \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and every \(i\in [r]\), letting \(\mathsf {F} (m_{\le i})= \mathsf {F} (i,m_{\le i})\). Namely, \(\mathsf {F} \) is a good forecaster for the partial transcripts of \(\varPi \).

Let \(M_1,\ldots ,M_r\) denote the messages in a random execution of \(\varPi \) and let C denote the output of the parties in \(\varPi \). Let \(F_i = \left( F_i^\mathsf {A},F_i^{\mathsf {B}}\right) =\mathsf {F} (M_{\le i})\) and let \(X_i = {{\mathbf {E}}}\left[ C \mid F_{\le i} \right] \). It is easy to see that \(X_0,\ldots ,X_r\) is a martingale sequence and that \(X_0 = 1/2\). We assume without loss of generality that the last message of \(\varPi \) contains the common output. Thus, it follows from Eq. (5) that \(F_r\approx (C,C) \in \left\{ (0,0),(1,1)\right\} \) (otherwise, it would be easy to distinguish the forecasted outputs from the real ones, given \(M_r\)). Hence, similarly to the inefficient attack above, it holds that

$$\begin{aligned}&\text{ Gap: }&{\mathrm {Pr}}\left[ \exists i\in [r] :X_i- X_{i-1} \ge 1/\sqrt{r}\right] \ge 1/2. \end{aligned}$$
(6)

Since \(F_i\) has constant-size support and since \(\varPi \) is constant round, it follows that \(X_i\) is efficiently computable from \(M_{\le i}\).Footnote 4

Let \(Z_i^{\mathsf {P}}\) denote the backup value computed by party \({\mathsf {P}}\) in round i of a random execution of \(\varPi \). The indistinguishability of \(\mathsf {F}\) yields that \({{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{\le i} \right] \approx F_i^{\mathsf {P}}\). As in the inefficient attack above, unless there is a simple \(1/\sqrt{r}\)-attack, it holds that

Backup values approximate outcome:

$$\begin{aligned}&\quad {\mathrm {Pr}}\left[ \exists i\in [r] :\left| X_{i} - {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{\le i} \right] \right| \ge 1/2\sqrt{r}\right] \le 1/4 . \end{aligned}$$
(7)

Thus, for an efficient variant of [9]’s attack, it suffices to show that

$$\begin{aligned}&\text{ Independence: }&{{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{ \le i}\right] \approx {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{ \le i+1} \right] . \end{aligned}$$
(8)

for every \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and round i in which party \({\overline{{\mathsf {P}}}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \setminus \left\{ {\mathsf {P}}\right\} \) sends the \((i+1)\) message. However, unlike Eq. (3) in Sect. 1.2, Eq. (8) does not hold unconditionally (in fact, assuming oblivious transfer exists, the implied attack must fail for some protocols, yielding that Eq. (8) is false for these protocols). Rather, we relate Eq. (8) to the existence of a key-agreement protocol. Specifically, we show that if Eq. (8) is not true, then there exists a key-agreement protocol.

Proving that \({\varvec{F}}_{{\varvec{i+1}}}\) and \({{\varvec{Z}}}_{{\varvec{i}}}^{{\mathsf {P}}}\) are Approximately Independent Given \({\varvec{F}}_{\le {\varvec{i}}}\) .

The next (and last) argument is the most technically challenging part of our proof. Here, we provide a brief overview of the technique; the full details are provided in the main body (Claim 3.8 in Sect. 3).

We show that assuming nonexistence of io-key-agreement, \(F_{i+1}\) and \(Z_{i}^{\mathsf {P}}\) are approximately independent given \(F_{\le i}\). In more detail, the triple \((Z_{i}^{\mathsf {P}},F_{i+1}, F_{\le i})\) is \(\rho \)-indistinguishable from \((Y_1,Y_2, F_{\le i})\) where \((Y_1,Y_2)\) is a pair of random variables that are mutually independent given \(F_{\le i}\). It would then follow that \({{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{i+1}, F_{\le i}\right] \approx {{\mathbf {E}}}\left[ Y_1\mid Y_2, F_{\le i}\right] = {{\mathbf {E}}}\left[ Y_1\mid F_{\le i}\right] \approx {{\mathbf {E}}}\left[ Z_{i}^{\mathsf {P}}\mid F_{\le i}\right] \) as required. To this end, we use a second result by Haitner et al. [12].Footnote 5

Theorem 1.3

(Haitner et al. [12]’s dichotomy, informal). Let \(\varDelta = \left( \mathsf {A},{\mathsf {B}}\right) \) be an efficient single-bit output two-party protocol and assume infinitely-often key-agreement protocols do not exist. Then, for any constant \(\rho >0\), there exists a poly-time algorithm (decorrelator) \(\mathsf {Dcr} \) mapping transcripts of \(\varDelta \) into \([0,1] \times [0,1]\) such that the following holds: let \((X,Y,T)\) be the parties’ outputs and transcript in a random execution of \(\varDelta \), then

$$\begin{aligned} (X,Y,T) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_\rho (U_{p^{\mathsf {A}}},U_{p^{{\mathsf {B}}}},T)_{(p^{\mathsf {A}},p^{{\mathsf {B}}}) \leftarrow \mathsf {Dcr} (T)}. \end{aligned}$$

Namely, assuming io-key-agreement does not exist, the distribution of the parties’ output given the transcript is \(\rho \)-close to the product distribution given by \(\mathsf {Dcr} \). We assume for simplicity that the theorem holds for many-bit output protocols and not merely single bit (we get rid of this assumption in the actual proof).

We define another variant \({\widehat{\varPi }}\) of \(\varPi \) that internally uses the forecaster \(\mathsf {F} \), and show that the existence of a decorrelator for \({\widehat{\varPi }}\) implies that \(F_{i+1}\) and \(Z_{i}^{\mathsf {P}}\) are approximately independent given \(F_{\le i}\), and Eq. (8) follows. For concreteness, we focus on party \({\mathsf {P}}={\mathsf {B}}\).

Fix i such that \(\mathsf {A} \) sends the \((i+1)\) message in \(\varPi \) and define protocol \({\widehat{\varPi }}= ({\widehat{\mathsf {A}}},{\widehat{{\mathsf {B}}}})\) according to the following specifications: the parties interact just as in \(\varPi \) for the first i rounds; then \({\widehat{{\mathsf {B}}}}\) outputs the \(i^\mathrm{th}\) backup value of \({\mathsf {B}}\) and \({\widehat{\mathsf {A}}}\) internally computes \(m_{i+1}\) and outputs \(f_{i+1}= \mathsf {F} (m_{\le i+1})\). By Theorem 1.3 there exists an efficient decorrelator \(\mathsf {Dcr} \) for \({\widehat{\varPi }}\) with respect to \(\rho = 1/r\). That is:

$$\begin{aligned} (F_{i+1},Z_i^{{\mathsf {B}}},M_{\le i}) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{1/r} (U_{p^{{\widehat{\mathsf {A}}}}},U_{p^{{\widehat{{\mathsf {B}}}}}},M_{\le i})_{(p^{{\widehat{\mathsf {A}}}},p^{{\widehat{{\mathsf {B}}}}}) \leftarrow \mathsf {Dcr} (M_{\le i})}, \end{aligned}$$
(9)

where now \(p^{{\widehat{\mathsf {A}}}}\) describes a non-Boolean distribution, and \(U_{p^{{\widehat{\mathsf {A}}}}}\) denotes an independent sample from this distribution.

Since \(\mathsf {F} \) and \(\mathsf {Dcr} \) both output an estimate of (the expectation of) \(Z_i^{{\mathsf {B}}}|M_{\le i}\) in a way that is indistinguishable from the real distribution of \(Z_i^{{\mathsf {B}}}\) (given \(M_{\le i}\)), both algorithms output essentially the same value. Otherwise, the “accurate” algorithm can be used to distinguish the output of the “inaccurate” algorithm from the real output. It follows that

$$\begin{aligned} (U_{p^{{\widehat{\mathsf {A}}}}},U_{p^{{\widehat{{\mathsf {B}}}}}},M_{\le i})_{(p^{{\widehat{\mathsf {A}}}},p^{{\widehat{{\mathsf {B}}}}}) \leftarrow \mathsf {Dcr} (M_{\le i})} \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{1/r} (U_{p^{{\widehat{\mathsf {A}}}}},U_{F_i^{{\mathsf {B}}}},M_{\le i})_{(p^{{\widehat{\mathsf {A}}}},\cdot ) \leftarrow \mathsf {Dcr} (M_{\le i})} \end{aligned}$$
(10)

Using a data-processing argument in combination with Eqs. (9) and (10), we deduce that

$$\begin{aligned} \left( F_{i+1},Z_i^{{\mathsf {B}}},F_{\le i}\right)&\mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{1/r} \left( U_{p^{{\widehat{\mathsf {A}}}}},U_{p^{{\widehat{{\mathsf {B}}}}}},F_{\le i}\right) _{(p^{{\widehat{\mathsf {A}}}},p^{{\widehat{{\mathsf {B}}}}}) \leftarrow \mathsf {Dcr} (M_{\le i})} \end{aligned}$$
(11)
$$\begin{aligned}&\mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{1/r} \left( U_{p^{{\widehat{\mathsf {A}}}}},U_{F_i^{{\mathsf {B}}}},F_{\le i}\right) _{(p^{{\widehat{\mathsf {A}}}},\cdot ) \leftarrow \mathsf {Dcr} (M_{\le i}) }. \end{aligned}$$
(12)

Finally, conditioned on \(F_{\le i} \), we observe that the pair of random variables \((U_{p^{{\widehat{\mathsf {A}}}}},U_{F_i^{{\mathsf {B}}}})_{(p^{{\widehat{\mathsf {A}}}},\cdot ) \leftarrow \mathsf {Dcr} (M_{\le i})}\) are mutually independent since \(U_{F_i^{{\mathsf {B}}}}\) is sampled independently according to \(F_i^{{\mathsf {B}}}\), and \(F_i^{{\mathsf {B}}}\) is fully determined by \(F_{\le i}\).

1.3 Related Work

We review some of the relevant work on fair coin flipping protocols.

Necessary Hardness Assumptions. This line of work examines the minimal assumptions required to achieve \(o(1/\sqrt{r})\)-bias two-party coin flipping protocols, as done in this paper. The necessity of one-way functions for weaker variants of coin flipping protocols, where the honest party is allowed to abort if the other party aborts or deviates from the prescribed protocol, was considered in [5, 13, 17, 18]. More related to our bound are the work of Dachman-Soled et al. [10], who showed that any fully black-box construction of O(1/r)-bias two-party protocols based on one-way functions (with r-bit input and output) requires \(\varOmega (r/\log r)\) rounds, and the work of Dachman-Soled et al. [11], who showed that there is no fully black-box and function-oblivious construction of O(1/r)-bias two-party protocols from one-way functions (a protocol is function oblivious if the outcome of the protocol is independent of the choice of the one-way function used in the protocol). For the case we are interested in, i.e., constant-round coin flipping protocols, [10] yields that black-box techniques from one-way functions can only guarantee fairness of order \(1/\sqrt{r}\).

Lower Bounds. Cleve [8] proved that, for every r-round two-party coin flipping protocol, there exists an efficient adversary that can bias the output by \(\varOmega (1/r)\). Cleve and Impagliazzo [9] proved that, for every r-round two-party coin flipping protocol, there exists an inefficient fail-stop adversary that biases the output by \(\varOmega (1/\sqrt{r})\). They also showed that a similar attack exists if the parties have access to an ideal commitment scheme. All above bounds extend to the multi-party case (with no honest majority) via a simple reduction. Very recently, Beimel et al. [3] showed that any r-round \(n\)-party coin flipping protocol with \(n^k > r\), for some constant \(k\), can be biased by \(1/(\sqrt{r} \cdot (\log r)^k)\). Ignoring logarithmic factors, this means that if the number of parties is \(r^{\varOmega (1)}\), the majority protocol of [2] is optimal.

Upper Bounds. Blum [6] presented a two-party two-round coin flipping protocol with bias 1/4. Awerbuch et al. [2] presented an \(n\)-party r-round protocol with bias \(O(n/\sqrt{r})\) (the two-party case appears also in Cleve [8]). Moran et al. [19] solved the two-party case by giving a two-party r-round coin flipping protocol with bias O(1/r). Haitner and Tsfadia [14] solved the three-party case up to poly-logarithmic factors by giving a three-party coin flipping protocol with bias \(O({\text {polylog}}(r)/r)\). Buchbinder et al. [7] showed an \(n\)-party r-round coin flipping protocol with bias \({\widetilde{O}}(n^3 2^n/r^{\frac{1}{2}+\frac{1}{2^{n-1}-2}})\). In particular, their protocol for four parties has bias \({\widetilde{O}}(1/r^{2/3})\), and for \(n= \log \log r\) their protocol has smaller bias than that of Awerbuch et al. [2].
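To illustrate the first entry in this list, the sketch below (our toy model) replays Blum's protocol with an idealized commitment, against an attacker A* that opens its commitment only when the outcome is the desired bit 1; we assume the standard backup rule in which B outputs a fresh random bit upon abort.

```python
import random

def blum_against_aborting_A(rng):
    a = rng.random() < 0.5  # A's committed bit (sampled honestly by A*)
    b = rng.random() < 0.5  # B's bit, sent after seeing only the commitment
    if a ^ b:
        return 1  # outcome is already 1: A* opens the commitment honestly
    return int(rng.random() < 0.5)  # A* aborts; B outputs a fresh random bit

rng = random.Random(2)
n = 100000
avg = sum(blum_against_aborting_A(rng) for _ in range(n)) / n
print(avg)  # ~ 3/4: the output is biased towards 1 by 1/4
```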

For the case where fewer than 2/3 of the parties are corrupt, Beimel et al. [4] showed an \(n\)-party r-round coin flipping protocol with bias \(2^{2^k}/r\), tolerating up to \(t=(n+k)/2\) corrupt parties. Alon and Omri [1] showed an \(n\)-party r-round coin flipping protocol with bias \({\widetilde{O}}(2^{2^n}/r)\), tolerating up to t corrupted parties, for constant \(n\) and \(t<3n/4\).

1.4 Open Questions

We show that a constant-round coin flipping protocol with “small” bias (i.e., \(o(1/\sqrt{r})\)-fair, for an r-round protocol) implies io-key-agreement. Whether such a reduction can be extended to protocols with super-constant round complexity remains open. The barrier to extending our results is that the dichotomy result of Haitner et al. [12] only guarantees indistinguishability with constant advantage (as opposed to vanishing or negligible advantage). It is worth mentioning that for protocols of super-constant round complexity, even a black-box separation between optimal (and thus small-bias) coin flipping protocols and one-way functions is not known.

The question of constructing oblivious transfer from an optimally-fair coin flip is also open. We recall that all known small-bias coin flipping protocols rely on oblivious transfer [7, 15, 20]. It is open whether the techniques of Haitner et al. [12] can provide a similar dichotomy with respect to (io-) oblivious transfer (as opposed to io-key-agreement), allowing for the realization of oblivious transfer from an \(o(1/\sqrt{r})\)-fair (constant-round) coin flip via the techniques of the present paper.

Paper Organization. Basic definitions and notation used throughout the paper are given in Sect. 2. The formal statement and proof of the main theorem are given in Sect. 3.

2 Preliminaries

2.1 Notation

We use calligraphic letters to denote sets, uppercase for random variables and functions, and lowercase for values. For \(a,b\in {\mathbb {R}}\), let \(a\pm b\) stand for the interval \([a-b,a+b]\). For \(n\in {\mathbb {N}}\), let \([n] = \left\{ 1,\ldots ,n\right\} \) and \((n) = \left\{ 0,\ldots ,n\right\} \). Let \({\text {poly}}\) denote the set of all polynomials, let ppt stand for probabilistic polynomial time, and let pptm denote a ppt algorithm (Turing machine). A function \(\nu \) is negligible, denoted \(\nu (n) = {\text {neg}}(n)\), if \(\nu (n)<1/p(n)\) for every \(p\in {\text {poly}}\) and large enough n. For a sequence \(x_1,\ldots , x_r\) and \(i\in [r]\), let \(x_{\le i}=x_1,\ldots ,x_i\) and \(x_{< i}=x_1,\ldots ,x_{i-1}\).

Given a distribution, or random variable, D, we write \(x\leftarrow D\) to indicate that x is selected according to D. Given a finite set \({{{\mathcal {S}}}}\), let \(s\leftarrow {{{\mathcal {S}}}}\) denote that s is selected according to the uniform distribution over \({{{\mathcal {S}}}}\). The support of D, denoted \({\text {Supp}}(D)\), is defined as \(\left\{ u\in {\mathord {{\mathcal {U}}}}: D(u)>0\right\} \). The statistical distance between two distributions P and Q over a finite set \({\mathord {{\mathcal {U}}}}\), denoted \(\mathsf {\textsc {SD}}(P,Q)\), is defined as \(\max _{{{{\mathcal {S}}}}\subseteq {\mathord {{\mathcal {U}}}}} \left| P({{{\mathcal {S}}}})-Q({{{\mathcal {S}}}})\right| = \frac{1}{2} \sum _{u\in {\mathord {{\mathcal {U}}}}}\left| P(u)-Q(u)\right| \). Distribution ensembles \(X = \{X_\kappa \}_{\kappa \in {\mathbb {N}}}\) and \(Y = \{Y_\kappa \}_{\kappa \in {\mathbb {N}}}\) are \(\delta \)-computationally indistinguishable in the set \({\mathcal {K}}\), denoted by \(X\mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{{\mathcal {K}},\delta } Y\), if for every pptm \(\mathsf {D} \) and sufficiently large \(\kappa \in {\mathcal {K}}\): \(\left| {\mathrm {Pr}}\left[ {{{\mathsf {D}}}}(1^\kappa , X_\kappa )=1\right] - {\mathrm {Pr}}\left[ {{{\mathsf {D}}}}(1^\kappa , Y_\kappa )=1\right] \right| \le \delta \).
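The two expressions for statistical distance above coincide; the following quick check (toy distributions of our own choosing) verifies the equality \(\max _{{{{\mathcal {S}}}}} \left| P({{{\mathcal {S}}}})-Q({{{\mathcal {S}}}})\right| = \frac{1}{2} \sum _{u}\left| P(u)-Q(u)\right| \) by brute force:

```python
from itertools import chain, combinations

# Two toy distributions over U = {0, 1, 2, 3}
P = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
Q = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

U = list(P)

def mass(D, S):
    return sum(D[u] for u in S)

# max over all subsets S of |P(S) - Q(S)|
subsets = chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))
sd_max = max(abs(mass(P, S) - mass(Q, S)) for S in subsets)

# half the L1 distance
sd_l1 = 0.5 * sum(abs(P[u] - Q[u]) for u in U)

print(sd_max, sd_l1)  # the two formulas agree (both 0.2 here)
```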

2.2 Protocols

Let \(\varPi = (\mathsf {A},{\mathsf {B}})\) be a two-party protocol. The protocol \(\varPi \) is ppt if the running time of both \(\mathsf {A} \) and \({\mathsf {B}}\) is polynomial in their input length (regardless of the party they interact with). We denote by \((\mathsf {A} (x),{\mathsf {B}}(y))(z)\) a random execution of \(\varPi \) with private inputs x and y, and common input z, and sometimes abuse notation and write \((\mathsf {A} (x),{\mathsf {B}}(y))(z)\) for the parties’ output in this execution.

We focus on no-input, two-party, single-bit output ppt protocols: the only input of the two ppt parties is the common security parameter, given in unary representation. At the end of the execution, each party outputs a single bit. Throughout, we assume without loss of generality that the transcript contains \(1^\kappa \) as the first message. Let \(\varPi = (\mathsf {A},{\mathsf {B}})\) be such a two-party single-bit output protocol. For \(\kappa \in {\mathbb {N}}\), let \(C^{\mathsf {A},\kappa }_\varPi \), \(C^{{\mathsf {B}},\kappa }_\varPi \) and \(T^\kappa _\varPi \) denote the outputs of \(\mathsf {A} \), \({\mathsf {B}}\) and the transcript of \(\varPi \), respectively, in a random execution of \(\varPi (1^\kappa )\).

Fair Coin Flipping Protocols. Since we are concerned with a lower bound, we only give the game-based definition of coin flipping protocols (see [15] for the stronger simulation-based definition).

Definition 2.1

(Fair coin flipping protocols). A ppt single-bit output two-party protocol \(\varPi = (\mathsf {A},{\mathsf {B}})\) is an \(\varepsilon \)-fair coin flipping protocol if the following holds.

  • Output delivery: The honest party always outputs a bit (even if the other party acts dishonestly, or aborts).

  • Agreement: The parties always output the same bit in an honest execution.

  • Uniformity: \({\mathrm {Pr}}\left[ C^{\mathsf {A},\kappa }_\varPi = b\right] = 1/2\) (and thus \({\mathrm {Pr}}\left[ C^{{\mathsf {B}},\kappa }_\varPi = b\right] = 1/2)\), for both \(b\in \{0,1\}\) and all \(\kappa \in {\mathbb {N}}\).

  • Fairness: For any ppt \(\mathsf {A} ^*\) and \(b\in \{0,1\}\), for sufficiently large \(\kappa \) it holds that \({\mathrm {Pr}}\left[ C^{{\mathsf {B}},\kappa }_\varPi =b\right] \le 1/2 + \varepsilon \), and the same holds for the output bit of \(\mathsf {A} \) (with respect to any ppt \({\mathsf {B}}^*\)).

Key-Agreement. We focus on single-bit output key-agreement protocols.

Definition 2.2

(Key-agreement protocols). A ppt single-bit output two-party protocol \(\varPi = (\mathsf {A},{\mathsf {B}})\) is io-key-agreement if there exists an infinite set \({\mathcal {K}}\subseteq {\mathbb {N}}\) such that the following hold for \(\kappa \)’s in \({\mathcal {K}}\):

  • Agreement. \({\mathrm {Pr}}\left[ C^{\mathsf {A},\kappa }_\varPi = C^{{\mathsf {B}},\kappa }_\varPi \right] \ge 1- {\text {neg}}(\kappa )\).

  • Secrecy. \({\mathrm {Pr}}\left[ \mathsf {Eve} (T^\kappa _\varPi )=C^{\mathsf {A},\kappa }_\varPi \right] \le 1/2 + {\text {neg}}(\kappa )\), for every ppt \(\mathsf {Eve}\).

2.3 Martingales

Definition 2.3

(Martingales). Let \(X_0, \ldots , X_r\) be a sequence of random variables. We say that \(X_0, \ldots , X_r\) is a martingale sequence if \({{\mathbf {E}}}\left[ X_{i+1} \mid X_{\le i} = x_{\le i} \right] = x_{i} \) for every \(i\in (r-1)\).

In plain terms, a sequence is a martingale if the expectation of the next point conditioned on the entire history is exactly the last observed point. One way to obtain a martingale sequence is by constructing a Doob martingale. Such a sequence is defined by \(X_i = {{\mathbf {E}}}\left[ f(Z) \mid Z_{\le i}\right] \), for arbitrary random variables \(Z= (Z_1,\ldots ,Z_r)\) and a function f of interest. We will use the following fact proven by [9] (we use the variant as proven in [3]).
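The Doob construction can be verified mechanically. The sketch below (our illustration) takes Z uniform over \(\{0,1\}^r\) and f the majority function, and checks the martingale property by exact enumeration over all prefixes:

```python
from itertools import product

r = 4
f = lambda z: int(sum(z) > r // 2)  # f = strict majority of r bits (any f works)

def doob(i, prefix):
    # X_i = E[f(Z) | Z_<=i = prefix], for Z uniform over {0,1}^r
    tails = product([0, 1], repeat=r - i)
    return sum(f(prefix + t) for t in tails) / 2 ** (r - i)

# Check the martingale property exactly, by enumeration:
for i in range(r):
    for prefix in product([0, 1], repeat=i):
        nxt = (doob(i + 1, prefix + (0,)) + doob(i + 1, prefix + (1,))) / 2
        assert abs(nxt - doob(i, prefix)) < 1e-12
print("Doob martingale property verified")
```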

Theorem 2.1

Let \(X_0,\ldots , X_r\) be a martingale sequence such that \(X_i\in [0,1]\), for every \(i\in [r]\). If \(X_0=1/2\) and \({\mathrm {Pr}}\left[ X_r\in \left\{ 0,1\right\} \right] =1\), then

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i\in [r]\text { s.t.\ }\left| X_i-X_{i-1}\right| \ge \frac{1}{4\sqrt{r}} \right] \ge \frac{1}{20}. \end{aligned}$$
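Theorem 2.1 is easy to probe empirically. The simulation below (ours) takes the Doob martingale \(X_i\) of the majority of \(r=101\) fair coins and estimates the probability of a \(1/(4\sqrt{r})\)-jump; for this particular martingale the very first coin already produces a qualifying jump, so the theorem's bound of 1/20 is far from tight here.

```python
import math
import random

def maj_prob(ones, seen, r):
    # Pr[majority of r fair coins is 1 | `ones` ones among the first `seen` coins]
    rem = r - seen
    need = r // 2 + 1 - ones
    return sum(math.comb(rem, k) for k in range(max(need, 0), rem + 1)) / 2 ** rem

def has_gap(r, rng):
    ones, prev = 0, 0.5
    for i in range(1, r + 1):
        ones += rng.random() < 0.5
        cur = maj_prob(ones, i, r)
        if abs(cur - prev) >= 1 / (4 * math.sqrt(r)):
            return True
        prev = cur
    return False

r, n = 101, 1000
rng = random.Random(0)
freq = sum(has_gap(r, rng) for _ in range(n)) / n
print(freq)  # -> 1.0 for this martingale; Theorem 2.1 only promises >= 1/20
```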

3 Fair Coin Flipping to Key-Agreement

In this section, we prove our main result: if there exist constant-round coin flipping protocols which improve over the \(1/\sqrt{r}\)-bias majority protocol of [2], then infinitely-often key-agreement exists as well. Formally, we prove the following theorem.

Theorem 3.1

The following holds for any constant \(r \in {\mathbb {N}}\): if there exists an r-round, \(\frac{1}{25600\sqrt{r}}\)-fair two-party coin flipping protocol (see Definition 2.1), then there exists an infinitely-often key-agreement protocol.Footnote 6, Footnote 7

Before formally proving Theorem 3.1, we briefly recall the outline of the proof as presented in the introduction (we ignore certain constants in this outline). We begin with a good forecaster for the coin flipping protocol \(\varPi \) (which must exist, according to [12]), and we define an efficiently computable conditional expected outcome sequence \(X= (X_0,\ldots ,X_r)\) for \(\varPi \), conditioned on the forecaster’s outputs. Then, we show that (1) the \(i^\mathrm{th}\) backup value (default output in case the opponent aborts) should be close to \(X_i\); otherwise, an efficient attacker can use the forecaster to bias the output of the other party (this attack is applicable regardless of the existence of infinitely-often key-agreement). And (2), since X is a martingale sequence, “large” \(1/\sqrt{r}\)-gaps are bound to occur in some round, with constant probability. Hence, combining (1) and (2), with constant probability, for some i, there is a \(1/\sqrt{r}\)-gap between \(X_i\) and the forecaster’s prediction for one party at the preceding round \(i-1\). Therefore, unless protocol \(\varPi \) implies io-key-agreement, the aforementioned gap can be exploited to bias that party’s output by \(1/\sqrt{r}\), by instructing the opponent to abort as soon as the gap is detected. In more detail, the success of the attack requires that (3) the event that a gap occurs is (almost) independent of the backup value of the honest party. It turns out that if \(\varPi \) does not imply io-key-agreement, this third property is guaranteed by the dichotomy theorem of [12]. In summary, if io-key-agreement does not exist, then protocol \(\varPi \) is at best \(1/\sqrt{r}\)-fair.

Moving to the formal proof, fix an r-round, two-party coin flipping protocol \(\varPi = (\mathsf {A},{\mathsf {B}})\) (we assume nothing about its fairness parameter for now). We associate the following random variables with a random honest execution of \(\varPi (1^\kappa )\). Let \(M^\kappa = (M^\kappa _1,\ldots ,M^\kappa _r)\) denote the messages of the protocol and let \(C^\kappa \) denote the (always) common output of the parties. For \(i\in \left\{ 0,\ldots ,r\right\} \) and \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \), let \(Z_i^{{\mathsf {P}},\kappa }\) be the “backup” value party \({\mathsf {P}}\) outputs, if the other party aborts after the \(i^\mathrm{th}\) message was sent. In particular, \(Z_r^{\mathsf {A},\kappa } = Z_r^{{\mathsf {B}},\kappa } =C^\kappa \) and \({\mathrm {Pr}}\left[ C^\kappa = 1\right] = 1/2\).

Forecaster for \(\varPi \). We use a forecaster for \(\varPi \), guaranteed by the following theorem (the proof readily follows from Haitner et al. [12, Theorem 3.8]).

Theorem 3.2

(Haitner et al. [12], existence of forecasters). Let \(\varDelta \) be a no-input, single-bit output two-party protocol. Then for any constant \(\rho >0\), there exists a ppt constant output-length algorithm \(\mathsf {F}\) (forecaster) mapping transcripts of \(\varDelta \) into (the binary description of) pairs in \([0,1] \times [0,1]\), and an infinite set \({\mathcal {K}}\subseteq \mathbb {N}\), such that the following holds: let \(C^{\mathsf {A},\kappa }\), \(C^{{\mathsf {B}},\kappa }\) and \(T^\kappa \) denote the parties’ outputs and protocol transcript, respectively, in a random execution of \(\varDelta (1^\kappa )\). Let \(m(\kappa )\in {\text {poly}}\) be a bound on the number of coins used by \(\mathsf {F} \) on transcripts in \(\mathrm {supp}(T^\kappa )\), and let \(S^\kappa \) be a uniform string of length \(m(\kappa )\). Then,

  • \((C^{\mathsf {A},\kappa },T^\kappa ,S^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}} (U_{p^{\mathsf {A}}},T^\kappa ,S^\kappa )_{(p^{\mathsf {A}},\cdot ) = \mathsf {F} (T^\kappa ;S^\kappa )}\), and

  • \((C^{{\mathsf {B}},\kappa },T^\kappa ,S^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}} (U_{p^{{\mathsf {B}}}},T^\kappa ,S^\kappa )_{(\cdot ,p^{{\mathsf {B}}}) = \mathsf {F} (T^\kappa ;S^\kappa )}\).

letting \(U_p\) be a Boolean random variable taking the value 1 with probability p.Footnote 8
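To make the calibration guarantee concrete, the following toy experiment (our illustration, not part of the formal development; the one-message protocol and the forecaster are hypothetical stand-ins) checks that replacing the true output with a fresh Bernoulli draw \(U_p\) from a calibrated forecast leaves the joint distribution with the transcript essentially unchanged:

```python
import random

def run_protocol(rng):
    """Toy one-message protocol: the output c is a uniform bit; the transcript
    m is a noisy copy of c, flipped with probability 1/4."""
    c = rng.randrange(2)
    m = c ^ (1 if rng.random() < 0.25 else 0)
    return c, m

def forecaster(m):
    """Calibrated forecast p = Pr[c = 1 | transcript = m]."""
    return 0.75 if m == 1 else 0.25

N = 200_000
rng = random.Random(1)
real = {(c, m): 0 for c in (0, 1) for m in (0, 1)}
ideal = {(c, m): 0 for c in (0, 1) for m in (0, 1)}
for _ in range(N):
    c, m = run_protocol(rng)
    real[(c, m)] += 1
    # Ideal experiment: replace the true output by a fresh draw U_p ~ Bern(p).
    u = 1 if rng.random() < forecaster(m) else 0
    ideal[(u, m)] += 1

tv = 0.5 * sum(abs(real[k] - ideal[k]) for k in real) / N
print(f"empirical statistical distance between (C,T) and (U_p,T): {tv:.4f}")
```

For a calibrated forecast the two joint distributions coincide exactly, so the empirical distance above reflects only sampling noise; the theorem relaxes exact equality to computational \(\rho \)-indistinguishability.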

Since we require a forecaster for all (intermediate) backup values of \(\varPi \), we apply Theorem 3.2 with respect to the following variant of protocol \(\varPi \), which simply stops the execution at a random round.

Protocol 3.3

(\({\widetilde{\varPi }}= \left( \widetilde{\mathsf {A}},\widetilde{{\mathsf {B}}}\right) \))

  • Common input: security parameter \(1^\kappa \).

  • Description:

  1. 1.

    \(\widetilde{\mathsf {A}}\) samples \(i\leftarrow [r]\) and sends it to \(\widetilde{{\mathsf {B}}}\).

  2. 2.

The parties interact in the first i rounds of a random execution of \(\varPi (1^\kappa )\), with \(\widetilde{\mathsf {A}}\) and \(\widetilde{{\mathsf {B}}}\) taking the role of \(\mathsf {A} \) and \({\mathsf {B}}\) respectively.

    Let \(z_i^\mathsf {A} \) and \(z_i^{\mathsf {B}}\) be the \(i^\mathrm{th}\) backup values of \(\mathsf {A} \) and \({\mathsf {B}}\) as computed by the parties in the above execution.

  3. 3.

    \(\widetilde{\mathsf {A}}\) outputs \(z_i^\mathsf {A} \), and \(\widetilde{{\mathsf {B}}}\) outputs \(z_i^{\mathsf {B}}\).
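In code, the random-stopping transformation can be sketched as follows (a minimal sketch; `toy_pi` is a hypothetical stand-in for \(\varPi \), exposing each round's message and the parties' current backup values):

```python
import random

def toy_pi(r, rng):
    """Hypothetical stand-in for an r-round coin flipping protocol: each round
    a fresh coin is broadcast; the i-th backup value of each party is the
    majority of the coins seen so far (ties broken by a private coin)."""
    coins = []
    for _ in range(r):
        coins.append(rng.randrange(2))
        tie = 2 * sum(coins) == len(coins)
        maj = int(2 * sum(coins) > len(coins))
        z_a = rng.randrange(2) if tie else maj
        z_b = rng.randrange(2) if tie else maj
        yield coins[-1], z_a, z_b

def wrapped_pi(r, rng):
    """Protocol 3.3 (sketch): stop the inner protocol at a uniformly chosen
    round i and output the parties' i-th backup values."""
    i = rng.randrange(1, r + 1)            # A samples i <- [r] and sends it
    transcript = [i]
    z_a = z_b = None
    for round_no, (m, za, zb) in enumerate(toy_pi(r, rng), start=1):
        transcript.append(m)               # interact for the first i rounds only
        z_a, z_b = za, zb
        if round_no == i:
            break
    return transcript, z_a, z_b

rng = random.Random(7)
t, out_a, out_b = wrapped_pi(9, rng)
print(t, out_a, out_b)
```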

Let \(\rho =10^{-6} \cdot r^{-5/2}\). Let \({\mathcal {K}}\) and \(\mathsf {F} \) be the infinite set and ppt forecaster resulting from applying Theorem 3.2 with respect to protocol \({\widetilde{\varPi }}\) and \(\rho \), and let \(S^\kappa \) denote a long enough uniform string to be used by \(\mathsf {F} \) on transcripts of \({\widetilde{\varPi }}(1^\kappa )\). The following holds with respect to \(\varPi \).

Claim 3.4

For \(I \leftarrow [r]\), it holds that

  • \(( Z_I^{\mathsf {A},\kappa },M^{\kappa }_{\le I},S^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}} (U_{p^{\mathsf {A}}},M^\kappa _{\le I},S^\kappa )_{(p^{\mathsf {A}},\cdot ) = \mathsf {F} (M_{\le I};S^\kappa )}\), and

  • \( (Z_I^{{\mathsf {B}},\kappa },M^{\kappa }_{\le I},S^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}} (U_{p^{{\mathsf {B}}}},M^\kappa _{\le I},S^\kappa )_{(\cdot ,p^{{\mathsf {B}}}) = \mathsf {F} (M_{\le I};S^\kappa )}\),

letting \(\mathsf {F} (m_{\le i};r) = \mathsf {F} (i,m_{\le i};r)\).

Proof

Immediate, by Theorem 3.2 and the definition of \(\widetilde{\varPi }\). \(\square \)

We assume without loss of generality that the common output appears on the last message of \(\varPi \) (otherwise, we can add a final message that contains this value, which does not hurt the security of \(\varPi \)). Hence, without loss of generality it holds that \(\mathsf {F} (m_{\le r};\cdot ) = (b,b)\), where b is the output bit as implied by \(m_{\le r}\) (otherwise, we can change \(\mathsf {F} \) to do so without hurting its forecasting quality).

For \(\kappa \in \mathbb {N}\), we define the random variables \(F_0^\kappa ,\ldots , F_r^\kappa \) by

$$\begin{aligned} F_i^\kappa = (F_i^{\mathsf {A},\kappa },F_i^{{\mathsf {B}},\kappa }) = \mathsf {F} (M_{\le i};S^\kappa ) \end{aligned}$$
(13)

The Expected Outcome Sequence. To attack the protocol, it is useful to evaluate at each round the expected outcome of the protocol conditioned on the forecasters’ outputs so far. To alleviate notation, we assume that the value of \(\kappa \) is determined by \(\left| S^\kappa \right| \).

Definition 3.1

(Expected outcome function). For \(\kappa \in \mathbb {N}\), \(i\in [r]\), \(f_{\le i} \in \mathrm {supp}(F^\kappa _{\le i})\) and \(s\in {\text {Supp}}(S^\kappa )\), let

$$\begin{aligned} g(f_{\le i},s)={{\mathbf {E}}}\left[ C^\kappa \mid F^\kappa _{\le i}=f_{\le i}, S^\kappa =s\right] . \end{aligned}$$

Namely, \(g(f_{\le i},s)\) is the probability that the output of the protocol in a random execution is 1, given that \(\mathsf {F} (M_{\le j};s)= f_j\) for every \(j\in (i)\), where \(M_1,\ldots ,M_r\) denote the messages of this execution.

Expected Outcome Sequence is Approximable. The following claim, proven in Sect. 3.1, yields that the expected outcome sequence can be approximated efficiently.

Claim 3.5

There exists pptm \(\mathsf {G} \) such that

$$\begin{aligned} {\mathrm {Pr}}\left[ \mathsf {G} (F^\kappa _{\le i},S^\kappa )\notin g(F^\kappa _{\le i} ,S^\kappa ) \pm \rho \right] \le \rho , \end{aligned}$$

for every \(\kappa \in \mathbb {N}\) and \(i \in [r]\).

Algorithm \(\mathsf {G} \) approximates the value of g on input \((f_{\le i},s)\in \mathrm {supp}(F^\kappa _{\le i} ,S^\kappa )\) by running multiple independent instances of protocol \(\varPi (1^\kappa )\) and keeping track of the number of times it encounters \(f_{\le i}\) and the protocol outputs one. Standard approximation techniques yield that, unless \(f_{\le i}\) is very unlikely, the output of \(\mathsf {G} \) is close to \(g(f_{\le i},s)\). Claim 3.5 follows by carefully choosing the number of iterations for \(\mathsf {G} \) and bounding the probability of encountering an unlikely \(f_{\le i}\).

Forecasted Backup Values are Close to Expected Outcome Sequence. The following claim bounds the probability that the expected outcome sequence and the forecaster’s outputs deviate by more than \(1/8\sqrt{r}\). The proof is given in Sect. 3.2.

Claim 3.6

Assume \(\varPi \) is \(\frac{1}{6400\sqrt{r}}\)-fair. Then

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i \in [r] \text { s.t.\ }\left| g(F^\kappa _{\le i} ,S^\kappa ) - F^{{\mathsf {P}},\kappa }_i\right| \ge 1/8\sqrt{r}\right] < 1/100 \end{aligned}$$

for both \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and large enough \(\kappa \in {\mathcal {K}}\).

Loosely speaking, Claim 3.6 states that the expected outcome sequence and the forecaster’s outputs are close for a fair protocol. If not, then one of the following attackers \({\mathsf {P}}^*_0\), \({\mathsf {P}}^*_1\) can bias the output of party \({\mathsf {P}}\): for fixed randomness \(s\in \mathrm {supp}(S^\kappa )\), attacker \({\mathsf {P}}^*_z\) computes \(f_{i}=\mathsf {F} (m_{\le i}, s)\) for the partial transcript \(m_{\le i}\) at round \(i\in [r]\), and aborts as soon as \((-1)^{1-z}(\mathsf {G} (f_{\le i} , s) - f_i) \ge 1/8\sqrt{r}-\rho \). The desired bias is guaranteed by the accuracy of the forecaster (Claim 3.4), the accuracy of algorithm \(\mathsf {G} \) (Claim 3.5), and the presumed frequency of occurrence of a suitable gap. The details of the proof are given in Sect. 3.2.

Expected Outcome Sequence has Large Gap. Similarly to [9], the success of our attack depends on the occurrence of large gaps in the expected outcome sequence. The latter is guaranteed by [9] and [3], since the expected outcome sequence is a suitable martingale.

Claim 3.7

For every \(\kappa \in \mathbb {N}\), it holds that

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i \in [r] :\left| g(F^\kappa _{\le i} ,S^\kappa ) - g(F^\kappa _{\le i-1} ,S^\kappa )\right| \ge 1/4\sqrt{r}\right] >1/20. \end{aligned}$$

Proof

Consider the sequence of random variables \(G_0^\kappa ,\ldots ,G_r^\kappa \) defined by \(G_i^\kappa = g(F^\kappa _{\le i}, S^\kappa )\). Observe that this is a Doob (and hence, strong) martingale sequence, with respect to the random variables \(Z_0 = S^\kappa \) and \(Z_i = F^\kappa _{i}\) for \(i\in [r]\), and the function \(f(S^\kappa , F^\kappa _{\le r})= g(F^\kappa _{\le r}, S^\kappa ) = F^\kappa _r[0]\) (i.e., the function that outputs the actual output of the protocol, as implied by \(F^\kappa _r\)). Clearly, \(G_0^\kappa = 1/2\) and \(G_r^\kappa \in \{0,1\}\) (recall that we assume that \(\mathsf {F} (M_{\le r};\cdot ) = (b,b)\), where b is the output bit as implied by \(M_{\le r}\)). Thus, the proof follows by Theorem 2.1. \(\square \)
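The gap claim is easy to corroborate empirically. The sketch below (our illustration, not part of the proof) computes the Doob martingale \(G_i = \Pr [C=1 \mid \text {first } i \text { coins}]\) for an r-coin majority "protocol" and estimates how often some step of size \(1/4\sqrt{r}\) occurs; the theorem guarantees probability at least 1/20, and for majority the empirical frequency is in fact far higher:

```python
import math, random

r = 64                     # illustration: r-coin majority "protocol"
need = r // 2 + 1          # output C = 1 iff strictly more than r/2 heads

def T(n, k):
    """Pr[Binomial(n, 1/2) >= k]."""
    if k <= 0:
        return 1.0
    if k > n:
        return 0.0
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2 ** n

threshold = 1 / (4 * math.sqrt(r))
rng = random.Random(3)
trials, hits = 1000, 0
for _ in range(trials):
    heads = 0
    prev = T(r, need)                    # G_0, close to 1/2
    big_gap = False
    for i in range(1, r + 1):
        heads += rng.randrange(2)
        cur = T(r - i, need - heads)     # G_i = Pr[C = 1 | first i coins]
        big_gap = big_gap or abs(cur - prev) >= threshold
        prev = cur
    hits += big_gap

print(f"fraction of runs with a 1/(4*sqrt(r)) gap: {hits / trials:.3f}")
```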

Independence of Attack Decision. Claim 3.4 immediately yields that the expected values of \(F_{i}^{{\mathsf {P}}}\) and \(Z_{i}^{\mathsf {P}}\) are close, for both \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and every \(i\in [r]\). Assuming io-key-agreement does not exist, the following claim essentially states that \(F_{i}^{{\mathsf {P}}}\) and \(Z_{i}^{\mathsf {P}}\) remain close in expectation, even if we condition on an event that depends on the other party’s next message. This observation will allow us to show that, when a large gap in the expected outcome is observed by one of the parties, the (expected value of the) backup value of the other party still lags behind. The following claim captures the core novel idea of our attack, and its proof is the most technical part of proving our main result.

Claim 3.8

(Independence of attack decision). Let \({\mathsf {D}}\) be a single-bit output pptm. For \(\kappa \in {\mathcal {K}}\) and \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \), let \(E_1^{{\mathsf {P}},\kappa },\ldots ,E_{r}^{{\mathsf {P}},\kappa }\) be the sequence of random variables defined by \(E_i^{{\mathsf {P}},\kappa } = {\mathsf {D}}(F^\kappa _{\le i}, S^\kappa )\) if \({\mathsf {P}}\) sends the \(i^\mathrm{th}\) message in \(\varPi (1^\kappa )\), and \(E_i^{{\mathsf {P}},\kappa } =0\) otherwise.

Assume io-key-agreement protocols do not exist. Then, for any \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and infinite subset \({\mathcal {K}}' \subseteq {\mathcal {K}}\), there exists an infinite set \({\mathcal {K}}'' \subseteq {\mathcal {K}}'\) such that

$$\begin{aligned} {{\mathbf {E}}}\left[ E_{i+1}^{{\mathsf {P}},\kappa } \cdot (Z_{i}^{{\overline{{\mathsf {P}}}},\kappa } - F^{{\overline{{\mathsf {P}}}},\kappa }_{ i})\right] \in \pm 4r\rho \end{aligned}$$

for every \(\kappa \in {\mathcal {K}}''\) and \(i\in (r-1)\), where \({\overline{{\mathsf {P}}}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \setminus \left\{ {\mathsf {P}}\right\} \).

Since \({{\mathbf {E}}}\left[ E_{i+1}^{{\mathsf {P}},\kappa } \cdot ( Z_{i}^{{\overline{{\mathsf {P}}}},\kappa } - F^{{\overline{{\mathsf {P}}}},\kappa }_{ i} )\right] = {{\mathbf {E}}}\left[ E_{i+1}^{{\mathsf {P}},\kappa }\right] \cdot {{\mathbf {E}}}\left[ Z_{i}^{{\overline{{\mathsf {P}}}},\kappa } - F_{i}^{{\overline{{\mathsf {P}}}},\kappa } \mid E_{i+1}^{{\mathsf {P}},\kappa } =1\right] \), Claim 3.8 yields that the expected values of \(F^{{\overline{{\mathsf {P}}}}}_{ i }\) and \(Z_{i}^{{\overline{{\mathsf {P}}}}}\) remain close, even when conditioning on a likely-enough event over the next message of \({\mathsf {P}}\).

The proof of Claim 3.8 is given in Sect. 3.3. In essence, we use the recent dichotomy of Haitner et al. [12] to show that if io-key-agreement does not exist, then the values of \(E_{i+1}^{{\mathsf {P}},\kappa }\) and \(Z^{{\overline{{\mathsf {P}}}},\kappa }_{ i}\), conditioned on \(M_{\le i}\) (which determines the value of \(F^{{\overline{{\mathsf {P}}}},\kappa }_{ i}\)), are (computationally) close to a product distribution.

Putting Everything Together. Equipped with the above observations, we prove Theorem 3.1.

Proof of Theorem 3.1. Let \(\varPi \) be an \(\varepsilon = \frac{1}{25600\sqrt{r}}\)-fair coin flipping protocol. By Claims 3.6 and 3.7, we can assume without loss of generality that there exists an infinite subset \({\mathcal {K}}' \subseteq {\mathcal {K}}\) such that

$$\begin{aligned}&{\mathrm {Pr}}\left[ \exists i \in [r] :\mathsf {A} \hbox { sends }i^\mathrm{th} \hbox { message in }\varPi (1^\kappa ) \wedge g(F^\kappa _{\le i} ,S^\kappa ) - F^{{\mathsf {B}},\kappa }_{i-1} \ge \frac{1}{8\sqrt{r}}\right] \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \ge \frac{1}{80} -\frac{1}{100} = \frac{1}{400} \end{aligned}$$
(14)

We define the following ppt fail-stop attacker \(\mathsf {\mathsf {A} ^*} \) taking the role of \(\mathsf {A} \) in \(\varPi \). We show below that, assuming io-key-agreement does not exist, algorithm \(\mathsf {\mathsf {A} ^*} \) succeeds in biasing the output of \({\mathsf {B}}\) towards zero by more than \(\varepsilon \) for every \(\kappa \) in some infinite set \({\mathcal {K}}''\subseteq {\mathcal {K}}'\), contradicting the presumed fairness of \(\varPi \). In the following, let \(\mathsf {G} \) be the pptm guaranteed to exist by Claim 3.5.

Algorithm 3.9

(\(\mathsf {\mathsf {A} ^*}\))

  • Input: security parameter \(1^\kappa \).

  • Description:

  1. 1.

    Sample \(s\leftarrow S^\kappa \) and start a random execution of \(\mathsf {A} (1^\kappa )\).

  2. 2.

Upon receiving the \((i-1)^\mathrm{th}\) message \(m_{i-1}\), do:

    1. (a)

      Forward \(m_{i-1}\) to \(\mathsf {A} \), and let \(m_i\) be the next message sent by \(\mathsf {A} \).

    2. (b)

      Compute \(f_{i}= (f_{i}^\mathsf {A},f_{i}^{\mathsf {B}}) =\mathsf {F} (m_{\le i}, s)\).

    3. (c)

      Compute \(\widetilde{g}_i = \mathsf {G} ( f_{\le i}, s)\).

    4. (d)

      If \(\widetilde{g}_i\ge f_{i-1}^{\mathsf {B}}+ 1/16\sqrt{r}\), abort (without sending further messages).

      Otherwise, send \(m_i\) to \({\mathsf {B}}\) and proceed to the next round.
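Algorithm 3.9 can be summarized in code as follows (a sketch only, under hypothetical interfaces: `rounds`, `forecaster`, and `estimate_g` stand in for the honest strategy of \(\mathsf {A}\), the forecaster \(\mathsf {F}\), and the estimator \(\mathsf {G}\); the initial forecast for \({\mathsf {B}}\) is taken to be 1/2 in this sketch):

```python
import math

def run_A_star(r, rounds, forecaster, estimate_g, s):
    """Fail-stop attacker A* (Algorithm 3.9, sketch).

    `rounds(msgs)` returns A's next message given the history; `forecaster`
    returns the pair (f_A, f_B); `estimate_g` approximates the expected
    outcome g.  Returns (messages actually sent, abort round or None).
    """
    msgs, fs = [], []
    f_prev_B = 0.5                     # sketch: forecast before any message
    for i in range(1, r + 1):
        m_i = rounds(msgs)             # compute A's next message honestly
        f_A, f_B = forecaster(msgs + [m_i], s)
        fs.append((f_A, f_B))
        g_tilde = estimate_g(fs, s)
        if g_tilde >= f_prev_B + 1 / (16 * math.sqrt(r)):
            return msgs, i             # abort: m_i is never sent
        msgs.append(m_i)               # otherwise send m_i and continue
        f_prev_B = f_B
    return msgs, None

# Toy run: B's forecast stays at 1/2 while the estimated outcome jumps at
# round 3, so the attacker aborts there without sending the third message.
msgs, abort_at = run_A_star(
    r=4,
    rounds=lambda ms: 0,
    forecaster=lambda ms, s: (0.5, 0.5),
    estimate_g=lambda fs, s: 0.9 if len(fs) >= 3 else 0.5,
    s=None,
)
print(abort_at)
```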

It is clear that \(\mathsf {\mathsf {A} ^*} \) is a pptm. We conclude the proof by showing that, assuming io-key-agreement does not exist, \({\mathsf {B}}\)’s output when interacting with \(\mathsf {\mathsf {A} ^*} \) is biased towards zero by more than \(\varepsilon \).

The following random variables are defined with respect to a random execution of \((\mathsf {\mathsf {A} ^*},{\mathsf {B}})(1^\kappa )\). Let \(S^\kappa \) and \(F^\kappa = (F_1^\kappa ,\ldots ,F^\kappa _r)\) denote the values of s and \(f_1,\ldots ,f_r\) sampled by \(\mathsf {\mathsf {A} ^*} \). Let \(Z^{{\mathsf {B}},\kappa } = (Z_1^{{\mathsf {B}},\kappa },\ldots ,Z^{{\mathsf {B}},\kappa }_r)\) denote the backup values computed by \({\mathsf {B}}\). For \(i\in [r]\), let \(E_i^\kappa \) be the event that \(\mathsf {\mathsf {A} ^*} \) decides to abort in round i. Finally, let \(J^\kappa \) be the index i with \(E_i^\kappa =1\), setting it to \(r+1\) if no such index exists. Below, if we do not quantify over \(\kappa \), the statement holds for every \(\kappa \in {\mathcal {K}}'\).

By Claim 3.5 and Eq. (14),

$$\begin{aligned} {\mathrm {Pr}}\left[ J^\kappa \ne r+1\right] > \frac{1}{400} - \rho \ge \frac{1}{800} \end{aligned}$$
(15)

for every \(\kappa \in {\mathcal {K}}'\). Since the events \(E_i^\kappa \) and \(E_j^\kappa \) are disjoint for \(i\ne j\),

$$\begin{aligned} {{\mathbf {E}}}\left[ Z_{J^\kappa -1}^{{\mathsf {B}},\kappa } - F_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right]&= {{\mathbf {E}}}\left[ \sum _{i=1}^{r+1} E_i^\kappa \cdot (Z_{i-1}^{{\mathsf {B}},\kappa } - F_{i-1}^{{\mathsf {B}},\kappa } )\right] \nonumber \\&= \sum _{i=1}^{r+1} {{\mathbf {E}}}\left[ E_i^\kappa \cdot (Z_{i-1}^{{\mathsf {B}},\kappa } - F_{i-1}^{{\mathsf {B}},\kappa } )\right] \nonumber \\&= \sum _{i=1}^{r} {{\mathbf {E}}}\left[ E_i^\kappa \cdot (Z_{i-1}^{{\mathsf {B}},\kappa } - F_{i-1}^{{\mathsf {B}},\kappa } )\right] . \end{aligned}$$
(16)

The last equality holds since the protocol’s output appears in the last message, by assumption, and thus without loss of generality \(Z_{r}^{{\mathsf {B}},\kappa } = F_{r}^{{\mathsf {B}},\kappa }\). Consider the single-bit output pptm \({\mathsf {D}}\) defined as follows: on input \((f_{\le i},s)\), where \(f_{\le i}\) is a sequence of pairs of values, i.e., \(f_{\le i}=((f_1^\mathsf {A},f_1^{\mathsf {B}}), \ldots ,(f_i^\mathsf {A},f_i^{\mathsf {B}}))\), it outputs 1 if \( \mathsf {G} (f_{\le i}, s) - f^{{\mathsf {B}}}_{ i-1} \ge 1/16\sqrt{r}\) and \( \mathsf {G} (f_{\le j}, s) - f^{{\mathsf {B}}}_{ j-1} < 1/16\sqrt{r}\) for all \(j<i \); otherwise, it outputs zero. Observe that \(E_i^\kappa \) is the indicator of the event that \(\mathsf {A} \) sends the \(i^\mathrm{th}\) message in \(\varPi (1^\kappa )\) and \({\mathsf {D}}(F^\kappa _{\le i}, S^\kappa )=1\), for any fixing of \((F^\kappa ,S^\kappa , Z^{{\mathsf {B}},\kappa })\). Thus, assuming io-key-agreement protocols do not exist, Claim 3.8 yields that there exists an infinite set \({\mathcal {K}}'' \subseteq {\mathcal {K}}'\) such that

$$\begin{aligned} {{\mathbf {E}}}\left[ E_{i+1}^\kappa \cdot (Z_i^{{\mathsf {B}},\kappa } - F_{i}^{{\mathsf {B}},\kappa })\right] \in \pm 4r\rho \end{aligned}$$
(17)

for every \(\kappa \in {\mathcal {K}}''\) and \(i\in [r-1]\). Putting together Eqs. (16) and (17), we conclude that, for every \(\kappa \in {\mathcal {K}}''\),

$$\begin{aligned} {{\mathbf {E}}}\left[ Z_{J^\kappa -1}^{{\mathsf {B}},\kappa } - F_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right] \in \pm 4r^2\rho . \end{aligned}$$
(18)

Recall that our goal is to show that \({{\mathbf {E}}}\left[ Z_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right] \) is significantly smaller than 1/2. We do so by showing that it is significantly smaller than \({{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )\right] \), which equals 1/2 since, by the tower law (law of total expectation),

$$\begin{aligned} {{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )\right]&= {{\mathbf {E}}}\left[ C^\kappa \right] =1/2 . \end{aligned}$$
(19)
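In more detail, the abort decision of \(\mathsf {\mathsf {A} ^*} \) at round i depends only on \((F^\kappa _{\le i},S^\kappa )\), so the event \(\left\{ J^\kappa = i\right\} \) is determined by \((F^\kappa _{\le i},S^\kappa )\), and (with the convention \(F^\kappa _{\le r+1} = F^\kappa _{\le r}\))

$$\begin{aligned} {{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )\right] = \sum _{i=1}^{r+1} {{\mathbf {E}}}\left[ \mathbb {1}_{\left\{ J^\kappa =i\right\} }\cdot {{\mathbf {E}}}\left[ C^\kappa \mid F^\kappa _{\le i},S^\kappa \right] \right] = \sum _{i=1}^{r+1} {{\mathbf {E}}}\left[ \mathbb {1}_{\left\{ J^\kappa =i\right\} }\cdot C^\kappa \right] = {{\mathbf {E}}}\left[ C^\kappa \right] . \end{aligned}$$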

Finally, let \(G_i\) be the value of \( \mathsf {G} (F_{\le i},S^\kappa )\) computed by \(\mathsf {\mathsf {A} ^*} \) in the execution of \((\mathsf {\mathsf {A} ^*},{\mathsf {B}})(1^\kappa )\) considered above, letting \(G_{r+1}= g(F^\kappa _{\le r},S^\kappa )\). Claim 3.5 yields that

$$\begin{aligned} {{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )-G_{J^\kappa }\right] \le 2r\rho . \end{aligned}$$
(20)

Putting all the above observations together, we conclude that, for every \(\kappa \in {\mathcal {K}}''\),

$$\begin{aligned} {{\mathbf {E}}}\left[ Z_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right]&= {{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )\right] - {{\mathbf {E}}}\left[ G_{J^\kappa }- F_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right] + {{\mathbf {E}}}\left[ Z_{J^\kappa -1}^{{\mathsf {B}},\kappa } - F_{J^\kappa -1}^{{\mathsf {B}},\kappa }\right] -{{\mathbf {E}}}\left[ g(F^\kappa _{\le J^\kappa }, S^\kappa )-G_{J^\kappa }\right] \\&\le \frac{1}{2} - {{\mathbf {E}}}\left[ G_{J^\kappa }-F_{J^\kappa -1}^{{\mathsf {B}},\kappa }\mid J^\kappa \ne r+1\right] \cdot {\mathrm {Pr}}\left[ J^\kappa \ne r+1\right] + 4r^2\rho +2r\rho \\&\le \frac{1}{2} - \frac{1}{16\sqrt{r}} \cdot \frac{1}{800} + 4r^2\rho +2r\rho \\&< \frac{1}{2} - \frac{1}{25600\sqrt{r}}. \end{aligned}$$

The first inequality holds by Eqs. (18) to (20). The second inequality holds by the definition of \(J^\kappa \) and Eq. (15). The last inequality holds by our choice of \(\rho \).   \(\square \)
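The final inequality is a pure calculation with \(\rho =10^{-6}\cdot r^{-5/2}\); the following check (ours) confirms it over a range of r:

```python
import math

# rho = 10^-6 * r^(-5/2), as fixed after Protocol 3.3
for r in range(1, 10_001):
    rho = 1e-6 * r ** -2.5
    lhs = 0.5 - (1 / (16 * math.sqrt(r))) * (1 / 800) + 4 * r ** 2 * rho + 2 * r * rho
    rhs = 0.5 - 1 / (25600 * math.sqrt(r))
    assert lhs < rhs, r
print("final inequality verified for r = 1, ..., 10000")
```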

3.1 Approximating the Expected Outcome Sequence

In this section we prove Claim 3.5, restated below.

Claim 3.10

(Claim 3.5, restated). There exists pptm \(\mathsf {G} \) such that

$$\begin{aligned} {\mathrm {Pr}}\left[ \mathsf {G} (F^\kappa _{\le i},S^\kappa )\notin g(F^\kappa _{\le i} ,S^\kappa ) \pm \rho \right] \le \rho , \end{aligned}$$

for every \(\kappa \in \mathbb {N}\) and \(i \in [r]\).

The proof of Claim 3.10 is straightforward. Since there is only a constant number of rounds and \(\mathsf {F} \) has constant output-length, when fixing the randomness of \(\mathsf {F}\), the domain of \(\mathsf {G} \) has constant size. Hence, the value of g can be approximated well via sampling. Details below.

Let c be a bound on the number of possible outputs of \(\mathsf {F} \) (recall that \(\mathsf {F} \) has constant output-length). We use the following implementation of \(\mathsf {G} \). In the following, let \({\overline{\mathsf {F}}}((m_1,\ldots ,m_i);s) = \left( \mathsf {F} (m_{\le 1};s),\ldots , \mathsf {F} (m_{\le i};s)\right) \) (i.e., \({\overline{\mathsf {F}}}(M_{\le i};S^\kappa ) = F_{\le i}\)).

Algorithm 3.11

(\(\mathsf {G} \))

  • Parameters: \(v = \left\lceil \frac{1}{2}\cdot \left( \frac{2c^{r}}{\rho }\right) ^4 \cdot \ln \left( \frac{8}{\rho }\right) \right\rceil \).

  • Input: \(f_{\le i}\in \mathrm {supp}(F_{\le i}^{\kappa })\) and \(s\in {\text {Supp}}(S^\kappa )\).

  • Description:

  1. 1.

Sample v pairs \(\left\{ (m^j,c^j)\right\} _{ j \in [v]}\), the (full) transcripts and outputs of v independent executions of \(\varPi (1^{\kappa })\).

  2. 2.

For every \(j\in [v]\) let \(f^j_{\le i}={\overline{\mathsf {F}}}(m_{\le i}^j;s)\).

  3. 3.

    Let \(q=\left| \left\{ j \in [v] :f_{\le i}^j=f_{\le i}\right\} \right| \) and \(p=\left| \left\{ j \in [v] :f_{\le i}^j=f_{\le i} \wedge c^j=1\right\} \right| \).

  4. 4.

    Set \(\widetilde{g}=p/q\). (Set \(\widetilde{g}=0\) if \(q=p = 0\).)

  5. 5.

    Output \(\widetilde{g}\).
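For concreteness, a direct toy-scale rendering of Algorithm 3.11 (our sketch; the two-message protocol and forecaster are hypothetical stand-ins, and v here is far smaller than the parameter above):

```python
import random

def G_estimate(f_prefix, s, sample_protocol, forecast_seq, v, rng):
    """Algorithm 3.11 (sketch): Monte Carlo estimate of
    g(f_prefix, s) = Pr[output = 1 | forecast prefix = f_prefix]."""
    q = p = 0
    i = len(f_prefix)
    for _ in range(v):
        transcript, output = sample_protocol(rng)     # full honest execution
        if forecast_seq(transcript[:i], s) == f_prefix:
            q += 1
            p += output
    return p / q if q else 0.0

# Toy protocol: two coin messages, output = XOR; the "forecaster" reports the
# running XOR (fully informative about the partial transcript).
def sample_protocol(rng):
    m = [rng.randrange(2), rng.randrange(2)]
    return m, m[0] ^ m[1]

def forecast_seq(ms, s):
    out, x = [], 0
    for m in ms:
        x ^= m
        out.append(x)
    return out

rng = random.Random(5)
est = G_estimate([1], None, sample_protocol, forecast_seq, v=20_000, rng=rng)
print(f"estimate of Pr[output = 1 | f_1 = 1]: {est:.3f}")
```

Here the true conditional probability is 1/2 (the second coin is uniform), and the estimate concentrates around it exactly as in the Hoeffding argument below.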

Remark 3.1

(A more efficient approximator). The running time of algorithm \(\mathsf {G} \) above is exponential in r. While this does not pose a problem for our purposes here, since r is constant, it might leave the impression that our approach cannot be extended to protocols with super-constant round complexity. It is thus worth mentioning that the running time of \(\mathsf {G}\) can be reduced to polynomial in r by using the augmented weak martingale paradigm of Beimel et al. [3]. Unfortunately, we currently cannot benefit from this improvement, since the result of [12] only guarantees indistinguishability for constant \(\rho \), which makes it useful only for attacking constant-round protocols.

We prove Claim 3.10 by showing that the above algorithm approximates g well.

Proof of Claim 3.10. To prove the quality of \(\mathsf {G} \) in approximating g, it suffices to prove the claim for every \(\kappa \in \mathbb {N}\), \(i\in [r]\) and fixed \(s\in \mathrm {supp}(S^\kappa )\). That is,

$$\begin{aligned} {\mathrm {Pr}}\left[ \left| g({\overline{\mathsf {F}}}(M_{\le i},s), s) - \mathsf {G} ({\overline{\mathsf {F}}}( M_{\le i},s), s) \right| \ge \rho \right] \le {\rho }, \end{aligned}$$
(21)

where the probability is also taken over the random coins of \(\mathsf {G} \).

Fix \(\kappa \in \mathbb {N}\) and omit it from the notation, and fix \(i\in [r]\) and \(s\in \mathrm {supp}(S^\kappa )\). Let \({\mathcal {D}}_i = \left\{ f_{\le i} :{\mathrm {Pr}}\left[ {\overline{\mathsf {F}}}(M_{\le i},s)=f_{\le i}\right] \ge \rho /2c^r\right\} \). By Hoeffding’s inequality [16], for every \(f_{\le i} \in {\mathcal {D}}_i\), it holds that

$$\begin{aligned} {\mathrm {Pr}}\left[ \left| g(f_{\le i}, s)- \mathsf {G} ( f_{\le i}, s)\right| \ge \rho \right]&\le 4\cdot \exp \left( -2\cdot v\cdot \left( \rho /2c^r\right) ^4\right) \nonumber \\&\le 4\cdot \exp \left( -\frac{v \rho ^4}{8c^{4r}}\right) \nonumber \\&\le \rho /2. \end{aligned}$$
(22)

It follows that

$$\begin{aligned}&{{\mathrm {Pr}}\left[ \left| g({\overline{\mathsf {F}}}(M_{\le i},s), s)- \mathsf {G} ({\overline{\mathsf {F}}}(M_{\le i},s), s)\right| \ge \rho \right] }\\&\le {\mathrm {Pr}}\left[ ({\overline{\mathsf {F}}}(M_{\le j},s)\notin {\mathcal {D}}\right] + \rho /2\\&\le \left| {\text {Supp}}({\overline{\mathsf {F}}}(M_{\le j},s))\right| \cdot \rho /2c^r + \rho /2\\&\le c^r \cdot \rho /2c^r + \rho /2= \rho . \end{aligned}$$

   \(\square \)
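The choice of v can be sanity-checked numerically at toy parameters (our check; at realistic parameters v is astronomically large, which is acceptable since r and \(\rho \) are constants):

```python
import math

c, r, rho = 2, 3, 0.1     # toy parameters for the check
v = math.ceil(0.5 * (2 * c ** r / rho) ** 4 * math.log(8 / rho))
# Hoeffding bound from Eq. (22): 4 * exp(-2 * v * (rho / (2 c^r))^4)
bound = 4 * math.exp(-2 * v * (rho / (2 * c ** r)) ** 4)
print(f"v = {v}, Hoeffding bound = {bound:.4f}, target rho/2 = {rho / 2}")
```

By construction the bound lands at (just below) \(\rho /2\), matching the last step of Eq. (22).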

3.2 Forecasted Backup Values Are Close to Expected Outcome Sequence

In this section, we prove Claim 3.6 (restated below).

Claim 3.12

(Claim 3.6, restated). Assume \(\varPi \) is \(\frac{1}{6400\sqrt{r}}\)-fair. Then

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i \in [r] \text { s.t.\ }\left| g(F^\kappa _{\le i} ,S^\kappa ) - F^{{\mathsf {P}},\kappa }_i\right| \ge 1/8\sqrt{r}\right] < 1/100 \end{aligned}$$

for both \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and large enough \(\kappa \in {\mathcal {K}}\).

Proof

Assume the claim does not hold for \({\mathsf {P}}= {\mathsf {B}}\) and infinitely many security parameters \(\kappa \in {\mathcal {K}}\) (the case \({\mathsf {P}}=\mathsf {A} \) is proven analogously). That is, without loss of generality, for all such \(\kappa \) it holds that

$$\begin{aligned} {\mathrm {Pr}}\left[ \exists i \in [r] \text { s.t.\ }g(F^\kappa _{\le i} ,S^\kappa ) - F_i^{{\mathsf {B}}, \kappa } \ge \frac{1}{8\sqrt{r}}\right] \ge \frac{1}{200}. \end{aligned}$$
(23)

Consider the following ppt fail-stop attacker \(\mathsf {\mathsf {A} ^*} \), taking the role of \(\mathsf {A} \) in \(\varPi \), that biases the output of \({\mathsf {B}}\) towards zero.

Algorithm 3.13

(\(\mathsf {\mathsf {A} ^*}\) )

  • Input: security parameter \(1^\kappa \).

  • Description:

  1. 1.

Sample \(s\leftarrow S^\kappa \) and start a random execution of \(\mathsf {A} (1^\kappa )\).

  2. 2.

    For \(i=1\ldots r\):

    After sending (or receiving) the prescribed message \(m_i\):

    1. (a)

Let \(f_i= (f^\mathsf {A} _i,f^{{\mathsf {B}}}_i) =\mathsf {F} (m_{\le i}; s)\) and \(\mu _i=\mathsf {G} (f_{\le i}, s)- f^{{\mathsf {B}}}_i\).

    2. (b)

      Abort if \(\mu _i\ge \frac{1}{8\sqrt{r}} - \rho \) (without sending further messages).

      Otherwise, proceed to the next round.

In the following, we fix a large enough \(\kappa \in {\mathcal {K}}\) such that Eq. (23) holds, and we omit it from the notation when the context is clear. We show that algorithm \(\mathsf {\mathsf {A} ^*} \) biases the output of \({\mathsf {B}}\) towards zero by at least \(1/(6400\sqrt{r})\).

We associate the following random variables with a random execution of \((\mathsf {\mathsf {A} ^*},{\mathsf {B}})\). Let J denote the index where the adversary aborted, i.e., the smallest j such that \(\mathsf {G} (F_{\le j}, S)-F^{{\mathsf {B}}}_{j}\ge \frac{1}{8\sqrt{r}}-\rho \), or \({J}=r\) if no abort occurred. The following expectations are taken over \((F_{\le i}, S)\) and the random coins of \(\mathsf {G} \). We bound \({{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J}\right] \), i.e. the expected output of the honest party.

$$\begin{aligned}&{{{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J}\right] }\nonumber \\&= {{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J} \right] + {{\mathbf {E}}}\left[ g(F_{\le {J}}, S) \right] - {{\mathbf {E}}}\left[ g(F_{\le {J}}, S) \right] + {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) -F^{{\mathsf {B}}}_{{J}} \right] - {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) - F^{{\mathsf {B}}}_{{J}} \right] \nonumber \\&= {{\mathbf {E}}}\left[ g(F_{\le {J}}, S) \right] - {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) - F^{{\mathsf {B}}}_{{J}} \right] + {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) - g(F_{\le {J}}, S) \right] + {{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J} - F^{{\mathsf {B}}}_{{J}} \right] \nonumber \\&= \frac{1}{2}-{{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) - F^{{\mathsf {B}}}_{{J}} \right] + {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) - g(F_{\le {J}}, S) \right] + {{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J} - F^{{\mathsf {B}}}_{{J}} \right] . \end{aligned}$$
(24)

The last equation follows from \({{\mathbf {E}}}\left[ g(F_{\le {J}}, S)\right] ={{\mathbf {E}}}\left[ C\right] \) and thus \({{\mathbf {E}}}\left[ g(F_{\le {J}}, S) \right] =\frac{1}{2}\) (for a more detailed argument see Eq. (19) and preceding text). We bound each of the terms above separately. First, observe that

$$\begin{aligned}&{{\mathrm {Pr}}\left[ {J}\ne r\right] }\nonumber \\&\ge {\mathrm {Pr}}\left[ \left( \forall i\in [r]:\left| \mathsf {G} (F_{\le i}, S) -g(F_{\le i}, S)\right| \le \rho \right) \right. \nonumber \\&\quad \quad \wedge \left. \left( \exists j\in [r]:g(F_{\le j}, S)- F_j^{{\mathsf {B}}} \ge \frac{1}{8\sqrt{r}}\right) \right] \nonumber \\&\ge {\mathrm {Pr}}\left[ \exists j\in [r] :g(F_{\le j}, S)- F_j \ge \frac{1}{8\sqrt{r}}\right] \nonumber \\&\quad \quad - {\mathrm {Pr}}\left[ \exists i\in [r]:\left| \mathsf {G} (F_{\le i}, S) -g(F_{\le i}, S)\right| >\rho \right] \nonumber \\&\ge \frac{1}{200} - \rho \nonumber \\&\ge \frac{1}{400} . \end{aligned}$$
(25)

The penultimate inequality is by Eq. (23) and Claim 3.5, together with a union bound over \(i\in [r]\). It follows that

$$\begin{aligned} {{\mathbf {E}}}\left[ g(F_{\le {J}}, S) - F^{{\mathsf {B}}}_{J} \right]&= {\mathrm {Pr}}\left[ {J}\ne r\right] \cdot {{\mathbf {E}}}\left[ g(F_{\le {J}}, S) - F^{{\mathsf {B}}}_{J} \mid J \ne r\right] \nonumber \\&\ge \frac{1}{400} \cdot \left( \frac{1}{8\sqrt{r}} - \rho \right) - {{\mathbf {E}}}\left[ \mathsf {G} (F_{\le {J}}, S) -g(F_{\le {J}}, S) \right] \nonumber \\&\ge \frac{1}{400} \cdot \frac{1}{8\sqrt{r}} - 3\rho . \end{aligned}$$
(26)

The penultimate inequality is by Claim 3.5. Finally, since \(\kappa \) is large enough, Claim 3.4 and a data-processing argument yield that

$$\begin{aligned} {{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J} -F^{{\mathsf {B}}}_{J} \right] \le r\rho \end{aligned}$$
(27)

Combining Eqs. (24), (26) and (27), we conclude that \({{\mathbf {E}}}\left[ Z^{{\mathsf {B}}}_{J}\right] \le \frac{1}{2} - \frac{1}{400} \cdot \frac{1}{8\sqrt{r}} + (r+ 3)\rho < \frac{1}{2} - \frac{1}{6400\sqrt{r}}\), in contradiction to the assumed fairness of \(\varPi \). \(\square \)
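The concluding inequality reduces to \((r+3)\rho < 1/6400\sqrt{r}\) for \(\rho = 10^{-6}\cdot r^{-5/2}\); the following check (ours) confirms it over a range of r:

```python
import math

# rho = 10^-6 * r^(-5/2); the required slack is (r + 3) * rho < 1/(6400 * sqrt(r))
for r in range(1, 10_001):
    rho = 1e-6 * r ** -2.5
    assert (r + 3) * rho < 1 / (6400 * math.sqrt(r)), r
print("bias slack verified for r = 1, ..., 10000")
```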

3.3 Independence of Attack Decision

In this section, we prove Claim 3.8 (restated below).

Claim 3.14

(Claim 3.8, restated). Let \({\mathsf {D}}\) be a single-bit output pptm. For and \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \), let \(E_1^{{\mathsf {P}},\kappa },\ldots ,E_{r}^{{\mathsf {P}},\kappa }\) be the sequence of random variables defined by \(E_i^{{\mathsf {P}},\kappa } = {\mathsf {D}}(F^\kappa _{\le i}, S^\kappa )\) if \({\mathsf {P}}\) sends the \(i^\mathrm{th}\) message in \(\varPi (1^\kappa )\), and \(E_i^{{\mathsf {P}},\kappa } =0\) otherwise.

Assume io-key-agreement protocols do not exist. Then, for any \({\mathsf {P}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \) and infinite subset \({\mathcal {K}}' \subseteq {\mathcal {K}}\), there exists an infinite set \({\mathcal {K}}'' \subseteq {\mathcal {K}}'\) such that

$$\begin{aligned} {{\mathbf {E}}}\left[ E_{i+1}^{{\mathsf {P}},\kappa } \cdot (Z_{i}^{{\overline{{\mathsf {P}}}},\kappa } - F^{{\overline{{\mathsf {P}}}},\kappa }_{ i})\right] \in \pm 4r\rho \end{aligned}$$

for every \(\kappa \in {\mathcal {K}}''\) and \(i\in (r-1)\), where \({\overline{{\mathsf {P}}}}\in \left\{ \mathsf {A},{\mathsf {B}}\right\} \setminus \left\{ {\mathsf {P}}\right\} \).

We prove the claim for \({\mathsf {P}}= \mathsf {A} \); the case \({\mathsf {P}}={\mathsf {B}}\) is proven analogously. Consider the following variant of \(\varPi \) in which the party playing \(\mathsf {A} \) outputs \(E^\mathsf {A} _i\) and the party playing \({\mathsf {B}}\) outputs its backup value.

Protocol 3.15

(\({\widehat{\varPi }}= \left( {\widehat{\mathsf {A}}},{\widehat{{\mathsf {B}}}}\right) \))

  • Common input: security parameter \(1^\kappa \).

  • Description:

  1.

    Party \({\widehat{\mathsf {A}}}\) samples \(i\leftarrow [r]\) and \(s\leftarrow S^\kappa \), and sends them to \({\widehat{{\mathsf {B}}}}\).

  2.

    The parties interact in the first \(i-1\) rounds of a random execution of \(\varPi (1^\kappa )\), with \({\widehat{\mathsf {A}}}\) and \({\widehat{{\mathsf {B}}}}\) taking the role of \(\mathsf {A} \) and \({\mathsf {B}}\) respectively.

    Let \(m_1,\ldots ,m_{i-1}\) be the messages, and let \(z_{i-1}^{\mathsf {B}}\) be the \((i-1)^\mathrm{th}\) backup value of \({\mathsf {B}}\) in the above execution.

  3.

    \({\widehat{\mathsf {A}}}\) sets the value of \(e^\mathsf {A} _i\) as follows:

    If \(\mathsf {A} \) sent the \((i-1)^\mathrm{th}\) message above (and thus does not send the \(i^\mathrm{th}\) message), then it sets \(e^\mathsf {A} _i =0\).

    Otherwise, it

    (a)

      Continues the above execution of \(\varPi \) to compute its next message \(m_i\).

    (b)

      Computes \(f_{i} =\mathsf {F} (m_{\le i}, s)\).

    (c)

      Let \(e^\mathsf {A} _i = {\mathsf {D}}(f_{\le i},s)\).

  4.

    \({\widehat{\mathsf {A}}}\) outputs \(e^\mathsf {A} _i\) and \({\widehat{{\mathsf {B}}}}\) outputs \(z_{i-1}^{\mathsf {B}}\).

We apply the following dichotomy result of Haitner et al. [12] to the above protocol.

Theorem 3.16

(Haitner et al. [12], Theorem 3.18, dichotomy of two-party protocols). Let \(\varDelta \) be an efficient single-bit output two-party protocol. Assume io-key-agreement protocols do not exist. Then, for any constant \(\rho >0\) and infinite subset \({\mathcal {K}}\subseteq {\mathbb {N}}\), there exist a ppt algorithm \(\mathsf {Dcr}\) (decorrelator), mapping transcripts of \(\varDelta \) into (the binary description of) pairs in \([0,1] \times [0,1]\), and an infinite set \({\mathcal {K}}' \subseteq {\mathcal {K}}\), such that the following holds: let \(C^{\mathsf {A},\kappa }\), \(C^{{\mathsf {B}},\kappa }\) and \(T^\kappa \) denote the parties' outputs and protocol transcript in a random execution of \(\varDelta (1^\kappa )\). Let \(m(\kappa )\in {\text {poly}}\) be a bound on the number of coins used by \(\mathsf {Dcr} \) on transcripts in \(\mathrm {supp}(T^\kappa )\), and let \(S^\kappa \) be a uniform string of length \(m(\kappa )\). Then

$$\begin{aligned} (C^{\mathsf {A},\kappa },C^{{\mathsf {B}},\kappa },T^\kappa ,S^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}'} (U_{p^{\mathsf {A}}},U_{p^{{\mathsf {B}}}},T^\kappa ,S^\kappa )_{(p^{\mathsf {A}},p^{{\mathsf {B}}}) = \mathsf {Dcr} (T^\kappa ;S^\kappa )} \end{aligned}$$

letting \(U_p\) be a Boolean random variable taking the value 1 with probability p.
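The proofs below repeatedly use the identity \({{\mathbf {E}}}\left[ U_{p^{\mathsf {A}}}\cdot U_{p^{{\mathsf {B}}}}\right] = {{\mathbf {E}}}\left[ p^{\mathsf {A}}\cdot p^{{\mathsf {B}}}\right] \), which holds when the two Boolean variables are sampled independently conditioned on \((T^\kappa ,S^\kappa )\) (an assumption consistent with how the identity is applied in Claims 3.17 and 3.18 below); a one-line justification:

```latex
% Assuming U_{p^A} and U_{p^B} are independent given (T, S):
\begin{aligned}
{{\mathbf {E}}}\left[ U_{p^{\mathsf {A}}}\cdot U_{p^{{\mathsf {B}}}}\right]
= {{\mathbf {E}}}_{(T,S)}\left[ {{\mathbf {E}}}\left[ U_{p^{\mathsf {A}}}\mid T,S\right]
  \cdot {{\mathbf {E}}}\left[ U_{p^{{\mathsf {B}}}}\mid T,S\right] \right]
= {{\mathbf {E}}}\left[ p^{\mathsf {A}}\cdot p^{{\mathsf {B}}}\right] ,
\end{aligned}
```

since the pair \((p^{\mathsf {A}},p^{{\mathsf {B}}}) = \mathsf {Dcr} (T;S)\) is determined by \((T,S)\).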

Proof of Claim 3.14. Assume io-key-agreement protocols do not exist, and let \({\mathcal {K}}''\subseteq {\mathcal {K}}'\) and \(\mathsf {Dcr} \) be the infinite set and ppt decorrelator resulting from applying Theorem 3.16 with respect to protocol \({\widehat{\varPi }}\) and \(\rho \). Let \({\widehat{S}}^\kappa \) denote a long enough uniform string to be used by \(\mathsf {Dcr} \) on transcripts of \({\widehat{\varPi }}(1^\kappa )\). Then for \(I \leftarrow (r-1)\), letting \(\mathsf {Dcr} (m_{\le i},s;{\widehat{s}}) = \mathsf {Dcr} (i,s,m_{\le i};{\widehat{s}})\), it holds that

$$\begin{aligned} ( E_{I+1}^{\mathsf {A},\kappa },Z_{I}^{{\mathsf {B}},\kappa },M^{\kappa }_{\le I},S^\kappa ,{\widehat{S}}^\kappa ) \mathbin {{\mathop {\approx }\limits ^\mathrm{c}}}_{\rho ,{\mathcal {K}}''} (U_{p^{\mathsf {A}}},U_{p^{{\mathsf {B}}}},M^\kappa _{\le I}, S^\kappa ,{\widehat{S}}^\kappa )_{(p^{\mathsf {A}},p^{{\mathsf {B}}}) = \mathsf {Dcr} (M_{\le I},S^\kappa ;{\widehat{S}}^\kappa )}. \end{aligned}$$
(28)

For \(i\in [r]\), let \(W_i^\kappa = (W_i^{\mathsf {A},\kappa },W_i^{{\mathsf {B}},\kappa }) = \mathsf {Dcr} (M_{\le i},S^\kappa ;{\widehat{S}}^\kappa )\). The proof of Claim 3.14 follows by the following three observations, proven below, that hold for large enough \(\kappa \in {\mathcal {K}}''\).

Claim 3.17

\({{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A},\kappa }\cdot Z_{I}^{{\mathsf {B}},\kappa } - W_{I}^{\mathsf {A},\kappa }\cdot W_{I}^{{\mathsf {B}},\kappa }\right] \in \pm \rho \).

Claim 3.18

\({{\mathbf {E}}}\left[ W_{I}^{\mathsf {A},\kappa }\cdot F_{I}^{{\mathsf {B}},\kappa } - E_{I+1}^{\mathsf {A},\kappa }\cdot F_{I}^{{\mathsf {B}},\kappa }\right] \in \pm \rho \).

Claim 3.19

\({{\mathbf {E}}}\left[ W_{I}^{\mathsf {A},\kappa }\cdot W_{I}^{{\mathsf {B}},\kappa } - W_{I}^{\mathsf {A},\kappa }\cdot F_{I}^{{\mathsf {B}},\kappa }\right] \in \pm 2\rho \).

We conclude that \({{\mathbf {E}}}\left[ E_{I+1}^{{\mathsf {P}},\kappa } \cdot Z_{I}^{{\overline{{\mathsf {P}}}},\kappa } - E_{I+1}^{{\mathsf {P}},\kappa }\cdot F^{{\overline{{\mathsf {P}}}},\kappa }_{I} \right] \in \pm 4\rho \), and thus \({{\mathbf {E}}}\left[ E_{i+1}^{{\mathsf {P}},\kappa } \cdot Z_{i}^{{\overline{{\mathsf {P}}}},\kappa } - E_{i+1}^{{\mathsf {P}},\kappa }\cdot F^{{\overline{{\mathsf {P}}}},\kappa }_{ i} \right] \in \pm 4r\rho \) for every \(i\in (r-1)\).\(\square \)
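The first conclusion is a telescoping combination of the three claims, via the triangle inequality (a sketch, with the \(\kappa \) superscripts suppressed):

```latex
\begin{aligned}
{{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A}} Z_{I}^{{\mathsf {B}}}
  - E_{I+1}^{\mathsf {A}} F_{I}^{{\mathsf {B}}}\right]
&= {{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A}} Z_{I}^{{\mathsf {B}}}
   - W_{I}^{\mathsf {A}} W_{I}^{{\mathsf {B}}}\right]      % Claim 3.17: in ±ρ
 + {{\mathbf {E}}}\left[ W_{I}^{\mathsf {A}} W_{I}^{{\mathsf {B}}}
   - W_{I}^{\mathsf {A}} F_{I}^{{\mathsf {B}}}\right] \\   % Claim 3.19: in ±2ρ
&\quad + {{\mathbf {E}}}\left[ W_{I}^{\mathsf {A}} F_{I}^{{\mathsf {B}}}
   - E_{I+1}^{\mathsf {A}} F_{I}^{{\mathsf {B}}}\right]    % Claim 3.18: in ±ρ
 \in \pm (\rho + 2\rho + \rho ) = \pm 4\rho .
\end{aligned}
```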

Proving Claim 3.17.

Proof of Claim 3.17. Consider algorithm \(\mathsf {D} \) that on input \((z^\mathsf {A},z^{\mathsf {B}}, \cdot )\), outputs (the product) \(z^\mathsf {A} z^{\mathsf {B}}\). By definition,

  1.

    \({\mathrm {Pr}}\left[ \mathsf {D} (U_{W_{I}^{\mathsf {A},\kappa }} ,U_{W_{I}^{{\mathsf {B}},\kappa } } , M^\kappa _{\le I},S^\kappa ) =1\right] = {{\mathbf {E}}}\left[ U_{W_{I}^{\mathsf {A},\kappa }} \cdot U_{W_{I}^{{\mathsf {B}},\kappa }}\right] = {{\mathbf {E}}}\left[ W_{I}^{\mathsf {A},\kappa } \cdot W_{I}^{{\mathsf {B}},\kappa }\right] \),

  2.

    \({\mathrm {Pr}}\left[ \mathsf {D} (E_{I+1}^{\mathsf {A},\kappa }, Z_{I}^{{\mathsf {B}},\kappa } , M^\kappa _{\le I},S^\kappa )=1\right] = {{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A},\kappa }\cdot Z_{I}^{{\mathsf {B}},\kappa }\right] \).

Hence, the proof follows by Eq. (28).\(\square \)

Proving Claim 3.18.

Proof of Claim 3.18. Consider the algorithm \(\mathsf {D} \) that on input \((z^\mathsf {A},z^{\mathsf {B}}, ( m_{\le I},s))\): (1) computes \((\cdot , f^{\mathsf {B}})=\mathsf {F} (m_{\le I};s)\), (2) samples \(u\leftarrow U_{f^{\mathsf {B}}}\), and (3) outputs \(z^\mathsf {A} \cdot u\). By definition,

  1.

    \({\mathrm {Pr}}\left[ \mathsf {D} (U_{W_{I}^{\mathsf {A},\kappa }} ,U_{W_{I}^{{\mathsf {B}},\kappa } } , M^\kappa _{\le I},S^\kappa ) =1\right] = {{\mathbf {E}}}\left[ U_{W_{I}^{\mathsf {A},\kappa }}\cdot U_{F_{I}^{{\mathsf {B}},\kappa }}\right] = {{\mathbf {E}}}\left[ W_{I}^{\mathsf {A},\kappa } \cdot F_{I}^{{\mathsf {B}},\kappa }\right] \),

  2.

    \({\mathrm {Pr}}\left[ \mathsf {D} (E_{I+1}^{\mathsf {A},\kappa }, Z_{ I}^{{\mathsf {B}},\kappa } , M^\kappa _{\le I},S^\kappa )=1\right] = {{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A},\kappa }\cdot U_{F_{I}^{{\mathsf {B}},\kappa }}\right] = {{\mathbf {E}}}\left[ E_{I+1}^{\mathsf {A},\kappa }\cdot F_{I}^{{\mathsf {B}},\kappa }\right] \).

Hence, also in this case the proof follows by Eq. (28).\(\square \)

Proving Claim 3.19.

Proof of Claim 3.19. Since \(\left| W_{I}^{\mathsf {A},\kappa }\right| \le 1\), it suffices to prove \({{\mathbf {E}}}\left[ \left| W_{I}^{{\mathsf {B}},\kappa } - F_I^{{\mathsf {B}},\kappa }\right| \right] \le 2\rho \). We show that if \({{\mathbf {E}}}\left[ \left| W_I^{{\mathsf {B}},\kappa } - F_I^{{\mathsf {B}},\kappa }\right| \right] > 2\rho \), then there exists a distinguisher with advantage greater than \(\rho \) for either the real outputs of \({\widehat{\varPi }}\) and the emulated outputs of \(\mathsf {Dcr} \), or the real outputs of \({\widetilde{\varPi }}\) and the emulated outputs of \(\mathsf {F} \), in contradiction with the assumed properties of \(\mathsf {Dcr} \) and \(\mathsf {F} \).

Consider algorithm \(\mathsf {D}\) that on input \((z^\mathsf {A},z^{\mathsf {B}}, m_{\le i},s)\) acts as follows: (1) samples \(\widehat{s}\leftarrow \widehat{S}^\kappa \), (2) computes \((\cdot , f^{\mathsf {B}})=\mathsf {F} (m_{\le i};s)\) and \((\cdot , w^{\mathsf {B}})=\mathsf {Dcr} (m_{\le i}, s;\widehat{s})\), (3) outputs \(z^{\mathsf {B}}\) if \(w^{\mathsf {B}}\ge f^{\mathsf {B}}\), and \(1-z^{\mathsf {B}}\) otherwise. We compute the difference in probability that \(\mathsf {D} \) outputs 1 given a sample from \(\mathsf {Dcr} (M^\kappa _{\le I})\) or a sample from \(\mathsf {F} (M^\kappa _{\le I})\) (we omit the superscript \(\kappa \) and subscript I below to reduce clutter)

$$\begin{aligned}&{\mathrm {Pr}}\left[ \mathsf {D} (U_{W_I^{\mathsf {A},\kappa }},U_{W_I^{{\mathsf {B}},\kappa }} , M^\kappa _{\le I},S^\kappa )=1 \right] - {\mathrm {Pr}}\left[ \mathsf {D} (U_{F_I^{\mathsf {A},\kappa }},U_{F_I^{{\mathsf {B}},\kappa }} , M^\kappa _{\le I},S^\kappa )=1 \right] \\&\quad = {{\mathbf {E}}}\left[ U_{W^{{\mathsf {B}}}}\mid W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] + {{\mathbf {E}}}\left[ 1-U_{W^{{\mathsf {B}}}}\mid W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \\&\qquad - {{\mathbf {E}}}\left[ U_{F^{{\mathsf {B}}}}\mid W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] - {{\mathbf {E}}}\left[ 1-U_{F^{{\mathsf {B}}}}\mid W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \\&\quad = {{\mathbf {E}}}\left[ W^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] - {{\mathbf {E}}}\left[ W^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \\&\qquad - {{\mathbf {E}}}\left[ F^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] + {{\mathbf {E}}}\left[ F^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \\&\quad = {{\mathbf {E}}}\left[ W^{{\mathsf {B}}}- F^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}\ge F^{{\mathsf {B}}}\right] + {{\mathbf {E}}}\left[ F^{{\mathsf {B}}}- W^{{\mathsf {B}}}\mid W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \cdot {\mathrm {Pr}}\left[ W^{{\mathsf {B}}}< F^{{\mathsf {B}}}\right] \\&\quad = {{\mathbf {E}}}\left[ \left| W^{{\mathsf {B}}} - F^{{\mathsf {B}}}\right| \right] \\&\quad > 2\rho . \end{aligned}$$

An averaging argument yields that either \(\mathsf {D} \) is a distinguisher for the tuples \((U_{F_I^{\mathsf {A},\kappa }},U_{F_I^{{\mathsf {B}},\kappa }} , M^\kappa _{\le I},S^\kappa )\) and \((Z_I^{\mathsf {A},\kappa } ,Z_I^{{\mathsf {B}},\kappa } , M^\kappa _{\le I},S^\kappa )\) with advantage greater than \(\rho \), in contradiction with Claim 3.4, or algorithm \(\mathsf {D} \) is a distinguisher for the tuples \((U_{W_I^{\mathsf {A},\kappa }},U_{W_I^{{\mathsf {B}},\kappa }} , M^\kappa _{\le I},S^\kappa )\) and \((E_{I+1}^{\mathsf {A},\kappa },Z^{{\mathsf {B}},\kappa }_I , M^\kappa _{\le I},S^\kappa )\) with advantage greater than \(\rho \), in contradiction with Eq. (28).\(\square \)
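The averaging step can be spelled out as a two-hybrid chain (a sketch, with the \(\kappa \) superscripts and subscript I suppressed; note that \(\mathsf {D} \) ignores its first input, so the tuples \((Z^{\mathsf {A}},Z^{{\mathsf {B}}},\cdot )\) and \((E_{I+1}^{\mathsf {A}},Z^{{\mathsf {B}}},\cdot )\) induce the same output distribution of \(\mathsf {D} \) and may serve interchangeably as the middle hybrid):

```latex
\begin{aligned}
2\rho &< {\mathrm {Pr}}\left[ \mathsf {D}(U_{W^{\mathsf {A}}},U_{W^{{\mathsf {B}}}},M,S)=1\right]
        - {\mathrm {Pr}}\left[ \mathsf {D}(U_{F^{\mathsf {A}}},U_{F^{{\mathsf {B}}}},M,S)=1\right] \\
&\le \left| {\mathrm {Pr}}\left[ \mathsf {D}(U_{W^{\mathsf {A}}},U_{W^{{\mathsf {B}}}},M,S)=1\right]
        - {\mathrm {Pr}}\left[ \mathsf {D}(E_{I+1}^{\mathsf {A}},Z^{{\mathsf {B}}},M,S)=1\right] \right| \\
&\quad + \left| {\mathrm {Pr}}\left[ \mathsf {D}(Z^{\mathsf {A}},Z^{{\mathsf {B}}},M,S)=1\right]
        - {\mathrm {Pr}}\left[ \mathsf {D}(U_{F^{\mathsf {A}}},U_{F^{{\mathsf {B}}}},M,S)=1\right] \right| ,
\end{aligned}
```

so at least one of the two summands exceeds \(\rho \), yielding the stated contradiction with Eq. (28) or with Claim 3.4.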