
1 Introduction

We revisit a fundamental question in the foundations of cryptography: what is the communication overhead of privacy in computation? This question has been considered in several different models and settings [2, 12, 14, 41]. In this work, we focus on a very simple and natural model where non-private computation requires very little communication (just a single bit), whereas the best upper bound for private computation is exponential.

Namely, we consider two-party conditional disclosure of secrets (CDS) [19] (c.f. Fig. 2), a generalization of secret sharing [23, 44]: two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some fixed predicate \(\mathsf {P} \). Concretely, Alice holds x, Bob holds y and they both share a secret \(\alpha \in \{0,1\}\) (along with some additional private randomness), whereas Carol knows x, y but not \(\alpha \). Alice and Bob want to disclose \(\alpha \) to Carol iff \(\mathsf {P} (x,y)=1\). How many bits do Alice and Bob need to communicate to Carol? In the non-private setting, Alice or Bob can send \(\alpha \) to Carol, upon which Carol computes \(\mathsf {P} (x,y)\) and decides whether to output \(\alpha \) or \(\perp \). This trivial protocol with one-bit communication is not private because Carol learns \(\alpha \) even when the predicate is false; in fact, the best upper bound we have for CDS for general predicates requires that Alice and Bob each transmit \(2^{\Omega (|x| + |y|)}\) bits [7]. Here, we are interested not only in the total communication from Alice and Bob to Carol, but also in trade-offs between the length of Alice’s message \(\ell _\mathsf {A}\) and that of Bob’s message \(\ell _\mathsf {B}\).
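To make the model concrete, here is a minimal sketch (our own illustration, not a construction from this work) of a one-bit linear CDS for the equality predicate \(\mathsf {P} (x,y)=1\) iff \(x=y\): Alice and Bob share a uniform vector \(r \in \{0,1\}^{n+1}\), Alice sends the single bit \(\langle r, x\Vert 1\rangle \oplus \alpha \), Bob sends \(\langle r, y\Vert 1\rangle \), and Carol XORs the two bits. The names and the trailing-1 padding are our choices for the sketch.

```python
from itertools import product
from collections import Counter

def inner(u, v):
    # inner product over Z_2
    return sum(a & b for a, b in zip(u, v)) % 2

# Shared randomness w is a uniform vector r in {0,1}^{n+1}.
# Padding both inputs with a trailing 1 keeps the padded vectors
# nonzero and distinct whenever x != y, which is what privacy needs:
# distinct nonzero vectors over Z_2 are linearly independent, so the
# pair of inner products with a uniform r is uniform on {0,1}^2.
def alice(x, r, alpha):
    return inner(r, x + (1,)) ^ alpha   # 1-bit message

def bob(y, r):
    return inner(r, y + (1,))           # 1-bit message

def carol(m_a, m_b):
    return m_a ^ m_b                    # linear reconstruction

n = 2
# Reconstruction: when x == y, Carol always recovers alpha.
for x in product((0, 1), repeat=n):
    for r in product((0, 1), repeat=n + 1):
        for alpha in (0, 1):
            assert carol(alice(x, r, alpha), bob(x, r)) == alpha

# Privacy: when x != y, the transcript distribution is identical for
# alpha = 0 and alpha = 1, so Carol learns nothing about the secret.
x, y = (0, 0), (0, 1)
t0 = Counter((alice(x, r, 0), bob(y, r)) for r in product((0, 1), repeat=n + 1))
t1 = Counter((alice(x, r, 1), bob(y, r)) for r in product((0, 1), repeat=n + 1))
assert t0 == t1
```

Note how this beats the trivial non-private protocol qualitatively: Carol recovers \(\alpha \) exactly when the predicate holds, at the price of one bit from each of Alice and Bob.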

Connection to Attribute-Based Encryption. Attribute-based encryption (ABE) [20, 43] is a new paradigm for public-key encryption that enables fine-grained access control for encrypted data. In attribute-based encryption, ciphertexts are associated with descriptive values x in addition to a plaintext, secret keys are associated with values y, and a secret key decrypts the ciphertext if and only if \(\mathsf {P} (x,y) = 1\) for some boolean predicate \(\mathsf {P} \). Note that x and y are public given the respective ciphertext and secret key. Here, y together with \(\mathsf {P} \) may express an arbitrarily complex access policy, which is in stark contrast to traditional public-key encryption, where access is all or nothing. The simplest example of ABE is that of identity-based encryption (IBE) [8, 13, 45] where \(\mathsf {P} \) corresponds to equality. The security requirement for attribute-based encryption enforces resilience to collusion attacks, namely any group of users holding secret keys for different values learns nothing about the plaintext if none of them is individually authorized to decrypt the ciphertext. This should hold even if the adversary adaptively decides which secret keys to ask for.

In [47], Waters introduced the powerful dual system encryption methodology for building adaptively secure IBE in bilinear groups; this has since been extended to obtain adaptively secure ABE for a large class of predicates [30, 31, 33, 35, 38, 40]. In recent works [3, 48], Attrapadung and Wee presented a unifying framework for the design and analysis of dual system ABE schemes, which decouples the predicate \(\mathsf {P} \) from the security proof. Specifically, the latter work puts forth the notion of predicate encoding, a private-key, one-time, information-theoretic primitive similar to conditional disclosure of secrets, and provides a compiler from predicate encoding for a predicate \(\mathsf {P} \) into an ABE for the same predicate using the dual system encryption methodology. Moreover, the parameters in the predicate encoding scheme and in CDS correspond naturally to ciphertext and key sizes in the ABE. In particular, Alice’s message corresponds to the ciphertext, and Bob’s message to the secret key. For these applications, we require that Alice’s and Bob’s messages are linear functions of the shared randomness, and also that Carol computes a linear function of the messages to reconstruct the secret \(\alpha \). These applications consider linear functions over \(\mathbb {Z}_p\) where p is the order of the underlying bilinear group; in this work, we focus on lower bounds for the case \(p=2\) although our techniques do hold for general p. Note that while the parameters for ABE schemes coming from predicate encodings are not necessarily the best known parameters, they do match the state-of-the-art in terms of ciphertext and secret key sizes for many predicates such as inner product, index, and read-once formula.

CDS Parameters. Unlike in traditional communication complexity where the primary measure is the total communication from Alice and from Bob, we make a more fine-grained distinction between the lengths of Alice’s and Bob’s messages \(\ell _\mathsf {A}\) and \(\ell _\mathsf {B}\). For instance, in the application to ABE, \(\ell _\mathsf {A}\) and \(\ell _\mathsf {B}\) correspond to ciphertext and secret key sizes respectively. Note that for ABE ciphertext and key sizes, we ignore the contributions from the descriptive values x, y as well as multiplicative factors in the security parameter. We are particularly interested in three regimes of parameters for \((\ell _\mathsf {A},\ell _\mathsf {B})\):

  • How small can \(\ell _\mathsf {B}\) be when \(\ell _\mathsf {A}\) is constant? This corresponds to minimizing key sizes for schemes with constant-size ciphertexts;

  • How small can \(\ell _\mathsf {A}\) be when \(\ell _\mathsf {B}\) is constant? This corresponds to minimizing ciphertext sizes for schemes with constant-size keys;

  • How small can \(\max (\ell _\mathsf {A}, \ell _\mathsf {B})\) be? This corresponds to minimizing the overall parameter sizes of the scheme.

We also care about the complexity of the reconstruction function as computed by Carol, as a function of the messages from Alice and Bob; as noted earlier, for ABE, we will require linear reconstruction.

Prior Works. There have been several works studying CDS protocols (and strengthenings thereof) for a large class of predicates [3, 19, 22, 48]: the best general upper bound achieves both linear reconstruction and communication that is linear in the size of the smallest (arithmetic) branching program computing the predicate [19, 22]. However, we basically do not have any techniques for proving lower bounds on the communication complexity of CDS protocols. Here, even the probabilistic method or a counting argument does not seem to yield meaningful lower bounds for a random function (in contrast, these techniques do yield meaningful lower bounds for circuit complexity of a random function).

Fig. 1. Summary of our upper and lower bounds for linear CDS, where \(\ell _\mathsf {A}\) and \(\ell _\mathsf {B}\) denote the lengths of the messages from Alice and Bob respectively. We mark the tight lower bounds with an asterisk \(^*\).

1.1 Our Results

We initiate a systematic treatment of the communication complexity of conditional disclosure of secrets (CDS). We present a general upper bound and the first non-trivial lower bounds for conditional disclosure of secrets, summarized in Fig. 1. Moreover, we achieve tight lower bounds for many interesting settings of parameters for CDS with linear reconstruction, the latter being a requirement in the application to attribute-based encryption; this addresses an open problem posed in [48]. Very informally, for CDS with linear reconstruction, we obtain lower bounds of the form:

$$ \ell _\mathsf {A}\cdot \ell _\mathsf {B}\ge ``\text {communication complexity of}~\mathsf {P} \!\!" $$

For example, for inner product on n-bit vectors, we have \(\ell _\mathsf {A}\cdot \ell _\mathsf {B}= \Omega (n)\). Our lower bounds partially explain the trade-off between ciphertext and secret key sizes of several existing attribute-based encryption schemes based on the dual system methodology, c.f. [3, 10, 31, 35, 39, 48].

Fig. 2. Pictorial representation of CDS and communication complexity.

Proof Techniques. Since we want to argue about the lengths of the messages of Alice and Bob to Carol, the first idea would be to look at the communication complexity of the predicate \(\mathsf {P} \) [29, 49]. Informally, communication complexity measures how many bits of information about x and y we need to transmit in order to compute \(\mathsf {P} (x,y)\) (c.f. Fig. 2). Namely, Alice holds x and Bob holds y and each of them sends a message to a third party Carol who wants to compute \(\mathsf {P} (x,y)\). We also allow all three parties to share public randomness w. The goal is to minimize the communication from Alice and Bob to Carol, and there is no privacy requirement. There is now a large body of works in communication complexity giving tight upper and lower bounds for a large class of predicates. For instance, a classic result from communication complexity tells us that to compute the inner product of two vectors \(\mathbf {x},\mathbf {y}\in \{0,1\}^n\), each of Alice and Bob must send \(n-\Omega (1)\) bits [11]. That is, we need to know essentially all of \(\mathbf {x}\) and all of \(\mathbf {y}\) in order to compute their inner product.

Our goal is to leverage the rich literature on lower bounds for communication complexity to obtain lower bounds for CDS. Namely, we want to transform any CDS \(\Pi _{\text {cds}}\) for a predicate \(\mathsf {P} \) into a communication complexity protocol \(\Pi _{\text {cc}}\) for \(\mathsf {P} \) with only a small blow-up in communication complexity. The crucial distinction between CDS and communication complexity is that Carol knows x, y in \(\Pi _{\text {cds}}\) but not in \(\Pi _{\text {cc}}\) (as shown in Fig. 2).

The first attempt would be to show that a \(\Pi _{\text {cds}}\) for a predicate \(\mathsf {P} \) is also a \(\Pi _{\text {cc}}\) for \(\mathsf {P} \). Fix x, y to denote the inputs to \(\Pi _{\text {cc}}\). That is, we would like to argue that Alice’s message together with Bob’s message in a CDS (even without x, y) must completely determine \(\mathsf {P} (x,y)\). Intuitively, this ought to be the case because if the CDS messages are consistent with both values of \(\mathsf {P} (x,y)\), then they must simultaneously uniquely determine \(\alpha \) (via correctness) and hide \(\alpha \) (via privacy), a contradiction. Indeed, if this worked out, we would have a lower bound of the form

$$ \ell _\mathsf {A}+ \ell _\mathsf {B}\ge ``\text {communication complexity for}~\mathsf {P} \!\!" $$

Unfortunately, the above statement is false for inner product. The above statement implies a lower bound of \(2n-\Omega (1)\) bits for inner product, but we have a CDS for inner product with \(n+1\) bits! It is instructive to understand why the above attempt fails. The issue arises in using correctness of CDS to argue that Alice’s and Bob’s message must determine \(\alpha \): specifically, it is necessary for Carol to specify inputs \(x',y'\) in order to reconstruct \(\alpha \) from Alice’s and Bob’s messages. In fact, different inputs \((x',y')\) could yield different values for \(\alpha \). We need to fix this issue.

  • The first idea is to have Alice in \(\Pi _{\text {cc}}\) also send the secret \(\alpha \); Carol then tries all possible \((x',y')\) for which \(P(x',y') = 1\) and outputs 1 iff for some \((x',y')\) the reconstructed secret indeed equals \(\alpha \). By the correctness of CDS, Carol will output 1 when \(P(x,y)=1\). However, there could be false positives, since even when \(P(x,y)=0\), there could be inputs \((x',y')\) for which \(P(x',y')=1\) and the reconstructed secret matches \(\alpha \), upon which Carol will incorrectly output 1. In fact, privacy tells us that Carol will recover a random value for the secret for each choice of \((x',y')\), and with pretty good probability, at least one of them will match \(\alpha \).

  • The second idea is to avoid false positives by having Alice and Bob run the CDS protocol \(\Pi _{\text {cds}}\) N times, with fresh independent private randomness and secrets across the repetitions. As before, Carol will try all possible \((x',y')\) for which \(P(x',y') = 1\) and outputs 1 iff for some \(x',y'\) the reconstructed secret equals \(\alpha \) in all repetitions of the protocol. By the correctness of CDS, Carol will always output 1 when \(P(x,y)=1\). On the other hand, if \(P(x,y) = 0\), a straightforward union bound over \((x',y') \in P^{-1}(1)\) tells us Carol outputs 1 with probability at most \(|P^{-1}(1)| \cdot 2^{-N}\), since Carol recovers a random value in each repetition. For inner product, we need to take a union bound over \(2^{2n-1}\) possible pairs, which requires running \(N = 2n-1\) copies of the CDS protocol \(\Pi _{\text {cds}}\); the communication complexity of \(\Pi _{\text {cc}}\) is then \(2n-1\) times that of \(\Pi _{\text {cds}}\). This does not yield any non-trivial lower bound for \(\Pi _{\text {cds}}\) since we have an upper bound of 2n for communication complexity.

Here comes our key observation: we can substantially reduce the number of repetitions needed if the CDS protocol \(\Pi _{\text {cds}}\) has small communication complexity! Suppose \(\Pi _{\text {cds}}\) has total communication \(\ell _A + \ell _B \ll n\) bits. Observe that the reconstruction function computed by Carol in \(\Pi _{\text {cds}}\) is a function from \(\{0,1\}^{\ell _A+\ell _B}\) to \(\{0,1\}\). Now, instead of having Carol in \(\Pi _{\text {cc}}\) enumerate over all possible (xy), she will instead enumerate over all functions from \(\{0,1\}^{\ell _A+\ell _B}\) to \(\{0,1\}\), and output 1 iff for some function the reconstructed secret equals \(\alpha \) in all N repetitions. By the correctness of CDS, Carol will always output 1 when \(P(x,y)=1\). Moreover, there are \(2^{2^{\ell _A+\ell _B}}\) possible functions, which means we will need to run \(2^{\ell _A+\ell _B}\) copies of \(\Pi _{\text {cds}}\) in \(\Pi _{\text {cc}}\); this already implies an \(\Omega (\log n)\) lower bound for inner product! Moreover, if the CDS \(\Pi _{\text {cds}}\) admits linear reconstruction, then Carol in \(\Pi _{\text {cc}}\) only needs to enumerate over the \(2^{\ell _A+\ell _B}\) linear functions from \(\{0,1\}^{\ell _A+\ell _B}\) to \(\{0,1\}\), which means we only need to run \(\ell _A+\ell _B\) copies of \(\Pi _{\text {cds}}\) in \(\Pi _{\text {cc}}\); this in turn yields an \(\Omega (\sqrt{n})\) lower bound for inner product.
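The counting behind this observation can be checked mechanically. The sketch below (our own illustration; the variable names are ours) computes the number of repetitions N needed for the union bound \(|\mathcal {C}| \cdot |\mathcal {D}|^{-N} \le 1/2\) to go through, for the class of all functions versus the class of linear functions on \(\ell _A+\ell _B\) message bits.

```python
import math

def repetitions(num_functions, domain_size=2):
    # smallest N with num_functions * domain_size**(-N) <= 1/2,
    # i.e. N >= (log2(num_functions) + 1) / log2(domain_size)
    return math.ceil((math.log2(num_functions) + 1) / math.log2(domain_size))

ell = 4                          # ell = ell_A + ell_B, a toy message length
num_all = 2 ** (2 ** ell)        # all functions {0,1}^ell -> {0,1}
num_lin = 2 ** ell               # linear functions {0,1}^ell -> {0,1}

assert repetitions(num_all) == 2 ** ell + 1   # ~2^ell copies of the CDS
assert repetitions(num_lin) == ell + 1        # only ~ell copies suffice
```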

We obtain our lower bounds on CDS for concrete predicates by instantiating the above argument with existing lower bounds in communication complexity [4, 11, 24, 28, 36, 42] (c.f. Sect. 5).

Implications for Dual System ABE. As observed in [3, 10, 48], underlying most “information-theoretic” dual system ABE schemes for a predicate \(\mathsf {P} \) is a CDS for the same predicate, and our lower bounds apply to ciphertext and secret key sizes for these dual system ABE schemes. On the other hand, we do have ABE schemes based on a “computational” dual system argument, such as those in [3, 9, 27, 32, 34], many of which are more efficient and do avoid the lower bounds in this work. Informally, underlying the “computational” dual system argument is a computational analogue of CDS, where the privacy requirement is computational rather than information-theoretic. As it turns out, formalizing the right notion of computational privacy in CDS is quite tricky.

Recall that CDS guarantees privacy of the secret \(\alpha \) whenever \(\mathsf {P} (x,y) = 0\), and in the application to ABE, we require that privacy holds even if x, y are chosen adaptively, namely Alice’s input x may be chosen depending on Bob’s input y and Bob’s message, and vice versa. Now, if the privacy guarantee is information-theoretic and perfect, then privacy for non-adaptive choices of x, y implies privacy for adaptive choices; this equivalence dissipates as soon as we relax the privacy requirement to be statistical or computational. The “right” notion of computational privacy for use in ABE schemes is that of “doubly selective” security [3, 34], where “doubly” refers to the two possibilities depending on whether x or y is chosen first. Unsurprisingly, proving and using doubly selective security require substantially more delicate security reductions, and in most cases, stronger and less desirable q-type assumptions. This raises the natural question of whether the increased complexity in these proofs and assumptions is inherent, or simply a failure to find more clever and efficient CDS with information-theoretic privacy. Our work rules out the latter option.

1.2 Discussion

Perspective. Note that our set-up is quite different from previous lower bounds for private computation in the literature; to the best of our knowledge, this is the first super-constant lower bound in a setting where the price of privacy in computation is always bounded. For instance, in interactive secure two-party computation, some functions are impossible to compute securely [12], so the cost of privacy is infinite for these functions (whereas ours is bounded for all predicates). For secure computation in the FKN model [14, 15], we do not have any techniques for super-constant gaps. For locally decodable codes, there is no gap for privacy in some ranges of parameters, for instance, when we want to minimize one-way communication from the client and communication from the server is essentially “free”; here, the server needs to send the entire database, whether or not we care about client privacy.

Additional Related Work. There is a large body of work on lower bounds on share sizes in secret-sharing (c.f. [5, Sect. 5]). Most of these works rely on Shannon-type inequalities on entropy of random variables, which do not seem applicable to our setting. Roughly speaking, in secret sharing, Carol either gets a share or not, whereas Alice and Bob in CDS can do more complex computations than simply computing shares and then deciding whether to send each share to Carol. The recent work of Data, Prabhakaran and Prabhakaran [14] draws upon tools from information theory to obtain new communication complexity lower bounds for secure computation in the three-party setting. In their model, which allows multiple rounds of interaction, the problem we consider admits a secure protocol with a single bit of communication, and their techniques do not yield better bounds in the non-interactive setting.

Open Problems. We conclude with a number of open problems:

  • explore the power of non-linear reconstruction in CDS (that is, positive results, c.f. [6, 46]);

  • tight lower bounds for inner product with linear reconstruction (which we conjecture to be \(\Omega (n)\));

  • obtain better lower bounds for multi-bit secrets (which is related to lower bounds for secret sharing for multi-bit secrets), or obtain upper bounds that are better than the naive “direct product” construction;

  • improve the upper or lower bounds in CDS for read-once span programs for constant \(\ell _\mathsf {A}\) or constant \(\ell _\mathsf {B}\). A related problem is to prove stronger communication complexity lower bounds for general span programs (which may not be read-once).

2 Preliminaries

Notations. We denote by \(s \leftarrow _{\textsc {r}}S\) the fact that s is picked uniformly at random from a finite set S or from a distribution. Throughout this paper, we denote by \(\log \) the logarithm to base 2.

2.1 Conditional Disclosure of Secrets

We recall the notion of conditional disclosure of secrets (CDS), c.f. Fig. 2. The definition we give here is for two parties Alice and Bob and a referee Carol, where Alice and Bob share randomness w and want to conditionally disclose a secret \(\alpha \) to Carol. The general notion of conditional disclosure of secrets was first investigated in [19]. Two-party CDS is closely related to the notions of predicate encoding [10, 48] and pairing encoding [3]; in particular, the latter two notions imply two-party CDS with linear reconstruction.

Definition 1

(Conditional Disclosure of Secrets (CDS) [19, 48]). Fix a predicate \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\). An \((\ell _\mathsf {A},\ell _\mathsf {B})\)-conditional disclosure of secrets (CDS) for \(\mathsf {P} \) is a triplet of deterministic functions \((\mathsf {A}, \mathsf {B}, \mathsf {C})\)

$$\begin{aligned} \mathsf {A}: \mathcal {X}\times \mathcal {W}\times \mathcal {D}\rightarrow \{0,1\}^{\ell _A}, \quad \mathsf {B}: \mathcal {Y}\times \mathcal {W}\times \mathcal {D}\rightarrow \{0,1\}^{\ell _B}, \quad \mathsf {C}: \mathcal {X}\times \mathcal {Y}\times \{0,1\}^{\ell _\mathsf {A}} \times \{0,1\}^{\ell _\mathsf {B}} \rightarrow \mathcal {D}\end{aligned}$$

satisfying the following properties:

  • (reconstruction.) For all \((x,y) \in \mathcal {X}\times \mathcal {Y}\) such that \(\mathsf {P} (x,y) = 1\), for all \(w \in \mathcal {W}\), and for all \(\alpha \in \mathcal {D}\):

    $$\begin{aligned} \mathsf {C}(x,y,\mathsf {A}(x,w,\alpha ),\mathsf {B}(y,w,\alpha )) = \alpha \end{aligned}$$
  • (privacy.) For all \((x,y) \in \mathcal {X}\times \mathcal {Y}\) such that \(\mathsf {P} (x,y) = 0\), and for all \(\mathsf {C}^* : \{0,1\}^{\ell _\mathsf {A}} \times \{0,1\}^{\ell _\mathsf {B}} \rightarrow \mathcal {D}\),

    $$\mathop {\Pr }\limits _{w \leftarrow _{\textsc {r}}\mathcal {W}, \alpha \leftarrow _{\textsc {r}}\mathcal {D}}\Bigl [ \mathsf {C}^* \bigl (\mathsf {A}(x,w,\alpha ),\mathsf {B}(y,w,\alpha )\bigr ) = \alpha \Bigr ] \le \frac{1}{|\mathcal {D}|}$$

Note that the formulation of privacy above with uniformly random secrets is equivalent to standard indistinguishability-based formulations.

A useful measure for the complexity of a CDS is the complexity of reconstruction as a function of the outputs of \(\mathsf {A},\mathsf {B}\), as captured by the function \(\mathsf {C}\), with (x, y) hard-wired.

Definition 2

(\(\mathcal {C}\)-Reconstruction). Given a set \(\mathcal {C}\) of functions from \(\{0,1\}^{\ell _\mathsf {A}} \times \{0,1\}^{\ell _\mathsf {B}} \rightarrow \mathcal {D}\), we say that a CDS \((\mathsf {A},\mathsf {B},\mathsf {C})\) admits \(\mathcal {C}\)-reconstruction if for all (x, y) such that \(\mathsf {P} (x,y)=1\), \(\mathsf {C}(x,y,\cdot ,\cdot ) \in \mathcal {C}\).

Two examples of \(\mathcal {C}\) of interest are:

  • \(\mathcal {C}_\text {all}\) is the set of all functions from \(\{0,1\}^{\ell _\mathsf {A}} \times \{0,1\}^{\ell _\mathsf {B}} \rightarrow \mathcal {D}\); that is, we do not place any restriction on the complexity of reconstruction. Note that \(|\mathcal {C}_\mathrm {all}| = |\mathcal {D}|^{2^{\ell _\mathsf {A}+\ell _B}}\).

  • \(\mathcal {C}_\mathrm {lin}\) is the set of all linear functions over \(\mathbb {Z}_2\) from \(\{0,1\}^{\ell _\mathsf {A}} \times \{0,1\}^{\ell _\mathsf {B}} \rightarrow \mathcal {D}\); that is, we require the reconstruction to be linear as a function of the outputs of \(\mathsf {A}\) and \(\mathsf {B}\) as bit strings (but may depend arbitrarily on x, y). This is the analogue of linear reconstruction in linear secret sharing schemes and is a requirement for the applications to attribute-based encryption [3, 10, 48]. Note that \(|\mathcal {C}_\mathrm {lin}| \le |\mathcal {D}|^{{\ell _\mathsf {A}+\ell _\mathsf {B}}}\) for \(|\mathcal {D}| \ge 2\).
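For small message lengths the two classes can be enumerated directly; the sketch below (our own check, with \(\mathcal {D}= \{0,1\}\) and m a toy value of \(\ell _\mathsf {A}+\ell _\mathsf {B}\)) confirms the counts above.

```python
from itertools import product

def linear_functions(m):
    # every Z_2-linear map {0,1}^m -> {0,1} is v -> <c, v>
    # for some coefficient vector c, so we enumerate the c's
    tables = set()
    for c in product((0, 1), repeat=m):
        table = tuple(sum(ci & vi for ci, vi in zip(c, v)) % 2
                      for v in product((0, 1), repeat=m))
        tables.add(table)
    return tables

m = 3                                        # m = ell_A + ell_B
assert len(linear_functions(m)) == 2 ** m    # |C_lin| = |D|^m with D = {0,1}
num_all = 2 ** (2 ** m)                      # |C_all| = |D|^(2^m)
assert num_all == 256                        # exponentially larger already at m = 3
```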

Remark 1

Note that while looking at \(\mathcal {C}\), we consider \(\mathsf {C}(x,y,\cdot ,\cdot )\), which has (x, y) hard-wired, and takes an input of total length \(\ell _\mathsf {A}+ \ell _\mathsf {B}\). In particular, it could be that \(\mathsf {C}\) runs in time linear in \(|x| = |y| = n\), and yet \(\ell _\mathsf {A}= \ell _\mathsf {B}= O(\log n)\) so \(\mathsf {C}\) has “exponential” complexity w.r.t. \(\ell _\mathsf {A}+ \ell _\mathsf {B}\).

Definition 3

(Linear CDS). We say that a CDS \((\mathsf {A},\mathsf {B},\mathsf {C})\) is linear if it admits \(\mathcal {C}_\mathrm {lin}\)-reconstruction.

2.2 Communication Complexity

The description of communication complexity in Fig. 2 actually refers to the “simultaneous message” model, where \(\mathsf {A}\) and \(\mathsf {B}\) each sends a message to \(\mathsf {C}\). For our actual proof, it suffices to consider one-way communication complexity, where there is no \(\mathsf {C}\), but either \(\mathsf {A}\) sends a single message to \(\mathsf {B}\) or \(\mathsf {B}\) sends a single message to \(\mathsf {A}\). We now proceed to recall the basic definitions for communication complexity [29, 49], specifically one-way communication complexity with one-sided error [1, 28, 37].

Definition 4

([28, 49]). A one-way \((\mathsf {A}\rightarrow \mathsf {B})\) communication protocol for a predicate \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\) is a pair of deterministic functions \((\mathsf {A},\mathsf {B})\) where

$$\begin{aligned} \mathsf {A}: \mathcal {X}\times \mathcal {W}\rightarrow \{0,1\}^{\ell }, \quad \mathsf {B}: \mathcal {Y}\times \mathcal {W}\times \{0,1\}^\ell \rightarrow \{0,1\}, \end{aligned}$$

and the following properties are satisfied for every \((x,y) \in \mathcal {X}\times \mathcal {Y}\):

  • If \(\mathsf {P} (x,y) = 1\), then \( \Pr _{w \leftarrow _{\textsc {r}}\mathcal {W}}[\mathsf {B}(y,w,\mathsf {A}(x,w)) = 1] = 1 \)

  • If \(\mathsf {P} (x,y) = 0\), then \( \Pr _{w \leftarrow _{\textsc {r}}\mathcal {W}}[\mathsf {B}(y,w,\mathsf {A}(x,w)) = 0] \ge 1/2.\)

The one-way communication complexity of \(\mathsf {P} \), denoted by \(\mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P})\), is the minimum \(\ell \) over all one-way communication protocols \((\mathsf {A},\mathsf {B})\) for \(\mathsf {P} \).

We also denote by \(\mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P})\) the minimum \(\ell \) over all one-way \((\mathsf {B}\rightarrow \mathsf {A})\) communication protocols \((\mathsf {A},\mathsf {B})\), where

$$ \mathsf {A}: \mathcal {X}\times \mathcal {W}\times \{0,1\}^{\ell } \rightarrow \{0,1\}, \quad \mathsf {B}: \mathcal {Y}\times \mathcal {W}\rightarrow \{0,1\}^\ell , $$

and the following properties are satisfied for every \((x,y) \in \mathcal {X}\times \mathcal {Y}\):

  • If \(\mathsf {P} (x,y) = 1\), then \(\Pr _{w \leftarrow _{\textsc {r}}\mathcal {W}}[\mathsf {A}(x,w,\mathsf {B}(y,w)) = 1] = 1\)

  • If \(\mathsf {P} (x,y) = 0\), then \(\Pr _{w \leftarrow _{\textsc {r}}\mathcal {W}}[\mathsf {A}(x,w,\mathsf {B}(y,w)) = 0] \ge 1/2\).
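As a concrete instance of Definition 4, equality admits a one-way protocol with \(\ell = 1\): using the public randomness \(r \in \{0,1\}^n\), Alice sends the single bit \(\langle r,x\rangle \) and Bob outputs 1 iff it matches \(\langle r,y\rangle \). This sketch is our illustration of the definition (a standard randomized-equality protocol), with exactly the one-sided error the definition demands.

```python
from itertools import product

def inner(u, v):
    # inner product over Z_2
    return sum(a & b for a, b in zip(u, v)) % 2

def A_msg(x, r):
    return inner(r, x)                       # the single transmitted bit

def B_out(y, r, msg):
    return 1 if msg == inner(r, y) else 0    # Bob's output bit

n = 3
for x in product((0, 1), repeat=n):
    for y in product((0, 1), repeat=n):
        outs = [B_out(y, r, A_msg(x, r)) for r in product((0, 1), repeat=n)]
        if x == y:
            assert all(o == 1 for o in outs)       # perfect completeness
        else:
            # x != y: <r, x xor y> is a uniform bit over r,
            # so Bob outputs 0 on at least half the public coins
            assert outs.count(0) * 2 >= len(outs)
```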

3 CDS for General Predicates

We present a general upper bound for linear CDS for any predicate:

Theorem 1

(Generic Upper Bounds for Linear CDS). Given any predicate \(\mathsf {P} : \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}\), for any \(t \le 2^n\), there exists a linear \((t,2^n/t)\)-CDS for \(\mathsf {P} \) with \(\mathcal {D}= \{0,1\}\). In particular, there exists a \((1,2^{n})\)-CDS, a \((2^n,1)\)-CDS, a \((2^{n/2},2^{n/2})\)-CDS for \(\mathsf {P} \), all three of which are linear.

The result improves upon the \((2^{n/2},2^{n/2})\)-CDS (which is not linear) given in [7]; our construction is also considerably simpler.

Proof

(sketch.) The construction follows from a standard reduction of any general predicate to the INDEX predicate on \(2^n\)-dimensional vectors: Alice treats the truth table \(\mathsf {P} (x,\cdot )\) as a vector of length \(2^n\) and Bob treats \(y \in \{0,1\}^n\) as an index, so that the INDEX predicate returns \(\mathsf {P} (x,y)\). Then, we can use the \((t,2^n/t)\)-linear CDS for the INDEX predicate on \(2^n\)-dimensional vectors in [10, 17].    \(\square \)

More generally, for any predicate \(\mathsf {P} :\mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\), we have a \((t,\min (|\mathcal {X}|,|\mathcal {Y}|)/t)\)-linear CDS, by treating either x or y as an index depending on whether \(|\mathcal {X}| \le |\mathcal {Y}|\) or not. This is essentially optimal for linear reconstruction, since we prove a tight lower bound for INDEX\(:\{0,1\}^n \times [n] \rightarrow \{0,1\}\) in Sect. 5.
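To illustrate the INDEX-based approach at the \((2^n,1)\) endpoint of the trade-off, here is a sketch (our own illustration of a folklore pad-based construction, not the [10, 17] scheme itself) of a linear \((N,1)\)-CDS for INDEX on an N-bit database x: Alice one-time-pads \(\alpha \cdot x_j\) for every position j, and Bob reveals only the pad bit at his index y.

```python
from itertools import product
from collections import Counter

N = 4  # database length (N = 2^n in the reduction above)

def alice(x, r, alpha):
    # N-bit message: alpha * x_j masked by the shared pad bit r_j
    return tuple((alpha & xj) ^ rj for xj, rj in zip(x, r))

def bob(y, r):
    # 1-bit message: the pad at Bob's index only
    return r[y]

def carol(y, m_a, m_b):
    return m_a[y] ^ m_b     # linear reconstruction: equals alpha * x_y

x = (1, 0, 1, 0)
pads = list(product((0, 1), repeat=N))

# Reconstruction: whenever x_y = 1, Carol recovers alpha.
for y in (0, 2):
    for r in pads:
        for alpha in (0, 1):
            assert carol(y, alice(x, r, alpha), bob(y, r)) == alpha

# Privacy: whenever x_y = 0, the transcript distribution over the
# shared pad is independent of alpha, so the secret stays hidden.
for y in (1, 3):
    t0 = Counter((alice(x, r, 0), bob(y, r)) for r in pads)
    t1 = Counter((alice(x, r, 1), bob(y, r)) for r in pads)
    assert t0 == t1
```

The symmetric \((1,N)\) endpoint follows by exchanging the roles of the pad holder and the index holder; the general \((t,N/t)\) trade-off arranges the database as a \(t \times N/t\) matrix.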

4 Lower Bounds for CDS

In this section, we present our lower bounds on the communication complexity of CDS.

Theorem 2

(Lower Bounds for Linear CDS). Let \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\) be a predicate. For all linear \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS of \(\mathsf {P} \) with \(|\mathcal {D}| \ge 2\), we have

$$\begin{aligned} \ell _\mathsf {A}\cdot (\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1)\ge \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) \quad \text{ and } \quad \ell _\mathsf {B}\cdot (\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) \ge \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}). \end{aligned}$$

We then derive our lower bounds for linear CDS by using existing lower bounds on one-way communication complexity; see Sect. 5. In fact, our techniques are fairly general and also yield lower bounds on non-linear CDS.

Theorem 3

(Lower Bounds for General CDS). Let \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\) be a predicate. For all \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS of \(\mathsf {P} \) with \(|\mathcal {D}| \ge 2\), we have

$$\begin{aligned} \ell _\mathsf {A}+ \ell _\mathsf {B}\ge \frac{1}{2} \log \Bigl ( \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) + \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}) \Bigr ). \end{aligned}$$

While the lower bounds for general CDS are exponentially smaller than those for linear CDS, we still do obtain non-trivial logarithmic lower bounds for many concrete predicates.

4.1 Main Lemma

We obtain both lower bounds via a general reduction from CDS for a predicate \(\mathsf {P} \) to one-way communication protocols for the same predicate; the communication cost of the reduction depends crucially on the complexity of reconstruction (c.f. Definition 2):

Lemma 1

(Main Technical Lemma). Let \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\) be a predicate. Then, any \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS for \(\mathsf {P} \) with \(|\mathcal {D}| \ge 2\) and which admits \(\mathcal {C}\)-reconstruction satisfies

$$\begin{aligned} (\log |\mathcal {C}| + 1) \cdot \ell _\mathsf {A}\ge \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) \cdot \log |\mathcal {D}| \quad \text{ and } \quad (\log |\mathcal {C}| + 1) \cdot \ell _\mathsf {B}\ge \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}) \cdot \log |\mathcal {D}| \end{aligned}$$

Theorem 2 then follows from instantiating the lemma with \(\mathcal {C}:= \mathcal {C}_\mathrm {lin}\), where \(\log |\mathcal {C}_\mathrm {lin}| = (\ell _\mathsf {A}+ \ell _\mathsf {B}) \cdot \log |\mathcal {D}|\). Similarly, Theorem 3 uses \(\mathcal {C}:= \mathcal {C}_\mathrm {all}\) where \(\log |\mathcal {C}_\mathrm {all}| = 2^{\ell _\mathsf {A}+ \ell _\mathsf {B}} \cdot \log |\mathcal {D}|\).

Proof

(of Lemma 1). Let \(N := \frac{\log |\mathcal {C}| + 1}{\log |\mathcal {D}|}\). We build a one-way communication protocol \((\widetilde{\mathsf {A}},\widetilde{\mathsf {B}})\) for the predicate \(\mathsf {P} \) as follows:

  • Sample \(w_i \leftarrow _{\textsc {r}}\mathcal {W}, \alpha _i \leftarrow _{\textsc {r}}\mathcal {D}\) for \(i=1,\ldots ,N\) and set

    $$ w := (w_1,\alpha _1,\ldots ,w_N,\alpha _N) $$
  • Alice computes

    $$ \widetilde{\mathsf {A}}(x,w) := (\mathsf {A}(x,w_1,\alpha _1),\ldots ,\mathsf {A}(x,w_N,\alpha _N)) $$
  • Bob outputs 1 iff there exists a function \(\mathsf {C}^* \in \mathcal {C}\) such that

    $$ \mathsf {C}^*\bigl (\mathsf {A}(x,w_i,\alpha _i), \mathsf {B}(y,w_i,\alpha _i)\bigr ) = \alpha _i,\quad \forall \; i=1,\ldots ,N $$

We proceed to analyze the protocol \((\widetilde{\mathsf {A}},\widetilde{\mathsf {B}})\).

  • Completeness. Suppose \(\mathsf {P} (x,y) = 1\). Then, by the reconstruction property, the function \(\mathsf {C}^*(\cdot ) := \mathsf {C}(x,y,\cdot ) \in \mathcal {C}\) satisfies

    $$ \mathsf {C}^*\bigl (\mathsf {A}(x,w_i,\alpha _i), \mathsf {B}(y,w_i,\alpha _i)\bigr ) = \alpha _i,\quad \forall \; i=1,\ldots ,N $$

    for all \((w_1,\alpha _1,\ldots ,w_N,\alpha _N)\). Therefore, \(\widetilde{\mathsf {B}}\) outputs 1 with probability 1.

  • Soundness. Suppose \(\mathsf {P} (x,y) = 0\). Fix \(\mathsf {C}^* \in \mathcal {C}\). For each \(i=1,\ldots ,N\), \(\alpha \)-privacy implies that

    $$ \mathop {\Pr }\limits _{w_i,\alpha _i}\Bigl [ \mathsf {C}^*\bigl (\mathsf {A}(x,w_i,\alpha _i), \mathsf {B}(y,w_i,\alpha _i)\bigr ) = \alpha _i\Bigr ] \le \tfrac{1}{|\mathcal {D}|}$$

    Since the \((w_i,\alpha _i)\) are chosen independently at random, we have

    $$ \mathop {\Pr }\limits _{w_1,\alpha _1,\ldots ,w_N,\alpha _N}\Bigl [ \mathsf {C}^*\bigl (\mathsf {A}(x,w_i,\alpha _i), \mathsf {B}(y,w_i,\alpha _i)\bigr ) = \alpha _i,\quad \forall \;i=1,\ldots ,N \Bigr ] \le \tfrac{1}{|\mathcal {D}|^N}$$

    By a union bound over all \(|\mathcal {C}|\) functions \(\mathsf {C}^* \in \mathcal {C}\), we have

    $$\begin{aligned} \Pr \Bigl [ \widetilde{\mathsf {B}} \text{ outputs } 1 \Bigr ] \le |\mathcal {C}| \cdot |\mathcal {D}|^{-N} \le 1/2 \end{aligned}$$

    by our choice of N, since \(|\mathcal {D}|^{N} = 2^{\log |\mathcal {C}| + 1} = 2|\mathcal {C}|\).

It is straightforward to check that \(\widetilde{\mathsf {A}}\) sends \(\frac{\log |\mathcal {C}|+1}{\log |\mathcal {D}|} \cdot \ell _\mathsf {A}\) bits to \(\widetilde{\mathsf {B}}\). Similarly, we can build a \((\widetilde{\mathsf {B}},\widetilde{\mathsf {A}})\) protocol for \(\mathsf {P} \), where \(\widetilde{\mathsf {B}}\) sends \(\frac{\log |\mathcal {C}|+1}{\log |\mathcal {D}|}\cdot \ell _\mathsf {B}\) bits to \(\widetilde{\mathsf {A}}\). This completes the proof.      \(\square \)
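To make the reduction concrete, here is a minimal Python sketch (our illustration, not from the paper), instantiated with a toy one-bit CDS for the AND predicate \(\mathsf {P} (x,y) = x \wedge y\) and with \(\mathcal {C}\) taken to be the affine functions over \(\mathbb {Z}_2\); the helper names are hypothetical.

```python
import itertools
import random

# Toy (1,1)-CDS for P(x,y) = x AND y over D = {0,1}: shared bit r;
# Alice sends (x AND alpha) xor r, Bob sends y AND r. When x = y = 1,
# Carol recovers alpha = m_a xor m_b (a linear reconstruction).
def cds_alice(x, r, alpha):
    return (x & alpha) ^ r

def cds_bob(y, r):
    return y & r

# Reconstruction class C_lin: the 8 affine functions of (m_a, m_b) over Z_2.
C_LIN = [lambda ma, mb, c=c: c[0] ^ (c[1] & ma) ^ (c[2] & mb)
         for c in itertools.product((0, 1), repeat=3)]

N = 4  # N = (log|C| + 1) / log|D| = (3 + 1) / 1, as in Lemma 1

def one_way_protocol(x, y, rng):
    """Alice sends N CDS messages; Bob outputs 1 iff some C* in C_LIN
    reconstructs every alpha_i (the reduction of Lemma 1)."""
    shared = [(rng.randrange(2), rng.randrange(2)) for _ in range(N)]
    alice_msgs = [cds_alice(x, r, a) for r, a in shared]  # Alice -> Bob
    return int(any(all(f(ma, cds_bob(y, r)) == a
                       for ma, (r, a) in zip(alice_msgs, shared))
                   for f in C_LIN))
```

Completeness: when \(x = y = 1\), the affine function \(m_\mathsf {A}\oplus m_\mathsf {B}\) always reconstructs, so Bob outputs 1 with probability 1. Soundness: when \(x \wedge y = 0\), a union bound over the 8 affine functions gives error at most \(8 \cdot 2^{-4} = 1/2\).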

Remark 2

(Extensions). It is easy to see that the reduction also works for CDS with imperfect reconstruction and weak privacy. If the gap between the probability of reconstructing \(\alpha \) when \(\mathsf {P} (x,y)=1\) and the probability of recovering \(\alpha \) when \(\mathsf {P} (x,y)=0\) is \(\delta \), then it suffices to take \(N := O\Bigl (\frac{1}{\delta } \log |\mathcal {C}|\Bigr )\) via a straightforward application of the Chernoff bound. The ensuing randomized protocol for communication complexity will then have a two-sided error.

Remark 3

(Beyond Linear CDS). Note that the bounds of Theorem 2 are much more general than just for linear CDS. For instance, if we require that reconstruction be carried out by circuits of size \(\ell ^c\) for some constant c (where \(\ell := \ell _\mathsf {A}+ \ell _\mathsf {B}\)), or by polynomials of degree c, then \(\log |\mathcal {C}| = O(\ell ^c \log \ell )\) and \(O(\ell ^c)\) respectively, and Lemma 1 yields lower bounds of the form

$$\begin{aligned} \ell _\mathsf {A}+ \ell _\mathsf {B}= \Omega \Bigl ((\mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) + \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}))^{1/(c+1)}\Bigr ) \end{aligned}$$

4.2 Lower Bounds for Multi-bit Secrets

We now look at CDS where the secret \(\alpha \) is a multi-bit string; that is, \(\mathcal {D}\) is of the form \(\{0,1\}^d\), for \(d \ge 1\). There is a trivial upper bound for d-bit secrets obtained by running a CDS for single-bit secrets d times, once per bit. Note, of course, that hiding a secret of size \(d=1\) is the easiest case: we can always embed a one-bit secret into a d-bit string by appending \(d-1\) random bits and then use a CDS for d-bit secrets. Hence, the lower bounds on the message lengths of CDS for secrets of size \(d=1\) still hold for CDS for secrets of size \(d\ge 1\). We would like a lower bound that grows with d.
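The trivial bit-by-bit upper bound can be sketched as follows, reusing a toy one-bit CDS for the AND predicate (our illustration; the helper names are hypothetical, not from the paper):

```python
import random

# Toy (1,1)-CDS for P(x,y) = x AND y with a one-bit secret:
# shared bit r; Alice sends (x AND alpha) xor r, Bob sends y AND r.
def alice1(x, r, alpha):
    return (x & alpha) ^ r

def bob1(y, r):
    return y & r

def rec1(ma, mb):
    return ma ^ mb  # valid only when P(x, y) = 1

# d-bit secrets: run the one-bit scheme once per secret bit, with fresh
# shared randomness per coordinate, giving a (d, d)-CDS for AND.
def alice_d(x, rs, alphas):
    return [alice1(x, r, a) for r, a in zip(rs, alphas)]

def bob_d(y, rs):
    return [bob1(y, r) for r in rs]

def rec_d(mas, mbs):
    return [rec1(ma, mb) for ma, mb in zip(mas, mbs)]
```

Since AND is a non-trivial predicate, Theorem 4 below gives \(\ell _\mathsf {A}, \ell _\mathsf {B}\ge d\), so this \((d,d)\)-CDS is optimal for AND.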

Here, we prove that for any non-trivial predicate \(\mathsf {P} \) and any \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS of \(\mathsf {P} \), both \(\ell _\mathsf {A}\) and \(\ell _\mathsf {B}\) must be at least d. A trivial predicate is one whose output is completely determined by either x or y alone (e.g. a predicate that outputs the first bit of x), for which there is a protocol with \(\ell _\mathsf {A}+ \ell _\mathsf {B}= d\). The intuition is that for any non-trivial predicate, Alice’s message essentially serves as the key of a one-time pad that is needed to “unlock” \(\alpha \in \{0,1\}^d\) from Bob’s message; such a key must itself be at least d bits long.

It is easy to see that the lower bound is tight for the equality predicate. For all other non-trivial predicates, it remains an open problem to close the gap between lower and upper bounds for CDS of multi-bit secrets.
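For equality, the matching \((d,d)\) upper bound is the classic CDS built from a shared random affine function. Below is a sketch over \(\mathbb {Z}_p\) (so secrets and messages are \(d \approx \log p\) bits); this is a folklore construction presented as our illustration, with hypothetical names:

```python
from collections import Counter

P_MOD = 101  # a small prime for illustration; d = log2(P_MOD) bits

# CDS for equality over Z_p: shared randomness (a, b);
# Alice sends m_a = a*x + b, Bob sends m_b = a*y + b + alpha.
# When x == y, Carol recovers alpha = m_b - m_a. When x != y, the pair
# (m_a, m_b) is uniform and independent of alpha, since the uniform
# value a*(y - x) masks the secret.
def eq_alice(x, a, b):
    return (a * x + b) % P_MOD

def eq_bob(y, a, b, alpha):
    return (a * y + b + alpha) % P_MOD

def eq_rec(ma, mb):
    return (mb - ma) % P_MOD
```

Both correctness and privacy can be checked exactly by enumerating all \((a,b)\in \mathbb {Z}_p^2\): for \(x \ne y\), the map \((a,b) \mapsto (m_\mathsf {A},m_\mathsf {B})\) is a bijection for every fixed \(\alpha \), so the transcript distribution does not depend on \(\alpha \).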

Theorem 4

Let \(\mathcal {D}:= \{0,1\}^d\), and let \(\mathsf {P} : \mathcal {X}\times \mathcal {Y}\rightarrow \{0,1\}\) be a non-trivial predicate that depends on both inputs x and y; that is, there exists \(x^* \in \mathcal {X}\), such that \(\mathsf {P} (x^*,\cdot )\) is not constant on \(\mathcal {Y}\), and there exists \(y^* \in \mathcal {Y}\) such that \(\mathsf {P} (\cdot ,y^*)\) is not constant on \(\mathcal {X}\). Then, for any \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS of \(\mathsf {P} \), we have

$$\begin{aligned} \ell _\mathsf {A}\ge d \quad \text{ and }\quad \ell _\mathsf {B}\ge d. \end{aligned}$$

Proof

We begin with the lower bound on \(\ell _\mathsf {A}\). Since \(\mathsf {P} (\cdot ,y^*)\) is not constant, there exist \(x_0,x_1 \in \mathcal {X}\) such that

$$\begin{aligned} \mathsf {P} (x_0,y^*) = 0 \quad \text{ and }\quad \mathsf {P} (x_1,y^*) = 1 \end{aligned}$$

Let \(\mathsf {C}^*: \{0,1\}^{\ell _\mathsf {A}+ \ell _\mathsf {B}} \rightarrow \{0,1\}^d\) be a randomized function defined as follows: on input \(m_\mathsf {A}\in \{0,1\}^{\ell _\mathsf {A}}\) and \(m_\mathsf {B}\in \{0,1\}^{\ell _\mathsf {B}}\),

  • picks a message \(m \leftarrow _{\textsc {r}}\{0,1\}^{\ell _\mathsf {A}}\) at random (and ignores \(m_\mathsf {A}\));

  • outputs \(\mathsf {C}(x_1,y^*,m,m_\mathsf {B})\).

By \(\alpha \)-reconstruction for \(\mathsf {P} (x_1,y^*)= 1\), for all \(\alpha \in \mathcal {D}\), \(w \in \mathcal {W}\), we have

$$\begin{aligned} \mathsf {C}\big (x_1,y^*,\mathsf {A}(x_1,w,\alpha ),\mathsf {B}(y^*,w,\alpha )\big ) = \alpha . \end{aligned}$$

Therefore, for all \(\alpha \in \mathcal {D}\), \(w \in \mathcal {W}\), we have

$$ \Pr _{m \leftarrow _{\textsc {r}}\{0,1\}^{\ell _\mathsf {A}}} \Bigl [ \mathsf {C}\big (x_1,y^*,\mathsf {A}(x_1,w,\alpha ),\mathsf {B}(y^*,w,\alpha )\big ) = \alpha \text{ and } m = \mathsf {A}(x_1,w,\alpha ) \Bigr ] = 1/2^{\ell _\mathsf {A}} $$

Thus, whenever the random guess \(m\) hits \(\mathsf {A}(x_1,w,\alpha )\), the output of \(\mathsf {C}^*\) is \(\alpha \). Since \(\mathsf {C}^*\) ignores \(m_\mathsf {A}\), this holds for every value of \(m_\mathsf {A}\), and in particular for \(m_\mathsf {A}= \mathsf {A}(x_0,w,\alpha )\), so we have

$$\begin{aligned} \mathop {\Pr }\limits _{w \leftarrow \mathcal {W}, \alpha \leftarrow _{\textsc {r}}\mathcal {D}, \text{ coins } \text{ of } \mathsf {C}^*} \Bigl [ \mathsf {C}^*\big (\mathsf {A}(x_0,w,\alpha ),\mathsf {B}(y^*,w,\alpha )\big )= \alpha \Bigr ] \ge 1/2^{\ell _\mathsf {A}} \end{aligned}$$

On the other hand, by \(\alpha \)-privacy for \(\mathsf {P} (x_0,y^*) = 0\), we have

$$ \mathop {\Pr }\limits _{w \leftarrow \mathcal {W}, \alpha \leftarrow _{\textsc {r}}\mathcal {D}, \text{ coins } \text{ of } \mathsf {C}^*}\Bigl [ \mathsf {C}^* \bigl (\mathsf {A}(x_0,w,\alpha ),\mathsf {B}(y^*,w,\alpha )\bigr ) = \alpha \Bigr ] \le 1/2^d$$

Combining the two preceding inequalities, we have \(1/2^{\ell _\mathsf {A}} \le 1/2^d\) and thus,

$$\begin{aligned} \ell _\mathsf {A}\ge d . \end{aligned}$$

By a symmetric argument, with the roles of Alice and Bob exchanged (using \(x^*\) in place of \(y^*\)), we obtain

$$\begin{aligned} \ell _\mathsf {B}\ge d . \end{aligned}$$

   \(\square \)
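The guessing argument in the proof can be verified exactly on a toy scheme. The sketch below (an illustrative d-bit CDS for the AND predicate, our construction rather than one from the paper) enumerates all choices and confirms that \(\mathsf {C}^*\), which ignores Alice's message and substitutes a uniform guess, succeeds on a false input with probability exactly \(2^{-\ell _\mathsf {A}} = 2^{-d}\), matching both inequalities in the proof; this scheme therefore meets \(\ell _\mathsf {A}= d\) with equality.

```python
from fractions import Fraction
from itertools import product

D = 3                 # d-bit secrets; messages are d bits, so l_A = d
VALS = range(2 ** D)  # d-bit strings encoded as integers

# Toy d-bit CDS for P(x, y) = x AND y: shared d-bit r;
# m_A = r xor (alpha if x else 0), m_B = (r if y else 0).
# When x = y = 1, reconstruction is C(m_A, m_B) = m_A xor m_B = alpha.
def m_A(x, r, alpha):
    return r ^ (alpha if x else 0)

def m_B(y, r):
    return r if y else 0

def C(ma, mb):
    return ma ^ mb

# The proof's C* on the false input (x0, y*) = (0, 1): ignore m_A and
# output C(m, m_B) for a uniform guess m. Exact success probability,
# by enumerating all (r, alpha, m):
wins = sum(C(m, m_B(1, r)) == alpha
           for r, alpha, m in product(VALS, VALS, VALS))
p_win = Fraction(wins, (2 ** D) ** 3)
```

Here `p_win` equals `Fraction(1, 8)`: the guess succeeds exactly when \(m = r \oplus \alpha \), one out of \(2^d\) possibilities.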

5 Concrete Predicates

In this section, we describe how we can combine the results in the previous section with lower bounds in one-way communication complexity to obtain the results in Fig. 1. Each of these predicates has been studied in prior works on attribute-based encryption. For each of these predicates, we obtain non-trivial lower bounds for general \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS of the form:

$$\begin{aligned} \ell _\mathsf {A}+ \ell _\mathsf {B}= \Omega (\log n). \end{aligned}$$

We focus henceforth on lower bounds for linear \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS, where linearity is over \(\mathbb {Z}_2\). In the applications to ABE, we will typically work with linear functions over \(\mathcal {D}= \mathbb {Z}_p\) (where \(\log p\) is linear in the security parameter), in which case we lose a multiplicative \(\log p\) factor in the lower bounds.

Index, Prefix. We consider the following predicates:

  • Index: \(\mathcal {X}:= \{0,1\}^n, \mathcal {Y}:= [n]\) and

    $$\begin{aligned} \mathsf {P} _\mathrm {index}(\mathbf {x}, i) = 1 \text{ iff } x_i = 1 \end{aligned}$$

    That is, \(\mathbf {x}\) is the characteristic vector of a subset of [n]. In the context of ABE, this corresponds to broadcast encryption [16].

  • Prefix: \(\mathcal {X}:= \{0,1\}^n, \mathcal {Y}:= \{0,1\}^{\le n}\) and

    $$\begin{aligned} \mathsf {P} _\mathrm {prefix}(\mathbf {x}, \mathbf {y}) = 1 \text{ iff } \mathbf {y} \text{ is } \text{ a } \text{ prefix } \text{ of } \mathbf {x} \end{aligned}$$

    In the context of ABE, this corresponds to hierarchical identity-based encryption [18, 21].

For both predicates, we have tight bounds for one-way communication complexity:

$$\begin{aligned} \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) = \Theta (n) \quad \text{ and }\quad \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}) = \Theta (\log n) \end{aligned}$$

By Theorem 2, this means that any linear \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS for either of the two predicates must satisfy

$$\begin{aligned} \ell _\mathsf {A}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n) \end{aligned}$$

This immediately yields

  • \(\ell _\mathsf {B}= \Omega (n)\) if \(\ell _\mathsf {A}= O(1)\) and, more generally, \(\ell _\mathsf {B}= \Omega (n/\ell _\mathsf {A})\) for any \(\ell _\mathsf {A}= o(\sqrt{n})\);

  • \(\ell _\mathsf {A}= \Omega (\sqrt{n})\) if \(\ell _\mathsf {B}= O(1)\);

  • \(\max (\ell _\mathsf {A}, \ell _\mathsf {B}) = \Omega (\sqrt{n})\).

The first and third lower bounds are tight, as we have matching upper bounds in [3, 10, 48] exhibiting a linear \((t, n/t)\)-CDS for both predicates and any \(t \in [n]\).
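For intuition on the upper-bound side, the \(t = n\) endpoint of this trade-off for \(\mathsf {P} _\mathrm {index}\) is a simple linear (n, 1)-CDS: Alice masks \(\alpha \cdot \mathbf {x}\) with a shared random pad, and Bob reveals the single pad bit at his index. A sketch of this folklore construction (our illustration; function names are hypothetical):

```python
import random

# Linear (n, 1)-CDS for P_index(x, i) = x_i over Z_2:
# shared pad w in {0,1}^n; Alice sends u = alpha*x xor w (n bits),
# Bob sends the single bit w_i; Carol outputs u_i xor w_i = alpha * x_i.
# Privacy: u is uniform regardless of alpha, and when x_i = 0 the
# revealed bit w_i = u_i carries no information about alpha.
def index_alice(x, w, alpha):
    return [(alpha & xj) ^ wj for xj, wj in zip(x, w)]

def index_bob(i, w):
    return w[i]

def index_rec(i, u, wi):
    return u[i] ^ wi  # equals alpha when x_i = 1, and 0 when x_i = 0
```

This matches the second bullet only up to the \(\Omega (\sqrt{n})\) lower bound: here \(\ell _\mathsf {A}= n\) with \(\ell _\mathsf {B}= 1\), consistent with \(\ell _\mathsf {A}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n)\).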

Disjointness, Inner Product. We consider the following predicates:

  • Disjointness: \(\mathcal {X}= \mathcal {Y}:= \{ S \subseteq [n]\}\) and

    $$\begin{aligned} \mathsf {P} _\mathrm {disj}(X, Y) = 1 \text{ iff } X \cap Y = \emptyset \end{aligned}$$

    In the context of ABE, this is related to a special case of fuzzy IBE [43].

  • Inner Product [26]: \(\mathcal {X}= \mathcal {Y}:= \mathbb {Z}_p^n\) and

    $$\begin{aligned} \mathsf {P} _\mathrm {IP}(\mathbf {x}, \mathbf {y}) = 1 \text{ iff } \mathbf {x}^{\!\scriptscriptstyle {\top }}\mathbf {y}= 0 \end{aligned}$$

For both predicates, we have tight bounds for one-way communication complexity:

$$\begin{aligned} \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) = \Theta (n) \quad \text{ and }\quad \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}) = \Theta (n) \end{aligned}$$

given in [4, 24, 42] for disjointness and in [11] for inner product. By Theorem 2, this means that any linear \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS for either of the two predicates must satisfy

$$ \ell _\mathsf {A}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n) \quad \text{ and }\quad \ell _\mathsf {B}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n) $$

This immediately yields

  • \(\ell _\mathsf {B}= \Omega (n)\) if \(\ell _\mathsf {A}= O(1)\);

  • \(\ell _\mathsf {A}= \Omega (n)\) if \(\ell _\mathsf {B}= O(1)\);

  • \(\max (\ell _\mathsf {A}, \ell _\mathsf {B}) = \Omega (\sqrt{n})\).

The first and second lower bounds are tight, as we have matching upper bounds in [3, 10, 48] exhibiting a linear \((t, n-t+O(1))\)-CDS for these predicates and any \(t \in [n]\). It remains open whether a CDS in which both messages have length \(O(\sqrt{n})\) is possible.

Read-Once Monotone Span Programs. We consider the following predicate:

  • Read-once monotone span program: \(\mathcal {X}:= \{0,1\}^n\), and \(\mathcal {Y}:= \mathbb {Z}_p^{n \times n}\) is viewed as the collection of read-once monotone span programs [25], each specified by a matrix \(\mathbf {M}\) with n rows, and

    $$\begin{aligned} \mathsf {P} _\mathrm {MSP}(\mathbf {x}, \mathbf {M}) = 1 \text{ iff } \mathbf {x} \text{ satisfies } \mathbf {M} \end{aligned}$$

    Here, \(\mathbf {x}\) satisfies \(\mathbf {M}\) iff \((1,0,\ldots ,0)\) lies in the row span of \(\{ \mathbf {M}_j : x_j = 1 \}\) where \(\mathbf {M}_j\) is the j’th row of \(\mathbf {M}\). In the context of ABE, this corresponds to key-policy ABE for access structures [20].

For this predicate, we have tight bounds for one-way communication complexity:

$$\begin{aligned} \mathsf {R}^{\mathsf {A}\rightarrow \mathsf {B}}(\mathsf {P}) = \Theta (n) \quad \text{ and }\quad \mathsf {R}^{\mathsf {B}\rightarrow \mathsf {A}}(\mathsf {P}) = \Theta (n^2) \end{aligned}$$

By Theorem 2, this means that any linear \((\ell _\mathsf {A},\ell _\mathsf {B})\)-CDS for this predicate must satisfy

$$ \ell _\mathsf {A}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n) \quad \text{ and }\quad \ell _\mathsf {B}(\ell _\mathsf {A}+ \ell _\mathsf {B}+ 1) = \Omega (n^2) $$

This immediately yields

  • \(\ell _\mathsf {B}= \Omega (n)\) if \(\ell _\mathsf {A}= O(1)\);

  • \(\ell _\mathsf {A}= \Omega (n^2)\) if \(\ell _\mathsf {B}= O(1)\);

  • \(\max (\ell _\mathsf {A}, \ell _\mathsf {B}) = \Omega (n)\).

The third lower bound is tight, as we have matching upper bounds in [3, 10, 48] exhibiting a linear \((n, n)\)-CDS for this predicate. It remains open what the optimal parameters are when either the key or the ciphertext size is kept constant.