Open Access (CC BY 4.0 license). Published by De Gruyter, July 3, 2020

Privacy-preserving verifiable delegation of polynomial and matrix functions

  • Liang Feng Zhang and Reihaneh Safavi-Naini

Abstract

Outsourcing computation has gained significant popularity in recent years due to the development of cloud computing and mobile services. In a basic outsourcing model, a client delegates computation of a function f on an input x to a server. There are two main security requirements in this setting: guaranteeing the server performs the computation correctly, and protecting the client’s input (and hence the function value) from the server. The verifiable computation model of Gennaro, Gentry and Parno achieves the above requirements, but the resulting schemes lack efficiency. This is due to the use of computationally expensive primitives such as fully homomorphic encryption (FHE) and garbled circuits, and the need to represent f as a Boolean circuit. Also, the security model does not allow verification queries, which implies the server cannot learn if the client accepts the computation result. This is a weak security model that does not match many real life scenarios. In this paper, we construct efficient (i.e., without using FHE, garbled circuits and Boolean circuit representations) verifiable computation schemes that provide privacy for the client’s input, and prove their security in a strong model that allows verification queries. We first propose a transformation that provides input privacy for a number of existing schemes for verifiable delegation of multivariate polynomial f over a finite field. Our transformation is based on noisy encoding of x and keeps x semantically secure under the noisy curve reconstruction (CR) assumption. We then propose a construction for verifiable delegation of matrix-vector multiplication, where the delegated function f is a matrix and the input to the function is a vector. The scheme uses PRFs with amortized closed-form efficiency and achieves high efficiency. We outline applications of our results to outsourced two-party protocols.

MSC 2010: 11T71; 94A60

1 Introduction

Outsourcing computation has gained significant popularity in recent years due to the development of cloud computing and mobile devices. Computationally weak devices such as smartphones and netbooks can outsource expensive computations to powerful cloud servers.

The first security concern that arises in outsourcing is to guarantee that the cloud server correctly performs the delegated computation. Cloud servers may have incentives, such as saving computation time or other malicious goals, to produce results that may be incorrect. The verifiable computation (VC) of Gennaro, Gentry and Parno [18] allows a client to outsource the computation of a function f on an input x and then verify the correctness of the server’s work. The outsourcing is meaningful only if the client’s work spent on preparing x for delegation and verifying the server’s results is substantially less than computing f(x) locally. A second security concern is the privacy of the client’s data, including the input x and the output f(x).

Resolving both security issues simultaneously in an efficient way is a nontrivial problem. The proposals in [18] and several following works [2, 4, 13] address both security concerns but use expensive cryptographic primitives such as fully homomorphic encryption (FHE) and/or garbled circuits, and represent the function f as a Boolean circuit. The result is inefficient verifiable computation schemes. From a security view point, an important shortcoming is that these schemes can only tolerate adversaries that do not make verification queries, i.e., the adversary is not allowed to learn if the client has accepted the computation result.

1.1 Our work

In this paper, we develop the first verifiable computation schemes where the client’s input is kept private from the server; both the client and the server computations are free of FHE, garbled circuits and Boolean circuit representations, and the security is proved in a strong model that allows verification queries. We achieve these properties for two types of functions: multivariate polynomials and functions that are represented by a matrix over a finite field.

A transformation for polynomial delegation schemes

Our first contribution is a transformation 𝓣 that can be applied to a number of existing verifiable computation schemes so that the input x and the output f(x) remain private (semantically secure) from the server. Our transformation works for all schemes in [6, 10, 16, 37] where the function f is a multivariate polynomial over a finite field; it does not use FHE, garbled circuits or Boolean circuits, and it allows verification queries whenever the underlying scheme does.

Verifiable delegation of high-degree polynomial computations on private inputs is highly nontrivial. On one hand, the client has to provide a semantically secure encryption of x (say σx) to the cloud server. On the other hand, the cloud server has to compute f on σx without knowing the decryption key and produce an encoding σy of the output y = f(x). A generic way to enable such computations is to use FHE, which however should be avoided in our schemes.

We resolve this difficulty using techniques from multivariate polynomial interpolation and reconstruction [12, 43]. Let f(x) = f(x1, …, xh) be an h-variate polynomial of degree ≤ d over a finite field 𝔽, and let a = (a1, …, ah) ∈ 𝔽^h be any input to the function. We observe that f(a) can be learned from the restriction of f on a random (parametric) curve that passes through a. More precisely, let γ(z) = a + r1 z + ⋯ + rk z^k (r1, …, rk ∈ 𝔽^h) be a degree-k parametric curve passing through a, and let g(z) = f(γ(z)) be the restriction of f on the curve. Given any t > kd points {(zi, g(zi))}_{i=1}^t, one can interpolate g(z) and learn f(a) = g(0). The t points {(zi, γ(zi))}_{i=1}^t can be regarded as a noiseless encoding of a, where any ≤ k points perfectly hide a. Distributing the t points to t different servers, one to each server, would enable each server to return a value f(γ(zi)) = g(zi), and the t values jointly give f(a) in a way such that any ≤ k servers learn no information about a.

Unfortunately, we cannot use the noiseless encoding to attain input privacy in VC, where the single cloud server knows all t points {(zi, γ(zi))}_{i=1}^t and so is able to learn a. To overcome this difficulty, we mix the t points of a noiseless encoding with n − t random points {(zj, uj)}_{j=t+1}^n and form a noisy encoding of a that consists of n points (with locations randomly permuted). The values of f on these points suffice to compute f(a). The problem of decoding a from its noisy encoding (known as noisy curve reconstruction) has been extensively studied in [7, 15, 23, 26, 40]. There is no known polynomial-time algorithm for the problem when t ≤ (nk^h)^{1/(h+1)} + k + 1. The noisy curve reconstruction (CR) assumption [26] is that the noisy curve reconstruction problem is intractable when t = o((nk^h)^{1/(h+1)} + k + 1).

While noisy encoding gives a “semantically secure encryption” of a, it results in a long encoding of the input and thus a significant efficiency loss. Fortunately, the noisy encoding can be extended to “encrypt” a polynomial number of inputs a1, …, as at the same time, resulting in an encoding of O(n) elements for s = O(n/t). With this extended noisy encoding, one can delegate the computations of f(a1), …, f(as) simultaneously and significantly reduce the average cost of delegation. This inspires our transformation 𝓣 that extends polynomial delegation schemes to have input (and output) privacy: the cloud server computes f on the O(n) points of the noisy encoding of a1, …, as and gives the results to the client; the client computes f(a1), …, f(as) using polynomial interpolation. Using a verifiable polynomial delegation scheme Σ, the O(n) computations of f are verifiable by the client, and this determines whether the server has worked correctly.

Applying 𝓣 to the existing schemes [6, 10, 16, 37] (without data privacy) results in schemes with input (and output) privacy. Furthermore, 𝓣 keeps additional properties, such as private/public verifiability of the underlying schemes, unchanged. For example, applying 𝓣 to [37] results in a publicly verifiable scheme that allows efficient update of the function.

An input-private construction for matrix delegation

We interpret an n × n matrix M = (Mi,j) as a function that takes a vector x = (x1, …, xn) as input and outputs Mx′, where x′ is the transpose of x. We propose an input-private verifiable computation scheme for matrix-vector multiplications of the form Mx′. The scheme is secure against adversaries that have access to verification queries, and it provides very efficient verification without the second level of amortization used above.

The construction uses three primitives: a somewhat homomorphic encryption (SHE) adapted from [8], a homomorphic hash from [17] and a PRF with closed-form efficiency from [17]. The input (and output) privacy is obtained by the client encrypting x using the SHE and giving it to the server. The server has the matrix M and a tag matrix T = (Ti,j) for the matrix elements, each computed using the PRF with closed-form efficiency. The SHE scheme allows the server to perform homomorphic scalar multiplications and additions on the elements of M (in clear) and the ciphertext of x, which gives an encrypted version of Mx′ for the client. The server is able to compute the homomorphic hash digests of the SHE ciphertexts of x and combine these digests with the tags of M to generate a proof of correctness for the computation. The homomorphic property of the hash and the amortized closed-form efficiency of the PRF make the client’s verification significantly faster than the computation of Mx′ from scratch. In particular, the verification can be done in constant time after a one-time computation that is substantially more efficient than computing Mx′.

Application

Our verifiable computation schemes can be used for outsourcing two-party protocols. We show an example of such applications with the outsourcing of private information retrieval (PIR). PIR [12] allows a client to retrieve any block fi of a database f = (f1, f2, …, fN) from a server such that i ∈ [N] is not revealed to the server. Outsourced PIR [25, 34] has been suggested to offload the PIR server computation [5] to the cloud. Both of our constructions give outsourced PIR with security against malicious cloud servers.

1.2 Related work

Securely outsourcing computation dates back to the work on interactive proofs [3, 22], PCP-based efficient arguments [29, 30], CS proofs [35] and the muggle proofs [21]. While these schemes are either interactive or in the random oracle model, the verifiable computation of Gennaro, Gentry and Parno [18] is non-interactive and in the standard model.

The verifiable computation schemes of [2, 4, 13, 18] attain input (and output) privacy and thus resolve both security issues simultaneously. However, they have to use expensive cryptographic primitives such as FHE and/or garbled circuits and occasionally represent the function f as a Boolean circuit. As a result, these schemes are not efficient, in terms of both server computation and client computation. Furthermore, these schemes are only secure against adversaries that do not make verification queries. Goldwasser et al. [20] show how to construct reusable garbled circuits and obtain private schemes, but again make use of FHE. Ananth et al. [1] constructed a verifiable computation scheme achieving input (and output) privacy using multiple servers, where FHE is not used but the security requires that at least one of the servers be honest.

Fiore, Gennaro and Pastro [17] consider verifiable computation schemes where the data on cloud server (the function f in our setting) is kept private. In our schemes, the data on server is not necessarily encrypted, but the client’s input x should be kept semantically secure in order to achieve input (and output) privacy. That is, we are studying a problem orthogonal to [17]. The schemes of [27, 32] consider the same problem as [17].

The verifiable computation schemes of [6, 9, 11, 14, 16, 21, 39] require the client to send its input to the cloud server in clear and thus attain no input (or output) privacy.

2 Preliminaries

Let λ be a security parameter. We denote by poly(λ) an arbitrary polynomial function in λ. We denote by negl(λ) an arbitrary negligible function in λ, i.e., any function ϵ(λ) from the natural numbers to the non-negative real numbers such that, for any c > 0, there is an integer λc > 0 such that ϵ(λ) < λ^{−c} for all λ ≥ λc. Let 𝓐( ⋅ ) be any probabilistic polynomial-time (p.p.t.) algorithm. We denote by “y ← 𝓐(x)” the procedure of running 𝓐 on input x and assigning the output to y. Let Ω be any finite set. We denote by “y ← Ω” the procedure of choosing an element y from Ω uniformly at random. For every integer m > 0, we denote [m] = {1, 2, …, m}.

2.1 Verifiable computation

A verifiable computation scheme [6, 18] is a two-party protocol between a client and a server. The client provides a function f and an input x to the server. The server is expected to compute f(x) and respond with the (possibly encoded) output together with a proof that the output is correct. The client then verifies the output is indeed correct. The goal of verifiable computation is to make the client’s verification as efficient as possible, and in particular much faster than the computation of f(x) from scratch. In the amortized model of [6, 18], the client is allowed to do an expensive preprocessing on f to produce a key pair and then use the key pair to efficiently verify the server’s computation of f on many different inputs. The scheme is said to be outsourceable if each individual verification is much faster than the corresponding computation.

A verifiable computation scheme 𝓥𝓒 = (KeyGen, ProbGen, Compute, Verify) for an admissible function family 𝓕 consists of four polynomial-time algorithms defined below.

  • (PKf, SKf) ← KeyGen(1λ, f): Based on the security parameter λ, the randomized key generation algorithm generates a public key that encodes the target function f and the matching secret key. The public key is provided to the server, and the secret key is kept private by the client.

  • (σx, τx) ← ProbGen(SKf, x): The problem generation algorithm uses the secret key SKf to encode the function input x as a public value σx which is given to the server, and a secret value τx which is kept private by the client.

  • σy ← Compute(PKf, σx): Using the client’s public key and the encoded input, the server computes an encoded version (i.e., σy) of the function’s output y = f(x).

  • {y, ⊥} ← Verify(SKf, τx, σy): Using the secret key SKf and the secret “decoding” value τx, the verification algorithm converts the server’s encoded output into the output of the function, e.g., y = f(x), or outputs ⊥ indicating that σy does not represent the valid output of f on x.
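
For concreteness, the syntax above can be summarized by the following minimal interface sketch; this is a non-normative Python rendering, and all names and types are illustrative rather than taken from the paper.

from typing import Any, Optional, Protocol, Tuple

class VC(Protocol):
    def keygen(self, sec_param: int, f: Any) -> Tuple[Any, Any]:
        """Return (PK_f, SK_f): PK_f goes to the server, SK_f stays with the client."""
    def probgen(self, sk_f: Any, x: Any) -> Tuple[Any, Any]:
        """Return (sigma_x, tau_x): sigma_x goes to the server, tau_x stays with the client."""
    def compute(self, pk_f: Any, sigma_x: Any) -> Any:
        """Server side: return an encoding sigma_y of y = f(x)."""
    def verify(self, sk_f: Any, tau_x: Any, sigma_y: Any) -> Optional[Any]:
        """Return y = f(x) if sigma_y verifies, or None (standing for the symbol ⊥) otherwise."""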

We are interested in verifiable computation schemes that are correct, secure, private and outsourceable. The scheme is said to be correct if the problem generation algorithm produces values that allow an honest server to compute values that will verify successfully and be converted to the evaluation of f on the client’s input x.

Definition 1

(Correctness). The scheme 𝓥𝓒 is correct if, for any function f from the admissible function family 𝓕, the key generation algorithm produces keys (PKf, SKf) ← KeyGen(1λ, f) such that, for all x ∈ Domain(f), if (σx, τx) ← ProbGen(SKf, x) and σy ← Compute(PKf, σx), then Verify(SKf, τx, σy) = f(x).

Intuitively, a verifiable computation scheme is secure if a malicious server cannot persuade the verification algorithm to accept an incorrect output. In other words, for a given function f and input x, a malicious server should not be able to convince the verification algorithm to output a value ŷ such that ŷf(x). This intuition can be formalized by the following experiment.

Figure 1: Experiment Exp_𝓐^Ver(𝓥𝓒, f, λ).

In the experiment Exp_𝓐^Ver(𝓥𝓒, f, λ), the adversary 𝓐 is given a polynomial number (i.e., L) of opportunities to persuade the verification algorithm to accept a wrong output value for an input value. In each trial, the adversary is given oracle access to the generation of the encoding of a problem instance, and also oracle access to the result of the verification algorithm on an arbitrary string for that instance. The adversary succeeds if it ever convinces the verification algorithm in some trial to accept a wrong output value for the input value. The security of 𝓥𝓒 requires that the adversary succeed only with negligible probability.
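
The following harness is a sketch of this experiment, reconstructed from the description above and from the experiments spelled out in the proof of Theorem 2; the objects vc (following the interface sketched in Section 2.1) and adversary (any stateful p.p.t. strategy with the methods used below) are assumptions made for illustration only.

def exp_ver(vc, f, adversary, sec_param, trials):
    # trials plays the role of L = poly(lambda)
    pk_f, sk_f = vc.keygen(sec_param, f)
    view = [pk_f]                                    # everything the adversary has seen so far
    for _ in range(trials):
        x = adversary.choose_input(view)             # adversary picks the next input
        sigma_x, tau_x = vc.probgen(sk_f, x)         # oracle access to ProbGen
        sigma_y_hat = adversary.forge(view, x, sigma_x)
        y_hat = vc.verify(sk_f, tau_x, sigma_y_hat)  # verification query on an arbitrary string
        accepted = y_hat is not None                 # None stands for the rejection symbol ⊥
        view.append((x, sigma_x, 1 if accepted else 0))  # the adversary learns accept/reject
        if accepted and y_hat != f(x):               # the verifier accepted a wrong output value
            return 1
    return 0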

Definition 2

(Security). The scheme 𝓥𝓒 is secure if, for any function f ∈ 𝓕 and for any probabilistic polynomial-time adversary 𝓐, there is a negligible function negl such that

Pr[Exp_𝓐^Ver(𝓥𝓒, f, λ) = 1] ≤ negl(λ).

Intuitively, a verifiable computation scheme is (input) private when the public outputs of the problem generation algorithm ProbGen for two different inputs are indistinguishable; i.e., nobody can decide which encoding is the correct one for a given input. The input privacy can be defined based on a typical indistinguishability argument and yields output privacy. Let PubProbGen(SKf, ⋅) be an oracle that computes (σx, τx) ← ProbGen(SKf, x) on any input x and returns only the public value σx. We formalize the intuition (on input privacy) with the following experiment.

Figure 2: Experiment Exp_𝓐^Pri(𝓥𝓒, f, λ).

Definition 3

(Privacy). The scheme 𝓥𝓒 is private if, for any function f ∈ 𝓕 and for any probabilistic polynomial-time adversary 𝓐, there is a negligible function “negl” such that

|Pr[Exp_𝓐^Pri(𝓥𝓒, f, λ) = 1] − 1/2| ≤ negl(λ).

Informally, a verifiable computation scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch.

Definition 4

(Outsourceable). The scheme 𝓥𝓒 is outsourceable if it permits efficient problem generation and output verification. That is, for any x and any σy, the time required for ProbGen(SKf, x) plus the time required for Verify(SKf, τx, σy) is o(T), where T is the time required to compute f(x) from scratch.

We work in the amortized model of [6, 18], where the time required for KeyGen(1λ, f) is not included in the above definition. In this model, computing the key pair (PKf, SKf) is a one-time operation (per function) that can be amortized over the computation of f on many (in fact, any poly(λ) number of) different inputs. Apart from this amortization, we occasionally also consider a second level of amortization, where a number of different inputs, say x1, …, xs, are processed by ProbGen together and the delegation and verification of the computations f(x1), …, f(xs) are done simultaneously.

3 Adding privacy to polynomial delegation

In this section, we show a transformation that can add (input and output) privacy to a verifiable computation scheme whose admissible function family consists of multivariate polynomials over a finite field. Our transformation is based on the noisy curve reconstruction assumption.

3.1 Noisy curve reconstruction assumption

The noisy curve reconstruction assumption generalizes the noisy polynomial reconstruction assumption [28, 36], which was widely used in protocol design [26, 42] and is based on the hardness of noisy polynomial list reconstruction problems.

Definition 5

(Noisy polynomial list reconstruction). Let 𝔽 be a finite field, and let n, k, t > 0 be integers. Let (z1, y1), …, (zn, yn) ∈ 𝔽2. The noisy polynomial list reconstruction problem with input (n, k, t, {(zi, yi)}_{i=1}^n) is the problem of finding all polynomials γ(z) of degree ≤ k such that γ(zi) = yi for ≥ t values of i ∈ [n].

When t ≥ (n + k)/2, the noisy polynomial list reconstruction problem has a unique solution and can be solved in polynomial time by Berlekamp and Massey’s algorithm [33]. Goldreich, Rubinfeld and Sudan [19] showed that, for t > √(kn), the noisy polynomial list reconstruction problem has ≤ poly(n) solutions. Sudan [41] and Guruswami and Sudan [24] proposed polynomial-time algorithms for t > √(2kn) and t > √(kn), respectively. For t ≤ √(kn), no polynomial-time algorithms are known. Naor and Pinkas [36] introduced the noisy polynomial reconstruction assumption, which asserts that, for appropriately chosen n = n(λ), k = k(λ), t = t(λ) and 𝔽 = 𝔽(λ), the output distribution of the following procedure keeps a ∈ 𝔽 semantically secure:

  1. randomly choose

    1. a polynomial γ(z) ∈ 𝔽[z] of degree ≤ k such that γ(0) = a,

    2. n distinct nonzero field elements z1, z2, …, zn ∈ 𝔽,

    3. a subset T ⊆ [n] of cardinality t, and set yi = γ(zi) for every i ∈ T,

    4. a field element yi ∈ 𝔽 for every i ∈ [n] ∖ T;

  2. output {(zi, yi)}_{i=1}^n.

Ishai, Kushilevitz, Ostrovsky and Sahai [26] considered a multi-dimensional variant of the noisy polynomial list reconstruction problem and introduced the noisy curve reconstruction (CR) assumption.

Definition 6

(CR assumption). Let k be a degree parameter, which will also serve as a security parameter. Given functions 𝔽(k) (field), h(k) (dimension), t(k) (the number of points on the curve) and n(k) (the total number of points), the CR assumption holds with parameters (𝔽, h, t, n) if the output distribution D^a_{n,k,t,h} of the following procedure keeps a = (a1, …, ah) ∈ 𝔽(k)^{h(k)} semantically secure:

  • randomly choose

    1. h polynomials γ1(z), …, γh(z) ∈ 𝔽[z] of degree ≤ k such that (γ1(0), …, γh(0)) = (a1, …, ah),

    2. n distinct nonzero field elements z1, z2, …, zn ∈ 𝔽 ∖ {0},

    3. a subset T ⊆ [n] of cardinality t, and for every i ∈ T, set yi = (γ1(zi), …, γh(zi)),

    4. a vector yi ∈ 𝔽^h for every i ∈ [n] ∖ T;

  • output {yi}_{i=1}^n.

Formally, the CR assumption holds if, for any points a0, a1 ∈ 𝔽(k)^{h(k)} and for any probabilistic polynomial-time algorithm 𝓐, there is a negligible function “negl” such that

|Pr[𝓐(D^{a0}_{n,k,t,h}) = 1] − Pr[𝓐(D^{a1}_{n,k,t,h}) = 1]| ≤ negl(k).

An augmented version of the CR problem requires one to learn a from {(zi, yi)}_{i=1}^n (instead of {yi}_{i=1}^n) and was resolved in [15] when t > (nk^h)^{1/(h+1)} + k + 1. The problem remains hard when t = o((nk^h)^{1/(h+1)}), and the CR assumption remains plausible despite the progress in list decoding [7, 15, 23, 26, 40].

3.2 Multivariate polynomial interpolation and noisy encoding

Multivariate polynomial interpolation allows one to learn the value of a multivariate polynomial at a point, given its restriction on a parametric curve passing through that point. Let h, d > 0 be integers. For any vector i = (i1, …, ih) of non-negative integers, we denote by wt(i) = i1 + ⋯ + ih the weight of i. Let 𝔽 be any finite field. We denote by 𝔽[x] = 𝔽[x1, …, xh] the ring of all polynomials in the h variables x = (x1, …, xh) and denote by x^i = x1^{i1} ⋯ xh^{ih} the monomial of multidegree i (and degree wt(i)) in x. Let f(x) = ∑_{i: wt(i) ≤ d} fi x^i be any h-variate polynomial of degree ≤ d over 𝔽, and let a = (a1, …, ah) ∈ 𝔽^h. The multivariate polynomial interpolation technique for learning f(a) can be described as the following procedure:

  • choose r1, …, rk ← 𝔽^h; define a parametric curve γ(z) = a + r1 z + ⋯ + rk z^k;

  • learn f(γ(zi)) for tkd + 1 distinct nonzero field elements z1, …, zt ∈ 𝔽 ∖ {0};

  • interpolate the polynomial g(z) = f(γ(z)) of degree ≤ kd with {(zi, f(γ(zi)))}_{i=1}^t;

  • output g(0) (which is equal to f(γ(0)) = f(a)).

This procedure allows one to hide a from a subset of the players in distributed protocols for evaluating a multivariate polynomial f(x), such as in the private information retrieval (PIR) protocols [12, 43], where a client gives t points γ(z1), …, γ(zt) to t servers such that no k or fewer servers can learn any information about a, the i-th server returns g(zi) and the client recovers f(a) from the t values g(z1), …, g(zt). We consider the t points γ(z1), …, γ(zt) as a noiseless encoding of a, which leaks absolutely no information about a to any adversary that observes ≤ k of the t points.
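
A minimal sketch of this interpolation procedure over a toy prime field is given below; the field size, the example polynomial f and the parameters h, d, k are illustrative choices, not values prescribed by the paper.

import random

p = 2**31 - 1                     # toy prime field F_p (assumption)
h, d, k = 3, 2, 2                 # h variables, total degree <= d, curve degree k
t = k * d + 1                     # number of evaluation points needed

def f(x):                         # an example 3-variate polynomial of degree 2 (assumption)
    x1, x2, x3 = x
    return (x1 * x2 + 3 * x3 * x3 + 5 * x1 + 7) % p

a = [random.randrange(p) for _ in range(h)]
r = [[random.randrange(p) for _ in range(h)] for _ in range(k)]   # r_1, ..., r_k in F^h

def gamma(z):                     # gamma(z) = a + r_1 z + ... + r_k z^k, so gamma(0) = a
    return [(a[i] + sum(r[j][i] * pow(z, j + 1, p) for j in range(k))) % p for i in range(h)]

zs = random.sample(range(1, p), t)            # t distinct nonzero evaluation points
vals = [f(gamma(z)) for z in zs]              # g(z_i) = f(gamma(z_i)); deg(g) <= kd

def interpolate_at_zero(zs, vals):            # Lagrange interpolation of g, evaluated at z = 0
    total = 0
    for i, (zi, vi) in enumerate(zip(zs, vals)):
        num, den = 1, 1
        for j, zj in enumerate(zs):
            if j != i:
                num = num * (-zj) % p
                den = den * (zi - zj) % p
        total = (total + vi * num * pow(den, p - 2, p)) % p
    return total

assert interpolate_at_zero(zs, vals) == f(a)  # g(0) = f(gamma(0)) = f(a)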

We shall construct verifiable computation schemes where the client’s input a is kept private from a single cloud server. While sending a noiseless encoding (γ(z1), …, γ(zt)) of a to the server simply reveals a to that server, the CR assumption allows us to develop a noisy encoding {yi}_{i=1}^n of a (as in the procedure of Definition 6) that keeps a semantically secure. Unfortunately, we cannot directly use this noisy encoding in the constructions due to efficiency loss. On one hand, the CR assumption requires that t ≤ (nk^h)^{1/(h+1)} + k + 1. On the other hand, one has to choose t ≥ kd + 1 to enable the interpolation of g(z) = f(γ(z)). As a result, n ≥ (d − 1)^{h+1} k, which is comparable to \binom{h+d}{d}, the number of coefficients of f. Thus the noisy encoding alone only yields a scheme that is not outsourceable.

We bypass this difficulty with a second level of amortization, i.e., by processing multiple function inputs a1, …, as together such that the average encoding length per input is short, which results in outsourceable schemes. In [26], it was shown that if n − t noisy points suffice to keep one point semantically secure, then, for any s = poly(k), they suffice to keep s points semantically secure. With this observation, we describe an extended noisy encoding algorithm (pk_a⃗, rk_a⃗) ← NEnc(k, a⃗) that takes a⃗ = (a1, …, as) ∈ (𝔽^h)^s as input and outputs a public noisy encoding pk_a⃗ and a private value rk_a⃗ for reconstruction use as follows.

  • for every ℓ ∈ [s], randomly choose h polynomials γℓ,1(z), …, γℓ,h(z) ∈ 𝔽[z] of degree ≤ k such that aℓ = (γℓ,1(0), …, γℓ,h(0));

  • randomly choose m = ts + n − t distinct nonzero field elements z1, …, zm ∈ 𝔽 ∖ {0};

  • randomly choose s pairwise disjoint subsets T1, …, Ts ⊂ [m], each of cardinality t;

  • for every ℓ ∈ [s] and j ∈ Tℓ, set cj = (γℓ,1(zj), …, γℓ,h(zj));

  • set T0 = [m] ∖ (T1 ∪ T2 ∪ ⋯ ∪ Ts); for every j ∈ T0, randomly choose a vector cj ∈ 𝔽^h;

  • output pk_a⃗ = {cj}_{j=1}^m and rk_a⃗ = {Ti}_{i=0}^s.
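
The following is a minimal sketch of NEnc over a toy prime field. The dimensions are illustrative (and happen to satisfy the CR regime t ≤ (nk^h)^{1/(h+1)} + k + 1 ≈ 7.2 for these toy numbers); bundling the evaluation points z_j with the encoded points in the public output is an implementation choice of this sketch, not mandated by the paper.

import random

p = 2**31 - 1                                  # toy prime field (assumption)
h, k = 3, 2                                    # dimension and curve degree
n, t, s = 40, 7, 4                             # CR parameters and number of batched inputs

def nenc(a_vecs):
    m = t * s + (n - t)
    # one random degree-k curve through each input a_l
    curves = []
    for a in a_vecs:
        coeffs = [a] + [[random.randrange(p) for _ in range(h)] for _ in range(k)]
        curves.append(coeffs)
    gamma = lambda c, z: [sum(c[j][i] * pow(z, j, p) for j in range(k + 1)) % p for i in range(h)]

    zs = random.sample(range(1, p), m)          # m distinct nonzero locations z_1, ..., z_m
    idx = list(range(m)); random.shuffle(idx)
    T = [sorted(idx[l * t:(l + 1) * t]) for l in range(s)]   # pairwise disjoint t-subsets
    T0 = sorted(idx[s * t:])                                  # the noise positions

    points = [None] * m
    for l in range(s):
        for j in T[l]:
            points[j] = gamma(curves[l], zs[j])               # genuine curve points
    for j in T0:
        points[j] = [random.randrange(p) for _ in range(h)]   # random noise points
    pk = list(zip(zs, points))                                # public noisy encoding
    rk = (T0, T)                                              # private reconstruction data
    return pk, rk

pk, rk = nenc([[random.randrange(p) for _ in range(h)] for _ in range(s)])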

3.3 The transformation

The algorithm NEnc allows one to hide s function inputs, say a⃗ = (a1, …, as), with a public noisy encoding pka⃗ such that no information about a⃗ will be leaked (under the CR assumption). Let Σ be a non-private verifiable computation scheme [6, 10, 16, 37] with an admissible function family of multivariate polynomials over a finite field. We shall present a transformation 𝓣 that adds (input and output) privacy to Σ. The idea of our transformation is letting the client encode a⃗ as pka⃗ and give pka⃗ to the server; the server runs Σ. Compute on every element (which is a point) of pka⃗ and provides the public values of evaluating the polynomial f on all points to the client; at last the client runs Σ. Verify to both verify the server’s work and recover the results f(a1), …, f(as). This idea gives a new scheme Π = 𝓣(Σ) as below.

  • (PKf, SKf) ← Π. KeyGen(1k, f): Given f = f(x) ∈ 𝔽[x1, …, xh], an h-variate polynomial of degree ≤ d, run Σ. KeyGen(1k, f) to generate a public key pkf and the matching secret key skf; output PKf = pkf and SKf = skf.

  • (σa⃗, τa⃗) ← Π. ProbGen(SKf, a⃗): Given a⃗ = (a1, …, as) ∈ (𝔽^h)^s, a set of s inputs from Domain(f), run NEnc(k, a⃗) to generate both a public noisy encoding pk_a⃗ of a⃗ and a private value rk_a⃗ for reconstruction use. Parse pk_a⃗ as {cj}_{j=1}^m ⊆ 𝔽^h, a set of m points. For every j ∈ [m], run Σ. ProbGen(SKf, cj) to generate both a public encoding σ_{cj} of cj and a private value τ_{cj} for verification use. At last, output σ_a⃗ = {σ_{cj}}_{j=1}^m as the public encoding of a⃗, and output τ_a⃗ = (rk_a⃗, {τ_{cj}}_{j=1}^m), the private values for verification and reconstruction.

  • σy ← Π. Compute(PKf, σa⃗): Parse σa⃗ as {σ_{cj}}_{j=1}^m, the set of m public encodings, one for each point in pk_a⃗. For every j ∈ [m], run Σ. Compute(PKf, σ_{cj}) to compute an encoded version σ_{f(cj)} of the function’s output f(cj). At last, output σy = {σ_{f(cj)}}_{j=1}^m as the encoded version of the function’s outputs on all s inputs, i.e., f(a1), …, f(as).

  • {y, ⊥} ← Π. Verify(SKf, τa⃗, σy): Parse τa⃗ as (rk_a⃗, {τ_{cj}}_{j=1}^m), where {τ_{cj}}_{j=1}^m is for verification use and rk_a⃗ is for reconstruction use. Parse σy as {σ_{f(cj)}}_{j=1}^m, an encoded version of the s function outputs f(a1), …, f(as). For every j ∈ [m], run Σ. Verify(SKf, τ_{cj}, σ_{f(cj)}) to verify the server’s work of computing f(cj) and output vj, where vj = f(cj) or vj = ⊥ (indicating that σ_{f(cj)} is not a valid encoding of f(cj)). If there exists j ∈ [m] such that vj = ⊥, then output ⊥ to indicate that σy is not a valid encoding of the s function outputs. Otherwise, parse rk_a⃗ as (T0, T1, …, Ts), where T1, …, Ts ⊆ [m] are pairwise disjoint t-subsets and T0 = [m] ∖ (T1 ∪ ⋯ ∪ Ts); for every ℓ ∈ [s], interpolate the polynomial Qℓ(z) = f(γℓ,1(z), …, γℓ,h(z)) of degree ≤ kd from the t points {(zj, vj)}_{j∈Tℓ}. At last, output y = (Q1(0), …, Qs(0)).

Correctness: The correctness of Σ implies that vj = f(cj) for every j ∈ [m]. For every ℓ ∈ [s], the points {cj}_{j∈Tℓ} lie on the parametric curve γℓ(z) = (γℓ,1(z), …, γℓ,h(z)). Then {vj}_{j∈Tℓ} are the values of Qℓ(z) = f(γℓ(z)) at the t = |Tℓ| distinct points {zj}_{j∈Tℓ}, where deg(Qℓ(z)) ≤ k ⋅ deg(f) ≤ kd. If the parameters t, k, d are chosen such that t ≥ kd + 1, then the t points {(zj, vj)}_{j∈Tℓ} suffice to interpolate the univariate polynomial Qℓ(z) of degree ≤ kd and give

Qℓ(0) = f(γℓ(0)) = f((γℓ,1(0), …, γℓ,h(0))) = f(aℓ).

Hence, the scheme Π = 𝓣(Σ) is correct when t ≥ kd + 1.

Privacy: In the scheme Π, a⃗ = (a1, …, as) is encoded with NEnc and then given to the server. The CR assumption implies that a⃗ is kept semantically secure against the server as long as t = o((nk^h)^{1/(h+1)} + k + 1). Therefore, Π achieves input privacy (and thus output privacy) under the CR assumption.

Security: The security of Π requires that no adversary running in probabilistic polynomial-time should be able to persuade the verification algorithm to accept and output incorrect values on the input values. The proof of the following theorem is straightforward and left to Appendix A.

Theorem 1

If the scheme Σ is secure under Definition 2, then Π is a secure verifiable computation scheme under this security definition.

Efficiency: A verifiable computation scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch. The existing verifiable computation schemes [6, 18] are in an amortized model, where the one-time cost of KeyGen(1λ, f) is amortized over many different inputs. And for each input x, the total time required for ProbGen(SKf, x) and Verify(SKf, τx, σy) is substantially less than the time required for computing f(x) from scratch.

The scheme Π works in two levels of amortization. At the first level, the one-time cost of Π. KeyGen(1λ, f) is amortized over the executions of the scheme on many different sets of inputs. At the second level, in every execution of Π. ProbGen(SKf, a⃗) and Π. Verify(SKf, τa⃗, σy), the client processes s function inputs a⃗ = (a1, …, as) together, and the total time required for both algorithms, when averaged over the s function inputs, allows Π to be outsourceable. More precisely, a⃗ is encoded as a set of m = ts + n − t points. The time spent on Π. ProbGen(SKf, a⃗) is the time spent on NEnc(k, a⃗), which is dominated by m evaluations of an h-dimensional degree-k curve, plus the total time spent on m executions of Σ. ProbGen. The average time spent on processing each function input is thus dominated by m/s curve evaluations and m/s executions of Σ. ProbGen. The time spent on Π. Verify(SKf, τa⃗, σy) is the total time spent on m executions of Σ. Verify plus the total time spent on interpolating s polynomials of degree ≤ t. Therefore, the average time spent on verifying each output is dominated by the time spent on m/s executions of Σ. Verify plus the time spent on interpolating one polynomial of degree ≤ t.

There is a tradeoff between the number s of simultaneously delegated function inputs and the average time spent on processing each function input and verifying its output. If we choose s = O(n/t), then the average per-input cost will be dominated by O(t) curve evaluations, O(t) executions of Σ. ProbGen, O(t) executions of Σ. Verify and the interpolation of a univariate polynomial of degree ≤ t. For Π to be correct and secure, n, k, t, h, d should be chosen such that kd < t ≤ (nk^h)^{1/(h+1)} + k + 1. As a result, one must have n ≥ (d − 1)^{h+1} k, which is comparable to N = \binom{h+d}{d}, the number of coefficients of f. There are many ways to choose n, k, t, h, d such that t = o(N). As an example, if we choose

h = O(1),  d = poly(k),  t = O(kd log k) = o(N),  n = O(k^{h+2} d^{h+1} log^{h+1} k),

then the average per-input cost will be dominated by O(kd log k) curve computations, O(kd log k) executions of Σ. ProbGen, O(kd log k) executions of Σ. Verify and the interpolation of a polynomial of degree ≤ t. Since Σ is outsourceable, Π is outsourceable as well.

Our transformation gives efficient verifiable computation schemes that enable the delegation of high-degree polynomial computations on private (encrypted) function inputs. In particular, our scheme neither relies on the expensive primitives such as fully homomorphic encryption (FHE) and garbled circuits nor has to represent the function f as a Boolean circuit. Even for very small k and d, our schemes are the first ensuring security and privacy without using expensive primitives. We can easily extend 𝓣 such that it is not only applicable to privately verifiable schemes [6] but also applicable to publicly verifiable schemes [10, 16, 37]. Furthermore, 𝓣 never changes the verifiability of the underlying scheme Σ.

Implementation: Applying our transformation to the privately delegatable and verifiable computation scheme Σbgv for multivariate polynomials of bounded total degree from Benabbas, Gennaro and Vahlis [6] gives a new scheme Πbgv that achieves input and output privacy for the client. We implemented Πbgv with a cyclic group of order ≥ 2^1024, where the strong DDH assumption [6] is supposed to be true. Let Tc(⋅) and Ts(⋅) denote the average client running time and server running time. Our implementation shows that Tc(Πbgv) = O(Tc(Σbgv)) and Ts(Πbgv) = O(Ts(Σbgv)), where the constants hidden in the O-notation depend on k and d. The moderate efficiency loss stems from 𝓣, which adds privacy to Σbgv. In contrast, the FHE-based schemes [2, 4, 13, 18] achieve input privacy but provide no implementations for polynomial computations. More precisely, we implemented Πbgv on a Dell Optiplex 9020 desktop with an Intel Core i7-4790 processor running at 3.6 GHz, on which we run Ubuntu 16.04.1 with 4 GB of RAM and the g++ compiler version 5.4.0. All our programs are single-threaded and built on top of NTL (and GMP). In order to achieve 128-bit security, the underlying scheme Σbgv requires a cyclic group of order ≥ 2^1024, where the strong DDH assumption [6] is supposed to be true. We consider the computation of a 4-variate polynomial of total degree ≤ 6 at 504 points. The test shows that the client-side computations (Πbgv. ProbGen and Πbgv. Verify) can be done very efficiently with a total running time of 27.616 seconds, and the average client’s work for each of the 504 delegated computations is ≤ 0.88 milliseconds; the one-time work of running Πbgv. KeyGen takes 0.636 seconds. On the other hand, the server’s work of running Πbgv. Compute takes 4444.24 seconds, which gives an amortized cost of 8.818 seconds for each of the 504 function inputs. Compared with the cost of 0.142 seconds in the non-private scheme, this high cost is the price of converting a non-private scheme into one that achieves privacy. This cost becomes reasonable if the work of executing Πbgv. Compute is done in parallel. The performance of our implementation shows that the schemes resulting from our transformation are potentially practical.

4 Private delegation of matrix-vector multiplication

We interpret any matrix M = (Mi,j) as a function that takes a vector x as input and outputs Mx′, where x′ is the transpose of x. In this section, we present a verifiable computation scheme whose admissible function family consists of all matrix functions over a finite field and where the function input and output are kept private. Our construction is based on somewhat homomorphic encryption, a homomorphic hash and a PRF with amortized closed-form efficiency.

4.1 Somewhat homomorphic encryption

A somewhat homomorphic encryption scheme allows one to evaluate low-degree polynomials on encrypted data. Fiore, Gennaro and Pastro [17] described a slight variation HE = (ParamGen, KeyGen, Eval, Enc, Dec) of the somewhat homomorphic encryption scheme by Brakerski and Vaikuntanathan [8], based on the hardness of the polynomial learning with errors (LWE) problem. The variation is specialized to evaluate circuits of multiplicative depth 1 and is sketched below:

  • HE.ParamGen(λ): Given the security parameter λ, generate

    1. a message space 𝓜 = ℤp[X]/Φm(X), where Φm(X) ∈ ℤ[X] is the m-th cyclotomic polynomial of degree ϕ(m), where ϕ(⋅) is the Euler totient function,

    2. a ciphertext space 𝓒 ⊆ ℤq[X, Y] that consists of two kinds of elements:

      1. level-0 ciphertext: c = c0 + c1Y with c0, c1 ∈ ℤq[X]/Φm(X), where q > p, gcd(p, q) = 1 and degX(ci) ≤ ϕ(m) – 1 for i ∈ {0, 1},

      2. level-1 ciphertext: c = c0 + c1Y + c2Y2, where c0, c1, c2 ∈ ℤq[X] and degX(ci) ≤ 2(ϕ(m)–1) for i ∈ {0, 1, 2},

    3. two distributions: Dn,σ and ZOn.

  • HE.KeyGen(1λ): Choose a ← ℤq[X]/Φm(X) and s, e ← Dn,σ; compute b ← a⋅s + p⋅e; output dk = s and pk = (a, b).

  • HE.Encpk(m, r): Given m ∈ 𝓜 and r = (u, v, w) ← (ZOn, Dn,σ, Dn,σ), compute c0 ← b⋅u + p⋅w + m and c1 ← a⋅u + p⋅v; output c = c0 + c1Y.

  • HE.Evalpk(f, a, b): Given a, b ∈ 𝓒, where a = a0 + a1Y + a2Y2, b = b0 + b1Y + b2Y2, homomorphic additions and multiplications (when a2 = b2 = 0) are done over ℤq[X, Y]:

    1. (a0 + a1Y + a2Y2) + (b0 + b1Y + b2Y2) = (a0 + b0) + (a1 + b1)Y + (a2 + b2)Y2,

    2. (a0 + a1Y) ⋅ (b0 + b1Y) = a0b0 + (a0b1 + a1b0)Y + a1b1Y2.

  • HE.Decdk(c): Given c = c0 + c1Y + c2Y² ∈ 𝓒, compute ci ← ci mod Φm(X) for i = 0, 1, 2; compute t ∈ Rq as t ← c0 − s⋅c1 + s²⋅c2; output (t mod p).
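
The following is a toy, self-contained sketch in the spirit of HE above: plaintexts live in ℤp[X]/(X^N + 1) and ciphertexts are written as (c0, c1, c2) for c0 + c1Y + c2Y². The power-of-two cyclotomic X^N + 1, the tiny dimensions and the small/ternary noise are simplifications chosen for illustration and are not the parameters or distributions of [8, 17].

import random

N, p, q = 4, 17, 2**40          # ring degree, plaintext modulus, ciphertext modulus (toy)

def polymul(a, b, mod):          # multiplication in Z_mod[X]/(X^N + 1)
    res = [0] * N
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            k, sgn = (i + j) % N, (-1 if i + j >= N else 1)
            res[k] = (res[k] + sgn * x * y) % mod
    return res

def add(a, b):  return [(x + y) % q for x, y in zip(a, b)]
def smul(c, a): return [(c * x) % q for x in a]
def small():    return [random.randint(-3, 3) % q for _ in range(N)]      # "noise" polynomial
def ternary():  return [random.choice((-1, 0, 1)) % q for _ in range(N)]  # stands in for ZO_n

def keygen():
    a = [random.randrange(q) for _ in range(N)]
    s, e = small(), small()
    b = add(polymul(a, s, q), smul(p, e))            # b = a*s + p*e
    return s, (a, b)

def enc(pk, m):                                      # level-0 ciphertext c0 + c1*Y
    a, b = pk
    u, v, w = ternary(), small(), small()
    c0 = add(add(polymul(b, u, q), smul(p, w)), m)   # c0 = b*u + p*w + m
    c1 = add(polymul(a, u, q), smul(p, v))           # c1 = a*u + p*v
    return (c0, c1, [0] * N)

def he_add(c, d):  return tuple(add(x, y) for x, y in zip(c, d))
def he_mul(c, d):                                    # level-0 * level-0 -> level-1
    return (polymul(c[0], d[0], q),
            add(polymul(c[0], d[1], q), polymul(c[1], d[0], q)),
            polymul(c[1], d[1], q))

def dec(s, c):                                       # evaluate at Y = -s, lift, reduce mod p
    t = add(add(c[0], smul(-1, polymul(s, c[1], q))), polymul(polymul(s, s, q), c[2], q))
    centered = [x - q if x > q // 2 else x for x in t]
    return [x % p for x in centered]

s, pk = keygen()
m1, m2, m3 = [1, 2, 3, 4], [5, 6, 7, 0], [1, 0, 0, 0]
c = he_add(he_mul(enc(pk, m1), enc(pk, m2)), enc(pk, m3))   # decrypts to m1*m2 + m3
expected = [(x + y) % p for x, y in zip(polymul(m1, m2, p), m3)]
assert dec(s, c) == expected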

4.2 Homomorphic hash

A keyed homomorphic hash (H.KeyGen, H, H.Eval) is defined by three algorithms, where H.KeyGen generates two keys K (public) and κ (private), H uses K or κ to map any input μ ∈ 𝓓 to a digest HK(μ) ∈ 𝓡 and H.Eval allows homomorphic computations (addition “+”, multiplication “∗” and scalar multiplication “⋅”) over 𝓡. Let bgpp = (q, 𝔾1, 𝔾2, 𝔾T, e, g, h) be a tuple of bilinear group parameters, and let

𝓓 = {μ ∈ ℤq[X, Y] : deg_X(μ) ≤ 2(ϕ(m) − 1), deg_Y(μ) ≤ 2}.

The following homomorphic hash with domain 𝓓 and range 𝓡 = 𝔾1 × 𝔾2 (or 𝔾T) is from [17].

  • H.KeyGen(1λ): Choose α, β ← ℤq; output a public key

    K = {(g^{α^i β^j}, h^{α^i β^j}) : i ∈ {0, 1, 2}, j ∈ {0, 1, …, 2(ϕ(m) − 1)}}

    and a matching secret key κ = (α, β); both allow the computation of hash digest, and the latter usually makes the computation more efficient.

  • HK(μ): Given an input μ ∈ 𝓓, if deg_Y(μ) ≤ 1, then output (g^{μ(β,α)}, h^{μ(β,α)}) as the digest; if deg_Y(μ) = 2, then output e(g, h)^{μ(β,α)} as the digest. In particular, when deg_Y(μ) ≤ 1, we denote [HK(μ)]1 = g^{μ(β,α)} and [HK(μ)]2 = h^{μ(β,α)}.

  • H.Eval(f, ν1, ν2): This algorithm enables the homomorphic computations of arithmetic circuits f of degree ≤ 2 as below:

    1. ν1 = (t1, u1), ν2 = (t2, u2) ∈ 𝔾1 × 𝔾2, f = “+”: output (t1t2, u1u2);

    2. ν1 = (t1, u1) ∈ 𝔾1 × 𝔾2, ν2 = c ∈ ℤq, f = “⋅”: output (t1^c, u1^c);

    3. ν1 = (t1, u1), ν2 = (t2, u2) ∈ 𝔾1 × 𝔾2, f = “*”: output e(t1, u2) ∈ 𝔾T;

    4. ν1, ν2 ∈ 𝔾T, f = “+”: output ν1ν2 ∈ 𝔾T;

    5. ν1 ∈ 𝔾T, ν2 = c ∈ ℤq, f = “⋅”: output ν1^c ∈ 𝔾T.

The homomorphic hash H was shown to be collision-resistant under the ℓ-BDHI assumption. That is, when ℓ ≥ max{2(ϕ(m) − 1), 2}, for any (K, κ) ← H.KeyGen(1λ) and any adversary 𝓐 running in probabilistic polynomial time,

Pr[(μ ≠ μ′) ∧ (HK(μ) = HK(μ′)) : (μ, μ′) ← 𝓐(K)] ≤ negl(λ).

4.3 PRFs with amortized closed-form efficiency

A pseudorandom function (F.KG, F) is defined by two algorithms, where the key generation algorithm F.KG takes as input the security parameter 1λ and outputs a secret key k and some public parameters pp that specify domain 𝓧 and range 𝓡 of the function, and the function Fk(x) takes input x ∈ 𝓧 and uses the secret key k to compute a value R ∈ 𝓡. The PRF (F.KG, F) is said to be secure (satisfy the pseudorandomness property) if, for any p.p.t. adversary 𝓐,

|Pr[𝓐^{Fk(⋅)}(1^λ, pp) = 1] − Pr[𝓐^{Φ(⋅)}(1^λ, pp) = 1]| ≤ negl(λ),

where (k, pp) ← F.KG(1λ) and Φ: 𝓧 → 𝓡 is a random function.

Let C be a computation that takes as input n random values R1, …, Rn ∈ 𝓡 and a vector of m arbitrary values z = (z1, …, zm), and assume that the computation of C(R1, …, Rn; z1, …, zm) requires time t(n, m). Let L = ((ξ, η1), …, (ξ, ηn)) ∈ 𝓧^n and η = (η1, …, ηn). The PRF (F.KG, F) is said to satisfy amortized closed-form efficiency for (C, L) if there exist two polynomial-time algorithms CFEval^off_{C,η} and CFEval^on_{C,ξ} such that

  1. for any ω ← CFEval^off_{C,η}(k, z), it holds that CFEval^on_{C,ξ}(k, ω) = C({Fk(ξ, ηj)}_{j=1}^n; z),

  2. the running time of CFEval^on_{C,ξ}(k, ω) is o(t(n, m)).

Let f(x1, …, xn) = ∑_{i,j=1}^n αi,j xi xj + ∑_{i=1}^n βi xi be a degree-2 arithmetic circuit defined by f = {αi,j, βi}_{i,j=1}^n. Let C : (𝔾1 × 𝔾2)^n × ℤq^{n²+n} → 𝔾T be a computation defined by

C({(Xi, Yi)}_{i=1}^n, f) = ∏_{i,j=1}^n e(Xi, Yj)^{αi,j} ⋅ ∏_{i=1}^n e(Xi, h)^{βi}.

Let bgpp = (q, 𝔾1, 𝔾2, 𝔾T, e, g, h) be a tuple of bilinear group parameters, and let F be a PRF with domain {0, 1}^* and range ℤq². Fiore, Gennaro and Pastro [17] proposed a PRF with amortized closed-form efficiency for (C, L).

  • F.KG(1λ): Choose two secret keys k1, k2 for the PRF F; output k = (k1, k2) and pp, where pp defines the domain 𝓧 = ({0, 1}^*)² and the range 𝓡 = 𝔾1 × 𝔾2 (or 𝔾T).

  • Fk(ξ, η): Compute (u, v) ← Fk1(η) and (a, b) ← Fk2(ξ); output (g^{ua+vb}, h^{ua+vb}); in particular, we denote

    [Fk(ξ, η)]1 = g^{ua+vb},  [Fk(ξ, η)]2 = h^{ua+vb}.
  • CFEval^off_{C,η}(k, f): Compute (ui, vi) ← Fk1(ηi) for all i ∈ [n]; let

    ω(z1, z2) = f(u1 z1 + v1 z2, …, un z1 + vn z2);

    output the bivariate polynomial ω.

  • CFEval^on_{C,ξ}(k, ω): Compute (a, b) ← Fk2(ξ); output e(g, h)^{ω(a,b)}.
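
To see why the offline/online split gives an online cost that is independent of n, one can check the identity at the exponent level: since Xi = g^{ri} and Yi = h^{ri} with ri = ui⋅a + vi⋅b, bilinearity gives C({(Xi, Yi)}, f) = e(g, h)^{f(r1, …, rn)}, and f(u1 z1 + v1 z2, …, un z1 + vn z2) is a bivariate polynomial ω of degree 2 that can be expanded once, before (a, b) is known. The following toy check (plain modular arithmetic standing in for the pairing exponents; the modulus and dimension are illustrative assumptions) verifies this numerically.

import random

q, n = 2**61 - 1, 6                                        # toy group order (assumption)
u = [random.randrange(q) for _ in range(n)]                 # (u_i, v_i) <- F_{k1}(eta_i)
v = [random.randrange(q) for _ in range(n)]
alpha = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
beta = [random.randrange(q) for _ in range(n)]

# offline (CFEval^off): expand omega(z1, z2) = f(u_1 z1 + v_1 z2, ..., u_n z1 + v_n z2)
c20 = sum(alpha[i][j] * u[i] * u[j] for i in range(n) for j in range(n)) % q
c11 = sum(alpha[i][j] * (u[i] * v[j] + v[i] * u[j]) for i in range(n) for j in range(n)) % q
c02 = sum(alpha[i][j] * v[i] * v[j] for i in range(n) for j in range(n)) % q
c10 = sum(beta[i] * u[i] for i in range(n)) % q
c01 = sum(beta[i] * v[i] for i in range(n)) % q

# online (CFEval^on): evaluate omega at (a, b); this step does not depend on n
a, b = random.randrange(q), random.randrange(q)             # (a, b) <- F_{k2}(xi)
omega_ab = (c20*a*a + c11*a*b + c02*b*b + c10*a + c01*b) % q

# direct computation of the exponent of C({(X_i, Y_i)}, f)
r = [(u[i] * a + v[i] * b) % q for i in range(n)]
direct = (sum(alpha[i][j] * r[i] * r[j] for i in range(n) for j in range(n))
          + sum(beta[i] * r[i] for i in range(n))) % q

assert omega_ab == direct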

4.4 The construction

In this section, we present a private verifiable computation scheme Γ whose admissible function family consists of all matrix functions over a finite field. In this scheme, the function to be delegated is a square matrix M = (Mi,j) of order n, and the input is a vector x = (x1, …, xn) of dimension n; the server is required to compute and reply with an encoding of Mx′, where x′ is the transpose of x. The input (and output) privacy of Γ is attained by the client encrypting x (as HE.Enc(x)) and then giving it to the server. The somewhat homomorphic encryption scheme used here allows the server to perform homomorphic scalar multiplications and additions on the elements of M (in clear) and the ciphertext of x, which gives an encrypted version of Mx′ for the client. The server is able to compute the homomorphic hash digests of HE.Enc(x) and combine these digests with the tags of M to generate a proof that its computation is correct. The homomorphic property of the hash and the amortized closed-form efficiency property of the PRF make the client’s verification significantly faster than the computation of Mx′ from scratch. Below is the description of Γ.

  • (PKM, SKM) ← Γ.KeyGen(1λ, M):

    1. run HE.ParamGen(1λ) to choose message and ciphertext spaces 𝓜 = ℤp[X]/Φm(X) and 𝓒 ⊆ ℤq[X, Y].

    2. run HE.KeyGen(1λ) to generate an encryption key pk and a decryption key dk for the encryption scheme HE;

    3. choose a tuple bgpp = (q, 𝔾1, 𝔾2, 𝔾T, e, g, h) of bilinear map parameters;

    4. run H.KeyGen(1λ) to choose two keys K (public) and κ (private) for the homomorphic hash H;

    5. run F.KG(1λ) to generate a secret key k = (k1, k2) for the PRF F; choose a ← ℤq;

    6. compute Ti,j = g^{a Mi,j} ⋅ [Fk(i, j)]1 for all (i, j) ∈ [n]²;

    7. output PKM = (p, m, n, bgpp, pk, K, M, T = (Ti,j)), SKM = (dk, κ, k, a).

  • (σx, τx) ← Γ.ProbGen(SKM, x): let x = (x1, …, xn) ∈ ℤq^n;

    1. for every j∈ [n], compute μj ← HE.Encpk(xj);

    2. parse κ as (α, β) ∈ ℤq²; compute ω = ∑_{j=1}^n μj(β, α) ⋅ Fk1(j) ∈ ℤq²;

    3. let μ = (μ1, …, μn); output σx = μ and τx = ω.

  • σy ← Γ.Compute(PKM, σx): parse σx as μ = (μ1, …, μn) ∈ 𝓒n;

    1. for every i ∈ [n], compute γi = ∑_{j=1}^n Mi,j ⋅ μj;

    2. for every i ∈ [n], compute δi = ∏_{j=1}^n e(Ti,j, [HK(μj)]2);

    3. let γ = (γ1, …, γn); let δ = (δ1, …, δn); output σy = (γ, δ).

  • {y, ⊥} ← Γ.Verify(SKM, τx, σy):

    1. for every i ∈ [n], let Wi = e(g, h)^{⟨ω, Fk2(i)⟩}, where ⟨ω, Fk2(i)⟩ is the dot product of ω and Fk2(i); check whether the following identity holds:

      δi = e(g, h)^{a γi(β,α)} ⋅ Wi;   (*)
    2. if there is an i∈ [n] such that (*) does not hold, then output ⊥; otherwise, output

      y = (HE.Decdk(γ1), …, HE.Decdk(γn)).

Correctness: The correctness of Γ requires that, for any matrix function M, any key pair

(PKM, SKM) ← Γ.KeyGen(1^λ, M),

any function input x, any (σx, τx) ← ProbGen(SKM, x), if σy is output by the algorithm Compute(PKM, σx), then Verify(SKM, τx, σy) will always accept and output Mx. For every i∈ [n], if Compute was honestly executed, then it is not hard to verify that

δi = ∏_{j=1}^n e(g^{a Mi,j} ⋅ g^{⟨Fk1(j), Fk2(i)⟩}, h^{μj(β,α)}) = e(g, h)^{a γi(β,α)} ⋅ e(g, h)^{⟨ω, Fk2(i)⟩}.

Hence, the n equalities always hold, and σy will be accepted. Then the decryption correctness of HE gives (HE.Decdk(γ1), …, HE.Decdk(γn)) = Mx.
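
The identity above can also be checked numerically at the exponent level: writing every group element as a power of g (respectively h) and every pairing value as a power of e(g, h), it suffices to compare exponents modulo the group order. The following toy sketch does exactly that; the modulus, the dimension and the random field elements standing in for the hash exponents μj(β, α) are illustrative assumptions.

import random

q, n = 2**61 - 1, 5                                         # toy group order and dimension
M = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
a_key = random.randrange(q)                                 # the secret exponent a
u = [random.randrange(q) for _ in range(n)]                 # (u_j, v_j) <- F_{k1}(j)
v = [random.randrange(q) for _ in range(n)]
m = [random.randrange(q) for _ in range(n)]                 # m_j stands for mu_j(beta, alpha)

# client: omega = sum_j mu_j(beta, alpha) * F_{k1}(j), a vector in Z_q^2
omega = (sum(m[j] * u[j] for j in range(n)) % q, sum(m[j] * v[j] for j in range(n)) % q)

for i in range(n):
    ai, bi = random.randrange(q), random.randrange(q)       # (a_i, b_i) <- F_{k2}(i)
    # exponent of T_{i,j} = g^{a*M_ij} * [F_k(i, j)]_1, where log_g [F_k(i,j)]_1 = u_j*a_i + v_j*b_i
    t = [(a_key * M[i][j] + u[j] * ai + v[j] * bi) % q for j in range(n)]
    # server side: exponent of delta_i = prod_j e(T_{i,j}, [H_K(mu_j)]_2)
    delta_i = sum(t[j] * m[j] for j in range(n)) % q
    # client side: exponent of e(g, h)^{a*gamma_i(beta, alpha)} * W_i
    gamma_i = sum(M[i][j] * m[j] for j in range(n)) % q      # gamma_i(beta, alpha)
    W_i = (omega[0] * ai + omega[1] * bi) % q                # <omega, F_{k2}(i)>
    assert delta_i == (a_key * gamma_i + W_i) % q            # this is identity (*)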

Privacy: In the scheme Γ, the client’s input x is encrypted using HE, the slight variation of the somewhat homomorphic encryption scheme by Brakerski and Vaikuntanathan [8]. The encryption scheme is semantically secure based on the hardness of the polynomial learning with errors (LWE) problem. The input (and output) privacy of Γ follows from HE’s semantic security.

Security: The security of Γ requires that no adversary running in probabilistic polynomial time would be able to persuade the verification algorithm to accept and output wrong results.

Theorem 2

If F is a secure PRF and H is a collision-resistant homomorphic hash, then Γ is a secure verifiable computation scheme.

Proof

Let λ be any security parameter. Let M = (Mi, j) be any n × n matrix. Let 𝓐 be any p.p.t. adversary. We define the following security experiments.

  1. E0: This is the standard security experiment Exp_{Γ,𝓐}^Ver(M, λ) of Definition 2.

  2. E1: This experiment is identical to E0, except that, at step (d) of the standard security experiment, Wi is computed as Wi = ∏_{j=1}^n e([Fk(i, j)]1, h)^{μj(β,α)} for every i ∈ [n], instead of using the key ω for efficient verification (thereby avoiding the use of CFEval^on).

  3. E2: This is identical to E1, except that Fk is replaced with a random function R : ({0, 1}^*)² → 𝔾1 × 𝔾2.

Below is the description of E2.

  • (PKM, SKM) ← KeyGen(1λ, M):

    1. run HE.ParamGen(1λ) to generate (p, q, 𝓜, 𝓒), where q > p, gcd(p, q) = 1, 𝓜 = ℤp[X]/Φm(X) and 𝓒 ⊆ ℤq[X, Y] is the ciphertext space;

    2. run HE.KeyGen(1λ) to generate (pk, dk);

    3. choose bgpp = (q, 𝔾1, 𝔾2, 𝔾T, e, g, h) ← 𝓖(1λ);

    4. run H.KeyGen to generate keys (K, κ) for the homomorphic hash H;

    5. choose a random function R: ({0, 1}*)2 → 𝔾1 × 𝔾2; choose a← ℤq;

    6. compute Ti,j = g^{a Mi,j} ⋅ [R(i, j)]1 for all (i, j) ∈ [n]²; let T = (Ti,j);

    7. output PKM = (p, m, n, bgpp, pk, K, M, T) and SKM = (dk, κ, R, a).

  • For ℓ = 1 to L = poly(λ):

    1. xℓ ← 𝓐(PKM, {(xu, σ_{xu}, bu)}_{u=1}^{ℓ−1}): Based on its current view, 𝓐 chooses a new function input

      xℓ = (xℓ,1, …, xℓ,n) ∈ ℤq^n.
    2. (σ_{xℓ}, τ_{xℓ}) ← ProbGen(SKM, xℓ): An encoding and the associated verification key for xℓ are generated as below:

      1. for every j ∈ [n], compute μℓ,j ← HE.Encpk(xℓ,j);

      2. let μℓ = (μℓ,1, …, μℓ,n); output σ_{xℓ} = μℓ and τ_{xℓ} = ⊥.

      Note that τ_{xℓ} is neither computed nor used in experiment E2.

    3. σ̂_{yℓ} ← 𝓐(PKM, {(xu, σ_{xu}, bu)}_{u=1}^{ℓ−1}, xℓ, σ_{xℓ}): Based on its current view, 𝓐 provides a response σ̂_{yℓ} that consists of an encoding γ̂ℓ = (γ̂ℓ,1, …, γ̂ℓ,n) of the result and a proof δ̂ℓ = (δ̂ℓ,1, …, δ̂ℓ,n) in order to persuade Verify to accept and output a wrong result.

    4. ŷℓ ← Verify(SKM, τ_{xℓ}, σ̂_{yℓ}): The response σ̂_{yℓ} is verified as below:

      1. parse SKM as (dk, κ, R, a); parse σ̂_{yℓ} as γ̂ℓ = (γ̂ℓ,1, …, γ̂ℓ,n) ∈ (ℤq[X, Y])^n and

        δ̂ℓ = (δ̂ℓ,1, …, δ̂ℓ,n) ∈ 𝔾T^n;
      2. for every i ∈ [n], compute Wi = ∏_{j=1}^n e([R(i, j)]1, h)^{μℓ,j(β,α)}; check whether

        δ̂ℓ,i = e(g, h)^{a γ̂ℓ,i(β,α)} ⋅ Wi;   (*)
      3. if there is an i ∈ [n] such that (*) is not true, then output ŷℓ = ⊥; otherwise, output

        ŷℓ = (HE.Decdk(γ̂ℓ,1), …, HE.Decdk(γ̂ℓ,n)).

    5. Set bℓ = 1 if ŷℓ ≠ ⊥; otherwise, set bℓ = 0.

  • Output 1 if there exists ℓ ∈ [L] such that bℓ = 1 but ŷℓ ≠ Mxℓ; otherwise, output 0.

Let Pr[E0 = 1], Pr[E1 = 1] and Pr[E2 = 1] be the probabilities that 𝓐 wins in E0, E1 and E2, respectively. We need to show that Pr[E0 = 1] ≤ negl(λ). The only difference between E1 and E0 is that, in E1, the algorithm CFEval^on is not used. This does not change the probability that 𝓐 wins. Therefore, Pr[E1 = 1] = Pr[E0 = 1]. The only difference between E2 and E1 is that, in E2, the function Fk is replaced with a random function R. The security of F implies that E1 and E2 are computationally indistinguishable. Therefore, we have |Pr[E1 = 1] − Pr[E2 = 1]| ≤ negl(λ). To prove Pr[E0 = 1] ≤ negl(λ), it suffices to show that Pr[E2 = 1] ≤ negl(λ).

For every ℓ ∈ [L], let E2,ℓ be the event that ŷℓ ∉ {⊥, Mxℓ}, i.e., the event that 𝓐’s response

σ̂_{yℓ} = ((γ̂ℓ,1, …, γ̂ℓ,n), (δ̂ℓ,1, …, δ̂ℓ,n))

for xℓ suffices to persuade Verify to accept and output a wrong result for the computation of Mxℓ. More formally, E2,ℓ occurs if and only if

  • δ̂ℓ,i = e(g, h)^{a γ̂ℓ,i(β,α)} ⋅ ∏_{j=1}^n e([R(i, j)]1, h)^{μℓ,j(β,α)} for every i ∈ [n],

  • but ŷℓ = (HE.Decdk(γ̂ℓ,1), …, HE.Decdk(γ̂ℓ,n)) ≠ Mxℓ.

Then E2 = 1 occurs only if there is at least one ℓ ∈ [L] such that E2,ℓ occurs.

For xℓ ∈ ℤq^n and its encoding σ_{xℓ} = μℓ = (μℓ,1, …, μℓ,n) ∈ 𝓒^n, let

σ_{yℓ} = (γℓ, δℓ) = ((γℓ,1, …, γℓ,n), (δℓ,1, …, δℓ,n))

be the (correct) result and proof that could be computed by faithfully running Compute(PKM, σ_{xℓ}). Then the correctness of Γ guarantees that

  • δℓ,i = e(g, h)^{a γℓ,i(β,α)} ⋅ ∏_{j=1}^n e([R(i, j)]1, h)^{μℓ,j(β,α)} for every i ∈ [n],

  • yℓ = (HE.Decdk(γℓ,1), …, HE.Decdk(γℓ,n)) = Mxℓ.

For every ℓ ∈ [L], let Fℓ be the event that

  • δ̂ℓ,i/δℓ,i = e(g, h)^{a⋅(γ̂ℓ,i(β,α) − γℓ,i(β,α))} for every i ∈ [n],

  • but ŷℓ ≠ yℓ.

For every ℓ ∈ [L], the event E2,ℓ occurs only if the event Fℓ occurs. For every ℓ ∈ [L] and j ∈ [n], let Gℓ,j be the event that

  • δ̂ℓ,i/δℓ,i = e(g, h)^{a⋅(γ̂ℓ,i(β,α) − γℓ,i(β,α))} for every i ∈ [n],

  • but HE.Decdk(γ̂ℓ,j) ≠ HE.Decdk(γℓ,j).

Then Fℓ occurs only if there is at least one j ∈ [n] such that Gℓ,j occurs. For every ℓ ∈ [L] and j ∈ [n], let Hℓ,j be the event that

  • δ̂ℓ,i/δℓ,i = e(g, h)^{a⋅(γ̂ℓ,i(β,α) − γℓ,i(β,α))} for every i ∈ [n],

  • but γ̂ℓ,j ≠ γℓ,j.

Then Gℓ,j occurs only if Hℓ,j occurs. For every ℓ ∈ [L] and j ∈ [n], let Hℓ,j^0 be the event that γ̂ℓ,j ≠ γℓ,j and γ̂ℓ,j(β, α) = γℓ,j(β, α). Let Hℓ,j^1 be the event that δ̂ℓ,i/δℓ,i = e(g, h)^{a⋅(γ̂ℓ,i(β,α) − γℓ,i(β,α))} for every i ∈ [n], but γ̂ℓ,j(β, α) ≠ γℓ,j(β, α). Then it is clear that Hℓ,j occurs only if at least one of Hℓ,j^0 or Hℓ,j^1 occurs.

For every c ∈ {0, 1}, let Xc be a random variable that denotes the first index (ℓ, j) ∈ [L] × [n] such that Hℓ,j^c occurs. In both cases, we rank the elements (ℓ, j) ∈ [L] × [n] as

(1, 1) < ⋯ < (1, n) < (2, 1) < ⋯ < (2, n) < ⋯ < (L, n).

Due to the union bound, we have that

Pr[E2 = 1] ≤ ∑_{ℓ=1}^L Pr[E2,ℓ] ≤ ∑_{ℓ=1}^L Pr[Fℓ] ≤ ∑_{ℓ=1}^L ∑_{j=1}^n Pr[Gℓ,j] ≤ ∑_{ℓ=1}^L ∑_{j=1}^n Pr[Hℓ,j] ≤ ∑_{ℓ=1}^L ∑_{j=1}^n ∑_{c=0}^1 Pr[Hℓ,j^c] ≤ ∑_{ℓ=1}^L ∑_{j=1}^n ∑_{c=0}^1 ∑_{(ϕ,ψ) ≤ (ℓ,j)} Pr[Xc = (ϕ, ψ)].

Note that X0 = (ϕ, ψ) means that (ϕ, ψ) is the first index at which a collision of HK is found by the adversary. As H is collision-resistant, we must have Pr[X0 = (ϕ, ψ)] ≤ negl(λ). On the other hand, X1 = (ϕ, ψ) means that (ϕ, ψ) is the first index at which an equation in a is determined by 𝓐, and this equation reveals a to a computationally unbounded 𝓐. Note that a computationally unbounded 𝓐 can rule out one possibility of a via any one of the inequalities of the form δ̂ℓ,i/δℓ,i ≠ e(g, h)^{a⋅(γ̂ℓ,i(β,α) − γℓ,i(β,α))}. Therefore,

Pr[X1 = (ϕ, ψ)] ≤ 1/(q − ((ϕ − 1)n + ψ − 1)).

It follows that

Pr[E2 = 1] ≤ ∑_{ℓ=1}^L ∑_{i=1}^n ∑_{(ϕ,ψ) ≤ (ℓ,i)} (negl(λ) + 1/(q − ((ϕ − 1)n + ψ − 1))),

which is negligible in λ as q ≈ 2^λ, n = poly(λ) and L = poly(λ). □

Efficiency: A VC scheme is outsourceable if the time to encode the input and verify the output is smaller than the time to compute the function from scratch. For every x ∈ ℤq^n, the time spent on Γ.ProbGen(SKM, x) is equal to the time of computing {HE.Encpk(xi)}_{i=1}^n plus the time of computing ω (n PRF computations and O(n) field operations). For every σy, the time cost of Γ.Verify(SKM, τx, σy) is dominated by O(n) group operations and n executions of HE.Decdk. Note that computing Mx requires O(n²) field operations. When n is large enough, the client’s cost of running Γ.ProbGen and Γ.Verify is o(n²) and substantially less than that of computing Mx from scratch. Hence, Γ is outsourceable.

Implementation: We implemented the scheme Γ on a Dell Optiplex 9020 desktop with Intel Core i7-4790 Processor running at 3.6 GHz, on which we run Ubuntu 16.04.1 with 4 GB of RAM and the g++ compiler version 5.4.0. All our programs are single-threaded and built on top of GMP. We consider the multiplication between a random square matrix of n rows (columns) over a finite field of order > 2256 and a random vector of dimension n over the same field, for n = 100, 200, …, 1000. We record the client’s time of running Γ.ProbGen and Γ.Verify, and get Figure 3.

Figure 3: Performance of the matrix delegation scheme.

The experiment shows that, for n = 100, the client-side computation can be done in 0.89 seconds. If we use the scheme Γ in a natural way to delegate the multiplication of two 100 × 100 matrices, then the client-side computation can be done in at most 89 seconds. Parno, Howell, Gentry and Raykova [38] implemented the scheme of [18] to delegate the same computation. Their experiment shows that the client in [18] has to spend at least 10^11 seconds on problem generation and result verification. Compared with [18], the client in our scheme is thus faster by roughly nine orders of magnitude. The performance of our implementation shows that the scheme in this section is nearly practical.

5 Application

Our verifiable computation schemes have interesting applications in the design of outsourced two-party protocols such as outsourced private information retrieval (PIR). PIR [12] allows a client to retrieve any block fi of a database f = (f1, f2, …, fN) from a server such that i ∈ [N] is not revealed to the server. PIR can be achieved by the client downloading f but that requires a communication cost of O(N). There are PIR schemes [12, 31] in the semi-honest server model which achieve nontrivial communication cost o(N). Recently, outsourced PIR [25, 34] has been suggested to offload the PIR server computation [5] to cloud. Outsourcing requires PIRs that are secure against untrusted cloud servers which may not faithfully execute the schemes.

Solution based on 𝓣: We model the database as a bit string f = (f1, f2, …, fN) ∈ {0, 1}^N. Let E : [N] → {0, 1}^h be an injection such that, for every i ∈ [N], wt(E(i)) = d = ⌊(t − 1)/k⌋, where the integers h and d are chosen such that \binom{h}{d} ≥ N. The database (bit string) f is interpreted as the h-variate polynomial

f(x) = f(x1, x2, …, xh) = ∑_{j=1}^N fj ∏_{ℓ: E(j)ℓ = 1} xℓ.

It is trivial to see that f(E(j)) = fj for every j ∈ [N]. Suppose a number of clients are to retrieve s bits of the database, say f_{i1}, …, f_{is}, where i1, …, is ∈ [N]. The encodings a1 = E(i1), …, as = E(is) form a set of s function inputs. Our scheme 𝓣(Σ) from Section 3.3 allows the clients to produce a noisy encoding of a⃗ = (a1, …, as) and delegate the computations of {f_{ij}}_{j=1}^s = {f(aj)}_{j=1}^s to a PIR server such that a⃗ is kept private and any incorrect responses from the server will be detected. The amortized communication cost for each of the s retrievals is dominated by O(t) vectors from 𝔽^h, which gives a nontrivial outsourced PIR.
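
A small sketch of this database encoding is given below; the sizes are toy values and E simply enumerates the weight-d vectors in a fixed order, which is one possible (assumed) instantiation of the injection. Since all monomials have the same weight d, the monomial indexed by E(j) evaluates to 1 at E(i) only when i = j, so f(E(j)) = fj.

from itertools import combinations
import random

h, d = 6, 3                                   # need C(h, d) >= N
supports = list(combinations(range(h), d))    # E(j) is the j-th weight-d subset of [h]
N = 10
db = [random.randint(0, 1) for _ in range(N)] # the database f = (f_1, ..., f_N), 0-indexed here

def E(j):                                     # the 0/1 vector of weight d encoding index j
    x = [0] * h
    for pos in supports[j]:
        x[pos] = 1
    return x

def f(x):                                     # f(x) = sum_j f_j * prod_{l: E(j)_l = 1} x_l
    total = 0
    for j in range(N):
        term = db[j]
        for pos in supports[j]:
            term *= x[pos]
        total += term
    return total

assert all(f(E(j)) == db[j] for j in range(N))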

Solution based on Γ: We model the database as an n × n matrix M = (Mi,j). Suppose that the client is interested in Mi,j. Then it suffices for the client to retrieve the j-th column of M, i.e., (M1,j, …, Mn,j). This retrieval can be captured by Mx′ with x = ej = (0, …, 0, 1, 0, …, 0), the vector whose j-th component is 1 and all other components are 0. The scheme Γ allows the client to delegate the computation of Mx′ with x kept private and then verify the server’s response efficiently. In particular, the client and the server only need to communicate O(n) = o(n²) HE ciphertexts, which gives a nontrivial outsourced PIR.

Extension: By applying 𝓣 to the publicly delegatable and verifiable schemes of [37], one would obtain schemes that are publicly delegatable and verifiable as well. These schemes would allow one to store f on a cloud server, and later, any client can freely retrieve a block of f on its own. Our schemes can be also used in other two-party protocols, such as oblivious polynomial evaluation and oblivious transfer [36], in order to obtain schemes against malicious parties.

6 Conclusion

In this paper, we proposed a transformation that adds privacy to a number of existing verifiable outsourcing schemes for the function family of multivariate polynomials over finite fields. The transformation is based on a noisy encoding of the inputs and gives the first nearly practical verifiable computation scheme that has input (and output) privacy and does not limit the degree of the delegated polynomials. We also gave a verifiable computation scheme for the delegation of matrix-vector multiplication that has very efficient verification. We showed an application of our schemes to the outsourcing of PIR. Applying the schemes to other problems, such as oblivious polynomial evaluation and oblivious transfer, is an interesting direction for future work.

Funding: This work was supported by National Natural Science Foundation of China (No. 61602304).

A Security proof for the transformation 𝓣

Let f(x1, …, xh) be any h-variate polynomial, and let k be a security parameter. Consider the security definition, Definition 2. Let 𝓐 be any p.p.t. adversary attacking Γ, and let $\epsilon = \Pr[\mathrm{Exp}^{\mathrm{Ver}}_{\Gamma,\mathcal{A}}(f,k) = 1]$. We need to show that ϵ is a negligible function of k. This can be done by constructing a p.p.t. adversary 𝓑 that executes 𝓐 as a subroutine and attacks Σ successfully with at least the same probability, i.e., $\Pr[\mathrm{Exp}^{\mathrm{Ver}}_{\Sigma,\mathcal{B}}(f,k) = 1] \geq \epsilon$. Given (f, k), the adversary 𝓑 works as below (a schematic code sketch of the reduction is given after the itemized description):

  • first of all, 𝓑’s challenger computes $(pk_f, sk_f) \leftarrow \Sigma.\mathrm{KeyGen}(1^k, f)$ and then gives $pk_f$ to 𝓑;

  • the adversary 𝓑 invokes 𝓐 with $PK_f = pk_f$;

  • for i = 1 to q = q(k), the adversaries 𝓑, 𝓐 and the challenger of 𝓑 proceed as below:

    1. based on its current view, i.e., $(PK_f, \{\vec{a}_j, \sigma_{\vec{a}_j}, b_j\}_{j=1}^{i-1})$, the adversary 𝓐 produces a new set

      $$\vec{a}_i = (a_{i,1}, \ldots, a_{i,s}) \in (\mathbb{F}^h)^s$$

      of points and gives the set to 𝓑;

    2. the adversary 𝓑 computes $(pk_{\vec{a}_i}, rk_{\vec{a}_i}) \leftarrow \mathrm{NEnc}(1^k, \vec{a}_i)$ and parses $pk_{\vec{a}_i}$ as $\{c_{i,j}\}_{j=1}^{m}$;

    3. for j = 1 to m: the adversary 𝓑 gives $c_{i,j}$ to its challenger; the challenger computes

      $$(\sigma_{c_{i,j}}, \tau_{c_{i,j}}) \leftarrow \Sigma.\mathrm{ProbGen}(sk_f, c_{i,j})$$

      and gives $\sigma_{c_{i,j}}$ to 𝓑;

    4. the adversary 𝓑 gives $\sigma_{\vec{a}_i} = \{\sigma_{c_{i,j}}\}_{j=1}^{m}$ to the adversary 𝓐;

    5. based on its current view, i.e., $(PK_f, \{\vec{a}_j, \sigma_{\vec{a}_j}, b_j\}_{j=1}^{i-1}, \vec{a}_i, \sigma_{\vec{a}_i})$, the adversary 𝓐 produces $\{\hat{\sigma}_{f(c_{i,j})}\}_{j=1}^{m}$ and gives it to 𝓑;

    6. for j = 1 to m: 𝓑 gives $\hat{\sigma}_{f(c_{i,j})}$ to its challenger; the challenger gives $\vec{v}_{i,j} \leftarrow \Sigma.\mathrm{Verify}(sk_f, \tau_{c_{i,j}}, \hat{\sigma}_{f(c_{i,j})})$ to 𝓑;

    7. if there exists j ∈ [m] such that $\vec{v}_{i,j} = \bot$, then 𝓑 gives the bit $b_i = 0$ to 𝓐; otherwise, 𝓑 gives the bit $b_i = 1$ to 𝓐.
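
The structure of 𝓑 can be summarized by the following schematic Python sketch (our own illustration: the callables below stand in for Σ.ProbGen and Σ.Verify as answered by 𝓑’s challenger, for the noisy encoding NEnc, and for the two stages of 𝓐; none of the names refer to a concrete implementation).

```python
# Schematic sketch of the reduction B described above; all names are illustrative.
from typing import Any, Callable, List, Optional, Sequence, Tuple

BOT = None  # stands for the rejection symbol

def reduction_B(
    PK_f: Any,
    q: int,
    k: int,
    NEnc: Callable[[int, Sequence[Any]], Tuple[List[Any], Any]],
    challenger_prob_gen: Callable[[Any], Any],          # c_{i,j} -> sigma_{c_{i,j}}
    challenger_verify: Callable[[Any], Optional[Any]],  # sigma-hat -> v_{i,j} or BOT
    adversary_choose: Callable[[Any, list], Sequence[Any]],
    adversary_respond: Callable[[Any, list, Sequence[Any], List[Any]], List[Any]],
) -> None:
    """Run A for q rounds, answering its queries through B's own Sigma challenger."""
    view: list = []                                     # A's transcript so far
    for i in range(q):
        a_i = adversary_choose(PK_f, view)              # step 1: A picks s points
        pk_a, rk_a = NEnc(k, a_i)                       # step 2: noisy encoding of a_i
        c = list(pk_a)                                  # c_{i,1}, ..., c_{i,m}
        sigma = [challenger_prob_gen(c_ij) for c_ij in c]          # step 3
        forged = adversary_respond(PK_f, view, a_i, sigma)         # steps 4-5
        verdicts = [challenger_verify(s_hat) for s_hat in forged]  # step 6
        b_i = 0 if any(v is BOT for v in verdicts) else 1          # step 7
        view.append((a_i, sigma, b_i))                  # A sees b_i in the next round
    # B outputs nothing explicitly: it wins its own experiment as soon as some
    # verification query is answered with a wrong value that is not rejected.
```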

Adversary 𝓑 makes qm verification queries, i.e., $\{\hat{\sigma}_{f(c_{i,j})}\}_{i \in [q],\, j \in [m]}$, to its challenger. The event $\mathrm{Exp}^{\mathrm{Ver}}_{\Sigma,\mathcal{B}}(f,k) = 1$ occurs if and only if there exist i ∈ [q] and j ∈ [m] such that $\vec{v}_{i,j} \notin \{f(c_{i,j}), \bot\}$. The latter event occurs if there exists i ∈ [q] such that (1) $b_i = 1$ and (2) there is an ℓ ∈ [s] such that $Q_{i,\ell}(0) \neq f(a_{i,\ell})$, where $Q_{i,\ell}(z)$ is the polynomial interpolated from $\{\vec{v}_{i,j}\}_{j \in T_{i,\ell}}$ (here the sets $T_{i,0}, T_{i,1}, \ldots, T_{i,s}$, given by $rk_{\vec{a}_i}$, form a partition of [m]). We denote by E the last event. What 𝓐 observes in the experiment above is exactly what it would observe in the standard security experiment of Definition 2, i.e., $\mathrm{Exp}^{\mathrm{Ver}}_{\Gamma,\mathcal{A}}(f,k)$. Therefore,

$$\epsilon = \Pr[\mathrm{Exp}^{\mathrm{Ver}}_{\Gamma,\mathcal{A}}(f,k) = 1] = \Pr[E] \leq \Pr[\mathrm{Exp}^{\mathrm{Ver}}_{\Sigma,\mathcal{B}}(f,k) = 1].$$

By the security of Σ under Definition 2, $\Pr[\mathrm{Exp}^{\mathrm{Ver}}_{\Sigma,\mathcal{B}}(f,k) = 1]$ is negligible in the security parameter k, and hence so is ϵ.

References

[1] P. Ananth, N. Chandran, V. Goyal, B. Kanukurthi and R. Ostrovsky, Achieving privacy in verifiable computation with multiple servers—without FHE and without pre-processing, in: Public-Key Cryptography—PKC 2014, Lecture Notes in Comput. Sci. 8383, Springer, Heidelberg (2014), 149–166.

[2] B. Applebaum, Y. Ishai and E. Kushilevitz, From secrecy to soundness: Efficient verification via secure computation, in: Automata, Languages and Programming—ICALP 2010, Lecture Notes in Comput. Sci. 6198, Springer, Heidelberg (2010), 152–163.

[3] L. Babai, Trading group theory for randomness, in: Proceedings of the 17th Annual ACM Symposium on Theory of Computing—STOC’85, ACM, New York (1985), 421–429.

[4] M. Barbosa and P. Farshim, Delegatable homomorphic encryption with applications to secure outsourcing of computation, in: Topics in Cryptology—CT-RSA 2012, Lecture Notes in Comput. Sci. 7178, Springer, Heidelberg (2012), 296–312.

[5] A. Beimel, Y. Ishai and T. Malkin, Reducing the servers’ computation in private information retrieval: PIR with preprocessing, in: Advances in Cryptology—CRYPTO 2000, Lecture Notes in Comput. Sci. 1880, Springer, Berlin (2000), 55–73.

[6] S. Benabbas, R. Gennaro and Y. Vahlis, Verifiable delegation of computation over large datasets, in: Advances in Cryptology—CRYPTO 2011, Lecture Notes in Comput. Sci. 6841, Springer, Heidelberg (2011), 111–131.

[7] D. Bleichenbacher, A. Kiayias and M. Yung, Decoding of interleaved Reed Solomon codes over noisy data, in: Automata, Languages and Programming, Lecture Notes in Comput. Sci. 2719, Springer, Berlin (2003), 97–108.

[8] Z. Brakerski and V. Vaikuntanathan, Fully homomorphic encryption from ring-LWE and security for key dependent messages, in: Advances in Cryptology—CRYPTO 2011, Lecture Notes in Comput. Sci. 6841, Springer, Heidelberg (2011), 505–524.

[9] R. Canetti, B. Riva and G. Rothblum, Practical delegation of computation using multiple servers, in: Proceedings of the 18th ACM Conference on Computer and Communications Security—CCS’11, ACM, New York (2011), 445–454.

[10] D. Catalano, D. Fiore, R. Gennaro and K. Vamvourellis, Algebraic (trapdoor) one-way functions: Constructions and applications, in: Theory of Cryptography—TCC 2013, Lecture Notes in Comput. Sci. 7785, Springer, Heidelberg (2013), 680–699.

[11] S. G. Choi, J. Katz, R. Kumaresan and C. Cid, Multi-client non-interactive verifiable computation, in: Theory of Cryptography—TCC 2013, Lecture Notes in Comput. Sci. 7785, Springer, Heidelberg (2013), 499–518.

[12] B. Chor, O. Goldreich, E. Kushilevitz and M. Sudan, Private information retrieval, in: Proceedings of the 36th Annual Symposium on Foundations of Computer Science—FOCS’95, IEEE Press, Piscataway (1995), 41–50.

[13] K.-M. Chung, Y. Kalai and S. Vadhan, Improved delegation of computation using fully homomorphic encryption, in: Advances in Cryptology—CRYPTO 2010, Lecture Notes in Comput. Sci. 6223, Springer, Berlin (2010), 483–501.

[14] K.-M. Chung, Y. T. Kalai, F.-H. Liu and R. Raz, Memory delegation, in: Advances in Cryptology—CRYPTO 2011, Lecture Notes in Comput. Sci. 6841, Springer, Heidelberg (2011), 151–168.

[15] D. Coppersmith and M. Sudan, Reconstructing curves in three (and higher) dimensional space from noisy data, in: Proceedings of the 35th Annual ACM Symposium on Theory of Computing, ACM, New York (2003), 136–142.

[16] D. Fiore and R. Gennaro, Publicly verifiable delegation of large polynomials and matrix computations, with applications, in: Proceedings of the 2012 ACM Conference on Computer and Communications Security—CCS 2012, ACM, New York (2012), 501–512.

[17] D. Fiore, R. Gennaro and V. Pastro, Efficiently verifiable computation on encrypted data, in: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security—CCS’14, ACM, New York (2014), 844–855.

[18] R. Gennaro, C. Gentry and B. Parno, Non-interactive verifiable computing: Outsourcing computation to untrusted workers, in: Advances in Cryptology—CRYPTO 2010, Lecture Notes in Comput. Sci. 6223, Springer, Berlin (2010), 465–482.

[19] O. Goldreich, R. Rubinfeld and M. Sudan, Learning polynomials with queries: The highly noisy case, in: Proceedings of the 36th Annual Symposium on Foundations of Computer Science, IEEE Press, Los Alamitos (1995), 294–303.

[20] S. Goldwasser, Y. T. Kalai, R. A. Popa, V. Vaikuntanathan and N. Zeldovich, How to run Turing machines on encrypted data, in: Advances in Cryptology—CRYPTO 2013. Part II, Lecture Notes in Comput. Sci. 8043, Springer, Heidelberg (2013), 536–553.

[21] S. Goldwasser, Y. T. Kalai and G. N. Rothblum, Delegating computation: Interactive proofs for muggles, in: Proceedings of the 40th Annual ACM Symposium on Theory of Computing—STOC’08, ACM, New York (2008), 113–122.

[22] S. Goldwasser, S. Micali and C. Rackoff, The knowledge complexity of interactive proof systems, SIAM J. Comput. 18 (1989), no. 1, 186–208.

[23] V. Guruswami and A. Rudra, Explicit capacity-achieving list-decodable codes, in: Proceedings of the 38th Annual ACM Symposium on Theory of Computing—STOC’06, ACM, New York (2006), 1–10.

[24] V. Guruswami and M. Sudan, Improved decoding of Reed–Solomon and algebraic-geometry codes, IEEE Trans. Inform. Theory 45 (1999), no. 6, 1757–1767.

[25] Y. Huang and I. Goldberg, Outsourced private information retrieval, in: Proceedings of the 12th ACM Workshop on Privacy in the Electronic Society—WPES’13, ACM, New York (2013), 119–130.

[26] Y. Ishai, E. Kushilevitz, R. Ostrovsky and A. Sahai, Cryptography from anonymity, in: Proceedings of the 47th Annual Symposium on Foundations of Computer Science—FOCS’06, IEEE Press, Piscataway (2006), 239–248.

[27] C. Joo and A. Yun, Homomorphic authenticated encryption secure against chosen-ciphertext attack, in: Advances in Cryptology—ASIACRYPT 2014. Part II, Lecture Notes in Comput. Sci. 8874, Springer, Heidelberg (2014), 173–192.

[28] A. Kiayias and M. Yung, Cryptographic hardness based on the decoding of Reed–Solomon codes, IEEE Trans. Inform. Theory 54 (2008), no. 6, 2752–2769.

[29] J. Kilian, A note on efficient zero-knowledge proofs and arguments, in: Proceedings of the 24th Annual ACM Symposium on Theory of Computing—STOC’92, ACM, New York (1992), 723–732.

[30] J. Kilian, Improved efficient arguments, in: Advances in Cryptology—CRYPTO ’95, Lecture Notes in Comput. Sci. 963, Springer, Heidelberg (1995), 311–324.

[31] E. Kushilevitz and R. Ostrovsky, Replication is not needed: Single database, computationally-private information retrieval, in: Proceedings of the 38th Annual Symposium on Foundations of Computer Science—FOCS’97, IEEE Press, Piscataway (1997), 364–373.

[32] B. Libert, T. Peters, M. Joye and M. Yung, Linearly homomorphic structure-preserving signatures and their applications, in: Advances in Cryptology—CRYPTO 2013. Part II, Lecture Notes in Comput. Sci. 8043, Springer, Heidelberg (2013), 289–307.

[33] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. I, North-Holland, Amsterdam, 1977.

[34] T. Mayberry, E.-O. Blass and A. H. Chan, PIRMAP: Efficient private information retrieval for MapReduce, in: Financial Cryptography and Data Security—FC’13, Lecture Notes in Comput. Sci. 7859, Springer, Heidelberg (2013), 371–385.

[35] S. Micali, CS proofs, in: Proceedings of the 35th Annual Symposium on Foundations of Computer Science—FOCS’94, IEEE Press, Piscataway (1994), 436–453.

[36] M. Naor and B. Pinkas, Oblivious polynomial evaluation, SIAM J. Comput. 35 (2006), no. 5, 1254–1281.

[37] C. Papamanthou, E. Shi and R. Tamassia, Signatures of correct computation, in: Theory of Cryptography—TCC 2013, Lecture Notes in Comput. Sci. 7785, Springer, Heidelberg (2013), 222–242.

[38] B. Parno, J. Howell, C. Gentry and M. Raykova, Pinocchio: Nearly practical verifiable computation, in: IEEE Symposium on Security and Privacy, IEEE Press, Piscataway (2013), 238–252.

[39] B. Parno, M. Raykova and V. Vaikuntanathan, How to delegate and verify in public: Verifiable computation from attribute-based encryption, in: Theory of Cryptography—TCC 2012, Lecture Notes in Comput. Sci. 7194, Springer, Heidelberg (2012), 422–439.

[40] F. Parvaresh and A. Vardy, Correcting errors beyond the Guruswami–Sudan radius in polynomial time, in: Proceedings of the 46th Annual Symposium on Foundations of Computer Science—FOCS’05, IEEE Press, Piscataway (2005), 285–294.

[41] M. Sudan, Decoding of Reed Solomon codes beyond the error-correction bound, J. Complexity 13 (1997), no. 1, 180–193.

[42] T. Tassa, A. Jarrous and Y. Ben-Ya’akov, Oblivious evaluation of multivariate polynomials, J. Math. Cryptol. 7 (2013), no. 1, 1–29.

[43] D. Woodruff and S. Yekhanin, A geometric approach to information-theoretic private information retrieval, SIAM J. Comput. 37 (2007), no. 4, 1046–1056.

Received: 2018-08-12
Accepted: 2019-08-27
Published Online: 2020-07-03

© 2020 Liang Feng Zhang, Reihaneh Safavi-Naini, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
