1 Introduction

If a general-purpose quantum computer can be built, it will break most widely-deployed public-key cryptography. The cryptographic community is busily designing new cryptographic systems to prepare for this risk. These systems typically consist of an algebraic structure with cryptographic hardness properties, plus a symmetric cryptography layer which transforms the algebraic structure into a higher level primitive like a public-key encryption (PKE) scheme, a key encapsulation mechanism (KEM), or a signature scheme. The algebraic structures underlying these so-called “post-quantum” systems have new properties, and the quantum threat model requires changes in the way security is analyzed. Therefore the transformations turning the algebraic structures into cryptosystems have to be freshly examined.

In this work we focus on the construction of secure KEMs. In this setting the algebraic structures usually provide a PKE from which a KEM is derived via a generic transform. A new property of the algebraic structures used in many post-quantum PKEs and KEMs gives them malleable ciphertexts, so they are at risk from chosen-ciphertext attacks (CCA) [HNP+03]. The standard defenses against CCA are variants of the Fujisaki-Okamoto (FO) transform [FO99]. Known security proofs for the FO transform use the random oracle model (ROM) [BR93]. This is for two reasons. First, the FO transform has a circular structure: it chooses the coins for encryption according to the message being encrypted. This leads to obstacles which we do not know how to overcome when proving security in the standard model. In the ROM, we circumvent this by re-programming. Second, in the ROM a reduction learns all the adversary’s queries to the random oracle. This allows us to formalize the intuition that an adversary must have known (that is, queried) a challenge plaintext in order to extract it.

Since we are concerned with security against quantum attackers, we need to extend these proofs to the quantum-accessible random oracle model (QROM) [BDF+11]. This raises two challenges for our setting. First, in the QROM the adversary can query all inputs in superposition. Hence, it is no longer trivial to break the circular dependency by re-programming, which results in security bounds that do not tightly match known attacks. Second, a reduction can no longer learn the adversarial queries by simple observation, because observing a quantum state requires a measurement which disturbs that state. Hence, more advanced techniques are required.

1.1 Our Contribution

QROM analysis of KEMs has advanced rapidly over the past several years. The initial solutions were loose by a factor of up to \(q^6\) [TU16, HHK17], where q is the number of times the adversary queries the random oracle. This has improved to \(q^2\) [SXY18, JZC+18] and finally to q [HKSU18, JZM19a, JZM19c]. Some works provide tight proofs under stronger assumptions [SXY18, XY19]. Our work provides a proof of IND-CCA security for KEMs constructed from deterministic PKEs (Theorem 2), which is tight except for a quadratic security loss that might be impossible to avoid [JZM19b]. For KEMs constructed from randomized PKEs our bound is still loose by a factor of up to q (Theorem 1); in this case, our bound does not essentially differ from the one already given in [HKSU18]. The proof in [HKSU18] is called “semi-modular”: it first shows that derandomization and puncturing achieve the stronger notion that [SXY18] requires for tight security, and then applies the tight proof of [SXY18] to the derandomized and punctured scheme. This strategy was deliberately chosen to deal with correctness errors: the tight proof of [SXY18] could not trivially be generalized to imperfectly correct schemes in a way that would still be meaningful for most lattice-based encryption schemes. Our work deals with correctness errors in a modular way by introducing an additional intermediate notion (called \(\mathsf {FFC} \)).

At the heart of our bound is a new one-way to hiding (O2H) lemma which gives a tighter bound than previous O2H lemmas (Lemma 5). This comes at the cost of limited applicability. O2H lemmas bound the change in an adversary’s success probability when its oracle function is replaced by a similar function. Previous lemmas lose a factor of roughly the number of the adversary’s queries to this oracle, or its square root. Our lemma does not incur any such loss. On the downside, it only applies if the reduction has access to both oracle functions and if the functions differ in only one position. See Table 1 for a comparison.

Some post-quantum schemes feature an inherent probability of decryption failure, say \(\delta >0\). Such failures can be used in attacks, but they also complicate security proofs. As a result, previous bounds typically contain a term \(q\sqrt{\delta }\) which is not known to be tight. However, most of the obstacles that arise in our CCA security proof can be avoided by assuming that encryption with a particular public key is injective (after derandomization). This is generally the case, even for imperfectly-correct systems; see Appendix D for a rough analysis of LWE schemes. In that case, the adversary’s advantage is limited to the probability that it actually finds and submits a valid ciphertext that fails to decrypt. This means that our bounds apply to deterministic but failure-prone systems like certain earlier BIKE variants [ABB+19], but our result is limited by the assumption of injectivity.

To date, several variants of the FO transform have been proposed. We consider the four basic transforms \(U^{\bot },U^{\bot }_m,U^{\not \bot },U^{\not \bot }_m\) [HHK17] and, in addition, we study \(U^{\bot }_m\) in the presence of key confirmation. The two most notable differences are the use of implicit rejection (\(U^{\not \bot }, U^{\not \bot }_m\)) versus explicit rejection (\(U^{\bot }, U^{\bot }_m\)), and whether the derivation of the session key depends on the ciphertext (\(U^{\bot }, U^{\not \bot }\)) or not (\(U^{\bot }_m, U^{\not \bot }_m\)). Another important design decision is the use of key confirmation, which we also partially analyze. We come to the following results. Security with implicit rejection implies security with explicit rejection (Theorem 3). The converse holds if the scheme with explicit rejection also employs key confirmation (Theorem 4). Moreover, security is independent of whether the session-key derivation depends on the ciphertext (Theorem 5).

Notation. We will use the following notation throughout the paper.

  • For two sets X and Y, we write \(Y^X\) to denote the set of functions from X to Y.

  • Let \(H:X \rightarrow Y\) be a (classical or quantum-accessible) random oracle. Then we denote the programming of H at \(x\in X\) to some \(y\in Y\) as \(H[x \rightarrow y]\).

  • Let \(\mathcal {A} \) be an algorithm. If \(\mathcal {A} \) has access to a classical (resp., quantum-accessible) oracle H, we write \(\mathcal {A} ^H\) and call \(\mathcal {A} \) an oracle (resp., quantum oracle) algorithm.

2 One-way to Hiding

ROM reductions typically simulate the random oracle in order to learn the adversary’s queries. In the classical ROM, the adversary cannot learn any information about H(x) without the simulator learning both x and H(x). In the QROM things are not so simple, because measuring or otherwise recording the queries might collapse the adversary’s quantum state and change its behavior. However, under certain conditions the simulator can learn the queries using “One-way to Hiding” (O2H) techniques going back to [Unr15]. We will use the O2H techniques from [AHU19], and introduce a novel variant that allows for tighter results.

Consider two quantum-accessible oracles \(G,H:X\rightarrow Y\). The oracles do not need to be random. Suppose that G and H differ only on some small set \(S\subset X\), meaning that \(\forall x\notin S, G(x) = H(x)\). Let \(\mathcal {A} \) be an oracle algorithm that takes an input z and makes at most q queries to G or H, possibly making some of them in parallel. Accordingly, suppose that the query depth, i.e., the maximum number of sequential invocations of the oracle [AHU19], is at most \(d\le q\). If \(\mathcal {A} ^G(z)\) behaves differently from \(\mathcal {A} ^H(z)\), then the O2H techniques give the simulator a way to find some \(x\in S\), with probability depending on d and q.

We will use the following three O2H lemmas.

  • Lemma 1 (original O2H) is the most general: the simulator needs to provide only G or H but it has the least probability of success.

  • Lemma 3 (semiclassical O2H) has a greater probability of success, but requires more from the simulator: for each query x, the simulator must be able to recognize whether \(x\in S\), and if not it must return \(G(x)=H(x)\).

  • Lemma 5 (our new “double-sided” O2H) gives the best probability of success, but it requires the simulator to evaluate both G and H in superposition. It also can only extract \(x\in S\) if S has a single element. If S has many elements, but the simulator knows a function f such that \(\{f(x):x\in S\}\) has a single element, then it can instead extract that element f(x).

We summarize the three variants of O2H as shown in Table 1. In all cases, there are two oracles H and G that differ in some set S, and the simulator outputs \(x\in S\) with some probability \(\epsilon \). The lemma then shows an upper bound on the difference between \(\mathcal {A} ^{H}\) and \(\mathcal {A} ^{G}\) as a function of \(\epsilon \).

Table 1. Comparison of O2H variants

Arbitrary joint distribution. The O2H lemmas allow \((G,H,S,z)\) to be random with arbitrary joint distribution. This is stronger than \((G,H,S,z)\) being arbitrary fixed objects, because the probabilities in the lemma include the choice of \((G,H,S,z)\) in addition to \(\mathcal {A} \)’s coins and measurements. Also, the lemmas are still true if the adversary consults other oracles which are also drawn from a joint distribution with \((G,H,S,z)\).

2.1 Original O2H

We begin with the original O2H which first appeared in [Unr15]. We use the phrasing from [AHU19] as it is more general and more consistent with our other lemmata.

Lemma 1

(One-way to hiding; [AHU19] Theorem 3). Let \(G,H:X\rightarrow Y\) be random functions, let z be a random value, and let \(S\subset X\) be a random set such that \(\forall x\notin S, G(x)=H(x)\). \((G,H,S,z)\) may have arbitrary joint distribution. Furthermore, let \(\mathcal {A} ^H\) be a quantum oracle algorithm which queries H with depth at most d. Let \(\mathsf {Ev}\) be an arbitrary classical event. Define an oracle algorithm \(\mathcal {B} ^H(z)\) as follows: Pick i uniformly at random from \(\{1,\dots ,d\}\). Run \(\mathcal {A} ^H(z)\) until just before its ith round of queries to H. Measure all query input registers in the computational basis, and output the set T of measurement outcomes. Let

$$\begin{aligned} P_\mathrm {left}&:= \Pr [\mathsf {Ev}: \mathcal {A} ^H(z)],\ \ P_\mathrm {right} := \Pr [\mathsf {Ev}: \mathcal {A} ^G(z)], \\P_\mathrm {guess}&:= \Pr [S\cap T\ne \varnothing : T\leftarrow \mathcal {B} ^H(z)]. \end{aligned}$$

Then

$$ {\left| {P_\mathrm {left}-P_\mathrm {right}}\right| } \le 2d\sqrt{ P_\mathrm {guess}} \qquad \text {and}\qquad {\left| {\sqrt{P_\mathrm {left}}-\sqrt{P_\mathrm {right}}}\right| } \le 2d\sqrt{ P_\mathrm {guess}}. $$

The same result holds with \(\mathcal {B} ^{G}(z)\) instead of \(\mathcal {B} ^{H}(z)\) in the definition of \(P_\mathrm {guess}\).

From this lemma we conclude the following result for pseudo-random functions (PRFs, see Definition 10). It intuitively states that a random oracle makes a good PRF, even if the distinguisher is given full access to the random oracle in addition to the PRF oracle.

Corollary 1 (PRF based on random oracle)

Let \(H:(K\times X)\rightarrow Y\) be a quantum-accessible random oracle. This function may be used as a quantum-accessible PRF \(F_k(x) := H(k,x)\) with a key k drawn uniformly at random from K. Suppose a PRF-adversary \(\mathcal {A} \) makes q queries to H at depth d, and any number of queries to \(F_k\) at any depth. Then

$$\mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{F_k}(\mathcal {A}) \le 2\sqrt{dq/{\left| {K}\right| }}.$$

Proof

The adversary’s goal is to distinguish \((F_k,H)\) from \((F,H)\), where F is an unrelated uniformly random function. This is the same as distinguishing \((F,H[(k,x)\rightarrow F(x)])\) from \((F,H)\), and the set of differences between these two H-oracles is \(S:=\{k\}\times X\). By Lemma 1, the distinguishing advantage is at most \(2d\sqrt{P_\mathrm {guess}}\), where \(P_\mathrm {guess} = \Pr [\exists (k',x) \in Q: k'=k]\), for a random round Q of parallel queries made by \(\mathcal {A} ^{F,H}\).

Since \(\mathcal {A} ^{F,H}\) has no information about k, and in expectation Q contains q/d parallel queries, we have \(P_\mathrm {guess} \le q/(d\cdot {\left| {K}\right| })\), so

$$\mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{F_k}(\mathcal {A}) \le 2d\sqrt{q/(d\cdot {\left| {K}\right| })} = 2\sqrt{dq/{\left| {K}\right| }}$$

as claimed.    \(\square \)

Note that Corollary 1 is the same as [SXY18] Lemma 2.2 and [XY19] Lemma 4, except that it takes query depth into account.
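For concreteness, here is a worked instantiation of the bound with illustrative parameters chosen arbitrarily (not taken from any particular scheme): with \({\left| {K}\right| }=2^{256}\), \(q=2^{128}\) and \(d=2^{64}\),

$$\mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{F_k}(\mathcal {A}) \le 2\sqrt{dq/{\left| {K}\right| }} = 2\sqrt{2^{64}\cdot 2^{128}/2^{256}} = 2\sqrt{2^{-64}} = 2^{-31},$$

whereas the corresponding depth-unaware bound \(2q/\sqrt{{\left| {K}\right| }} = 2\) would be vacuous for these parameters.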

2.2 Semi-classical O2H

We now move on to semi-classical O2H. Here \(\mathcal {B} \) is defined in terms of punctured oracles [AHU19], which measure whether the input is in a set S as defined next.

Definition 1

(Punctured oracle). Let \(H:X\rightarrow Y\) be any function, and \(S\subset X\) be a set. The oracle \({H}\backslash {S} \) (“H punctured by S”) takes as input a value x. It first computes whether \(x\in S\) into an auxiliary qubit p, and measures p. Then it runs H(x) and returns the result. Let \(\mathsf {Find} \) be the event that any of the measurements of p returns 1.

The event is called \(\mathsf {Find} \) because if the simulator chooses to, it can immediately terminate the simulation and measure the value \(x\in S\) which caused the event. The oracle is called “punctured” because if \(\mathsf {Find} \) does not occur, \({H}\backslash {S} \) returns a result independent of H’s outputs on S, as shown by the following lemma.
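As a toy illustration of this bookkeeping, the following Python sketch implements only the classical-query analogue of \({H}\backslash {S} \): it records whether any query hit S (the \(\mathsf {Find} \) event) and otherwise answers with H. It does not capture the quantum case, where the membership test is an actual measurement that can disturb superposition queries; the class and function names are placeholders chosen for illustration.

```python
class PuncturedOracle:
    """Classical-query sketch of the punctured oracle H\\S (Definition 1)."""

    def __init__(self, H, S):
        self.H = H          # underlying function H: X -> Y
        self.S = set(S)     # punctured set S
        self.find = False   # records whether the event Find has occurred

    def query(self, x):
        if x in self.S:     # "compute whether x is in S, and measure"
            self.find = True
        return self.H(x)    # then evaluate H as usual

# Example: puncture a toy hash function at S = {42}.
oracle = PuncturedOracle(H=lambda x: (x * 2654435761) % 2**32, S={42})
oracle.query(7)
oracle.query(42)
print(oracle.find)          # True: the second query landed in S
```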

Lemma 2

(Puncturing is effective; [AHU19] Lemma 1). Let \(G,H:X\rightarrow Y\) be random functions, let z be a random value, and let \(S\subset X\) be a random set such that \(\forall x\notin S, G(x)=H(x)\). \((G,H,S,z)\) may have arbitrary joint distribution. Let \(\mathcal {A} ^H\) be a quantum oracle algorithm. Let \(\mathsf {Ev}\) be an arbitrary classical event. Then

$$\Pr [\mathsf {Ev}\wedge \lnot \mathsf {Find}:\mathcal {A} ^{{H}\backslash {S}}(z)] = \Pr [\mathsf {Ev}\wedge \lnot \mathsf {Find}:\mathcal {A} ^{{G}\backslash {S}}(z)].$$

Also, puncturing only disturbs the adversary’s state when it is likely to \(\mathsf {Find}\).

Lemma 3

(Semi-classical O2H; [AHU19] Theorem 1). Let \(G,H:X\rightarrow Y\) be random functions, let z be a random value, and let \(S\subset X\) be a random set such that \(\forall x\notin S, G(x)=H(x)\). \((G,H,S,z)\) may have arbitrary joint distribution.

Let \(\mathcal {A} ^H\) be a quantum oracle algorithm which queries H with depth at most d. Let \(\mathsf {Ev}\) be an arbitrary classical event and let

$$\begin{aligned}&P_\mathrm {left} := \Pr [\mathsf {Ev} : \mathcal {A} ^H(z)], \ P_\mathrm {right} := \Pr [\mathsf {Ev} : \mathcal {A} ^G(z)], \\&P_\mathrm {find} := \Pr [\mathsf {Find}: \mathcal {A} ^{{G}\backslash {S}}(z)] \ \ \,\,{\mathop {=}\limits ^{\text {Lem. 2}}}\ \ \Pr [\mathsf {Find}: \mathcal {A} ^{{H}\backslash {S}}(z)]. \end{aligned}$$

Then

$$ {\left| {P_\mathrm {left}-P_\mathrm {right}}\right| } \le 2\sqrt{d P_\mathrm {find}} \qquad \text {and}\qquad {\left| {\sqrt{P_\mathrm {left}}-\sqrt{P_\mathrm {right}}}\right| } \le 2\sqrt{d P_\mathrm {find}}. $$

The theorem also holds with bound \(\sqrt{(d+1) P_\mathrm {find}}\) for the following alternative definitions of \(P_\mathrm {right}\):

$$\begin{aligned} P_\mathrm {right}&:= \Pr [\mathsf {Ev}: \mathcal {A} ^{{H}\backslash {S}}(z)] \\ P_\mathrm {right}&:= \Pr [\mathsf {Ev} \wedge \lnot \mathsf {Find}: \mathcal {A} ^{{H}\backslash {S}}(z)] \ \ {\mathop {=}\limits ^{\text {Lem. 2}}}\ \ \Pr [\mathsf {Ev} \wedge \lnot \mathsf {Find}: \mathcal {A} ^{{G}\backslash {S}}(z)] \\ P_\mathrm {right}&:= \Pr [\mathsf {Ev} \vee \,\,\,\, \mathsf {Find}: \mathcal {A} ^{{H}\backslash {S}}(z)] \ \ {\mathop {=}\limits ^{\text {Lem. 2}}}\ \ \Pr [\mathsf {Ev} \vee \,\,\,\, \mathsf {Find}: \mathcal {A} ^{{G}\backslash {S}}(z)] \\ \end{aligned}$$

We might expect that if the adversary has no information about S, then \(P_\mathrm {find}\) would be at most \(q{\left| {S}\right| }/{\left| {X}\right| }\). But this is not quite true: the disturbance caused by puncturing gives the adversary information about S. This increases \(\mathcal {A}\)’s chances, but only by a factor of 4, as explained next.

Lemma 4

(Search in semi-classical oracle; [AHU19] Theorem 2). Let \(H: X\rightarrow Y\) be a random function, let z be a random value, and let \(S\subset X\) be a random set. \((H,S,z)\) may have arbitrary joint distribution. Let \(\mathcal {A} ^H\) be a quantum oracle algorithm which queries H at most q times with depth at most d.

Let \(\mathcal {B} ^H(z)\) and \(P_\mathrm {guess}\) be defined as in Lemma 1. Then

$$ \Pr [\mathsf {Find}: \mathcal {A} ^{{H}\backslash {S}}(z) ] \le 4dP_\mathrm {guess}. $$

In particular, if for each \(x\in X\), \(\Pr [x\in S]\le \epsilon \) (conditioned on z, on other oracles \(\mathcal {A}\) has access to, and on other outputs of H) then

$$\Pr [\mathsf {Find}: \mathcal {A} ^{{H}\backslash {S}}(z) ]\le 4q\epsilon .$$

2.3 Double-sided O2H

We augment these lemmas with a new O2H lemma which achieves a tighter bound by focusing on a special case. This focus comes at the price of limited applicability. Our lemma applies when the simulator can simulate both G and H. It also requires that S contain a single element; alternatively, if some function f is known such that f(S) is a single element, it can extract f(S) instead.

Lemma 5

(Double-sided O2H). Let \(G,H:X\rightarrow Y\) be random functions, let z be a random value, and let \(S\subset X\) be a random set such that \(\forall x\notin S, G(x)=H(x)\). \((G,H,S,z)\) may have arbitrary joint distribution. Let \(\mathcal {A} ^H\) be a quantum oracle algorithm. Let \(f:X\rightarrow W\subseteq \{0,1\}^n\) be any function, and let f(S) denote the image of S under f. Let \(\mathsf {Ev}\) be an arbitrary classical event.

We will define another quantum oracle algorithm \(\mathcal {B} ^{G,H}(z)\). This \(\mathcal {B}\) runs in about the same amount of time as \(\mathcal {A}\), but when \(\mathcal {A}\) queries H, \(\mathcal {B}\) queries both G and H, and also runs f twice. Let

$$\begin{aligned}&P_\mathrm {left} := \Pr [\mathsf {Ev} : \mathcal {A} ^H(z)], \ P_\mathrm {right} := \Pr [\mathsf {Ev} : \mathcal {A} ^G(z)],\ P_\mathrm {extract} := \Pr [\mathcal {B} ^{G,H}(z)\in f(S)]. \end{aligned}$$

If \(f(S) = \{w^*\}\) is a single element, then \(\mathcal {B} \) will only return \(\bot \) or \(w^*\), and furthermore

$$ {\left| {P_\mathrm {left}-P_\mathrm {right}}\right| } \le 2\sqrt{P_\mathrm {extract}} \qquad \text {and}\qquad {\left| {\sqrt{P_\mathrm {left}}-\sqrt{P_\mathrm {right}}}\right| } \le 2\sqrt{P_\mathrm {extract}}. $$

Proof

See Appendix B.

Note that if \(S=\{x^*\}\) is already a single element, then we may take f as the identity. In this case \(\mathcal {B} \) will return either \(\bot \) or \(x^*\).

3 KEM and PKE Security Proofs

We are now ready to get to the core of our work. All the relevant security notions are given in Appendix A. The implications are summarized in Fig. 1.

Fig. 1.

Relations of our security notions using transforms T and \(U^{\not \bot }\) (above) and relations between the security of different types of U-constructions (below). The solid lines show implications which are tight with respect to powers of q and/or d, and the dashed line shows a non-tight implication. The hooked arrows indicate theorems with \(\epsilon \)-injectivity constraints.

3.1 Derandomization: \(\mathsf {IND}{\text {-}}\mathsf {CPA}\) \(\mathsf {P}\) \({\mathop {\Rightarrow }\limits ^{\text {QROM}}}\) \(\mathsf {OW}{\text {-}}\mathsf {CPA}\) \(T(\mathsf {P}, G)\)

The T transform [HHK17] converts an rPKE \(\mathsf {P} =(\mathrm {Keygen},\mathrm {Encr},\mathrm {Decr})\) to a dPKE \(T(\mathsf {P},G)=(\mathrm {Keygen},\mathrm {Encr} _1,\mathrm {Decr})\) by using a hash function \(G:\mathcal {M}\rightarrow \mathcal {R}\), modeled as a random oracle, to choose the encryption coins, where

$$\mathrm {Encr} _1(\mathrm {pk},m) := \mathrm {Encr} (\mathrm {pk},m;\ G(m)).$$
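As a minimal sketch (not a reference implementation) of this derandomization, the following Python snippet uses SHAKE-256 as a stand-in for the random oracle G and assumes an rPKE object `pke` exposing an `encr(pk, m, coins)` method; these names and interfaces are illustrative assumptions.

```python
import hashlib

def G(m: bytes, coin_len: int = 32) -> bytes:
    """Stand-in for the random oracle G: M -> R, modeled here by SHAKE-256."""
    return hashlib.shake_256(m).digest(coin_len)

def encr1(pke, pk, m: bytes):
    """T transform: Encr1(pk, m) := Encr(pk, m; G(m)).

    Deterministic, because the encryption coins are derived from the message.
    """
    return pke.encr(pk, m, coins=G(m))
```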

The following theorem shows that if a PKE \(\mathsf {P}\) is \(\mathsf {IND}{\text {-}}\mathsf {CPA}\) secure, then \(T(\mathsf {P},G)\) is one-way secure in the quantum-accessible random oracle model.

Theorem 1

Let \(\mathsf {P} \) be an rPKE with messages in \(\mathcal {M}{}\) and random coins in \(\mathcal {R}{}\). Let \(G:\mathcal {M}{}\rightarrow \mathcal {R}{}\) be a quantum-accessible random oracle. Let \(\mathcal {A}\) be an \(\mathsf {OW}{\text {-}}\mathsf {CPA} \) adversary against \(\mathsf {P} ':=T(\mathsf {P},G)\). Suppose that \(\mathcal {A} \) queries G at most q times with depth at most d.

Then we can construct an \(\mathsf {IND}{\text {-}}\mathsf {CPA}\) adversary \(\mathcal {B}\) against \(\mathsf {P} \), running in about the same time and resources as \(\mathcal {A}\), such that

$$\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P} '}(\mathcal {A}) \le (d+2)\cdot \left( \mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B}) +\frac{8(q+1)}{{\left| {\mathcal {M}{}}\right| }}\right) . $$

Proof

See Appendix C.

Second preimages. In the traditional definition of one-way functions, the adversary wins by finding any \(\mathrm {m} '\) where \(\mathrm {Encr} (\mathrm {pk},\mathrm {m} ') = c ^*\), whereas in our definition (cf. Definition 7) of \(\mathsf {OW}{\text {-}}\mathsf {CPA}\) the adversary must find \(m^*\) itself. This only matters if there is a second preimage, and thus a decryption failure. If \(\mathsf {P}\) is \(\delta \)-correct and \(\epsilon \)-injective, it is easily shown that a definition allowing second preimages adds at most \(\min (\delta ,\epsilon )\) to the adversary’s \(\mathsf {OW}{\text {-}}\mathsf {CPA}\)-advantage.

Hashing the public key. Many KEMs use a variant of T which sets the coins to \(G(\mathrm {pk},m)\). This is a countermeasure against multi-key attacks. In this paper we only model single-key security, so we omit \(\mathrm {pk} \) from the hashes for brevity. The same also applies to the other transforms later in this paper, such as \(U^{\not \bot }\).

3.2 Deterministic \(\mathsf {P}\): OW-CPA \(\mathsf {P} \) \({\mathop {\Rightarrow }\limits ^{\text {QROM}}}\) IND-CCA \(U^{\not \bot }(\mathsf {P},\mathsf {F},H)\)

Our \(\mathsf {OW}{\text {-}}\mathsf {CPA}\) to \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) conversion is in the style of [JZM19d]. However, that bound is based on the failure probability \(\delta \) of a randomized encryption algorithm, whereas ours is based on the difficulty of finding a failure without access to the private key: instead of \(\delta \), we use injectivity together with a game in which the adversary tries to find ciphertexts which are valid but do not decrypt correctly. This means our theorem applies to deterministic but imperfectly-correct algorithms, such as one of the three BIKE variants, BIKE-2 [ABB+19].

Definition 2

(Valid ciphertext). Let \(\mathsf {P} =(\mathrm {Keygen}, \mathrm {Encr},\) \(\mathrm {Decr})\) be a dPKE. Call a ciphertext \(c \) “valid” for a public key \(\mathrm {pk} \) of \(\mathsf {P} \) if there exists \(\mathrm {m} \) such that \(c =\mathrm {Encr} (\mathrm {pk},\mathrm {m})\).

We introduce a new failure-finding experiment to capture the probability that the adversary can find valid ciphertexts that cause a decryption failure.

Definition 3

(Finding Failing Ciphertext). The find-failing-ciphertexts experiment \((\mathsf {FFC})\) is shown in Fig. 2. The \(\mathsf {FFC}\)-advantage of an adversary \(\mathcal {A}\) is defined by

$$\mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{\mathsf {P}}(\mathcal {A}) := \Pr [\mathrm {Expt}^{\mathsf {FFC} {}}_{\mathsf {P}}(\mathcal {A})\rightarrow 1].$$
Fig. 2.

\(\mathsf {FFC}\) experiment on a dPKE \(\mathsf {P}\). The instantiation of H generalizes to any number of random oracles, including zero.
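To make the winning condition concrete, here is a hedged Python sketch of the \(\mathsf {FFC}\) experiment as described by Definitions 2 and 3 (a reading of the experiment, not the exact pseudocode of Fig. 2). The exhaustive scan over the message space only serves to pin down the validity check and is infeasible for real parameters; `pke`, `adversary` and `H` are assumed placeholder objects.

```python
def ffc_experiment(pke, adversary, H, message_space):
    """FFC sketch: the adversary, given pk and oracle access to H, outputs a
    list of ciphertexts and wins if some ciphertext in the list is valid
    (i.e., an encryption of some message) but decrypts incorrectly."""
    pk, sk = pke.keygen()
    L = adversary(pk, H)                      # list of candidate ciphertexts
    for c in L:
        for m in message_space:               # validity check (toy parameters only)
            if pke.encr(pk, m) == c and pke.decr(sk, c) != m:
                return 1                      # valid ciphertext that fails to decrypt
    return 0
```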

The \(U^{\not \bot }\) transform [HHK17] converts a dPKE \(\mathsf {P} =(\mathrm {Keygen} _\mathsf {P},\mathrm {Encr},\mathrm {Decr})\) into a KEM \(\mathsf {K} =(\mathrm {Keygen},\mathrm {Encaps},\mathrm {Decaps})\) using a PRF \(\mathsf {F}:\mathcal {K}_\mathsf {F} \times \mathcal {C}\rightarrow \mathcal {K}\) and a hash function \(H:\mathcal {M}\times \mathcal {C}\rightarrow \mathcal {K}\), modeled as a random oracle. The PRF is used for implicit rejection, returning \(\mathsf {F} (\mathrm {prfk},c)\) in case of an invalid ciphertext using a secret \(\mathrm {prfk}\). The \(U^{\not \bot }\) transform is defined in Fig. 3. We also describe variants \(U^{\not \bot }_m, U^\bot , U^\bot _m\) of this transform from [HHK17], which make the following changes:

  • On \(\mathrm {Encaps} \) line 3 resp. \(\mathrm {Decaps} \) line 7, the transformations \(U^{\not \bot }_m\) and \(U^\bot _m\) compute H(m) resp. \(H(m')\) instead of \(H(m,c)\) resp. \(H(m',c)\).

  • On \(\mathrm {Decaps} \) lines 4 and 6, the transformations \(U^\bot \) and \(U^\bot _m\) return \(\bot \) instead of \(\mathsf {F} (\mathrm {prfk},c)\). These variants also don’t need \(\mathrm {prfk}\) as part of the private key.

The transforms \(U^\bot \) and \(U^\bot _m\) are said to use explicit rejection because they return an explicit failure symbol \(\bot \). \(U^{\not \bot }\) and \(U^{\not \bot }_m\) are said to use implicit rejection.

Fig. 3.

Transform \(U^{\not \bot }(\mathsf {P}, \mathsf {F}):=(\mathrm {Keygen},\mathrm {Encaps},\mathrm {Decaps})\).
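As a rough Python sketch of \(U^{\not \bot }\) consistent with the description above (not the exact pseudocode of Fig. 3), with `pke` an assumed dPKE object, `H` a stand-in for the random oracle, and HMAC-SHA-256 standing in for the PRF \(\mathsf {F}\):

```python
import os, hmac, hashlib

def keygen(pke, prf_keylen: int = 32):
    """KEM key generation: dPKE key pair plus a secret PRF key for implicit rejection."""
    pk, sk = pke.keygen()
    prfk = os.urandom(prf_keylen)
    return pk, (sk, prfk, pk)

def encaps(pke, H, pk, msg_len: int = 32):
    m = os.urandom(msg_len)                   # m drawn uniformly from M
    c = pke.encr(pk, m)                       # deterministic encryption
    return c, H(m, c)                         # session key K = H(m, c)

def decaps(pke, H, secret, c):
    sk, prfk, pk = secret
    m = pke.decr(sk, c)
    if m is None or pke.encr(pk, m) != c:     # decryption failed or re-encryption check failed
        return hmac.new(prfk, c, hashlib.sha256).digest()   # implicit rejection: F(prfk, c)
    return H(m, c)
```

The explicit-rejection variants \(U^{\bot },U^{\bot }_m\) would return \(\bot \) in the rejection branch instead, and the \(U_m\) variants would derive the key as \(H(m)\) rather than \(H(m,c)\).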

The next theorem states that breaking the \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) security of \(U^{\not \bot }(\mathsf {P},\mathsf {F}, H)\) requires either breaking the \(\mathsf {OW}{\text {-}}\mathsf {CPA}\) security of \(\mathsf {P}\), causing a decapsulation failure, or breaking the PRF used for implicit rejection. In particular, we need \(\mathsf {P}\) to be an \(\epsilon \)-injective dPKE as in Definition 6.

Theorem 2

Let \(H:\mathcal {M}\times \mathcal {C}\rightarrow \mathcal {K}\) be a quantum-accessible random oracle and \(\mathsf {F}:\mathcal {K}_\mathsf {F} \times \mathcal {C}\rightarrow \mathcal {K}\) be a PRF. Let \(\mathsf {P}\) be an \(\epsilon \)-injective dPKE which is independent of H. Let \(\mathcal {A}\) be an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) adversary against the KEM \(U^{\not \bot }(\mathsf {P},\mathsf {F})\), and suppose that \(\mathcal {A}\) makes at most \(q_\mathrm {dec}\) decapsulation queries. Then we can construct three adversaries running in about the same time and resources as \(\mathcal {A}\):

  • an \(\mathsf {OW}{\text {-}}\mathsf {CPA}\)-adversary \(\mathcal {B} _1\) against \(\mathsf {P}\)

  • a \(\mathsf {FFC}\)-adversary \(\mathcal {B} _2\) against \(\mathsf {P}\), returning a list of at most \(q_\mathrm {dec}\) ciphertexts

  • a \(\mathsf {PRF}\)-adversary \(\mathcal {B} _3\) against \(\mathsf {F} \)

such that

$$\mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{U^{\not \bot }(\mathsf {P})}(\mathcal {A}) \le 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}+ \mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{\mathsf {P}}(\mathcal {B} _2) + 2\cdot \mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B} _3) + \epsilon . $$

In the common case that \(\mathsf {F} (\mathrm {prfk},c)\) is implemented as \(H(\mathrm {prfk},c)\) it holds that if \(\mathcal {A} \) makes q queries at depth d, then

$$\mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B} _3) {\mathop {\le }\limits ^{\text {cor. 1}}}2\sqrt{dq/{\left| {M}\right| }}.$$

Proof

Our proof is by a series of games. In some later games, we will define an outcome “draw” which is distinct from a win or loss. A draw counts as halfway between a win and a loss, as described by the adversary’s score \(w_i\):

$$\begin{aligned} w_i:= & {} \Pr [\mathcal {A}\ \text {wins: Game}\ i] + \frac{1}{2}\Pr [\text {Draw: Game}\ i]\\= & {} \frac{1}{2} \left( 1 + \Pr [\mathcal {A}\ \text {wins: Game}\ i] - \Pr [\mathcal {A}\ \text {loses: Game}\ i]\right) \end{aligned}$$

Game 0

(\(\mathsf {IND}{\text {-}}\mathsf {CCA}\)). This is the original \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) game against the KEM \(U^{\not \bot }(\mathsf {P},\mathsf {F},H)\), cf. Definition 12.

Game 1

(PRF is random). Game 1 is the same as Game 0, except the simulator replaces \(\mathsf {F} (\mathrm {prfk},\cdot )\) with a uniformly random function \(R:\mathcal {C}\rightarrow \mathcal {K}\).

We construct a \(\mathsf {PRF}\)-adversary \(\mathcal {B} _3\) (cf. Definition 10) which replaces its calls to \(\mathsf {F} (\mathrm {prfk},\cdot )\) by calls to its oracle, runs \(\mathcal {A} \), and outputs 1 if \(\mathcal {A} \) wins and 0 otherwise. Now, by construction, \(\mathcal {B} _3\) outputs 1 with probability \(w_0\) when its oracle is \(\mathsf {F} (\mathrm {prfk},\cdot )\), and with probability \(w_1\) when its oracle is the random function R. Hence,

$$\begin{aligned} {\left| {w_{1} - w_{0}}\right| } = \mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B} _3). \end{aligned}$$

Game 2

(Draw on fail or non-injective \(\mathrm {pk}\)). Let \(\mathsf {Fail}\) be the event that one or more of \(\mathcal {A} \)’s decapsulation queries \(D(c)\) fails to decrypt, meaning that \(c = \mathrm {Encr} (\mathrm {pk},\mathrm {m})\) for some \(\mathrm {m} \), but \(\mathrm {Decr} (\mathrm {sk},c)\ne \mathrm {m} \). Let \(\mathsf {NonInj}\) be the event that \(\mathrm {Encr} (\mathrm {pk},\cdot )\) is not injective, and let \(\mathsf {Draw} := \mathsf {Fail} \vee \mathsf {NonInj}\). In Game 2 and onward, if \(\mathsf {Draw}\) occurs then the game continues, but at the end it is a draw instead of the adversary winning or losing.

Let \(d_i := \Pr [\mathsf {Draw}:\text {Game}\ i]\). Then \({\left| {w_{2}-w_{1}}\right| }\le \frac{1}{2} d_{2}\). It is important to note that the event \(\mathsf {Draw}\) is a well-defined classical event and does not depend on H, even though the simulator might not be able to determine efficiently whether it occurred.

Game 3

(Reprogram \(H(m,c)\) to \(R(c)\)). Game 3 is the same as Game 2, but the simulator reprograms \(H(m,c)\) where \(c =\mathrm {Encr} (\mathrm {pk},m)\) to return \(R(c)\).

This produces the same win and draw probabilities as Game 2 as explained next. For each m, the value \(H(m,\mathrm {Encr} (\mathrm {pk},m))\) is changed to a uniformly, independently random value, except when the game is already a draw:

  • It is uniformly random because R is uniformly random.

  • It is independent of \(H(m',c)\) for \(m'\ne m\) because \(\mathrm {Encr} (\mathrm {pk},\cdot )\) is injective or else the game is a draw.

  • H calls R(c) only for valid ciphertexts \(c=\mathrm {Encr} (\mathrm {pk},m')\). On the other hand, the decapsulation oracle only calls \(R(c')\) for rejected ciphertexts \(c'\), i.e. ones where \(c'\ne \mathrm {Encr} (\mathrm {pk},\mathrm {Decr} (\mathrm {sk},c'))\). If a valid ciphertext has been rejected and passed to R in this way, then \(\mathsf {Draw}\) has occurred and the return value of R does not affect \(w_i\) or \(d_i\).

Therefore \(w_{3} = w_{2}\) and \(d_{3} = d_{2}\).

Game 4

(Decapsulation oracle returns \(R(c)\)). Game 4 is the same as Game 3, but the simulated decapsulation oracle simply returns \(R(c)\) for all ciphertexts other than the challenge (for which it still returns \(\bot \)).

In fact, the decapsulation oracle was already doing this in Game 3: The original decapsulation returns either \(H(m,c)\) with \(c =\mathrm {Encr} (\mathrm {pk},m)\) or \(\mathsf {F} (\mathrm {prfk},c)\), but both of those have been reprogrammed to return \(R(c)\). Therefore \(w_{4} = w_{3}\) and \(d_{4} = d_{3}\). As of this game, the simulator does not use the private key anymore.

Bound draw. We now want to upper bound the draw probability. Let \(\mathcal {B} _2\) be the algorithm which, given a public key \(\mathrm {pk} \), simulates Game 4 for \(\mathcal {A} \) and outputs a list L of all of \(\mathcal {A} \)’s decapsulation queries. Then \(\mathcal {B} _2\) is a \(\mathsf {FFC}\)-adversary against \(\mathsf {P} \) which runs in about the same time as \(\mathcal {A}\) and succeeds whenever a draw occurred during the game. Consequently,

$$d_{2} = d_{3} = d_{4} \le \mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{\mathsf {P}}(\mathcal {B} _2) +\epsilon .$$

Game 5

(Change shared secret). In Game 5, the shared secret is changed to a uniformly random value r. If \(b=1\), then for all m such that \(\mathrm {Encr} (\mathrm {pk},m)=c^*\), the oracle value \(H(m,c^*)\) is reprogrammed to return r. If \(b=0\), then H is not reprogrammed.

If \(\mathrm {Encr} (\mathrm {pk},\cdot )\) is injective, then this is the same distribution as Game 4, and otherwise the game is a draw. Therefore \({w_{5} = w_{4}}\).

It remains to bound \(\mathcal {A} \)’s advantage in Game 5. The simulation still runs in about the same time as \(\mathcal {A} \). Suppose at first that \(\mathrm {Encr} (\mathrm {pk},\cdot )\) is injective, so that the oracle H is reprogrammed only at the point \((m^*,c^*)\). Then the \(b=0\) and \(b=1\) cases are now distinguished by a single return value from the H oracle. Hence, we can consider the two oracles H and \(H' := H[(m^*,c^*)\rightarrow r]\) as required by Lemma 5. Lemma 5 then states that there is an algorithm \(\mathcal {B} _1\), running in about the same time as \(\mathcal {A} \), such that for all H:

$$ {\left| {\Pr [\text {Win}: b=0, H]-\Pr [\text {Lose}: b=1, H]}\right| } \le 2\sqrt{\Pr [\mathcal {B} _1^{H}(\mathrm {pk},c ^*)\rightarrow m^*]}. $$

The same inequality holds if \(\mathrm {Encr} (\mathrm {pk},\cdot )\) is not injective, for then the game is always a draw and the left-hand side is zero. (The algorithm \(\mathcal {B} _1\) still runs with the same efficiency in that case; it just might not return \(m^*\).) The inequality also holds in expectation over H by Jensen's inequality:

$$ \mathbb {E}_H\left[ 2\sqrt{\Pr [\mathcal {B} _1^{H}(\mathrm {pk},c ^*)\rightarrow m^*]}\right] \le 2\sqrt{\mathbb {E}_H\left[ \Pr [\mathcal {B} _1^{H}(\mathrm {pk},c ^*)\rightarrow m^*]\right] } = 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}, $$

so that

$${\left| {\Pr [\text {Win}:b=0] - \Pr [\text {Lose}:b=1]}\right| } \le 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}.$$

Likewise, for the same adversary \(\mathcal {B} _1\),

$${\left| {\Pr [\text {Win}:b=1] - \Pr [\text {Lose}:b=0]}\right| } \le 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}.$$

Since b is either 0 or 1 each with probability \(\frac{1}{2}\), we have by the triangle inequality:

$${\left| {\Pr [\text {Win}]-\Pr [\text {Lose}]}\right| } \le 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}$$

so that \({\left| {w_{5}- \frac{1}{2}}\right| } \le \sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}\).

Summing up the differences in the previous games, we have

$${\left| {w_0- \frac{1}{2}}\right| } \le \sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)}+ \frac{1}{2}\mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{\mathsf {P}}(\mathcal {B} _2) + \frac{\epsilon }{2} + \mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B} _3) $$

and finally

$$\mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{U^{\not \bot }(\mathsf {P})}(\mathcal {A}) \le 2\sqrt{\mathrm {Adv}^{\mathrm {\mathsf {OW}{\text {-}}\mathsf {CPA}}}_{\mathsf {P}}(\mathcal {B} _1)} + 2\cdot \mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B} _3) + \mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{\mathsf {P}}(\mathcal {B} _2) +\epsilon .$$

This completes the proof of Theorem 2.    \(\square \)

Tightness. This bound is essentially tight, since breaking the one-wayness of \(\mathsf {P}\) and finding decryption failures are both known to result in attacks. Breaking the PRF harms security if and only if implicit rejection is more secure than explicit rejection. For a correct \(\mathsf {P}\) the bound boils down to the first two terms of the sum. The square-root loss arises from OW being a weaker security notion than IND [MW18], i.e., harder to break, and recent results [JZM19b] suggest that the square-root loss might be unavoidable in the quantum setting.

3.3 Decryption Failures

When the dPKE is constructed by derandomizing an rPKE, we can also bound the \(\mathsf {FFC}\) advantage.

Lemma 6

Let \(\mathsf {P} =(\mathrm {Keygen},\mathrm {Encr},\mathrm {Decr})\) be a \(\delta \)-correct rPKE with messages in \(\mathcal {M}\) and randomness in \(\mathcal {R}\). Let \(G:\mathcal {M}\rightarrow \mathcal {R}\) be a random oracle, so that \(T(\mathsf {P},G) := (\mathrm {Keygen},\mathrm {Encr} _1,\mathrm {Decr})\) is a derandomized version of \(\mathsf {P} \). Suppose that \(T(\mathsf {P},G)\) is \(\epsilon \)-injective. Let \(\mathcal {A} \) be a \(\mathsf {FFC}\) adversary against \(T(\mathsf {P},G)\) which makes at most q queries at depth d to G and returns a list of at most \(q_\mathrm {dec}\) ciphertexts. Then

$$\mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{T(\mathsf {P},G)}(\mathcal {A}) \le ((4d+1)\delta +\sqrt{3\epsilon })\cdot (q+q_{\mathrm {dec}})+\epsilon .$$

Proof

See Appendix E.

Note that if \(\epsilon \) is negligible, and if the adversary can recognize which ciphertexts will fail, then this is a Grover bound.
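For a rough sense of scale, plugging illustrative numbers (chosen arbitrarily, not tied to a concrete scheme) into Lemma 6, say \(\delta =2^{-128}\), negligible \(\epsilon \), \(q=q_\mathrm {dec}=2^{64}\) and \(d=2^{40}\), gives

$$\mathrm {Adv}^{\mathrm {\mathsf {FFC}}}_{T(\mathsf {P},G)}(\mathcal {A}) \le ((4\cdot 2^{40}+1)\cdot 2^{-128}+\sqrt{3\epsilon })\cdot 2^{65}+\epsilon \approx 2^{-21},$$

with the \((4d+1)\delta \) term dominating.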

4 Explicit Rejection and Key Confirmation

We now turn to systems with explicit rejection or key confirmation. The next theorem shows that the transform \(U^{\bot }\) (with explicit rejection) never yields KEMs that are more secure than KEMs constructed via \(U^{\not \bot }\) (with implicit rejection).

Theorem 3

(Explicit \(\rightarrow \) implicit). Let \(\mathsf {P}\) be a dPKE. Let \(\mathcal {A} \) be an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) adversary against \(U^{\not \bot }(\mathsf {P},\mathsf {F},H)\). Then there is an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) adversary \(\mathcal {B} \) against \(U^{\bot }(\mathsf {P},H)\), running in about the same time and with the same resources as \(\mathcal {A} \), such that

$$\mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{U^{\not \bot }(\mathsf {P},\mathsf {F},H)}(\mathcal {A}) = \mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{U^{\bot }(\mathsf {P},H)}(\mathcal {B}).$$

Proof

The only difference between \(U^{\bot }(\mathsf {P},H)\) and \(U^{\not \bot }(\mathsf {P},\mathsf {F},H)\) is that where the former rejects a ciphertext c by returning \(\bot \), the latter instead returns \(\mathsf {F} (\mathrm {prfk},c)\). So the adversary \(\mathcal {B} \) can simply choose a random PRF key \(\mathrm {prfk}\), run \(\mathcal {A}\), and output \(\mathcal {A}\)’s result. \(\mathcal {B}\) forwards all of \(\mathcal {A}\)’s queries to its own oracles and returns the responses, with the sole difference that whenever the decapsulation oracle returns \(\bot \) on a ciphertext c, \(\mathcal {B}\) returns \(\mathsf {F} (\mathrm {prfk},c)\) instead. The algorithm \(\mathcal {B}\) perfectly simulates the \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) game for \(U^{\not \bot }(\mathsf {P},\mathsf {F},H)\), and hence \(\mathcal {A}\) succeeds with the same probability as in the original game.    \(\square \)

On the other hand, explicit rejection is secure if key confirmation is used. Key confirmation refers to adding a hash of the message to the ciphertext. Let \(\tau \) be the number of bits desired for the key-confirmation tag. For a PKE \(\mathsf {P} =(\mathrm {Keygen},\mathrm {Encr},\mathrm {Decr})\), define the transform \(C(\mathsf {P},H_t,\tau ) := (\mathrm {Keygen},\mathrm {Encr} _1,\mathrm {Decr} _1)\) using a random oracle \(H_t:\mathcal {M}\rightarrow \{0,1\}^\tau \) as in Fig. 4.

Fig. 4.

Transform \(C(\mathsf {P},H_t,\tau ):=(\mathrm {Keygen},\mathrm {Encr} _1,\mathrm {Decr} _1)\).
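A minimal Python reading of the \(C\) transform, consistent with the description above (not necessarily matching Fig. 4 line for line): encryption appends the \(\tau \)-bit key-confirmation tag \(H_t(m)\) to the ciphertext, and decryption rejects if the tag does not verify. Here `pke` and `Ht` are assumed placeholder objects.

```python
def c_encr(pke, Ht, pk, m):
    """Encr1(pk, m): underlying ciphertext plus the key-confirmation tag Ht(m)."""
    return pke.encr(pk, m), Ht(m)

def c_decr(pke, Ht, sk, ct):
    """Decr1(sk, (c, tag)): decrypt, then check the key-confirmation tag."""
    c, tag = ct
    m = pke.decr(sk, c)
    if m is None or Ht(m) != tag:
        return None              # reject with an explicit failure
    return m
```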

Theorem 4

(Implicit \(\rightarrow \) explicit with key confirmation). Let \(\mathsf {P}\) be an \(\epsilon \)-injective dPKE. Consider the KEM \(\mathsf {K} _1 := U^{\bot }_m(C(\mathsf {P},H_t,\tau ),H_s)\) obtained from \(\mathsf {P}\) by applying the \(C\)-transform with random oracle \(H_t:\mathcal {M}\rightarrow \{0,1\}^\tau \) and then the \(U^{\bot }_m\)-transform with an independent random oracle \(H_s:\mathcal {M}\rightarrow \{0,1\}^\varsigma \). Let \(\mathsf {K} _2 := U^{\not \bot }_m(\mathsf {P},\mathsf {F}, H)\) be the KEM obtained from \(\mathsf {P}\) by applying the \(U^{\not \bot }_m\)-transform with random oracle \(H:\mathcal {M}\rightarrow \{0,1\}^{\varsigma +\tau }\).

If \(\mathcal {A} \) is an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\)-adversary against \(\mathsf {K} _1\) which makes \(q_\mathrm {dec}\) decapsulation queries, then it is also an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\)-adversary against \(\mathsf {K} _2\) and there is a \(\mathsf {PRF}\)-adversary \(\mathcal {B} \) against \(\mathsf {F} \) which uses about the same time and resources as \(\mathcal {A} \), such that:

$$\mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{\mathsf {K} _1}(\mathcal {A}) \le 2\cdot \mathrm {Adv}^{\mathrm {\mathsf {IND}{\text {-}}\mathsf {CCA}}}_{\mathsf {K} _2}(\mathcal {A}) + \frac{q_\mathrm {dec}}{2^{\tau -1}} + 2\cdot \mathrm {Adv}^{\mathrm {\mathsf {PRF}}}_{\mathsf {F}}(\mathcal {B}) + 2\epsilon .$$

Proof

Deferred to Appendix F.

Finally, we can show that hashing m is equivalent to hashing \((m,c)\) in the next theorem.

Theorem 5

(\(U_m\leftrightarrow U\)). Let \(\mathsf {P} \) be a dPKE. Let \(\mathsf {K} _1=U^\bot (\mathsf {P},H_1)\) and \(\mathsf {K} _2=U^\bot _m(\mathsf {P},H_2)\). Then \(\mathsf {K} _1\) is \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) secure if and only if \(\mathsf {K} _2\) is \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) secure. In other words, if there is an adversary \(\mathcal {A} \) against one, then there is an adversary \(\mathcal {B} \) against the other, running in about the same time and with the same advantage.

The same is true for \(U^{\not \bot }\) and \(U^{\not \bot }_m\).

Proof

This is a simple indifferentiability argument. In both the encapsulation and decapsulation functions, the \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) experiment against \(\mathsf {K} _1\) only calls \(H_1(m,c)\) when \(c =\mathrm {Encr} (\mathrm {pk},m)\). So to simulate the \(\mathsf {K} _1\)-experiment while playing in an \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) experiment against \(\mathsf {K} _2\) (with oracle \(H_2: \mathcal {M}\rightarrow \mathcal {K}\)), sample a fresh random oracle \(H:\mathcal {M}\times \mathcal {C}\rightarrow \mathcal {K}\) and set

$$H_1(m,c) := \left\{ \begin{array}{ll} H_2(m), &{} \text {if}\ c =\mathrm {Encr} (\mathrm {pk},m), \\ H(m,c), &{} \text {otherwise}. \end{array}\right. $$

This exactly simulates the \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) experiment against \(\mathsf {K} _1\). In the other direction, to simulate the \(\mathsf {IND}{\text {-}}\mathsf {CCA}\) experiment against \(\mathsf {K} _2\) it suffices to redirect \(H_2(m)\) to \(H_1(m,\mathrm {Encr} (\mathrm {pk},m))\).
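To make the first direction concrete, here is a small Python sketch of the simulation of \(H_1\), with `pke`, `pk`, `H2` and an independent fresh oracle `H_fresh` as assumed placeholders:

```python
def make_H1(pke, pk, H2, H_fresh):
    """Simulate H1 for the K1-experiment: answer consistent queries, i.e. those
    with c = Encr(pk, m), using H2(m), and all other queries using an
    independent fresh random oracle H_fresh."""
    def H1(m, c):
        if pke.encr(pk, m) == c:
            return H2(m)
        return H_fresh(m, c)
    return H1
```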

The same technique works for \(U^{\not \bot }\) and \(U^{\not \bot }_m\). It also works for security notions other than \(\mathsf {IND}{\text {-}}\mathsf {CCA}\), such as \(\mathsf {OW}{\text {-}}\mathsf {CCA}\), \(\mathsf {OW}{\text {-}}\mathsf {qPVCA}\), etc. (see for example [JZC+18]).    \(\square \)