Background

For any natural number $m$, let $H_m$ denote the quantity

$$H_m := \liminf_{n \to \infty} \left(p_{n+m} - p_n\right),$$

where $p_n$ denotes the $n$th prime. The twin prime conjecture asserts that $H_1 = 2$; more generally, the Hardy-Littlewood prime tuples conjecture [1] implies that $H_m = H(m+1)$ for all $m \ge 1$, where $H(k)$ is the diameter of the narrowest admissible $k$-tuple (see the ‘Outline of the key ingredients’ section for a definition of this term). Asymptotically, one has the bounds

$$\left(\tfrac{1}{2} + o(1)\right) k \log k \le H(k) \le (1 + o(1)) k \log k$$

as $k \to \infty$ (see Theorem 17 below); thus, the prime tuples conjecture implies that $H_m$ is comparable to $m \log m$ as $m \to \infty$.

Until very recently, it was not known if any of the H m were finite, even in the easiest case m=1. In the breakthrough work of Goldston et al. [2], several results in this direction were established, including the following conditional result assuming the Elliott-Halberstam conjecture EH[ 𝜗] (see Claim 8 below) concerning the distribution of the prime numbers in arithmetic progressions:

Theorem 1(GPY theorem).

Assume the Elliott-Halberstam conjecture EH[ 𝜗] for all 0<𝜗<1. Then, H1≤16.

Furthermore, it was shown in [2] that any result of the form EH$[\tfrac{1}{2} + 2\varpi]$ for some fixed $0 < \varpi < 1/4$ would imply an explicit finite upper bound on $H_1$ (with this bound equal to 16 for $\varpi > 0.229855$). Unfortunately, the only results of the type EH[$\vartheta$] that are known come from the Bombieri-Vinogradov theorem (Theorem 9), which only establishes EH[$\vartheta$] for $0 < \vartheta < 1/2$.

The first unconditional bound on H1 was established in a breakthrough work of Zhang [3]:

Theorem 2(Zhang’s theorem).

H1≤70,000,000.

Zhang’s argument followed the general strategy from [2] on finding small gaps between primes, with the major new ingredient being a proof of a weaker version of EH$[\tfrac{1}{2} + 2\varpi]$, which we call MPZ[$\varpi,\delta$] (see Claim 10 below). It was quickly realized that Zhang’s numerical bound on $H_1$ could be improved. By optimizing many of the components in Zhang’s argument, we were able (Polymath, D.H.J.: New equidistribution estimates of Zhang type, submitted) [4] to improve Zhang’s bound to

$$H_1 \le 4680.$$

Very shortly afterwards, a further breakthrough was obtained by Maynard [5] (with related work obtained independently in an unpublished work of Tao), who developed a more flexible ‘multidimensional’ version of the Selberg sieve to obtain stronger bounds on H m . This argument worked without using any equidistribution results on primes beyond the Bombieri-Vinogradov theorem, and among other things was able to establish finiteness of H m for all m, not just for m=1. More precisely, Maynard established the following results.

Theorem 3(Maynard’s theorem).

Unconditionally, we have the following bounds:

(i) H1≤600

(ii) $H_m \le C m^3 e^{4m}$ for all $m \ge 1$ and an absolute (and effective) constant $C$

Assuming the Elliott-Halberstam conjecture EH[ 𝜗] for all 0<𝜗<1, we have the following improvements:

(iii) H1≤12

(iv) H2≤600

(v) $H_m \le C m^3 e^{2m}$ for all $m \ge 1$ and an absolute (and effective) constant $C$

For a survey of these recent developments, see [6].

In this paper, we refine Maynard’s methods to obtain the following further improvements.

Theorem 4.

Unconditionally, we have the following bounds:

(i) H1≤246

(ii) H2≤398,130

(iii) H3≤24,797,814

(iv) H4≤1,431,556,072

(v) H5≤80,550,202,480

(vi) $H_m \le C m \exp\left(\left(4 - \tfrac{28}{157}\right) m\right)$ for all $m \ge 1$ and an absolute (and effective) constant $C$

Assume the Elliott-Halberstam conjecture EH[ 𝜗] for all 0<𝜗<1. Then, we have the following improvements:

(vii) H2≤270

(viii) H3≤52,116

(ix) H4≤474,266.

(x) H5≤4,137,854.

(xi) $H_m \le C m e^{2m}$ for all $m \ge 1$ and an absolute (and effective) constant $C$

Finally, assume the generalized Elliott-Halberstam conjecture GEH[ 𝜗] (see Claim 12 below) for all 0<𝜗<1. Then,

(xii) H1≤6

(xiii) H2≤252

In the ‘Outline of the key ingredients’ section, we will describe the key propositions that will be combined together to prove the various components of Theorem 4. As with Theorem 1, the results in (vii)-(xiii) do not require EH[ 𝜗] or GEH[ 𝜗] for all 0<𝜗<1, but only for a single explicitly computable 𝜗 that is sufficiently close to 1.

Of these results, the bound in (xii) is perhaps the most interesting, as the parity problem [7] prohibits one from achieving any better bound on H1 than 6 from purely sieve-theoretic methods; we review this obstruction in the ‘The parity problem’ section. If one only assumes the Elliott-Halberstam conjecture EH[ 𝜗] instead of its generalization GEH[ 𝜗], we were unable to improve upon Maynard’s bound H1≤12; however, the parity obstruction does not exclude the possibility that one could achieve (xii) just assuming EH[ 𝜗] rather than GEH[ 𝜗], by some further refinement of the sieve-theoretic arguments (e.g. by finding a way to establish Theorem 20(ii) below using only EH[ 𝜗] instead of GEH[ 𝜗]).

The bounds (ii)-(vi) rely on the equidistribution results on primes established in our previous paper. However, the bound (i) uses only the Bombieri-Vinogradov theorem, and the remaining bounds (vii)-(xiii) of course use either the Elliott-Halberstam conjecture or a generalization thereof.

A variant of the proof of Theorem 4(xii), which we give in ‘Additional remarks’ section, also gives the following conditional ‘near miss’ to (a disjunction of) the twin prime conjecture and the even Goldbach conjecture:

Theorem 5(Disjunction).

Assume the generalized Elliott-Halberstam conjecture GEH[ 𝜗] for all 0<𝜗<1. Then, at least one of the following statements is true:

(a) (Twin prime conjecture) H1=2.

(b) (near-miss to even Goldbach conjecture) If n is a sufficiently large multiple of 6, then at least one of n and n−2 is expressible as the sum of two primes, similarly with n−2 replaced by n+2. (In particular, every sufficiently large even number lies within 2 of the sum of two primes.)

We remark that a disjunction in a similar spirit was obtained in [8], which established (prior to the appearance of Theorem 2) that either H1 was finite or that every interval [x,x+xε] contained the sum of two primes if x was sufficiently large depending on ε>0.

There are two main technical innovations in this paper. The first is a further generalization of the multidimensional Selberg sieve introduced by Maynard and Tao, in which the support of a certain cutoff function F is permitted to extend into a larger domain than was previously permitted (particularly under the assumption of the generalized Elliott-Halberstam conjecture). As in [5], this largely reduces the task of bounding H m to that of efficiently solving a certain multidimensional variational problem involving the cutoff function F. Our second main technical innovation is to obtain efficient numerical methods for solving this variational problem for small values of the dimension k, as well as sharpened asymptotics in the case of large values of k.

The methods of Maynard and Tao have been used in a number of subsequent applications [9]-[21]. The techniques in this paper could be used to obtain slight numerical improvements to such results, although we did not pursue these matters here.

1.1 Organization of the paper

The paper is organized as follows. After some notational preliminaries, we recall in the ‘Distribution estimates on arithmetic functions’ section the known (or conjectured) distributional estimates on primes in arithmetic progressions that we will need to prove Theorem 4. Then, in the section ‘Outline of the key ingredients’, we give the key propositions that will be combined together to establish this theorem. One of these propositions, Lemma 18, is an easy application of the pigeonhole principle. Two further propositions, Theorem 19 and Theorem 20, use the prime distribution results from the ‘Distribution estimates on arithmetic functions’ section to give asymptotics for certain sums involving sieve weights and the von Mangoldt function; they are established in the ‘Multidimensional Selberg sieves’ section. Theorems 22, 24, 26, and 28 use the asymptotics established in Theorems 19 and 20, in combination with Lemma 18, to give various criteria for bounding H m , which all involve finding sufficiently strong candidates for a variety of multidimensional variational problems; these theorems are proven in the ‘Reduction to a variational problem’ section. These variational problems are analysed in the asymptotic regime of large k in the ‘Asymptotic analysis’ section, and for small and medium k in the ‘The case of small and medium dimension’ section, with the results collected in Theorems 23, 25, 27, and 29. Combining these results with the previous propositions gives Theorem 16, which, when combined with the bounds on narrow admissible tuples in Theorem 17 that are established in the ‘Narrow admissible tuples’ section, will give Theorem 4. (See also Table 1 for more details of the logical dependencies between the key propositions.)

Table 1 Results used to prove various components of Theorem 16

Finally, in the ‘The parity problem’ section, we modify an argument of Selberg to show that the bound H1≤6 may not be improved using purely sieve-theoretic methods, and in the ‘Additional remarks’ section, we establish Theorem 5 and make some miscellaneous remarks.

1.2 Notation

The notation used here closely follows the notation in our previous paper.

We use $|E|$ to denote the cardinality of a finite set $E$, and $1_E$ to denote the indicator function of a set $E$; thus, $1_E(n) = 1$ when $n \in E$ and $1_E(n) = 0$ otherwise.

All sums and products will be over the natural numbers $\mathbb{N} := \{1,2,3,\ldots\}$ unless otherwise specified, with the exception of sums and products over the variable $p$, which will be understood to be over primes.

The following important asymptotic notation will be in use throughout the paper.

Definition 6(Asymptotic notation).

We use $x$ to denote a large real parameter, which one should think of as going off to infinity; in particular, we will implicitly assume that it is larger than any specified fixed constant. Some mathematical objects will be independent of $x$ and referred to as fixed; but unless otherwise specified, we allow all mathematical objects under consideration to depend on $x$ (or to vary within a range that depends on $x$, e.g. the summation parameter $n$ in the sum $\sum_{x \le n \le 2x} f(n)$). If $X$ and $Y$ are two quantities depending on $x$, we say that $X = O(Y)$ or $X \ll Y$ if one has $|X| \le C Y$ for some fixed $C$ (which we refer to as the implied constant), and $X = o(Y)$ if one has $|X| \le c(x) Y$ for some function $c(x)$ of $x$ (and of any fixed parameters present) that goes to zero as $x \to \infty$ (for each choice of fixed parameters). We use $X \prec Y$ to denote the estimate $X \ll x^{o(1)} Y$, $X \sim Y$ to denote the estimate $Y \ll X \ll Y$, and $X \approx Y$ to denote the estimate $Y \prec X \prec Y$. Finally, we say that a quantity $n$ is of polynomial size if one has $n = O(x^{O(1)})$.

If asymptotic notation such as $O(\cdot)$ or $\prec$ appears on the left-hand side of a statement, this means that the assertion holds true for any specific interpretation of that notation. For instance, the assertion $\sum_{n = O(N)} |\alpha(n)| \ll N$ means that for each fixed constant $C > 0$, one has $\sum_{|n| \le CN} |\alpha(n)| \ll N$.

If $q$ and $a$ are integers, we write $a | q$ if $a$ divides $q$. If $q$ is a natural number and $a \in \mathbb{Z}$, we use $a\ (q)$ to denote the residue class

$$a\ (q) := \{a + nq : n \in \mathbb{Z}\}$$

and let $\mathbb{Z}/q\mathbb{Z}$ denote the ring of all such residue classes $a\ (q)$. The notation $b = a\ (q)$ is synonymous with $b \equiv a\ (q)$. We use $(a,q)$ to denote the greatest common divisor of $a$ and $q$, and $[a,q]$ to denote the least common multiple. We also let

$$(\mathbb{Z}/q\mathbb{Z})^\times := \{a\ (q) : (a,q) = 1\}$$

denote the primitive residue classes of $\mathbb{Z}/q\mathbb{Z}$.

We use the following standard arithmetic functions:

(i) $\varphi(q) := |(\mathbb{Z}/q\mathbb{Z})^\times|$ denotes the Euler totient function of $q$.

(ii) $\tau(q) := \sum_{d | q} 1$ denotes the divisor function of $q$.

(iii) $\Lambda(q)$ denotes the von Mangoldt function of $q$; thus, $\Lambda(q) = \log p$ if $q$ is a power of a prime $p$, and $\Lambda(q) = 0$ otherwise.

(iv) $\theta(q)$ is defined to equal $\log q$ when $q$ is a prime, and $\theta(q) = 0$ otherwise.

(v) $\mu(q)$ denotes the Möbius function of $q$; thus, $\mu(q) = (-1)^k$ if $q$ is the product of $k$ distinct primes for some $k \ge 0$, and $\mu(q) = 0$ otherwise.

(vi) $\Omega(q)$ denotes the number of prime factors of $q$ (counting multiplicity).

We recall the elementary divisor bound

$$\tau(n) \prec 1$$
(1)

whenever $n \ll x^{O(1)}$, as well as the related estimate

$$\sum_{n \le x} \tau(n)^C \ll x \log^{O(1)} x$$
(2)

for any fixed C>0 (see, e.g. [Lemma 1.5]).

The Dirichlet convolution $\alpha \star \beta : \mathbb{N} \to \mathbb{C}$ of two arithmetic functions $\alpha, \beta : \mathbb{N} \to \mathbb{C}$ is defined in the usual fashion as

$$\alpha \star \beta(n) := \sum_{d | n} \alpha(d) \beta\left(\frac{n}{d}\right) = \sum_{ab = n} \alpha(a) \beta(b).$$
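To make these definitions concrete, the following short Python sketch (added here for illustration; not part of the original text) implements the standard arithmetic functions listed above and the Dirichlet convolution by direct divisor enumeration, intended only for small arguments.

```python
from math import gcd, log

def divisors(n):
    """All positive divisors of n (naive; fine for small n)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factors(n):
    """Prime factors of n with multiplicity."""
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p); n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def phi(q):   return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)   # Euler totient
def tau(q):   return len(divisors(q))                                     # divisor function
def Omega(q): return len(prime_factors(q))                                # prime factors with multiplicity
def mu(q):
    ps = prime_factors(q)
    return 0 if len(ps) != len(set(ps)) else (-1) ** len(ps)              # Moebius function
def Lambda(q):
    ps = set(prime_factors(q)) if q > 1 else set()
    return log(min(ps)) if len(ps) == 1 else 0.0                          # von Mangoldt function
def theta(q):
    return log(q) if q > 1 and prime_factors(q) == [q] else 0.0           # theta(q) = log q for q prime

def dirichlet(alpha, beta, n):
    """Dirichlet convolution (alpha * beta)(n) = sum_{d|n} alpha(d) beta(n/d)."""
    return sum(alpha(d) * beta(n // d) for d in divisors(n))

# sanity check of the classical identity Lambda * 1 = log
for n in (12, 30, 97):
    assert abs(dirichlet(Lambda, lambda m: 1, n) - log(n)) < 1e-9
```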

Distribution estimates on arithmetic functions

As mentioned in the introduction, a key ingredient in the Goldston-Pintz-Yıldırım approach to small gaps between primes comes from distributional estimates on the primes, or more precisely on the von Mangoldt function Λ, which serves as a proxy for the primes. In this work, we will also need to consider distributional estimates on more general arithmetic functions, although we will not prove any new such estimates in this paper, relying instead on estimates that are already in the literature.

More precisely, we will need averaged information on the following quantity:

Definition 7(Discrepancy).

For any function $\alpha : \mathbb{N} \to \mathbb{C}$ with finite support (that is, $\alpha$ is non-zero only on a finite set) and any primitive residue class $a\ (q)$, we define the (signed) discrepancy $\Delta(\alpha; a\ (q))$ to be the quantity

$$\Delta(\alpha; a\ (q)) := \sum_{n = a\ (q)} \alpha(n) - \frac{1}{\varphi(q)} \sum_{(n,q) = 1} \alpha(n).$$
(3)
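As a concrete illustration (code added here, not in the original), the following Python sketch computes the discrepancy (3) for a finitely supported function, here the toy choice $\alpha = \theta 1_{[x,2x]}$ with a small $x$; the residue class and modulus below are illustrative parameters only.

```python
from math import gcd, log

def is_prime(n):
    if n < 2: return False
    d = 2
    while d * d <= n:
        if n % d == 0: return False
        d += 1
    return True

def discrepancy(alpha, support, a, q):
    """Delta(alpha; a (q)) = sum_{n = a (q)} alpha(n) - (1/phi(q)) sum_{(n,q)=1} alpha(n)."""
    phi_q = sum(1 for r in range(1, q + 1) if gcd(r, q) == 1)
    s_class = sum(alpha(n) for n in support if n % q == a % q)
    s_coprime = sum(alpha(n) for n in support if gcd(n, q) == 1)
    return s_class - s_coprime / phi_q

x = 10**4
theta = lambda n: log(n) if is_prime(n) else 0.0   # theta restricted to [x, 2x] via the support
print(discrepancy(theta, range(x, 2 * x + 1), a=1, q=7))   # small compared with x, as EH-type bounds predict
```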

For any fixed 0<𝜗<1, let EH[ 𝜗] denote the following claim:

Claim 8(Elliott-Halberstam conjecture, EH[ 𝜗]).

If $Q \prec x^{\vartheta}$ and $A \ge 1$ is fixed, then

$$\sum_{q \le Q} \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \left|\Delta\left(\Lambda 1_{[x,2x]}; a\ (q)\right)\right| \ll x \log^{-A} x.$$
(4)

In [22], it was conjectured that EH[ 𝜗] held for all 0<𝜗<1. (The conjecture fails at the endpoint case 𝜗=1; see [23],[24] for a more precise statement.) The following classical result of Bombieri [25] and Vinogradov [26] remains the best partial result of the form EH[ 𝜗]:

Theorem 9(Bombieri-Vinogradov theorem).

[25],[26] EH[ 𝜗] holds for every fixed 0<𝜗<1/2.

In [2], it was shown that any estimate of the form EH[ 𝜗] with some fixed 𝜗>1/2 would imply the finiteness of H1. While such an estimate remains unproven, it was observed by Motohashi and Pintz [27] and by Zhang [3] that a certain weakened version of EH[ 𝜗] would still suffice for this purpose. More precisely (and following the notation of our previous paper), let ϖ,δ>0 be fixed, and let MPZ[ ϖ,δ] be the following claim:

Claim 10(Motohashi-Pintz-Zhang estimate, MPZ[ ϖ,δ]).

Let $I \subset [1, x^{\delta}]$ and $Q \prec x^{1/2 + 2\varpi}$. Let $P_I$ denote the product of all the primes in $I$, and let $\mathcal{S}_I$ denote the square-free natural numbers whose prime factors lie in $I$. If the residue class $a\ (P_I)$ is primitive (and is allowed to depend on $x$), and $A \ge 1$ is fixed, then

$$\sum_{\substack{q \le Q \\ q \in \mathcal{S}_I}} \left|\Delta\left(\Lambda 1_{[x,2x]}; a\ (q)\right)\right| \ll x \log^{-A} x,$$
(5)

where the implied constant depends only on the fixed quantities (A,ϖ,δ), but not on a.

It is clear that EH$[\tfrac{1}{2} + 2\varpi]$ implies MPZ[$\varpi,\delta$] whenever $\varpi, \delta \ge 0$. The first non-trivial estimate of the form MPZ[$\varpi,\delta$] was established by Zhang [3], who (essentially) obtained MPZ[$\varpi,\delta$] whenever $0 \le \varpi, \delta < \frac{1}{1168}$. In [Theorem 2.17], we improved this result to the following.

Theorem 11.

MPZ[ ϖ,δ] holds for every fixed ϖ,δ≥0 with 600ϖ+180δ<7.

In fact, a stronger result was established, in which the moduli $q$ were assumed to be densely divisible rather than smooth, but we will not exploit such improvements here. For our application, the most important thing is to get $\varpi$ as large as possible; in particular, Theorem 11 allows one to get $\varpi$ arbitrarily close to $\frac{7}{600} \approx 0.01167$.

In this paper, we will also study the following generalization of the Elliott-Halberstam conjecture:

Claim 12(Generalized Elliott-Halberstam conjecture, GEH[ 𝜗]).

Let $\varepsilon > 0$ and $A \ge 1$ be fixed. Let $N, M$ be quantities such that $x^{\varepsilon} \prec N, M \prec x^{1-\varepsilon}$ with $N M \sim x$, and let $\alpha, \beta : \mathbb{N} \to \mathbb{R}$ be sequences supported on $[N, 2N]$ and $[M, 2M]$, respectively, such that one has the pointwise bound

$$|\alpha(n)| \ll \tau(n)^{O(1)} \log^{O(1)} x; \qquad |\beta(m)| \ll \tau(m)^{O(1)} \log^{O(1)} x$$
(6)

for all natural numbers n,m. Suppose also that β obeys the Siegel-Walfisz type bound

$$\left|\Delta\left(\beta 1_{(\cdot, r) = 1}; a\ (q)\right)\right| \ll \tau(qr)^{O(1)} M \log^{-A} x$$
(7)

for any $q, r \ge 1$, any fixed $A$, and any primitive residue class $a\ (q)$. Then for any $Q \prec x^{\vartheta}$, we have

$$\sum_{q \le Q} \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \left|\Delta\left(\alpha \star \beta; a\ (q)\right)\right| \ll x \log^{-A} x.$$
(8)

In [28, Conjecture 1], it was essentially conjectured that GEH[$\vartheta$] was true for all $0 < \vartheta < 1$. This is stronger than the Elliott-Halberstam conjecture:

Proposition 13.

For any fixed 0<𝜗<1, GEH[ 𝜗] implies EH[ 𝜗].

Proof.

(Sketch) As this argument is standard, we give only a brief sketch. Let $A > 0$ be fixed. For $n \in [x, 2x]$, we have Vaughan’s identity [29]

$$\Lambda(n) = \mu_{<} \star L(n) - \mu_{<} \star \Lambda_{<} \star 1(n) + \mu_{\ge} \star \Lambda_{\ge} \star 1(n),$$

where $L(n) := \log n$ and

$$\Lambda_{\ge}(n) := \Lambda(n) 1_{n \ge x^{1/3}}, \qquad \Lambda_{<}(n) := \Lambda(n) 1_{n < x^{1/3}}$$
(9)
$$\mu_{\ge}(n) := \mu(n) 1_{n \ge x^{1/3}}, \qquad \mu_{<}(n) := \mu(n) 1_{n < x^{1/3}}.$$
(10)

By decomposing each of the functions $\mu_{<}$, $\mu_{\ge}$, $1$, $\Lambda_{<}$, $\Lambda_{\ge}$ into $O(\log^{A+1} x)$ functions supported on intervals of the form $[N, (1 + \log^{-A} x) N]$, discarding those contributions which meet the boundary of $[x, 2x]$ (cf. [3],[28],[30],[31]), and using GEH[$\vartheta$] (with $A$ replaced by a much larger fixed constant $A'$) to control all remaining contributions, we obtain the claim (using the Siegel-Walfisz theorem; see, e.g. [32, Satz 4] or [33, Th. 5.29]).
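The following Python sketch (an illustration added here, not from the original) numerically verifies Vaughan's identity in the stated range $n \in [x, 2x]$ for a small value of $x$, with the truncations from (9), (10); the brute-force Dirichlet convolutions are only feasible for small $x$.

```python
from math import log

X = 60
N = 2 * X
cut = X ** (1.0 / 3.0)

# von Mangoldt and Moebius functions on 1..N
Lam = [0.0] * (N + 1)
mu = [0] * (N + 1)
mu[1] = 1
for n in range(2, N + 1):
    ps, m, p = [], n, 2
    while p * p <= m:
        while m % p == 0:
            ps.append(p); m //= p
        p += 1
    if m > 1: ps.append(m)
    Lam[n] = log(ps[0]) if len(set(ps)) == 1 else 0.0
    mu[n] = 0 if len(ps) != len(set(ps)) else (-1) ** len(ps)

def conv(f, g):
    """Dirichlet convolution of two sequences indexed 0..N (index 0 unused)."""
    h = [0.0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

one    = [0.0] + [1.0] * N
L      = [0.0] + [log(n) for n in range(1, N + 1)]
mu_lt  = [mu[n] if 1 <= n < cut else 0 for n in range(N + 1)]
mu_ge  = [mu[n] if n >= cut else 0 for n in range(N + 1)]
Lam_lt = [Lam[n] if n < cut else 0.0 for n in range(N + 1)]
Lam_ge = [Lam[n] if n >= cut else 0.0 for n in range(N + 1)]

rhs = [a - b + c for a, b, c in zip(conv(mu_lt, L),
                                    conv(conv(mu_lt, Lam_lt), one),
                                    conv(conv(mu_ge, Lam_ge), one))]
# the identity holds exactly for n >= x^{1/3}, in particular on [x, 2x]
assert all(abs(Lam[n] - rhs[n]) < 1e-9 for n in range(X, 2 * X + 1))
```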

By modifying the proof of the Bombieri-Vinogradov theorem, Motohashi [34] established the following generalization of that theorem:

Theorem 14(Generalized Bombieri-Vinogradov theorem).

[34] GEH[ 𝜗] holds for every fixed 0<𝜗<1/2.

One could similarly describe a generalization of the Motohashi-Pintz-Zhang estimate MPZ[$\varpi,\delta$], but unfortunately, the arguments in [3] or Theorem 11 do not extend to this setting unless one is in the ‘Type I/Type II’ case in which $N, M$ are constrained to be somewhat close to $x^{1/2}$, or if one has ‘Type III’ structure to the convolution $\alpha \star \beta$, in the sense that it can be refactored as a convolution involving several ‘smooth’ sequences. In any event, our analysis would not be able to make much use of such incremental improvements to GEH[$\vartheta$], as we only use this hypothesis effectively in the case when $\vartheta$ is very close to 1. In particular, we will not directly use Theorem 14 in this paper.

Outline of the key ingredients

In this section, we describe the key subtheorems used in the proof of Theorem 4, with the proofs of these subtheorems mostly being deferred to later sections.

We begin with a weak version of the Dickson-Hardy-Littlewood prime tuples conjecture [1], which (following Pintz [35]) we refer to as DHL[$k$;$j$]. Recall that for any $k \in \mathbb{N}$, an admissible $k$-tuple is a tuple $\mathcal{H} = (h_1, \ldots, h_k)$ of $k$ increasing integers $h_1 < \ldots < h_k$ which avoids at least one residue class $a_p\ (p) := \{a_p + np : n \in \mathbb{Z}\}$ for every prime $p$. For instance, $(0,2,6)$ is an admissible 3-tuple, but $(0,2,4)$ is not.
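For illustration (code added here, not in the original), the following Python sketch tests admissibility directly from the definition: a tuple is admissible if and only if, for every prime $p \le k$, its entries miss at least one residue class modulo $p$ (for $p > k$ this is automatic, since $k$ residues cannot cover all $p$ classes).

```python
def is_admissible(H):
    """True if the tuple H avoids at least one residue class mod p for every prime p."""
    k = len(H)
    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[:2] = [False, False]
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [p for p, b in enumerate(sieve) if b]
    # only primes p <= k can possibly be covered completely by k residues
    for p in primes_up_to(k):
        if len({h % p for h in H}) == p:
            return False
    return True

assert is_admissible((0, 2, 6))        # admissible 3-tuple
assert not is_admissible((0, 2, 4))    # covers all residue classes mod 3
```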

For any $k \ge j \ge 2$, we let DHL[$k$;$j$] denote the following claim:

Claim 15(Weak Dickson-Hardy-Littlewood conjecture, DHL[ k;j]).

For any admissible $k$-tuple $\mathcal{H} = (h_1, \ldots, h_k)$, there exist infinitely many translates $n + \mathcal{H} = (n + h_1, \ldots, n + h_k)$ of $\mathcal{H}$ which contain at least $j$ primes.

The full Dickson-Hardy-Littlewood conjecture is then the assertion that DHL[$k$;$k$] holds for all $k \ge 2$. In our analysis, we will focus on the case when $j$ is much smaller than $k$; in fact, $j$ will be of the order of $\log k$.

For any $k \in \mathbb{N}$, let $H(k)$ denote the minimal diameter $h_k - h_1$ of an admissible $k$-tuple; thus for instance, $H(3) = 6$. It is clear that for any natural numbers $m \ge 1$ and $k \ge m+1$, the claim DHL[$k$;$m+1$] implies that $H_m \le H(k)$ (and the claim DHL[$k$;$k$] would imply that $H_{k-1} = H(k)$). We will therefore deduce Theorem 4 from a number of claims of the form DHL[$k$;$j$]. More precisely, we have

Theorem 16.

Unconditionally, we have the following claims:

(i) DHL[50;2].

(ii) DHL[35,410;3].

(iii) DHL[1,649,821;4].

(iv) DHL[75,845,707;5].

(v) DHL[3,473,955,908;6].

(vi) DHL[$k$;$m+1$] whenever $m \ge 1$ and $k \ge C \exp\left(\left(4 - \tfrac{28}{157}\right) m\right)$ for some sufficiently large absolute (and effective) constant $C$.

Assume the Elliott-Halberstam conjecture EH[ θ] for all 0<θ<1. Then, we have the following improvements:

(vii) DHL[54;3].

(viii) DHL[5,511;4].

(ix) DHL[41,588;5].

(x) DHL[309,661;6].

(xi) DHL[$k$;$m+1$] whenever $m \ge 1$ and $k \ge C \exp(2m)$ for some sufficiently large absolute (and effective) constant $C$.

Assume the generalized Elliott-Halberstam conjecture GEH[ θ] for all 0<θ<1. Then

(xii) DHL[3;2].

(xiii) DHL[51;3].

Theorem 4 then follows from Theorem 16 and the following bounds on H(k) (ordered by increasing value of k):

Theorem 17(Bounds on H(k)).

(xii) H(3)=6.

(i) H(50)=246.

(xiii) H(51)=252.

(vii) H(54)=270.

(viii) H(5,511)≤52,116.

(ii) H(35,410)≤398,130.

(ix) H(41,588)≤474,266.

(x) H(309,661)≤4,137,854.

(iii) H(1,649,821)≤24,797,814.

(iv) H(75,845,707)≤1,431,556,072.

(v) H(3,473,955,908)≤80,550,202,480.

(vi), (xi) In the asymptotic limit $k \to \infty$, one has $H(k) \le k \log k + k \log\log k - k + o(k)$, with the bounds on the decay rate $o(k)$ being effective.

We prove Theorem 17 in the ‘Narrow admissible tuples’ section. In the opposite direction, an application of the Brun-Titchmarsh theorem gives $H(k) \ge \left(\tfrac{1}{2} + o(1)\right) k \log k$ as $k \to \infty$ (see [4, §3.9] for this bound, as well as some slight refinements).
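As a rough illustration of how upper bounds on $H(k)$ such as those in Theorem 17 arise (the constructions in the ‘Narrow admissible tuples’ section are considerably more refined), the following Python sketch builds an admissible $k$-tuple from $k$ consecutive primes exceeding $k$, a classical construction whose diameter certifies an upper bound on $H(k)$; the diameters it produces are not optimal. The sieve limit used below is an assumption of the sketch.

```python
def primes(n_max):
    sieve = [True] * (n_max + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n_max ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, b in enumerate(sieve) if b]

def consecutive_prime_tuple(k):
    """k consecutive primes > k form an admissible k-tuple: no prime p <= k divides any
    entry (so class 0 mod p is avoided), and for p > k the k entries cannot cover all p classes."""
    ps = primes(50 * k * max(1, k.bit_length()))   # generous sieve limit, assumption for the sketch
    return [p for p in ps if p > k][:k]

H = consecutive_prime_tuple(50)
print(len(H), H[-1] - H[0])   # prints 50 260: an admissible 50-tuple of diameter 260,
                              # so H(50) <= 260; Theorem 17(i) gives the sharper value H(50) = 246
```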

The proof of Theorem 16 follows the Goldston-Pintz-Yıldırım strategy that was also used in all previous progress on this problem (e.g. [2],[3],[5],[27]), namely that of constructing a sieve function adapted to an admissible k-tuple with good properties. More precisely, we set

$$w := \log\log\log x$$

and

$$W := \prod_{p \le w} p,$$

and observe the crude bound

$$W \ll \log\log^{O(1)} x.$$
(11)

We have the following simple ‘pigeonhole principle’ criterion for DHL[k;m+1] (cf. [Lemma 4.1], though the normalization here is slightly different):

Lemma 18(Criterion for DHL).

Let k≥2 and m≥1 be fixed integers and define the normalization constant

$$B := \frac{\varphi(W)}{W} \log x.$$
(12)

Suppose that for each fixed admissible $k$-tuple $(h_1, \ldots, h_k)$ and each residue class $b\ (W)$ such that $b + h_i$ is coprime to $W$ for all $i = 1, \ldots, k$, one can find a non-negative weight function $\nu : \mathbb{N} \to \mathbb{R}^+$ and fixed quantities $\alpha > 0$ and $\beta_1, \ldots, \beta_k \ge 0$, such that one has the asymptotic upper bound

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \nu(n) \le (\alpha + o(1)) B^{-k} \frac{x}{W},$$
(13)

the asymptotic lower bound

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \nu(n) \theta(n + h_i) \ge (\beta_i - o(1)) B^{1-k} \frac{x}{\varphi(W)}$$
(14)

for all i=1,…,k, and the key inequality

$$\frac{\beta_1 + \cdots + \beta_k}{\alpha} > m.$$
(15)

Then, DHL[ k;m+1] holds.

Proof.

Let (h1,…,h k ) be a fixed admissible k-tuple. Since it is admissible, there is at least one residue class b (W) such that (b+h i ,W)=1 for all h i . For an arithmetic function ν as in the lemma, we consider the quantity

$$N := \sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \nu(n) \left(\sum_{i=1}^k \theta(n + h_i) - m \log 3x\right).$$

Combining (13) and (14), we obtain the lower bound

$$N \ge (\beta_1 + \cdots + \beta_k - o(1)) B^{1-k} \frac{x}{\varphi(W)} - m (\alpha + o(1)) B^{-k} \frac{x}{W} \log 3x.$$

From (12) and the crucial condition (15), it follows that N>0 if x is sufficiently large.

On the other hand, the sum

$$\sum_{i=1}^k \theta(n + h_i) - m \log 3x$$

can be positive only if n+h i is prime for at least m+1 indices i=1,…,k. We conclude that, for all sufficiently large x, there exists some integer n∈[ x,2x] such that n+h i is prime for at least m+1 values of i=1,…,k.

Since (h1,…,h k ) is an arbitrary admissible k-tuple, DHL[ k;m+1] follows.

The objective is then to construct non-negative weights $\nu$ whose associated ratio $\frac{\beta_1 + \cdots + \beta_k}{\alpha}$ has provable lower bounds that are as large as possible. Our sieve majorants will be a variant of the multidimensional Selberg sieves used in [5]. As with all Selberg sieves, the $\nu$ are constructed as the square of certain (signed) divisor sums. The divisor sums we will use will be finite linear combinations of products of ‘one-dimensional’ divisor sums. More precisely, for any fixed smooth compactly supported function $F : [0, +\infty) \to \mathbb{R}$, define the divisor sum $\lambda_F : \mathbb{N} \to \mathbb{R}$ by the formula

$$\lambda_F(n) := \sum_{d | n} \mu(d) F(\log_x d)$$
(16)

where $\log_x$ denotes the base-$x$ logarithm

$$\log_x n := \frac{\log n}{\log x}.$$
(17)

One should think of $\lambda_F$ as a smoothed-out version of the indicator function of numbers $n$ which are ‘almost prime’ in the sense that they have no prime factors less than $x^{\varepsilon}$ for some small fixed $\varepsilon > 0$ (see Proposition 31 below for a more rigorous version of this heuristic).
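To make (16) concrete, here is a small Python sketch (added for illustration) that evaluates $\lambda_F(n)$ by direct summation over divisors; the particular smooth cutoff $F$ supported on $[0,1]$ used below is an assumption of the sketch, not a choice made in the paper.

```python
from math import log, exp

def lam_F(n, x, F):
    """lambda_F(n) = sum_{d | n} mu(d) F(log_x d), computed naively."""
    def mu(m):
        ps, q, p = [], m, 2
        while p * p <= q:
            while q % p == 0:
                ps.append(p); q //= p
            p += 1
        if q > 1: ps.append(q)
        return 0 if len(ps) != len(set(ps)) else (-1) ** len(ps)
    total = 0.0
    for d in range(1, n + 1):
        if n % d == 0:
            total += mu(d) * F(log(d) / log(x))   # log_x d
    return total

def F(t):
    # illustrative smooth bump supported on [0, 1) with F(0) = 1 (assumption of the sketch)
    return exp(1.0) * exp(-1.0 / (1.0 - t * t)) if 0 <= t < 1 else 0.0

x = 10**4
print(lam_F(10007, x, F))                 # prime above x: equals F(0) = 1, as in (20)
print(lam_F(2 * 3 * 5 * 7, x, F))         # very smooth number: much smaller than F(0)
```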

The functions ν we will use will take the form

$$\nu(n) = \left(\sum_{j=1}^J c_j \lambda_{F_{j,1}}(n + h_1) \cdots \lambda_{F_{j,k}}(n + h_k)\right)^2$$
(18)

for some fixed natural number $J$, fixed coefficients $c_1, \ldots, c_J \in \mathbb{R}$, and fixed smooth compactly supported functions $F_{j,i} : [0, +\infty) \to \mathbb{R}$ with $j = 1, \ldots, J$ and $i = 1, \ldots, k$. (One can of course absorb the constant $c_j$ into one of the $F_{j,i}$ if one wishes.) Informally, $\nu$ is a smooth restriction to those $n$ for which $n + h_1, \ldots, n + h_k$ are all almost prime.

Clearly, ν is a (positive-definite) linear combination of functions of the form

$$n \mapsto \prod_{i=1}^k \lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i)$$

for various smooth functions $F_1, \ldots, F_k, G_1, \ldots, G_k : [0, +\infty) \to \mathbb{R}$. The sum appearing in (13) can thus be decomposed into linear combinations of sums of the form

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{i=1}^k \lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i).$$
(19)

Also, since from (16) we clearly have

$$\lambda_F(n) = F(0)$$
(20)

when $n \ge x$ is prime and $F$ is supported on $[0,1]$, the sum appearing in (14) can be similarly decomposed into linear combinations of sums of the form

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \theta(n + h_i) \prod_{\substack{1 \le i' \le k \\ i' \ne i}} \lambda_{F_{i'}}(n + h_{i'}) \lambda_{G_{i'}}(n + h_{i'}).$$
(21)

To estimate the sums (21), we use the following asymptotic, proven in the ‘Multidimensional Selberg sieves’ section. For each compactly supported $F : [0, +\infty) \to \mathbb{R}$, let

$$S(F) := \sup\{x \ge 0 : F(x) \ne 0\}$$
(22)

denote the upper range of the support of F (with the convention that S(0)=0).

Theorem 19(Asymptotic for prime sums).

Let $k \ge 2$ be fixed, let $(h_1, \ldots, h_k)$ be a fixed admissible $k$-tuple, and let $b\ (W)$ be such that $b + h_i$ is coprime to $W$ for each $i = 1, \ldots, k$. Let $1 \le i_0 \le k$ be fixed, and for each $1 \le i \le k$ distinct from $i_0$, let $F_i, G_i : [0, +\infty) \to \mathbb{R}$ be fixed smooth compactly supported functions. Assume one of the following hypotheses:

(i) (Elliott-Halberstam) There exists a fixed 0<𝜗<1 such that EH[ 𝜗] holds and such that

$$\sum_{\substack{1 \le i \le k \\ i \ne i_0}} (S(F_i) + S(G_i)) < \vartheta.$$
(23)

(ii) (Motohashi-Pintz-Zhang) There exist fixed $0 \le \varpi < 1/4$ and $\delta > 0$ such that MPZ[$\varpi,\delta$] holds and such that

$$\sum_{\substack{1 \le i \le k \\ i \ne i_0}} (S(F_i) + S(G_i)) < \frac{1}{2} + 2\varpi$$
(24)

and

$$\max_{\substack{1 \le i \le k \\ i \ne i_0}} \max\left(S(F_i), S(G_i)\right) < \delta.$$
(25)

Then, we have

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \theta(n + h_{i_0}) \prod_{\substack{1 \le i \le k \\ i \ne i_0}} \lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i) = (c + o(1)) B^{1-k} \frac{x}{\varphi(W)}$$
(26)

where

$$c := \prod_{\substack{1 \le i \le k \\ i \ne i_0}} \int_0^1 F_i'(t_i) G_i'(t_i)\, dt_i.$$

Here of course $F'$ denotes the derivative of $F$.

To estimate the sums (19), we use the following asymptotic, also proven in the ‘Multidimensional Selberg sieves’ section.

Theorem 20(Asymptotic for non-prime sums).

Let $k \ge 1$ be fixed, let $(h_1, \ldots, h_k)$ be a fixed admissible $k$-tuple, and let $b\ (W)$ be such that $b + h_i$ is coprime to $W$ for each $i = 1, \ldots, k$. For each fixed $1 \le i \le k$, let $F_i, G_i : [0, +\infty) \to \mathbb{R}$ be fixed smooth compactly supported functions. Assume one of the following hypotheses:

(i) (Trivial case) One has

$$\sum_{i=1}^k (S(F_i) + S(G_i)) < 1.$$
(27)

(ii) (Generalized Elliott-Halberstam) There exists a fixed 0<𝜗<1 and i0∈{1,…,k} such that GEH[ 𝜗] holds, and

$$\sum_{\substack{1 \le i \le k \\ i \ne i_0}} (S(F_i) + S(G_i)) < \vartheta.$$
(28)

Then, we have

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{i=1}^k \lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i) = (c + o(1)) B^{-k} \frac{x}{W},$$
(29)

where

$$c := \prod_{i=1}^k \int_0^1 F_i'(t_i) G_i'(t_i)\, dt_i.$$
(30)

A key point in (ii) is that no upper bound on $S(F_{i_0})$ or $S(G_{i_0})$ is required (although, as we will see in the ‘The generalized Elliott-Halberstam case’ section, the result is a little easier to prove when one has $S(F_{i_0}) + S(G_{i_0}) < 1$). This flexibility in the $F_{i_0}, G_{i_0}$ functions will be particularly crucial to obtaining part (xii) of Theorem 16 and Theorem 4.

Remark 21.

Theorems 19 and 20 can be viewed as probabilistic assertions of the following form: if $n$ is chosen uniformly at random from the set $\{x \le n \le 2x : n = b\ (W)\}$, then the random variables $\theta(n + h_i)$ and $\lambda_{F_j}(n + h_j) \lambda_{G_j}(n + h_j)$ for $i, j = 1, \ldots, k$ have mean $(1 + o(1)) \frac{W}{\varphi(W)}$ and $\left(\int_0^1 F_j'(t) G_j'(t)\, dt + o(1)\right) B^{-1}$, respectively, and furthermore, these random variables enjoy a limited amount of independence, except for the fact (as can be seen from (20)) that $\theta(n + h_i)$ and $\lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i)$ are highly correlated. Note though that we do not have asymptotics for any sum which involves two or more factors of $\theta$, as such estimates are of a difficulty at least as great as that of the twin prime conjecture (which is equivalent to the divergence of the sum $\sum_n \theta(n)\theta(n+2)$).
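The following Python sketch (illustrative, not from the original) checks numerically, for a small $x$ and the toy modulus $W = 2\cdot 3\cdot 5$, that the average of $\theta(n + h)$ over $n \equiv b\ (W)$, $x \le n \le 2x$, is close to $W/\varphi(W)$, as asserted in Remark 21; the specific numerical choices are assumptions of the sketch.

```python
from math import log, gcd

def is_prime(n):
    if n < 2: return False
    d = 2
    while d * d <= n:
        if n % d == 0: return False
        d += 1
    return True

x = 10**5
W = 2 * 3 * 5            # toy stand-in for W (in the paper W is the product of primes up to log log log x)
b, h = 7, 4              # chosen so that b + h is coprime to W
assert gcd(b + h, W) == 1

sample = [n for n in range(x, 2 * x + 1) if n % W == b % W]
mean_theta = sum(log(n + h) for n in sample if is_prime(n + h)) / len(sample)
phi_W = sum(1 for a in range(1, W + 1) if gcd(a, W) == 1)    # = 8
print(mean_theta, W / phi_W)   # both should be close to 30/8 = 3.75
```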

Theorems 19 and 20 may be combined with Lemma 18 to reduce the task of establishing estimates of the form DHL[ k;m+1] to that of establishing certain variational problems. For instance, in the ‘Proof of Theorem 22’ section, we reprove the following result of Maynard ([5], Proposition 4.2]):

Theorem 22(Sieving on the standard simplex).

Let $k \ge 2$ and $m \ge 1$ be fixed integers. For any fixed compactly supported square-integrable function $F : [0, +\infty)^k \to \mathbb{R}$, define the functionals

$$I(F) := \int_{[0,+\infty)^k} F(t_1, \ldots, t_k)^2\, dt_1 \cdots dt_k$$
(31)

and

$$J_i(F) := \int_{[0,+\infty)^{k-1}} \left(\int_0^{\infty} F(t_1, \ldots, t_k)\, dt_i\right)^2 dt_1 \cdots dt_{i-1}\, dt_{i+1} \cdots dt_k$$
(32)

for i=1,…,k, and let M k be the supremum

$$M_k := \sup \frac{\sum_{i=1}^k J_i(F)}{I(F)}$$
(33)

over all square integrable functions F that are supported on the simplex

$$\mathcal{R}_k := \left\{(t_1, \ldots, t_k) \in [0,+\infty)^k : t_1 + \cdots + t_k \le 1\right\}$$

and are not identically zero (up to almost everywhere equivalence, of course). Suppose that there is a fixed 0<𝜗<1 such that EH[ 𝜗] holds and such that

$$M_k > \frac{2m}{\vartheta}.$$

Then, DHL[ k;m+1] holds.
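To illustrate the variational quantity (33) (an illustration added here; the optimized cutoffs used in the paper are found by much more careful methods), the following Python sketch estimates the ratio $\sum_i J_i(F)/I(F)$ by Monte Carlo integration for the simple symmetric trial function $F(t_1, \ldots, t_k) = 1 - t_1 - \cdots - t_k$ on $\mathcal{R}_k$; any non-zero trial function of this kind gives a lower bound for $M_k$.

```python
import random

def mk_ratio_estimate(k, trials=200_000, seed=0):
    """Monte Carlo estimate of (sum_i J_i(F)) / I(F) for F(t) = 1 - t_1 - ... - t_k on R_k."""
    rng = random.Random(seed)
    I = 0.0   # integral over [0,1]^k of F^2 restricted to the simplex
    J = 0.0   # integral over [0,1]^(k-1) of (inner t_k-integral of F)^2 on the (k-1)-simplex
    for _ in range(trials):
        t = [rng.random() for _ in range(k)]
        if sum(t) <= 1.0:
            I += (1.0 - sum(t)) ** 2
        u = [rng.random() for _ in range(k - 1)]
        su = sum(u)
        if su <= 1.0:
            # inner integral: int_0^{1-su} (1 - su - t_k) dt_k = (1 - su)^2 / 2
            J += ((1.0 - su) ** 2 / 2.0) ** 2
    return k * (J / trials) / (I / trials)   # by symmetry J_1 = ... = J_k

for k in (2, 3, 5):
    print(k, round(mk_ratio_estimate(k), 3))   # crude lower bounds for M_k from this trial function
```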

Parts (vii)-(xi) of Theorem 16 (and hence Theorem 4) are then immediate from the following results, proven in the ‘Asymptotic analysis’ and ‘The case of small and medium dimension’ sections, and ordered by increasing value of k:

Theorem 23(Lower bounds on M k ).

(vii) M54>4.00238.

(viii) M5,511>6.

(ix) M41,588>8.

(x) M309,661>10.

(xi) One has $M_k \ge \log k - C$ for all $k \ge C$, where $C$ is an absolute (and effective) constant.

For the sake of comparison, in [5, Proposition 4.3], it was shown that $M_5 > 2$, $M_{105} > 4$, and $M_k \ge \log k - 2\log\log k - 2$ for all sufficiently large $k$. As remarked in that paper, the sieves used on the bounded gap problem prior to the work in [5] would essentially correspond, in this notation, to the choice of functions $F$ of the special form $F(t_1, \ldots, t_k) := f(t_1 + \cdots + t_k)$, which severely limits the size of the ratio in (33) (in particular, the analogue of $M_k$ in this special case cannot exceed 4, as shown in [36]).

In the converse direction, in Corollary 37, we will also show the upper bound $M_k \le \frac{k}{k-1} \log k$ for all $k \ge 2$, which shows in particular that the bounds in (vii) and (xi) of the above theorem cannot be significantly improved. We remark that Theorem 23(vii) and the Bombieri-Vinogradov theorem also give a weaker version DHL[54;2] of Theorem 16(i).

We also have a variant of Theorem 22 which can accept inputs of the form MPZ[ ϖ,δ]:

Theorem 24(Sieving on a truncated simplex).

Let $k \ge 2$ and $m \ge 1$ be fixed integers. Let $0 < \varpi < 1/4$ and $0 < \delta < 1/2$ be such that MPZ[$\varpi,\delta$] holds. For any $\alpha > 0$, let $M_k^{[\alpha]}$ be defined as in (33), but where the supremum now ranges over all square-integrable $F$ supported in the truncated simplex

$$\left\{(t_1, \ldots, t_k) \in [0,\alpha]^k : t_1 + \cdots + t_k \le 1\right\}$$
(34)

and are not identically zero. If

$$M_k^{\left[\frac{\delta}{1/4 + \varpi}\right]} > \frac{m}{1/4 + \varpi},$$

then DHL[ k;m+1] holds.

In the ‘Asymptotic analysis’ section, we will establish the following variant of Theorem 23, which when combined with Theorem 11, allows one to use Theorem 24 to establish parts (ii)-(vi) of Theorem 16 (and hence Theorem 4):

Theorem 25(Lower bounds on M k [ α ] ).

(ii) There exist $\delta, \varpi > 0$ with $600\varpi + 180\delta < 7$ and $M_{35{,}410}^{\left[\frac{\delta}{1/4+\varpi}\right]} > \frac{2}{1/4+\varpi}$.

(iii) There exist $\delta, \varpi > 0$ with $600\varpi + 180\delta < 7$ and $M_{1{,}649{,}821}^{\left[\frac{\delta}{1/4+\varpi}\right]} > \frac{3}{1/4+\varpi}$.

(iv) There exist $\delta, \varpi > 0$ with $600\varpi + 180\delta < 7$ and $M_{75{,}845{,}707}^{\left[\frac{\delta}{1/4+\varpi}\right]} > \frac{4}{1/4+\varpi}$.

(v) There exist $\delta, \varpi > 0$ with $600\varpi + 180\delta < 7$ and $M_{3{,}473{,}955{,}908}^{\left[\frac{\delta}{1/4+\varpi}\right]} > \frac{5}{1/4+\varpi}$.

(vi) For all $k \ge C$, there exist $\delta, \varpi > 0$ with $600\varpi + 180\delta < 7$, $\varpi \ge \frac{7}{600} - \frac{C}{\log k}$, and $M_k^{\left[\frac{\delta}{1/4+\varpi}\right]} \ge \log k - C$, for some absolute (and effective) constant $C$.

The implication is clear for (ii)-(v). For (vi), observe that from Theorem 25(vi), Theorem 11, and Theorem 24, we see that DHL[$k$;$m+1$] holds whenever $k$ is sufficiently large and

$$m \le (\log k - C)\left(\frac{1}{4} + \frac{7}{600} - \frac{C}{\log k}\right),$$

which is in particular implied by

$$m \le \frac{\log k}{4 - \frac{28}{157}} - C'$$

for some absolute constant $C'$, giving Theorem 16(vi).
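The numerology in the last step can be checked directly (a small verification added here): $\tfrac14 + \tfrac{7}{600} = \tfrac{157}{600}$, whose reciprocal is $\tfrac{600}{157} = 4 - \tfrac{28}{157}$.

```python
from fractions import Fraction

assert Fraction(1, 4) + Fraction(7, 600) == Fraction(157, 600)
assert 1 / Fraction(157, 600) == 4 - Fraction(28, 157)
```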

Now we give a more flexible variant of Theorem 22, in which the support of F is enlarged, at the cost of reducing the range of integration of the J i .

Theorem 26(Sieving on an epsilon-enlarged simplex).

Let $k \ge 2$ and $m \ge 1$ be fixed integers, and let $0 < \varepsilon < 1$ be fixed also. For any fixed compactly supported square-integrable function $F : [0, +\infty)^k \to \mathbb{R}$, define the functionals

$$J_{i,1-\varepsilon}(F) := \int_{(1-\varepsilon)\cdot\mathcal{R}_{k-1}} \left(\int_0^{\infty} F(t_1, \ldots, t_k)\, dt_i\right)^2 dt_1 \cdots dt_{i-1}\, dt_{i+1} \cdots dt_k$$

for i=1,…,k, and let Mk,ε be the supremum

$$M_{k,\varepsilon} := \sup \frac{\sum_{i=1}^k J_{i,1-\varepsilon}(F)}{I(F)}$$

over all square-integrable functions F that are supported on the simplex

$$(1+\varepsilon)\cdot\mathcal{R}_k = \left\{(t_1, \ldots, t_k) \in [0,+\infty)^k : t_1 + \cdots + t_k \le 1 + \varepsilon\right\}$$

and are not identically zero. Suppose that there is a fixed 0<𝜗<1, such that one of the following two hypotheses hold:

(i) EH[$\vartheta$] holds, and $1 + \varepsilon < \frac{1}{\vartheta}$.

(ii) GEH[$\vartheta$] holds, and $\varepsilon < \frac{1}{k-1}$.

If

$$M_{k,\varepsilon} > \frac{2m}{\vartheta},$$

then DHL[ k;m+1] holds.

We prove this theorem in the ‘Proof of Theorem 26’ section. We remark that due to the continuity of Mk,ε in ε, the strict inequalities in (i) and (ii) of this theorem may be replaced by non-strict inequalities. Parts (i) and (xiii) of Theorem 16, and a weaker version DHL[ 4;2] of part (xii), then follow from Theorem 9 and the following computations, proven in the ‘Bounding Mk,ε for medium k’ and ‘Bounding M4,ε’ sections:

Theorem 27(Lower bounds on Mk,ε).

(i) M50,1/25>4.0043.

(xii’) M4,0.168>2.00558.

(xiii) M51,1/50>4.00156.

We remark that computations in the proof of Theorem 27(xii’) are simple enough that the bound may be checked by hand, without use of a computer. The computations used to establish the full strength of Theorem 16(xii) are however significantly more complicated.

In fact, we may enlarge the support of F further. We give a version corresponding to part (ii) of Theorem 26; there is also a version corresponding to part (i), but we will not give it here as we will not have any use for it.

Theorem 28(Going beyond the epsilon enlargement).

Let $k \ge 2$ and $m \ge 1$ be fixed integers, let $0 < \vartheta < 1$ be a fixed quantity such that GEH[$\vartheta$] holds, and let $0 < \varepsilon < \frac{1}{k-1}$ be fixed also. Suppose that there is a fixed non-zero square-integrable function $F : [0, +\infty)^k \to \mathbb{R}$ supported in $\frac{k}{k-1}\cdot\mathcal{R}_k$, such that for $i = 1, \ldots, k$, one has the vanishing marginal condition

$$\int_0^{\infty} F(t_1, \ldots, t_k)\, dt_i = 0$$
(35)

whenever t1,…,ti−1,ti+1,…,t k ≥0 are such that

$$t_1 + \cdots + t_{i-1} + t_{i+1} + \cdots + t_k > 1 + \varepsilon.$$

Suppose that we also have the inequality

$$\frac{\sum_{i=1}^k J_{i,1-\varepsilon}(F)}{I(F)} > \frac{2m}{\vartheta}.$$

Then DHL[ k;m+1] holds.

This theorem is proven in the ‘Proof of Theorem 28’ section. Theorem 16(xii) is then an immediate consequence of Theorem 28 and the following numerical fact, established in the ‘Three-dimensional cutoffs’ section.

Theorem 29(A piecewise polynomial cutoff).

Set $\varepsilon := \frac{1}{4}$. Then, there exists a piecewise polynomial function $F : [0, +\infty)^3 \to \mathbb{R}$ supported on the simplex

$$\frac{3}{2}\cdot\mathcal{R}_3 = \left\{(t_1, t_2, t_3) \in [0,+\infty)^3 : t_1 + t_2 + t_3 \le \frac{3}{2}\right\}$$

and symmetric in the t1,t2,t3 variables, such that F is not identically zero and obeys the vanishing marginal condition

$$\int_0^{\infty} F(t_1, t_2, t_3)\, dt_3 = 0$$

whenever t1,t2≥0 with t1+t2>1+ε and such that

$$\frac{3 \displaystyle\int_{t_1 + t_2 \le 1 - \varepsilon} \left(\int_0^{\infty} F(t_1, t_2, t_3)\, dt_3\right)^2 dt_1\, dt_2}{\displaystyle\int_{[0,\infty)^3} F(t_1, t_2, t_3)^2\, dt_1\, dt_2\, dt_3} > 2.$$

There are several other ways to combine Theorems 19 and 20 with equidistribution theorems on the primes to obtain results of the form DHL[k;m+1], but all of our attempts to do so either did not improve the numerology or else were numerically infeasible to implement.

Multidimensional Selberg sieves

In this section, we prove Theorems 19 and 20. A key asymptotic used in both theorems is the following:

Lemma 30(Asymptotic).

Let $k \ge 1$ be a fixed integer, and let $N$ be a natural number coprime to $W$ with $\log N = O(\log^{O(1)} x)$. Let $F_1, \ldots, F_k, G_1, \ldots, G_k : [0, +\infty) \to \mathbb{R}$ be fixed smooth compactly supported functions. Then,

$$\sum_{\substack{d_1, \ldots, d_k, d'_1, \ldots, d'_k \\ [d_1,d'_1], \ldots, [d_k,d'_k], W, N \text{ coprime}}} \prod_{j=1}^k \frac{\mu(d_j)\mu(d'_j) F_j(\log_x d_j) G_j(\log_x d'_j)}{[d_j, d'_j]} = (c + o(1)) \frac{B^{-k} N^k}{\varphi(N)^k}$$
(36)

where B was defined in (12), and

$$c := \prod_{j=1}^k \int_0^{\infty} F_j'(t_j) G_j'(t_j)\, dt_j.$$

The same claim holds if the denominators $[d_j, d'_j]$ are replaced by $\varphi([d_j, d'_j])$.

Such asymptotics are standard in the literature (see, e.g. [37] for some similar computations). In older literature, it is common to establish these asymptotics via contour integration (e.g. via Perron’s formula), but we will use the Fourier analytic approach here. Of course, both approaches ultimately use the same input, namely the simple pole of the Riemann zeta function at s=1.

Proof.

We begin with the first claim. For $j = 1, \ldots, k$, the functions $t \mapsto e^t F_j(t)$, $t \mapsto e^t G_j(t)$ may be extended to smooth compactly supported functions on all of $\mathbb{R}$, and so we have Fourier expansions

$$e^t F_j(t) = \int_{\mathbb{R}} e^{-it\xi} f_j(\xi)\, d\xi$$
(37)

and

$$e^t G_j(t) = \int_{\mathbb{R}} e^{-it\xi} g_j(\xi)\, d\xi$$

for some fixed functions $f_j, g_j : \mathbb{R} \to \mathbb{C}$ that are smooth and rapidly decreasing in the sense that $f_j(\xi), g_j(\xi) = O((1+|\xi|)^{-A})$ for any fixed $A > 0$ and all $\xi \in \mathbb{R}$ (here the implied constant is independent of $\xi$ and depends only on $A$).

We may thus write

$$F_j(\log_x d_j) = \int_{\mathbb{R}} \frac{f_j(\xi_j)}{d_j^{\frac{1 + i\xi_j}{\log x}}}\, d\xi_j$$

and

$$G_j(\log_x d'_j) = \int_{\mathbb{R}} \frac{g_j(\xi'_j)}{(d'_j)^{\frac{1 + i\xi'_j}{\log x}}}\, d\xi'_j$$

for all $d_j, d'_j \ge 1$. We note that

$$\sum_{d_j, d'_j} \frac{|\mu(d_j)\mu(d'_j)|}{[d_j,d'_j]\, d_j^{1/\log x} (d'_j)^{1/\log x}} = \prod_p \left(1 + \frac{2}{p^{1 + 1/\log x}} + \frac{1}{p^{1 + 2/\log x}}\right) \ll \exp(O(\log\log x)).$$

Therefore, if we substitute the Fourier expansions into the left-hand side of (36), the resulting expression is absolutely convergent. Thus, we can apply Fubini’s theorem, and the left-hand side of (36) can thus be rewritten as

$$\int_{\mathbb{R}^{2k}} K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) \prod_{j=1}^k f_j(\xi_j) g_j(\xi'_j)\, d\xi_j\, d\xi'_j,$$
(38)

where

$$K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) := \sum_{\substack{d_1, \ldots, d_k, d'_1, \ldots, d'_k \\ [d_1,d'_1], \ldots, [d_k,d'_k], W, N \text{ coprime}}} \prod_{j=1}^k \frac{\mu(d_j)\mu(d'_j)}{[d_j,d'_j]\, d_j^{\frac{1+i\xi_j}{\log x}} (d'_j)^{\frac{1+i\xi'_j}{\log x}}}.$$

This latter expression factorizes as an Euler product

$$K = \prod_{p \nmid WN} K_p,$$

where the local factors K p are given by

$$K_p(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) := 1 + \frac{1}{p} \sum_{\substack{d_1, \ldots, d_k, d'_1, \ldots, d'_k \\ [d_1, \ldots, d_k, d'_1, \ldots, d'_k] = p \\ [d_1,d'_1], \ldots, [d_k,d'_k] \text{ coprime}}} \prod_{j=1}^k \frac{\mu(d_j)\mu(d'_j)}{d_j^{\frac{1+i\xi_j}{\log x}} (d'_j)^{\frac{1+i\xi'_j}{\log x}}}.$$
(39)

We can estimate each Euler factor as

$$K_p(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) = \left(1 + O\left(\frac{1}{p^2}\right)\right) \prod_{j=1}^k \frac{\left(1 - \frac{1}{p^{1 + \frac{1+i\xi_j}{\log x}}}\right)\left(1 - \frac{1}{p^{1 + \frac{1+i\xi'_j}{\log x}}}\right)}{1 - \frac{1}{p^{1 + \frac{2+i\xi_j+i\xi'_j}{\log x}}}}.$$
(40)

Since

$$\prod_{p : p > w} \left(1 + O\left(\frac{1}{p^2}\right)\right) = 1 + o(1),$$

we have

$$K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) = (1 + o(1)) \prod_{j=1}^k \frac{\zeta_{WN}\left(1 + \frac{2 + i\xi_j + i\xi'_j}{\log x}\right)}{\zeta_{WN}\left(1 + \frac{1 + i\xi_j}{\log x}\right) \zeta_{WN}\left(1 + \frac{1 + i\xi'_j}{\log x}\right)}$$

where the modified zeta function ζ WN is defined by the formula

$$\zeta_{WN}(s) := \prod_{p \nmid WN} \left(1 - \frac{1}{p^s}\right)^{-1}$$

for ℜ(s)>1.

For $\Re(s) \ge 1 + \frac{1}{\log x}$, we have the crude bounds

$$|\zeta_{WN}(s)|, |\zeta_{WN}(s)|^{-1} \le \prod_p \left(1 + \frac{1}{p^{1 + 1/\log x}} + O\left(\frac{1}{p^2}\right)\right) \ll \exp\left(\sum_p \frac{1}{p^{1 + 1/\log x}}\right) \ll \exp(\log\log x + O(1)) \ll \log x.$$

Thus,

$$K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) = O\left(\log^{3k} x\right).$$

Combining this with the rapid decrease of $f_j, g_j$, we see that the contribution to (38) outside of the cube $\max(|\xi_1|, \ldots, |\xi_k|, |\xi'_1|, \ldots, |\xi'_k|) \le \sqrt{\log x}$ (say) is negligible. Thus, it will suffice to show that

$$\int_{-\sqrt{\log x}}^{\sqrt{\log x}} \cdots \int_{-\sqrt{\log x}}^{\sqrt{\log x}} K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) \prod_{j=1}^k f_j(\xi_j) g_j(\xi'_j)\, d\xi_j\, d\xi'_j = (c + o(1)) \frac{B^{-k} N^k}{\varphi(N)^k}.$$

When $|\xi_j| \le \sqrt{\log x}$, we see from the simple pole of the Riemann zeta function $\zeta(s) = \prod_p \left(1 - \frac{1}{p^s}\right)^{-1}$ at $s = 1$ that

$$\zeta\left(1 + \frac{1 + i\xi_j}{\log x}\right) = (1 + o(1)) \frac{\log x}{1 + i\xi_j}.$$

For $-\sqrt{\log x} \le \xi_j \le \sqrt{\log x}$, we see that

$$1 - \frac{1}{p^{1 + \frac{1 + i\xi_j}{\log x}}} = 1 - \frac{1}{p} + O\left(\frac{\log p}{p\sqrt{\log x}}\right).$$

Since $\log WN \ll \log^{O(1)} x$, this gives

$$\prod_{p | WN} \left(1 - \frac{1}{p^{1 + \frac{1 + i\xi_j}{\log x}}}\right) = \frac{\varphi(WN)}{WN} \exp\left(O\left(\sum_{p | WN} \frac{\log p}{p\sqrt{\log x}}\right)\right) = (1 + o(1)) \frac{\varphi(WN)}{WN},$$

since the sum is maximized when $WN$ is composed only of primes $p \ll \log^{O(1)} x$. Thus,

$$\zeta_{WN}\left(1 + \frac{1 + i\xi_j}{\log x}\right) = (1 + o(1)) \frac{B \varphi(N)}{(1 + i\xi_j) N},$$

and similarly with $1 + i\xi_j$ replaced by $1 + i\xi'_j$ or $2 + i\xi_j + i\xi'_j$. We conclude that

$$K(\xi_1, \ldots, \xi_k, \xi'_1, \ldots, \xi'_k) = (1 + o(1)) \frac{B^{-k} N^k}{\varphi(N)^k} \prod_{j=1}^k \frac{(1 + i\xi_j)(1 + i\xi'_j)}{2 + i\xi_j + i\xi'_j}.$$
(41)

Therefore, it will suffice to show that

$$\int_{\mathbb{R}^{2k}} \prod_{j=1}^k \frac{(1 + i\xi_j)(1 + i\xi'_j)}{2 + i\xi_j + i\xi'_j} f_j(\xi_j) g_j(\xi'_j)\, d\xi_j\, d\xi'_j = c,$$

since the errors caused by the $1 + o(1)$ multiplicative factor in (41) or the truncation $|\xi_j|, |\xi'_j| \le \sqrt{\log x}$ can be seen to be negligible using the rapid decay of $f_j, g_j$. By Fubini’s theorem, it suffices to show that

$$\int_{\mathbb{R}} \int_{\mathbb{R}} \frac{(1 + i\xi)(1 + i\xi')}{2 + i\xi + i\xi'} f_j(\xi) g_j(\xi')\, d\xi\, d\xi' = \int_0^{+\infty} F_j'(t) G_j'(t)\, dt$$

for each $j = 1, \ldots, k$. But from dividing (37) by $e^t$ and differentiating under the integral sign, we have

$$F_j'(t) = -\int_{\mathbb{R}} (1 + i\xi) e^{-t(1 + i\xi)} f_j(\xi)\, d\xi,$$

and similarly for $G_j'$; since $\int_0^{\infty} e^{-t(2 + i\xi + i\xi')}\, dt = \frac{1}{2 + i\xi + i\xi'}$, the claim then follows from Fubini’s theorem.

Finally, suppose that we replace the denominators $[d_j, d'_j]$ with $\varphi([d_j, d'_j])$. An inspection of the above argument shows that the only change that occurs is that the $\frac{1}{p}$ term in (39) is replaced by $\frac{1}{p-1}$; but this modification may be absorbed into the $1 + O\left(\frac{1}{p^2}\right)$ factor in (40), and the rest of the argument continues as before.

4.1 The trivial case

We can now prove the easiest case of the two theorems, namely case (i) of Theorem 20; a closely related estimate also appears in [5, Lemma 6.2]. We may assume that $x$ is sufficiently large depending on all fixed quantities. By (16), the left-hand side of (29) may be expanded as

$$\sum_{d_1, \ldots, d_k, d'_1, \ldots, d'_k} \left(\prod_{i=1}^k \mu(d_i)\mu(d'_i) F_i(\log_x d_i) G_i(\log_x d'_i)\right) S_{d_1, \ldots, d_k, d'_1, \ldots, d'_k}$$
(42)

where

$$S_{d_1, \ldots, d_k, d'_1, \ldots, d'_k} := \sum_{\substack{x \le n \le 2x \\ n = b\ (W) \\ n + h_i = 0\ ([d_i,d'_i])\ \forall i}} 1.$$

By hypothesis, $b + h_i$ is coprime to $W$ for all $i = 1, \ldots, k$, and $|h_i - h_j| < w$ for all distinct $i, j$. Thus, $S_{d_1, \ldots, d_k, d'_1, \ldots, d'_k}$ vanishes unless the $[d_i, d'_i]$ are coprime to each other and to $W$. In this case, $S_{d_1, \ldots, d_k, d'_1, \ldots, d'_k}$ is summing the constant function 1 over an arithmetic progression in $[x, 2x]$ of spacing $W[d_1,d'_1] \cdots [d_k,d'_k]$, and so

$$S_{d_1, \ldots, d_k, d'_1, \ldots, d'_k} = \frac{x}{W[d_1,d'_1] \cdots [d_k,d'_k]} + O(1).$$

By Lemma 30, the contribution of the main term $\frac{x}{W[d_1,d'_1] \cdots [d_k,d'_k]}$ to (29) is $(c + o(1)) B^{-k} \frac{x}{W}$; note that the restriction of the integrals in (30) to $[0,1]$ instead of $[0,+\infty)$ is harmless since $S(F_i), S(G_i) < 1$ for all $i$. Meanwhile, the contribution of the $O(1)$ error is then bounded by

$$O\left(\sum_{d_1, \ldots, d_k, d'_1, \ldots, d'_k} \prod_{i=1}^k |F_i(\log_x d_i)|\, |G_i(\log_x d'_i)|\right).$$

By the hypothesis in Theorem 20(i), we see that for $d_1, \ldots, d_k, d'_1, \ldots, d'_k$ contributing a non-zero term here, one has

$$[d_1,d'_1] \cdots [d_k,d'_k] \ll x^{1-\varepsilon}$$

for some fixed $\varepsilon > 0$. From the divisor bound (1), we see that each choice of $[d_1,d'_1] \cdots [d_k,d'_k]$ arises from $\prec 1$ choices of $d_1, \ldots, d_k, d'_1, \ldots, d'_k$. We conclude that the net contribution of the $O(1)$ error to (29) is $\prec x^{1-\varepsilon}$, and the claim follows.

4.2 The Elliott-Halberstam case

Now we show case (i) of Theorem 19. For the sake of notation, we take i0=k, as the other cases are similar. We use (16) to rewrite the left-hand side of (26) as

$$\sum_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} \left(\prod_{i=1}^{k-1} \mu(d_i)\mu(d'_i) F_i(\log_x d_i) G_i(\log_x d'_i)\right) \tilde S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}}$$
(43)

where

$$\tilde S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} := \sum_{\substack{x \le n \le 2x \\ n = b\ (W) \\ n + h_i = 0\ ([d_i,d'_i])\ \forall i = 1, \ldots, k-1}} \theta(n + h_k).$$

As in the previous case, $\tilde S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}}$ vanishes unless the $[d_i, d'_i]$ are coprime to each other and to $W$, and so the summand in (43) vanishes unless the modulus $q_{W, d_1, \ldots, d_{k-1}}$ defined by

$$q_{W, d_1, \ldots, d_{k-1}} := W[d_1,d'_1] \cdots [d_{k-1},d'_{k-1}]$$
(44)

is square-free. In that case, we may use the Chinese remainder theorem to concatenate the congruence conditions on n into a single primitive congruence condition

$$n + h_k = a_{W, d_1, \ldots, d_{k-1}}\ \left(q_{W, d_1, \ldots, d_{k-1}}\right)$$

for some $a_{W, d_1, \ldots, d_{k-1}}$ depending on $W, d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$, and conclude using (3) that

$$\tilde S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} = \frac{1}{\varphi\left(q_{W, d_1, \ldots, d_{k-1}}\right)} \sum_{x + h_k \le n \le 2x + h_k} \theta(n) + \Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a_{W, d_1, \ldots, d_{k-1}}\ \left(q_{W, d_1, \ldots, d_{k-1}}\right)\right).$$
(45)

From the prime number theorem, we have

$$\sum_{x + h_k \le n \le 2x + h_k} \theta(n) = (1 + o(1)) x,$$

and this expression is clearly independent of $d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$. Thus, by Lemma 30, the contribution of the main term in (45) is $(c + o(1)) B^{1-k} \frac{x}{\varphi(W)}$. By (11) and (12), it thus suffices to show that for any fixed $A$ we have

$$\sum_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} \left(\prod_{i=1}^{k-1} |F_i(\log_x d_i)|\, |G_i(\log_x d'_i)|\right) \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll x \log^{-A} x,$$
(46)

where $a = a_{W, d_1, \ldots, d_{k-1}}$ and $q = q_{W, d_1, \ldots, d_{k-1}}$. For future reference, we note that we may restrict the summation here to those $d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$ for which $q_{W, d_1, \ldots, d_{k-1}}$ is square-free.

From the hypotheses of Theorem 19(i), we have

$$q_{W, d_1, \ldots, d_{k-1}} \prec x^{\vartheta}$$

whenever the summand in (43) is non-zero, and each choice $q$ of $q_{W, d_1, \ldots, d_{k-1}}$ is associated to $O(\tau(q)^{O(1)})$ choices of $d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$. Thus, this contribution is

$$\ll \sum_{q \prec x^{\vartheta}} \tau(q)^{O(1)} \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right|.$$

Using the crude bound

$$\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right) \ll \frac{x}{q} \log^{O(1)} x$$

and (2), we have

$$\sum_{q \prec x^{\vartheta}} \tau(q)^C \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll x \log^{O(1)} x$$

for any fixed C>0. By the Cauchy-Schwarz inequality, it suffices to show that

$$\sum_{q \prec x^{\vartheta}} \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll x \log^{-A} x$$

for any fixed $A > 0$. However, since $\theta$ only differs from $\Lambda$ on powers $p^j$ of primes with $j > 1$, it is not difficult to show that

$$\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right) - \Delta\left(1_{[x + h_k, 2x + h_k]}\Lambda; a\ (q)\right) \prec \sqrt{\frac{x}{q}},$$

so the net error in replacing $\theta$ here by $\Lambda$ is $\prec x^{1 - (1 - \vartheta)/2}$, which is certainly acceptable. The claim now follows from the hypothesis EH[$\vartheta$], thanks to Claim 8.

4.3 The Motohashi-Pintz-Zhang case

Now we show case (ii) of Theorem 19. We repeat the arguments from the ‘The Elliott-Halberstam case’ section, with the only difference being in the derivation of (46). As observed previously, we may restrict $q_{W, d_1, \ldots, d_{k-1}}$ to be square-free. From the hypotheses in Theorem 19(ii), we also see that

$$q_{W, d_1, \ldots, d_{k-1}} \prec x^{\vartheta}$$

and that all the prime factors of $q_{W, d_1, \ldots, d_{k-1}}$ are at most $x^{\delta}$. Thus, if we set $I := [1, x^{\delta}]$, we see (using the notation from Claim 10) that $q_{W, d_1, \ldots, d_{k-1}}$ lies in $\mathcal{S}_I$ and is thus a factor of $P_I$. If we then let $\mathcal{A} \subset \mathbb{Z}/P_I\mathbb{Z}$ denote the set of all primitive residue classes $a\ (P_I)$ with the property that $a = b\ (W)$, and such that for each prime $w < p \le x^{\delta}$, one has $a + h_i = 0\ (p)$ for some $i = 1, \ldots, k$, then we see that $a_{W, d_1, \ldots, d_{k-1}}$ lies in the projection of $\mathcal{A}$ to $\mathbb{Z}/q_{W, d_1, \ldots, d_{k-1}}\mathbb{Z}$. Each $q \in \mathcal{S}_I$ is equal to $q_{W, d_1, \ldots, d_{k-1}}$ for $O(\tau(q)^{O(1)})$ choices of $d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$. Thus, the left-hand side of (46) is

$$\ll \sum_{\substack{q \in \mathcal{S}_I \\ q \prec x^{\vartheta}}} \tau(q)^{O(1)} \sup_{a \in \mathcal{A}} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right|.$$

Note from the Chinese remainder theorem that for any given $q \in \mathcal{S}_I$, if one lets $a$ range uniformly in $\mathcal{A}$, then $a\ (q)$ is uniformly distributed among $O(\tau(q)^{O(1)})$ different residue classes. Thus, we have

$$\sup_{a \in \mathcal{A}} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll \frac{\tau(q)^{O(1)}}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right|,$$

and so it suffices to show that

$$\sum_{\substack{q \in \mathcal{S}_I \\ q \prec x^{\vartheta}}} \frac{\tau(q)^{O(1)}}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll x \log^{-A} x$$

for any fixed A>0. We see it suffices to show that

$$\sum_{\substack{q \in \mathcal{S}_I \\ q \prec x^{\vartheta}}} \tau(q)^{O(1)} \left|\Delta\left(1_{[x + h_k, 2x + h_k]}\theta; a\ (q)\right)\right| \ll x \log^{-A} x$$

for any given $a \in \mathcal{A}$. But this follows from the hypothesis MPZ[$\varpi,\delta$] by repeating the arguments of the ‘The Elliott-Halberstam case’ section.

4.4 Crude estimates on divisor sums

To proceed further, we will need some additional information on the divisor sums λ F (defined in (16)), namely that these sums are concentrated on ‘almost primes’; results of this type have also appeared in [38].

Proposition 31(Almost primality).

Let $k \ge 1$ be fixed, let $(h_1, \ldots, h_k)$ be a fixed admissible $k$-tuple, and let $b\ (W)$ be such that $b + h_i$ is coprime to $W$ for each $i = 1, \ldots, k$. Let $F_1, \ldots, F_k : [0, +\infty) \to \mathbb{R}$ be fixed smooth compactly supported functions, and let $m_1, \ldots, m_k \ge 0$ and $a_1, \ldots, a_k \ge 1$ be fixed natural numbers. Then,

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{j=1}^k |\lambda_{F_j}(n + h_j)|^{a_j} \tau(n + h_j)^{m_j} \ll B^{-k} \frac{x}{W}.$$
(47)

Furthermore, if $1 \le j_0 \le k$ is fixed and $p_0$ is a prime with $p_0 \le x^{\frac{1}{10k}}$, then we have the variant

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{j=1}^k |\lambda_{F_j}(n + h_j)|^{a_j} \tau(n + h_j)^{m_j} 1_{p_0 | n + h_{j_0}} \ll \frac{\log_x p_0}{p_0} B^{-k} \frac{x}{W}.$$
(48)

As a consequence, we have

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{j=1}^k |\lambda_{F_j}(n + h_j)|^{a_j} \tau(n + h_j)^{m_j} 1_{p(n + h_{j_0}) \le x^{\varepsilon}} \ll \varepsilon B^{-k} \frac{x}{W},$$
(49)

for any ε>0, where p(n) denotes the least prime factor of n.

The exponent $\frac{1}{10k}$ can certainly be improved here, but for our purposes, any fixed positive exponent depending only on $k$ will suffice.

Proof.

The strategy is to estimate the alternating divisor sums λ F j (n+ h j ) by non-negative expressions involving prime factors of n+h j , which can then be bounded combinatorially using standard tools.

We first prove (47). As in the proof of Lemma 30, we can use Fourier expansion to write

$$F_j(\log_x d) = \int_{\mathbb{R}} \frac{f_j(\xi)}{d^{\frac{1 + i\xi}{\log x}}}\, d\xi$$

for some rapidly decreasing $f_j : \mathbb{R} \to \mathbb{C}$ and all natural numbers $d$. Thus,

$$\lambda_{F_j}(n) = \int_{\mathbb{R}} \sum_{d | n} \frac{\mu(d)}{d^{\frac{1 + i\xi}{\log x}}} f_j(\xi)\, d\xi,$$

which factorizes using Euler products as

$$\lambda_{F_j}(n) = \int_{\mathbb{R}} \prod_{p | n} \left(1 - \frac{1}{p^{\frac{1 + i\xi}{\log x}}}\right) f_j(\xi)\, d\xi.$$

The function $s \mapsto p^{-\frac{s}{\log x}}$ has a magnitude of $O(1)$ and a derivative of $O(\log_x p)$ when $\Re(s) > 1$, and thus

$$1 - \frac{1}{p^{\frac{1 + i\xi}{\log x}}} = O\left(\min\left((1 + |\xi|) \log_x p, 1\right)\right).$$

From the rapid decrease of f j and the triangle inequality, we conclude that

$$|\lambda_{F_j}(n)| \ll \int_{\mathbb{R}} \prod_{p | n} O\left(\min\left((1 + |\xi|) \log_x p, 1\right)\right) (1 + |\xi|)^{-A}\, d\xi$$

for any fixed $A > 0$. Thus, noting that $\prod_{p | n} O(1) \ll \tau(n)^{O(1)}$, we have

$$|\lambda_{F_j}(n)|^{a_j} \ll \tau(n)^{O(1)} \int_{\mathbb{R}^{a_j}} \prod_{p | n} \prod_{l=1}^{a_j} \min\left((1 + |\xi_l|) \log_x p, 1\right) \frac{d\xi_1 \cdots d\xi_{a_j}}{(1 + |\xi_1|)^A \cdots (1 + |\xi_{a_j}|)^A}$$

for any fixed a j ,A. However, we have

$$\prod_{l=1}^{a_j} \min\left((1 + |\xi_l|) \log_x p, 1\right) \le \min\left((1 + |\xi_1| + \cdots + |\xi_{a_j}|) \log_x p, 1\right),$$

and so

$$|\lambda_{F_j}(n)|^{a_j} \ll \tau(n)^{O(1)} \int_{\mathbb{R}^{a_j}} \prod_{p | n} \min\left((1 + |\xi_1| + \cdots + |\xi_{a_j}|) \log_x p, 1\right) \frac{d\xi_1 \cdots d\xi_{a_j}}{(1 + |\xi_1| + \cdots + |\xi_{a_j}|)^A}.$$

Making the change of variables $\sigma := 1 + |\xi_1| + \cdots + |\xi_{a_j}|$, we obtain

$$|\lambda_{F_j}(n)|^{a_j} \ll \tau(n)^{O(1)} \int_1^{\infty} \prod_{p | n} \min\left(\sigma \log_x p, 1\right) \frac{d\sigma}{\sigma^A}$$

for any fixed A>0. In view of this bound and the Fubini-Tonelli theorem, it suffices to show that

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{j=1}^k \tau(n + h_j)^{O(1)} \prod_{p | n + h_j} \min\left(\sigma_j \log_x p, 1\right) \ll B^{-k} \frac{x}{W} (\sigma_1 + \cdots + \sigma_k)^{O(1)}$$

for all σ1,…,σ k ≥1. By setting σ:=σ1+⋯+σ k , it suffices to show that

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{j=1}^k \tau(n + h_j)^{O(1)} \prod_{p | n + h_j} \min\left(\sigma \log_x p, 1\right) \ll B^{-k} \frac{x}{W} \sigma^{O(1)}$$
(50)

for any σ≥1.

To proceed further, we factorize n+h j as a product

$$n + h_j = p_1 \cdots p_r$$

of primes p1≤⋯≤p r in increasing order and then write

$$n + h_j = d_j m_j,$$

where $d_j := p_1 \cdots p_{i_j}$ and $i_j$ is the largest index for which $p_1 \cdots p_{i_j} < x^{\frac{1}{10k}}$, and $m_j := p_{i_j + 1} \cdots p_r$. By construction, we see that $0 \le i_j < r$ and $d_j < x^{\frac{1}{10k}}$. Also, we have

$$p_{i_j + 1} \ge \left(p_1 \cdots p_{i_j + 1}\right)^{\frac{1}{i_j + 1}} \ge x^{\frac{1}{10k(i_j + 1)}}.$$

Since n≤2x, this implies that

$$r = O(i_j + 1)$$

and so

$$\tau(n + h_j) \le 2^{O(1 + \Omega(d_j))},$$

where we recall that Ω(d j )=i j denotes the number of prime factors of d j , counting multiplicity. We also see that

$$p(m_j) \ge x^{\frac{1}{10k(1 + \Omega(d_j))}} \ge x^{\frac{1}{10k(1 + \Omega(d_1 \cdots d_k))}} =: R,$$

where p(n) denotes the least prime factor of n. Finally, we have that

$$\prod_{p | n + h_j} \min\left(\sigma \log_x p, 1\right) \le \prod_{p | d_j} \min\left(\sigma \log_x p, 1\right),$$

and we see that $d_1, \ldots, d_k, W$ are coprime. We may thus estimate the left-hand side of (50) by

$$\sum_{d_1, \ldots, d_k} \prod_{j=1}^k \left(2^{O(1 + \Omega(d_j))} \prod_{p | d_j} \min\left(\sigma \log_x p, 1\right)\right) \sum_n 1$$

where the outer sum is over $d_1, \ldots, d_k \le x^{\frac{1}{10k}}$ with $d_1, \ldots, d_k, W$ coprime, and the inner sum is over $x \le n \le 2x$ with $n = b\ (W)$ and $n + h_j = 0\ (d_j)$ for each $j$, with $p\left(\frac{n + h_j}{d_j}\right) \ge R$ for each $j$.

We bound the inner sum $\sum_n 1$ using a Selberg sieve upper bound. Let $G$ be a smooth function supported on $[0,1]$ with $G(0) = 1$, and let $d = d_1 \cdots d_k$. We see that

$$\sum_n 1 \le \sum_{\substack{x \le n \le 2x \\ n + h_i = 0\ (d_i)\ \forall i \\ n = b\ (W)}} \prod_{i=1}^k \left(\sum_{\substack{e | n + h_i \\ (e, dW) = 1}} \mu(e) G\left(\log_R e\right)\right)^2,$$

since the product is $G(0)^{2k} = 1$ if $p\left(\frac{n + h_j}{d_j}\right) \ge R$ for each $j$, and non-negative otherwise. The right-hand side may be expanded as

$$\sum_{\substack{e_1, \ldots, e_k, e'_1, \ldots, e'_k \\ (e_i e'_i, dW) = 1\ \forall i}} \prod_{i=1}^k \mu(e_i)\mu(e'_i) G\left(\log_R e_i\right) G\left(\log_R e'_i\right) \sum_{\substack{x \le n \le 2x \\ n + h_i = 0\ (d_i [e_i, e'_i])\ \forall i \\ n = b\ (W)}} 1.$$

As in the ‘The trivial case’ section, the inner sum vanishes unless the $e_i e'_i$ are coprime to each other and to $dW$, in which case it is

$$\frac{x}{dW[e_1, e'_1] \cdots [e_k, e'_k]} + O(1).$$

The $O(1)$ term contributes $\prec R^{2k} \prec x^{1/5}$, which is negligible. By Lemma 30, if $\Omega(d) \ll \log^{1/2} x$, then the main term contributes

$$\ll \left(\frac{d}{\varphi(d)}\right)^k \frac{x}{dW (\log R)^k} \ll 2^{O(\Omega(d))} B^{-k} \frac{x}{dW}.$$

We see that this final bound applies trivially if Ω(d)≫ log1/2x. The bound (50) thus reduces to

$$\sum_{d_1, \ldots, d_k} \prod_{j=1}^k \frac{2^{O(1 + \Omega(d_j))}}{d_j} \prod_{p | d_j} \min\left(\sigma \log_x p, 1\right) \ll \sigma^{O(1)}.$$
(51)

Ignoring the coprimality conditions on the d j for an upper bound, we see this is bounded by

$$\prod_{w < p \le x^{\frac{1}{10k}}} \left(1 + \frac{O\left(\min(\sigma \log_x p, 1)\right)}{p} \sum_{j \ge 0} \frac{O(1)^j}{p^j}\right)^k \ll \exp\left(O\left(\sum_{p \le x} \frac{\min(\sigma \log_x p, 1)}{p}\right)\right).$$

But from Mertens’ theorem, we have

$$\sum_{p \le x} \frac{\min(\sigma \log_x p, 1)}{p} = O\left(\log(1 + \sigma)\right),$$

and the claim (47) follows.

The proof of (48) is a minor modification of the argument above used to prove (47). Namely, the variable $d_{j_0}$ is now replaced by $[d_{j_0}, p_0] < x^{1/5k}$, which upon factoring out $p_0$ has the effect of multiplying the upper bound for (51) by $O\left(\frac{\sigma \log_x p_0}{p_0}\right)$ (at the negligible cost of deleting the prime $p_0$ from the sum $\sum_{p \le x}$), giving the claim; we omit the details.

Finally, (49) follows immediately from (47) when $\varepsilon > \frac{1}{10k}$, and from (48) and Mertens’ theorem when $\varepsilon \le \frac{1}{10k}$.

Remark 32.

As in [38], one can use Proposition 31, together with the observation that the quantity $\lambda_F(n)$ is bounded whenever $n = O(x)$ and $p(n) \ge x^{\varepsilon}$, to conclude that whenever the hypotheses of Lemma 18 are obeyed for some $\nu$ of the form (18), then there exists a fixed $\varepsilon > 0$ such that for all sufficiently large $x$, there are $\gg \frac{x}{\log^k x}$ elements $n$ of $[x, 2x]$ such that $n + h_1, \ldots, n + h_k$ have no prime factor less than $x^{\varepsilon}$, and that at least $m$ of the $n + h_1, \ldots, n + h_k$ are prime.

4.5 The generalized Elliott-Halberstam case

Now we show case (ii) of Theorem 20. For the sake of notation, we shall take $i_0 = k$, as the other cases are similar; thus, we have

$$\sum_{i=1}^{k-1} (S(F_i) + S(G_i)) < \vartheta.$$
(52)

The basic idea is to view the sum (29) as a variant of (26), with the role of the function $\theta$ now being played by the product divisor sum $\lambda_{F_k}\lambda_{G_k}$, and to repeat the arguments in the ‘The Elliott-Halberstam case’ section. To do this, we rely on Proposition 31 to restrict $n + h_k$ to the almost primes.

We turn to the details. Let $\varepsilon > 0$ be an arbitrary fixed quantity. From (49) and the Cauchy-Schwarz inequality, one has

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{i=1}^k \left|\lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i)\right| 1_{p(n + h_k) \le x^{\varepsilon}} = O\left(\sqrt{\varepsilon}\right) B^{-k} \frac{x}{W}$$

with the implied constant uniform in ε, so by the triangle inequality and a limiting argument as ε→0, it suffices to show that

$$\sum_{\substack{x \le n \le 2x \\ n = b\ (W)}} \prod_{i=1}^k \lambda_{F_i}(n + h_i) \lambda_{G_i}(n + h_i) 1_{p(n + h_k) > x^{\varepsilon}} = (c_{\varepsilon} + o(1)) B^{-k} \frac{x}{W}$$
(53)

where c ε is a quantity depending on ε but not on x, such that

$$\lim_{\varepsilon \to 0} c_{\varepsilon} = \prod_{i=1}^k \int_0^1 F_i'(t) G_i'(t)\, dt.$$

We use (16) to expand out $\lambda_{F_i}, \lambda_{G_i}$ for $i = 1, \ldots, k-1$, but not for $i = k$, so that the left-hand side of (53) becomes

$$\sum_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} \left(\prod_{i=1}^{k-1} \mu(d_i)\mu(d'_i) F_i(\log_x d_i) G_i(\log_x d'_i)\right) S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}}$$
(54)

where

$$S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} := \sum_{\substack{x \le n \le 2x \\ n = b\ (W) \\ n + h_i = 0\ ([d_i,d'_i])\ \forall i = 1, \ldots, k-1}} \lambda_{F_k}(n + h_k) \lambda_{G_k}(n + h_k) 1_{p(n + h_k) > x^{\varepsilon}}.$$

As before, the summand in (54) vanishes unless the modulus $q_{W, d_1, \ldots, d_{k-1}}$ defined in (44) is square-free, in which case we have the analogue

$$S_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} = \frac{1}{\varphi(q)} \sum_{\substack{x + h_k \le n \le 2x + h_k \\ (n, q) = 1}} \lambda_{F_k}(n) \lambda_{G_k}(n) 1_{p(n) > x^{\varepsilon}} + \Delta\left(1_{[x + h_k, 2x + h_k]} \lambda_{F_k} \lambda_{G_k} 1_{p(\cdot) > x^{\varepsilon}}; a\ (q)\right)$$
(55)

of (45). Here we have put $q = q_{W, d_1, \ldots, d_{k-1}}$ and $a = a_{W, d_1, \ldots, d_{k-1}}$ for convenience. We thus split

$$S = S_1 - S_2 + S_3,$$

where,

$$S_1\left(d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}\right) = \frac{1}{\varphi(q)} \sum_{x + h_k \le n \le 2x + h_k} \lambda_{F_k}(n) \lambda_{G_k}(n) 1_{p(n) > x^{\varepsilon}},$$
(56)
$$S_2\left(d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}\right) = \frac{1}{\varphi(q)} \sum_{\substack{x + h_k \le n \le 2x + h_k \\ (n, q) > 1}} \lambda_{F_k}(n) \lambda_{G_k}(n) 1_{p(n) > x^{\varepsilon}},$$
(57)
$$S_3\left(d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}\right) = \Delta\left(1_{[x + h_k, 2x + h_k]} \lambda_{F_k} \lambda_{G_k} 1_{p(\cdot) > x^{\varepsilon}}; a\ (q)\right),$$
(58)

when $q = q_{W, d_1, \ldots, d_{k-1}}$ is square-free, with $S_1 = S_2 = S_3 = 0$ otherwise.

For j∈{1,2,3}, let

$$\Sigma_j := \sum_{d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}} \left(\prod_{i=1}^{k-1} \mu(d_i)\mu(d'_i) F_i(\log_x d_i) G_i(\log_x d'_i)\right) S_j\left(d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}\right).$$
(59)

To show (53), it thus suffices to show the main term estimate

$$\Sigma_1 = (c_{\varepsilon} + o(1)) B^{-k} \frac{x}{W},$$
(60)

the first error term estimate

$$\Sigma_2 \prec x^{1 - \varepsilon},$$
(61)

and the second error term estimate

$$\Sigma_3 \ll x \log^{-A} x$$
(62)

for any fixed A>0.

We begin with (61). Observe that if $p(n) > x^{\varepsilon}$, then the only way that $(n, q_{W, d_1, \ldots, d_{k-1}})$ can exceed 1 is if there is a prime $x^{\varepsilon} < p \le x$ which divides both $n$ and one of $d_1, \ldots, d_{k-1}, d'_1, \ldots, d'_{k-1}$; in particular, this case can only occur when $k > 1$. For the sake of notation, we will just consider the contribution when there is a prime that divides $n$ and $d_1$, as the other $2k - 3$ cases are similar. By (57), this contribution to $\Sigma_2$ can then be crudely bounded (using (1)) by

Σ 2 x ε < p x d 1 , , d k 1 , d 1 , , d k 1 x ; p | d 1 1 d 1 , d 1 d k 1 , d k 1 n x : p | n 1 x ε < p x x p e 1 x 2 ; p | e 1 τ ( e 1 ) e 1 i = 2 k 1 e i x 2 τ ( e i ) e i x ε < p x x p 2 x 1 ε

as required, where we have made the change of variables e i :=[ d i , d i ], using the divisor bound to control the multiplicity.

Now we show (62). From the hypothesis (28), we have q W , d 1 , , d k 1 ≤ x θ whenever the summand in (62) is non-zero. From the divisor bound, for each q ≪ x θ , there are O(τ(q)O(1)) choices of d 1 ,, d k 1 with q W , d 1 , , d k 1 =q. Also, the product in (59) is O(1). Thus, by (58), we may bound Σ3 by

Σ 3 q x θ τ ( q ) O ( 1 ) sup a ( / qℤ ) × Δ 1 [ x + h k , 2 x + h k ] λ F k λ G k 1 p ( · ) > x ε ; a ( q ) .

From (2), we easily obtain the bound

Σ 3 q x θ τ ( q ) O ( 1 ) sup a ( / qℤ ) × Δ 1 [ x + h k , 2 x + h k ] λ F k λ G k 1 p ( · ) > x ε ; a ( q ) x log O ( 1 ) x ,

so by Cauchy-Schwarz, it suffices to show that

q x θ sup a ( / qℤ ) × Δ 1 [ x + h k , 2 x + h k ] λ F k λ G k 1 p ( · ) > x ε ; a ( q ) x log A x
(63)

for any fixed A>0.

If we had the additional hypothesis S(F k )+S(G k )<1, then this would follow easily from the hypothesis GEH[ 𝜗] thanks to Claim 12, since one can write λ F k λ G k 1 p ( · ) > x ε =α⋆β with

α ( n ) : = 1 p ( n ) > x ε d , d : [ d , d ] = n μ ( d ) F k log x d μ d G k log x d

and

β ( n ) : = 1 p ( n ) > x ε .

But even in the absence of the hypothesis S(F k )+S(G k )<1, we can still invoke GEH[ 𝜗] after appealing to the fundamental theorem of arithmetic. Indeed, if n∈[x+h k ,2x+h k ] with p(n)>xε, then we have

n = p 1 p r

for some primes xε<p1≤⋯≤p r ≤2x+h k , which forces r ≤ 1/ε+1. If we then partition [ xε,2x+h k ] by O(logA+1x) intervals I1,…,I m , with each I j contained in an interval of the form [N,(1+ log−A x)N], then we have p i ∈ I j i for some 1≤j1≤⋯≤j r ≤m, with the product interval I j 1 ·· I j r intersecting [ x+h k ,2x+h k ]. For fixed r, there are O(logA r+r x) such tuples (j1,…,j r ), and a simple application of the prime number theorem with classical error term (and crude estimates on the discrepancy Δ) shows that each tuple contributes O(x log−A r+O(1)x) to (63) (here, and for the rest of this section, implied constants will be independent of A unless stated otherwise). In particular, the O(logA(r−1)x) tuples (j1,…,j r ) with one repeated j i , or for which the interval I j 1 ·· I j r meets the boundary of [x+h k ,2x+h k ], contribute O(x log−A+O(1)x). This is an acceptable error to (63), and so these tuples may be removed. Thus, it suffices to show that

q x θ sup a ( / qℤ ) × Δ λ F k λ G k 1 A j 1 , , j r ; a ( q ) x log A ( r + 1 ) + O ( 1 ) x

for any 1≤r≤1/ε+1 and 1≤j1<⋯<j r ≤m with I j 1 ·· I j r contained in [ x+h k ,2x+h k ], where A j 1 , , j r is the set of all products p1p r with p i ∈ I j i for i=1,…,r, and where we allow implied constants in the ≪ notation to depend on ε. But for n in A j 1 , , j r , the 2^r factors of n are just the products of subsets of {p1,…,p r }, and from the smoothness of F k ,G k , we see that λ F k (n) is equal to some bounded constant (depending on j1,…,j r , but independent of p1,…,p r ), plus an error of O(log−A x). As before, the contribution of this error is O(x log−A(r+1)+O(1)x), so it suffices to show that

q x θ sup a ( / qℤ ) × Δ 1 A j 1 , , j r ; a ( q ) x log A ( r + 1 ) + O ( 1 ) x.

But one can write 1 A j 1 , , j r as a convolution 1 A j 1 ⋆ ⋯ ⋆ 1 A j r , where A j i denotes the primes in I j i ; assigning 1 A j r (for instance) to be β and the remaining portion of the convolution to be α, the claim now follows from the hypothesis GEH[ 𝜗], thanks to the Siegel-Walfisz theorem (see, e.g., [32, Satz 4] or [33, Th. 5.29]).

Finally, we show (60). By Lemma 30, we have

d 1 , , d k 1 , d 1 , , d k 1 d 1 d 1 , , d k 1 d k 1 , W coprime i = 1 k 1 μ d i μ d i F i log x d i G i log x d i φ q W , d 1 , , d k 1 = 1 φ ( W ) C + o ( 1 ) B k + 1 ,

where

C : = i = 1 k 1 0 1 F i ( t ) G i ( t ) dt

(note that F i ,G i are supported on [ 0,1] by hypothesis), so by (56) it suffices to show that

x + h k n 2 x + h k λ F k (n) λ G k (n) 1 p ( n ) > x ε = ( C ε ′′ + o ( 1 ) ) x log x ,
(64)

where C ε ′′ is a quantity depending on ε but not on x such that

lim ε 0 C ε ′′ = 0 1 F k ( t ) G k ( t ) dt.

In the case S(F k )+S(G k )<1, this would follow easily from (the k=1 case of) Theorem 20(i) and Proposition 14. In the general case, we may appeal once more to the fundamental theorem of arithmetic. As before, we may factor n=p1p r for some xε<p1≤⋯≤p r ≤2x+h k and r≤1/ε+1. The contribution of those n with a repeated prime factor p i =pi+1 can easily be shown to be ≪x1−ε in the same manner we dealt with Σ2, so we may restrict attention to the square-free n, for which the p i are strictly increasing. In that case, one can write

λ F k ( n ) = ( 1 ) r ( log x p 1 ) ( log x p r ) F k ( 0 )

and

λ G k ( n ) = ( 1 ) r ( log x p 1 ) ( log x p r ) G k ( 0 )

where Δ(h)F(x):=F(x+h)−F(x). On the other hand, a standard application of Mertens’ theorem and the prime number theorem (and an induction on r) shows that for any fixed r≥1 and any fixed continuous function f: r , we have

x ε p 1 < < p r : x + h k p 1 p r 2 x + h k f log x p 1 , , log x p r = c f + o ( 1 ) x log x

where c f is the quantity

c f : = ε t 1 < < t r : t 1 + + t r = 1 f ( t 1 , , t r ) d t 1 d t r 1 t 1 t r

where we lift Lebesgue measure d t1d tr−1 up to the hyperplane t1+⋯+t r =1, and thus

t 1 + + t r = 1 F ( t 1 , , t r ) d t 1 d t r 1 : = r 1 F ( t 1 , , t r 1 , 1 t 1 t r 1 ) d t 1 d t r 1 .

Putting all these together, we see that we obtain an asymptotic (64) with

C ε ′′ : = 1 r 1 ε + 1 ε t 1 < < t r : t 1 + + t r = 1 ( t 1 ) ( t r ) F k ( 0 ) ( t 1 ) ( t r ) G k ( 0 ) d t 1 d t r 1 t 1 t r .

By Proposition 14, we have C ε ′′ +O(ε)=O(1). In the case F k =G k , we see that this implies that C ε ′′ converges to a limit as ε→0, and the general case F k ≠ G k then follows from using the Cauchy-Schwarz inequality. Therefore, we have the absolute convergence

r > 0 0 < t 1 < < t r : t 1 + + t r = 1 | t 1 t r F k (0)|| t 1 t r G k (0)| d t 1 d t r 1 t 1 t r <,
(65)

and so, by the dominated convergence theorem, it suffices to establish the identity

r > 0 0 < t 1 < < t r : t 1 + + t r = 1 t 1 t r F k (0) t 1 t r G k (0) d t 1 d t r 1 t 1 t r = 0 1 F k (t) G k (t)dt.
(66)

It will suffice to show the identity

r > 0 0 < t 1 < < t r : t 1 + + t r = 1 | t 1 t r F(0) | 2 d t 1 d t r 1 t 1 t r = 0 1 | F (t) | 2 dt
(67)

for any smooth F:[0,+), since (66) follows by replacing F with F k +G k and F k −G k and then subtracting.

At this point, we use the following identity:

Lemma 33.

For any positive reals t1,…,t r with r≥1, we have

1 t 1 t r = σ S r 1 i = 1 r j = i r t σ ( j ) .
(68)

Thus, for instance, when r=2, we have

1 t 1 t 2 = 1 ( t 1 + t 2 ) t 1 + 1 ( t 1 + t 2 ) t 2 .

Proof.

If the right-hand side of (68) is denoted f r (t1,…,t r ), then one easily verifies the identity

f r ( t 1 , , t r ) = 1 t 1 + + t r i = 1 r f r 1 ( t 1 , , t i 1 , t i + 1 , , t r )

for any r>1; but the left-hand side of (68) also obeys this identity, and the claim then follows from induction.
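As a quick illustration (not needed for the proof), the partial fractions identity (68) can be checked numerically for small r by brute force over all permutations; the following Python sketch compares both sides of (68) for a few random inputs.

```python
import itertools
import random

def lhs(ts):
    """Left-hand side of (68): 1 / (t_1 ... t_r)."""
    prod = 1.0
    for t in ts:
        prod *= t
    return 1.0 / prod

def rhs(ts):
    """Right-hand side of (68): sum over permutations sigma in S_r of
    prod_{i=1}^r 1 / (t_{sigma(i)} + ... + t_{sigma(r)})."""
    r = len(ts)
    total = 0.0
    for sigma in itertools.permutations(range(r)):
        term = 1.0
        for i in range(r):
            term /= sum(ts[sigma[j]] for j in range(i, r))
        total += term
    return total

random.seed(0)
for r in (2, 3, 4):
    ts = [random.uniform(0.1, 2.0) for _ in range(r)]
    print(r, lhs(ts), rhs(ts))   # the two values agree up to rounding error
```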

From this lemma and symmetrisation, we may rewrite the left-hand side of (67) as

r > 0 t 1 , , t r 0 t 1 + + t r = 1 | ( t 1 ) ( t r ) F ( 0 ) | 2 d t 1 d t r 1 i = 1 r j = i r t i .

Let

I a ( F ) : = 0 a F ( t ) 2 dt ,

and

J a ( F ) : = ( ( a ) F ( 0 ) ) 2 .

One can then rewrite (67) as the identity

I 1 (F)= r = 1 K 1 , r (F),
(69)

where

K a , r ( F ) : = t 1 , , t r 0 t 1 + + t r = a J t r ( t 1 ) t r 1 F d t 1 d t r 1 a a t 1 a t 1 t r 1 .

To prove this, we first observe the identity

I a ( F ) = 1 a J a ( F ) + 0 t a I a t ( t ) F dt a

for any a>0; indeed, we have

0 t a I a t ( ( t ) F ) dt a = 0 t a ; 0 u a t | F ( t + u ) F ( t ) | 2 dudt a = 0 t s a | F ( s ) F ( t ) | 2 dsdt a = 1 2 0 a 0 a | F ( s ) F ( t ) | 2 dsdt a = 0 a | F ( s ) | 2 ds 1 a 0 a F ( s ) ds 0 a F ( t ) dt = I a ( F ) 1 a J a ( F ) ,

and the claim follows. Iterating this identity k times, we see that

I a (F)= r = 1 k K a , r (F)+ L a , k (F)
(70)

for any k≥1, where

L a , k ( F ) : = t 1 , , t k 0 t 1 + + t k a I 1 t 1 t k ( t 1 ) ( t k ) F d t 1 d t k a ( a t 1 ) ( a t 1 t k 1 ) .

In particular, dropping the La,k(F) term and sending k→∞ yields the lower bound

r = 1 K a , r (F) I a (F).
(71)

On the other hand, we can expand La,k(F) as

t 1 , , t k , t 0 t 1 + + t k + t a | ( t 1 ) ( t k ) F ( t ) | 2 d t 1 d t k dt a ( a t 1 ) ( a t 1 t k 1 ) .

Writing s:=t1+⋯+t k , we obtain the upper bound

L a , k ( F ) s , t 0 : s + t a K s , k ( F t ) dt ,

where F t (x):=F(x+t). Summing this and using (71) and the monotone convergence theorem, we conclude that

k = 1 L a , k ( F ) s , t 0 : s + t a I s ( F t ) dt < ,

and in particular La,k(F)→0 as k→∞. Sending k→∞ in (70), we obtain (69) as desired.

Reduction to a variational problem

Now that we have proven Theorems 19 and 20, we can establish Theorems 22, 24, 26 and 28. The main technical difficulty is to take the multidimensional measurable functions F appearing in these theorems and approximate them by tensor products of smooth functions, to which Theorems 19 and 20 may be applied.

5.1 Proof of Theorem 22

We now prove Theorem 22. Let k,m,𝜗 obey the hypotheses of that theorem, and thus we may find a fixed square-integrable function F:[0,+ ) k supported on the simplex

R k : = ( t 1 , , t k ) [ 0 , + ) k : t 1 + + t k 1

and not identically zero and with

i = 1 k J i ( F ) I ( F ) > 2 m 𝜗 .
(72)

We now perform a number of technical steps to further improve the structure of F. Our arguments here will be somewhat convoluted and are not the most efficient way to prove Theorem 22 (which in any event was already established in [5]), but they will motivate the similar arguments given below to prove the more difficult results in Theorems 24, 26 and 28. In particular, we will use regularisation techniques which are compatible with the vanishing marginal condition (35) that is a key hypothesis in Theorem 28.

We first need to rescale and retreat a little bit from the slanted boundary of the simplex R k . Let δ1>0 be a sufficiently small fixed quantity, and write F 1 :[0,+ ) k to be the rescaled function

F 1 ( t 1 , , t k ) : = F t 1 𝜗 / 2 δ 1 , , t k 𝜗 / 2 δ 1 .

Thus, F1 is a fixed square-integrable measurable function supported on the rescaled simplex

𝜗 / 2 δ 1 · R k = ( t 1 , , t k ) [ 0 , + ) k : t 1 + + t k 𝜗 / 2 δ 1 .

From (72), we see that if δ1 is small enough, then F1 is not identically zero and

i = 1 k J i ( F 1 ) I ( F 1 ) >m.
(73)

Let δ1 and F1 be as above. Next, let δ2>0 be a sufficiently small fixed quantity (smaller than δ1), and write F 2 :[0,+ ) k to be the shifted function, defined by setting

F 2 ( t 1 , , t k ) : = F 1 ( t 1 δ 2 , , t k δ 2 )

when t1,…,t k δ2, and F2(t1,…,t k )=0 otherwise. As F1 was square-integrable, compactly supported, and not identically zero, and because spatial translation is continuous in the strong operator topology on L2, it is easy to see that we will have F2 not identically zero and that

i = 1 k J i ( F 2 ) I ( F 2 ) >m
(74)

for δ2 small enough (after restricting F2 back to [ 0,+)k, of course). For δ2 small enough, this function will be supported on the region

( t 1 , , t k ) k : t 1 + t k 𝜗 / 2 δ 2 ; t 1 , , t k δ 2 ,

and thus F2 stays away from all the boundary faces of R k .

By convolving F2 with a smooth approximation to the identity that is supported sufficiently close to the origin, one may then find a smooth function F 3 :[0,+ ) k , supported on

( t 1 , , t k ) k : t 1 + t k 𝜗 / 2 δ 2 / 2 ; t 1 , , t k δ 2 / 2 ,

which is not identically zero and such that

i = 1 k J i ( F 3 ) I ( F 3 ) >m.
(75)

We extend F3 by zero to all of k and then define the function f 3 : k by

f 3 ( t 1 , , t k ) : = s 1 t 1 , , s k t k F 3 ( s 1 , , s k ) d s 1 d s k ,

and thus f3 is smooth, not identically zero and supported on the region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 2 ) 𝜗 / 2 δ 2 / 2 .
(76)

From the fundamental theorem of calculus, we have

F 3 ( t 1 ,, t k ):= ( 1 ) k k t 1 t k f 3 ( t 1 ,, t k ),
(77)

and so I( F 3 )=Ĩ( f 3 ) and J i ( F 3 )= J ~ i ( f 3 ) for i=1,…,k, where

Ĩ( f 3 ):= [ 0 , + ) k k t 1 t k f 3 ( t 1 , , t k ) 2 d t 1 d t k
(78)

and

J ~ i ( f 3 ) : = [ 0 , + ) k 1 k 1 t 1 t i 1 t i + 1 t k f 3 ( t 1 , , t i 1 , 0 , t i + 1 , , t k ) 2 d t 1 d t i 1 d t i + 1 d t k .
(79)

In particular,

i = 1 k J ~ i ( f 3 ) Ĩ ( f 3 ) >m.
(80)

Now we approximate f3 by linear combinations of tensor products. By the Stone-Weierstrass theorem, we may express f3 as the uniform limit of functions of the form

( t 1 ,, t k ) j = 1 J c j f 1 , j ( t 1 ) f k , j ( t k )
(81)

where c1,…,c J are real scalars, and f i , j : are smooth compactly supported functions. Since f3 is supported in (76), we can ensure that all the components f1,j(t1)…fk,j(t k ) are supported in the slightly larger region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 4 ) 𝜗 / 2 δ 2 / 4 .

Observe that if one convolves a function of the form (81) by a smooth approximation to the identity which is of tensor product form (t1,…,t k )↦φ1(t1)…φ1(t k ), one obtains another function of this form. Such a convolution converts a uniformly convergent sequence of functions to a uniformly smoothly convergent sequence of functions (that is to say, all derivatives of the functions converge uniformly). From this, we conclude that f3 can be expressed as the smooth limit of functions of the form (81), with each component f1,j(t1)…fk,j(t k ) supported in the region

t 1 , , t k k : i = 1 k max ( t i , δ 2 / 8 ) 𝜗 / 2 δ 2 / 8 .

Thus, we may find such a linear combination

f 4 ( t 1 ,, t k )= j = 1 J c j f 1 , j ( t 1 ) f k , j ( t k )
(82)

with J, c j , fi,j fixed and f4 not identically zero, with

i = 1 k J ~ i ( f 4 ) Ĩ ( f 4 ) >m.
(83)

Furthermore, by construction we have

S( f 1 , j )++S( f k , j )< 𝜗 2 1 2
(84)

for all j=1,…,J, where S() was defined in (22).

Now we construct the sieve weight ν: by the formula

ν(n):= j = 1 J c j λ f 1 , j ( n + h 1 ) λ f k , j ( n + h k ) 2 ,
(85)

where the divisor sums λ f were defined in (16).
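The following toy Python sketch illustrates the shape of this construction. It assumes the standard divisor-sum form λ f (n) = Σ_{d∣n} μ(d) f(log d/ log x) for the weights referred to in (16), and uses a single crude (non-smooth) cutoff f, one term J=1 with c 1 =1, and the admissible tuple (0,2,6) purely for illustration; it is not the optimized choice of J, c j , f i,j used in the argument.

```python
import math

def mobius(n):
    """Mobius function mu(n) by trial division (adequate for this toy range)."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def lam(f, n, x):
    """Divisor sum lambda_f(n) = sum over d | n of mu(d) * f(log d / log x)."""
    return sum(mobius(d) * f(math.log(d) / math.log(x)) for d in divisors(n))

# Illustrative cutoff; the construction in the text uses smooth, compactly
# supported f_{i,j} obtained via Stone-Weierstrass, which we do not reproduce.
def f(t):
    return (1.0 - t) ** 3 if 0.0 <= t <= 1.0 else 0.0

def nu(n, x, h=(0, 2, 6)):
    """Toy version of the sieve weight (85): a single squared product of divisor sums."""
    prod = 1.0
    for hi in h:
        prod *= lam(f, n + hi, x)
    return prod ** 2

x = 10 ** 4
print([round(nu(n, x), 4) for n in range(x, x + 8)])
```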

Clearly ν is non-negative. Expanding out the square and using Theorem 20(i) and (84), we see that

x n 2 x n = b ( W ) ν ( n ) = ( α + o ( 1 ) ) B k x log x

where

α : = j = 1 J j = 1 J c j c j i = 1 k 0 f i , j ( t i ) f i , j ( t i ) d t i

which factorizes using (82) and (78) as

α = [ 0 , + ) k k t 1 t k f 4 ( t 1 , , t k ) 2 d t 1 d t k = Ĩ ( f 4 ) .

Now consider the sum

x n 2 x n = b ( W ) ν ( n ) θ ( n + h k ) .

By (20), one has

λ f k , j ( n + h k ) = f k , j ( 0 )

whenever n gives a non-zero contribution to the above sum. Expanding out the square in (85) again and using Theorem 19(i) and (84) (and the hypothesis EH[ 𝜗]), we thus see that

x n 2 x n = b ( W ) ν ( n ) θ ( n + h k ) = ( β k + o ( 1 ) ) B 1 k x φ ( W )

where

β k : = j = 1 J j = 1 J c j c j f i , j ( 0 ) f i , j ( 0 ) i = 1 k 1 0 f i , j ( t i ) f i , j ( t i ) d t i

which factorizes using (82) and (79) as

β k = [ 0 , + ) k k t 1 t k 1 f 4 ( t 1 , , t k 1 , 0 ) 2 d t 1 d t k 1 = J ~ k ( f 4 ) .

More generally, we see that

x n 2 x n = b ( W ) ν ( n ) θ ( n + h i ) = β i + o ( 1 ) B 1 k x φ ( W )

for i=1,…,k, with β i := J ~ i ( f 4 ). Applying Lemma 18 and (83), we obtain DHL[ k;m+1] as required.

5.2 Proof of Theorem 24

Now we prove Theorem 24, which uses a very similar argument to that of the previous section. Let k,m,ϖ,δ,F be as in Theorem 24. By performing the same rescaling as in the previous section (but with 1/2+2ϖ playing the role of 𝜗), we see that we can find a fixed square-integrable measurable function F1 supported on the rescaled truncated simplex

( t 1 , , t k ) [ 0 , + ) k : t 1 + + t k 1 4 + ϖ δ 1 ; t 1 , , t k < δ δ 1

for some sufficiently small fixed δ1>0, such that (73) holds. By repeating the arguments of the previous section, we may eventually arrive at a smooth function f 4 : k of the form (82), which is not identically zero and obeys (83) and such that each component f1,j(t1)…fk,j(t k ) is supported in the region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 8 ) 1 4 + ϖ δ 2 / 8 ; t 1 , , t k < δ δ 2 / 8

for some sufficiently small δ2>0. In particular, one has

S ( f 1 , j ) + + S ( f k , j ) < 1 4 + ϖ 1 2

and

S ( f 1 , j ) , , S ( f k , j ) < δ

for all j=1,…,J. If we then define ν by (85) as before, and repeat all of the above arguments (but use Theorem 19(ii) and MPZ[ ϖ,δ] in place of Theorem 19(i) and EH[ 𝜗]), we obtain the claim; we leave the details to the interested reader.

5.3 Proof of Theorem 26

Now we prove Theorem 26. Let k,m,ε,𝜗 be as in that theorem. Then, one may find a square-integrable function F:[0,+ ) k supported on (1+ε)· R k which is not identically zero, and with

i = 1 k J i , 1 ε ( F ) I ( F ) > 2 m 𝜗 .

By truncating and rescaling as in the ‘Proof of Theorem 22’ section, we may find a fixed bounded measurable function F 1 :[0,+ ) k supported on the simplex (1+ε) 𝜗 2 δ 1 · R k such that

i = 1 k J i , ( 1 ε ) 𝜗 2 ( F 1 ) I ( F 1 ) > m.

By repeating the arguments in the ‘Proof of Theorem 22’ section, we may eventually arrive at a smooth function f 4 : k of the form (82), which is not identically zero and obeys

i = 1 k J ~ i , ( 1 ε ) 𝜗 2 ( f 4 ) Ĩ ( f 4 ) >m
(86)

with

J ~ i , ( 1 ε ) 𝜗 2 ( f 4 ) : = ( 1 ε ) 𝜗 2 · R k 1 k 1 t 1 t i 1 t i + 1 t k f 4 ( t 1 , , t i 1 , 0 , t i + 1 , , t k ) 2 d t 1 d t i 1 d t i + 1 d t k ,

and such that each component f1,j(t1)…fk,j(t k ) is supported in the region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 8 ) ( 1 + ε ) 𝜗 2 δ 2 8

for some sufficiently small δ2>0. In particular, we have

S( f 1 , j )++S( f k , j )(1+ε) 𝜗 2 δ 2 8
(87)

for all 1≤jJ.

Let δ3>0 be a sufficiently small fixed quantity (smaller than δ1 or δ2). By a smooth partitioning, we may assume that all of the fi,j are supported in intervals of length at most δ3, while keeping the sum

j = 1 J | c j || f 1 , j ( t 1 )|| f k , j ( t k )|
(88)

bounded uniformly in t1,…,t k and in δ3.

Now let ν be as in (85), and consider the expression

x n 2 x n = b ( W ) ν ( n ) .

This expression expands as a linear combination of the expressions

x n 2 x n = b ( W ) i = 1 k λ f i , j ( n + h i ) λ f i , j ( n + h i )

for various 1≤j,jJ. We claim that this sum is equal to

i = 1 k 0 1 f i , j ( t i ) f i , j ( t i ) d t i + o ( 1 ) B k x W .

To see this, we divide into two cases. First, suppose that hypothesis (i) from Theorem 26 holds, then from (87) we have

i = 1 k S ( f i , j ) + S ( f i , j ) < ( 1 + ε ) 𝜗 < 1

and the claim follows from Theorem 20(i). Now suppose instead that hypothesis (ii) from Theorem 26 holds, then from (87) one has

i = 1 k S ( f i , j ) + S ( f i , j ) < ( 1 + ε ) 𝜗 < k k 1 𝜗 ,

and so from the pigeonhole principle, we have

1 i k : i i 0 S ( f i , j ) + S ( f i , j ) < 𝜗

for some i0=1,…,k. The claim now follows from Theorem 20(ii).

Putting these together as in the ‘Proof of Theorem 22’ section, we conclude that

x n 2 x n = b ( W ) ν ( n ) = ( α + o ( 1 ) ) B k x W

where

α : = Ĩ ( f 4 ) .

Now we consider the sum

x n 2 x n = b ( W ) ν(n)θ(n+ h k ).
(89)

From Proposition 13, we see that we have EH[ 𝜗] as a consequence of the hypotheses of Theorem 26. However, this and Theorem 19 are not strong enough to obtain an asymptotic for the sum (89), as there is an epsilon loss in (87). But observe that Lemma 18 only requires a lower bound on the sum (89), rather than an asymptotic.

To obtain this lower bound, we partition {1,…,J} into J 1 ∪ J 2 , where J 1 consists of those indices j∈{1,…,J} with

S( f 1 , j )++S( f k 1 , j )<(1ε) 𝜗 2
(90)

and J 2 is the complement. From the elementary inequality

( x 1 + x 2 ) 2 = x 1 2 + 2 x 1 x 2 + x 2 2 ≥ ( x 1 + 2 x 2 ) x 1 ,

we obtain the pointwise lower bound

ν ( n ) j J 1 + 2 j J 2 c j λ f 1 , j ( n + h 1 ) λ f k , j ( n + h k ) × j J 1 c j λ f 1 , j ( n + h 1 ) λ f k , j ( n + h k ) .

The point of performing this lower bound is that if j ∈ J 1 ∪ J 2 and j ′ ∈ J 1 , then from (87) and (90) one has

i = 1 k 1 S f i , j + S f i , j < 𝜗

which makes Theorem 19(i) available for use. Indeed, for any j∈{1,…,J} and i=1,…,k, we have from (87) that

S f i , j ( 1 + ε ) 𝜗 2 < 𝜗 < 1 ,

and so by (20), we have

ν ( n ) θ ( n + h k ) j J 1 + 2 j J 2 c j λ f 1 , j n + h 1 λ f k 1 , j n + h k 1 f k , j ( 0 ) × j J 1 c j λ f 1 , j ( n + h 1 ) λ f k 1 , j n + h k 1 f k , j ( 0 ) θ ( n + h k )
(91)

for xn≤2x. If we then apply Theorem 19(i) and the hypothesis EH[ 𝜗], we obtain the lower bound

x n 2 x n = b ( W ) ν ( n ) θ ( n + h k ) ( β k o ( 1 ) ) B 1 k x φ ( W )

with

β k : = j J 1 + 2 j J 2 j J 1 c j c j f k , j ( 0 ) f k , j ( 0 ) i = 1 k 1 0 f i , j ( t i ) f i , j ( t i ) d t i

which we can rearrange as

β k = [ 0 , + ) k 1 k 1 t 1 t k 1 f 4 , 1 ( t 1 , , t k 1 , 0 ) + 2 k 1 t 1 t k 1 f 4 , 2 ( t 1 , , t k 1 , 0 ) k 1 t 1 t k 1 f 4 , 1 ( t 1 , , t k 1 , 0 ) d t 1 d t k 1

where

f 4 , l ( t 1 , , t k ) : = j J l c j f 1 , j ( t 1 ) f k , j ( t k )

for l=1,2. Note that f4,1,f4,2 are both bounded pointwise by (88), and their supports only overlap on a set of measure O(δ3). We conclude that

β k = J ~ k ( f 4 , 1 ) + O ( δ 3 )

with the implied constant independent of δ3, and thus

β k = J ~ k , ( 1 ε ) 𝜗 2 ( f 4 ) + O ( δ 3 ) .

A similar argument gives

x n 2 x n = b ( W ) ν ( n ) θ ( n + h i ) ( β i o ( 1 ) ) B 1 k x φ ( W )

for i=1,…,k with

β i = J ~ i , ( 1 ε ) 𝜗 2 ( f 4 ) + O ( δ 3 ) .

If we choose δ3 small enough, then the claim DHL[ k;m+1] now follows from Lemma 18 and (86).

5.4 Proof of Theorem 28

Finally, we prove Theorem 28. Let k,m,ε,F be as in that theorem. By rescaling as in previous sections, we may find a square-integrable function F 1 :[0,+ ) k supported on k k 1 𝜗 2 δ 1 · R k for some sufficiently small fixed δ1>0, which is not identically zero, which obeys the bound

i = 1 k J i , ( 1 ε ) 𝜗 2 ( F 1 ) I ( F 1 ) > m

and also obeys the vanishing marginal condition (35) whenever t1,…,ti−1,ti+1,…,t k ≥0 are such that

t 1 + + t i 1 + t i + 1 + + t k > ( 1 + ε ) 𝜗 2 δ 1 .

As before, we pass from F1 to F2 by a spatial translation, and from F2 to F3 by a regularisation; crucially, we note that both of these operations interact well with the vanishing marginal condition (35), with the end product being that we obtain a smooth function F 3 :[0,+ ) k , supported on the region

( t 1 , , t k ) k : t 1 + t k k k 1 𝜗 2 δ 2 2 ; t 1 , , t k δ 2 2

for some sufficiently small δ2>0, which is not identically zero, obeying the bound

i = 1 k J i , ( 1 ε ) 𝜗 2 ( F 3 ) I ( F 3 ) > m

and also obeying the vanishing marginal condition (35) whenever t1,…,ti−1,ti+1,…,t k ≥0 are such that

t 1 + + t i 1 + t i + 1 + + t k > ( 1 + ε ) 𝜗 2 δ 2 2 .

As before, we now define the function f 3 : k by

f 3 ( t 1 , , t k ) : = s 1 t 1 , , s k t k F 3 ( s 1 , , s k ) d s 1 d s k ,

and thus, f3 is smooth, not identically zero and supported on the region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 2 ) k k 1 𝜗 2 δ 2 2 .

Furthermore, from the vanishing marginal condition, we see that we also have

f 3 ( t 1 , , t k ) = 0

whenever we have some 1≤ik for which t i δ2/2 and

t 1 + + t i 1 + t i + 1 + + t k ( 1 + ε ) 𝜗 2 δ 2 2 .

From the fundamental theorem of calculus as before, we have

i = 1 k J ~ i , ( 1 ε ) 𝜗 2 ( f 3 ) Ĩ ( f 3 ) > m.

Using the Stone-Weierstrass theorem as before, we can then find a function f4 of the form

( t 1 ,, t k ) j = 1 J c j f 1 , j ( t 1 ) f k , j ( t k )
(92)

where c1,…,c J are real scalars, and f i , j : are smooth functions supported on intervals of length at most δ3 for some sufficiently small δ3>0, with each component f1,j(t1)…fk,j(t k ) supported in the region

( t 1 , , t k ) k : i = 1 k max ( t i , δ 2 / 8 ) k k 1 𝜗 2 δ 2 / 8

and avoiding the regions

( t 1 , , t k ) k : t i δ 2 / 8 ; t 1 + + t i 1 + t i + 1 + + t k ( 1 + ε ) 𝜗 2 δ 2 / 8

for each i=1,…,k, and such that

i = 1 k J ~ i , ( 1 ε ) 𝜗 2 f 4 Ĩ ( f 4 ) > m.

In particular, for any j=1,…,J we have

S f 1 , j ++S f k , j < k k 1 𝜗 2 < 1 2 k k 1 1
(93)

and for any i=1,…,k and j=1,…,J such that f i , j does not vanish at zero, we have

S( f 1 , j )++S( f i−1 , j )+S( f i+1 , j )++S( f k , j )<(1+ε) 𝜗 2
(94)

Let ν be defined by (85). From (93), the hypothesis GEH[ 𝜗], and the argument from the previous section used to prove Theorem 26(ii), we have

x n 2 x n = b ( W ) ν ( n ) = ( α + o ( 1 ) ) B k x W

where

α : = Ĩ f 4 .

Similarly, from (94) (and the upper bound S(fi,j)<1 from (93)), the hypothesis EH[ 𝜗] (which is available by Proposition 13), and the argument from the previous section, we have

x n 2 x n = b ( W ) ν ( n ) θ ( n + h i ) β i o ( 1 ) B 1 k x φ ( W )

for i=1,…,k with

β i = J ~ i , ( 1 ε ) 𝜗 2 ( f 4 ) + O ( δ 3 ) .

Setting δ3 small enough, the claim DHL[ k;m+1] now follows from Lemma 18.

Asymptotic analysis

We now establish upper and lower bounds on the quantity M k defined in (33), as well as for the related quantities appearing in Theorem 24.

To obtain an upper bound on M k , we use the following consequence of the Cauchy-Schwarz inequality.

Lemma 34(Cauchy-Schwarz).

Let k≥2, and suppose that there exist positive measurable functions G i : R k (0,+) for i=1,…,k such that

0 G i ( t 1 ,, t k )d t i 1
(95)

for all t1,…,ti−1,ti+1,…,t k ≥0, where we extend G i by zero to all of [ 0,+)k. Then, we have

M k ess sup ( t 1 , , t k ) R k i = 1 k 1 G i ( t 1 , , t k ) .
(96)

Here ess sup refers to essential supremum (thus, we may ignore a subset of R k of measure zero in the supremum).

Proof.

Let F:[0,+ ) k be a square-integrable function supported on R k . From the Cauchy-Schwarz inequality and (95), we have

0 F ( t 1 , , t k ) d t i 2 0 F ( t 1 , , t k ) 2 G i ( t 1 , , t k ) d t i

for any t1,…,ti−1,ti+1,…,t k ≥0. Inserting this into (32) and integrating, we conclude that

J i ( F ) R k F ( t 1 , , t k ) 2 G i ( t 1 , , t k ) d t 1 d t k .

Summing in i and using (31), (33), and (96), we obtain the claim.

As a corollary, we can compute M k exactly if we can locate a positive eigenfunction:

Corollary 35.

Let k≥2, and suppose that there exists a positive function F: R k (0,+) obeying the eigenfunction equation

λF( t 1 ,, t k )= i = 1 k 0 F( t 1 ,, t i 1 , t i , t i + 1 ,, t k )d t i
(97)

for some λ>0 and all ( t 1 ,, t k ) R k , where we extend F by zero to all of [ 0,+)k. Then, λ=M k .

Proof.

On the one hand, if we integrate (97) against F and use (31) and (32), we see that

λI ( F ) = i = 1 k J i ( F ) ,

and thus by (33), we see that M k λ. On the other hand, if we apply Lemma 34 with

G i ( t 1 , , t k ) : = F ( t 1 , , t k ) 0 F ( t 1 , , t i 1 , t i , t i + 1 , , t k ) d t i ,

we see that M k λ, and the claim follows.

This allows for an exact calculation of M2:

Corollary 36(Computation of M2).

We have

M 2 = 1 1 W ( 1 / e ) = 1.38593

where the Lambert W-function W(x) is defined for positive x as the unique positive solution to x=W(x)eW(x).

Proof.

If we set λ:= 1 1 W ( 1 / e ) =1.38593, then a brief calculation shows that

2λ−1=λlogλ−λlog(λ−1).
(98)

Now if we define the function f: [ 0,1]→[ 0,+) by the formula

f ( x ) : = 1 λ 1 + x + 1 2 λ 1 log λ x λ 1 + x ,

then a further brief calculation shows that

0 1 x f ( y ) dy = λ 1 + x 2 λ 1 log λ x λ 1 + x + λ log λ λ log ( λ 1 ) 2 λ 1

for any 0≤x≤1, and hence by (98) that

0 1 x f y dy = ( λ 1 + x ) f ( x ) .

If we then define the function F: R 2 (0,+) by F(x,y):=f(x)+f(y), we conclude that

0 1 x F ( x , y ) d x + 0 1 y F ( x , y ) d y = λF ( x , y )

for all (x,y) R 2 , and the claim now follows from Corollary 35.
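For reference, the closed form in Corollary 36 is easy to evaluate, and the defining relation (98) for λ can be checked at the same time; a short sketch using SciPy's Lambert W function:

```python
import math
from scipy.special import lambertw

lam = 1.0 / (1.0 - lambertw(math.exp(-1)).real)   # M_2 = 1/(1 - W(1/e))
print(lam)                                        # 1.38593...

# check the relation (98): 2*lam - 1 = lam*log(lam) - lam*log(lam - 1)
print(2 * lam - 1, lam * math.log(lam) - lam * math.log(lam - 1))
```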

We conjecture that a positive eigenfunction for M k exists for all k≥2, not just for k=2; however, we were unable to produce any such eigenfunctions for k>2. Nevertheless, Lemma 34 still gives us a general upper bound:

Corollary 37.

We have M k k k 1 logk for any k≥2.

Thus, for instance, one has M2≤2 log2=1.38629…, which compares well with Corollary 36. On the other hand, Corollary 37 also gives

M 4 ≤ 4 3 log 4 = 1.8484 ,

so that one cannot hope to establish DHL[ 4;2] (or DHL[ 3;2]) solely through Theorem 22 even when assuming GEH, and must rely instead on more sophisticated criteria for DHL[ k;m] such as Theorem 26 or Theorem 28.

Proof.

If we set G i : R k (0,+) for i=1,…,k to be the functions

G i ( t 1 , , t k ) : = k 1 log k 1 1 t 1 t k + k t i

then direct calculation shows that

0 G i ( t 1 , , t k ) d t i 1

for all t1,…,ti−1,ti+1,…,t k ≥0, where we extend G i by zero to all of [ 0,+)k. On the other hand, we have

i = 1 k 1 G i ( t 1 , , t k ) = k k 1 log k

for all ( t 1 ,, t k ) R k . The claim now follows from Lemma 34.
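The ‘direct calculation’ here reduces to the elementary fact that, with s equal to one minus the sum of the remaining coordinates, one has ∫ from 0 to s of (k−1) dt/( log k (s+(k−1)t)) = 1. The sketch below confirms this numerically at random points of the simplex; the choice k = 4 is purely illustrative.

```python
import math
import random
from scipy.integrate import quad

k = 4
random.seed(1)

def G_integral(others):
    """Integral of G_i over t_i for fixed values of the other k-1 coordinates.

    Inside the simplex, 1 - t_1 - ... - t_k + k t_i = s + (k-1) t_i with
    s = 1 - (sum of the other coordinates), and G_i vanishes outside R_k.
    """
    s = 1.0 - sum(others)
    if s <= 0:
        return 0.0
    val, _ = quad(lambda t: (k - 1) / (math.log(k) * (s + (k - 1) * t)), 0.0, s)
    return val

for _ in range(3):
    others = [random.uniform(0, 1.0 / (k - 1)) for _ in range(k - 1)]
    print(round(sum(others), 3), G_integral(others))  # the integral equals 1
```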

The upper bound arguments for M k can be extended to other quantities such as Mk,ε, although the bounds do not appear to be as sharp in that case. For instance, we have the following variant of Corollary 37, which shows that the improvement in constants when moving from M k to Mk,ε is asymptotically modest:

Proposition 38.

For any k≥2 and ε≥0, we have

M k , ε k k 1 log ( 2 k 1 ) .

Proof.

Let F:[0,+ ) k be a square-integrable function supported on (1+ε)· R k . If i=1,…,k and ( t 1 ,, t i 1 , t i + 1 ,, t k ) ∈ (1−ε)· R k−1 , then if we write s:=1−t1−⋯−ti−1−ti+1−⋯−t k , we have s≥ε and hence

0 1 t 1 t i 1 t i + 1 t k + ε 1 1 t 1 t k + k t i d t i = 0 s + ε 1 s + ( k 1 ) t i d t i = 1 k 1 log ks + ( k 1 ) ε s 1 k 1 log ( 2 k 1 ) .

By Cauchy-Schwarz, we conclude that

0 F ( t 1 , , t k ) d t i 2 1 k 1 log ( 2 k 1 ) 0 ( 1 t 1 t k + k t i ) F ( t 1 , , t k ) 2 d t i .

Integrating in t1,…,ti−1,ti+1,…,t k and summing in i, we obtain the claim.

Remark 39.

The same argument, using the weight 1+a(−t1−⋯−t k +k t i ), gives the more general inequality

M k , ε k a ( k 1 ) log k + ( a ( 1 + ε ) 1 ) ( k 1 ) 1 a ( 1 ε )

whenever 1 1 + ε <a< 1 1 ε ; the case a=1 is Proposition 38, and the limiting case a= 1 1 + ε recovers Corollary 37 when one sends ε to zero.

One can also adapt the computations in Corollary 36 to obtain exact expressions for M2,ε, although the calculations are rather lengthy and will only be summarized here. For fixed 0<ε<1, the eigenfunctions F one seeks should take the form

F ( x , y ) : = f ( x ) + f ( y )

for x,y≥0 and x+y≤1+ε, where

f ( x ) : = 1 x 1 ε 0 1 + ε x F ( x , t ) dt.

In the regime 0<ε<1/3, one can calculate that f will (up to scalar multiples) take the form

f ( x ) = 1 x 2 ε C 1 λ 1 ε + x + 1 2 ε x 1 ε log ( λ x ) log ( λ 1 ε + x ) 2 λ 1 ε + 1 λ 1 ε + x

where

C 1 : = log ( λ 2 ε ) log ( λ 1 + ε ) 1 log ( λ 1 + ε ) + log ( λ 1 ε )

and λ is the largest root of the equation

1 = C 1 ( log ( λ 1 + ε ) log ( λ 1 ε ) ) log ( λ 1 + ε ) + ( λ 1 + ε ) log ( λ 1 + ε ) ( λ 2 ε ) log ( λ 2 ε ) 2 λ 1 ε .

In the regime 1/3≤ε<1, the situation is significantly simpler, and one has the exact expressions

f ( x ) = 1 x 1 ε λ 1 ε + x

and

λ = ( e ( 1 + ε ) − 2 ε ) / ( e − 1 ) .

In both cases, a variant of Corollary 35 can be used to show that M2,ε will be equal to λ; thus, for instance,

M 2 , ε = ( e ( 1 + ε ) − 2 ε ) / ( e − 1 )

for 1/3≤ε<1. In particular, M2,ε increases to 2 in the limit ε→1; the lower bound liminf ε→1 M 2 , ε ≥ 2 can also be established by testing with the function F(x,y):= 1 x≤δ, y≤1+ε−δ + 1 y≤δ, x≤1+ε−δ for some sufficiently small δ>0.

Now we turn to lower bounds on M k , which are of more relevance for the purpose of establishing results such as Theorem 23. If one restricts attention to those functions F: R k of the special form F(t1,…,t k )=f(t1+⋯+t k ) for some function f:[0,1], then the resulting variational problem has been optimized in previous works [39] (and originally in an unpublished work of Conrey), giving rise to the lower bound

M k 4 k ( k 1 ) j k 2 2

where jk−2 is the first positive zero of the Bessel function Jk−2. This lower bound is reasonably strong for small k; for instance, when k=2 it shows that

M 2 1.383

which compares well with Corollary 36, and also shows that M6>2, recovering the result of Goldston, Pintz, and Yıldırım that DHL[ 6;2] (and hence H1≤16) was true on the Elliott-Halberstam conjecture. However, one can show that 4 k ( k 1 ) j k 2 2 <4 for all k (see [36]), so this lower bound cannot be used to force M k to be larger than 4.
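The Bessel-zero lower bound is straightforward to tabulate; the sketch below (using SciPy's jn_zeros for the first zero of J k−2 ) reproduces the values quoted above, with k=6 the first case exceeding 2, and, as noted, the bound remaining below 4.

```python
from scipy.special import jn_zeros

for k in range(2, 11):
    j = jn_zeros(k - 2, 1)[0]          # first positive zero of J_{k-2}
    bound = 4 * k * (k - 1) / j ** 2   # lower bound on M_k from [39]
    print(k, round(bound, 4))
# k = 2 gives 1.383..., k = 6 is the first value exceeding 2, and the
# values stay below 4 throughout this range.
```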

In [5], the lower bound

M k ≥ logk − 2 loglogk − 2
(99)

was established for all sufficiently large k. In fact, the arguments in [5] can be used to show this bound for all k≥200 (for k<200, the right-hand side of (99) is either negative or undefined). Indeed, if we use the bound ([5], (7.19)) with A chosen so that A2eA=k, then 3<A< logk when k≥200, hence eA=k/A2>k/ log2k and so A≥ logk−2 log logk. By using the bounds A e A 1 < 1 6 (since A>3) and eA/k=1/A2<1/9, we see that the right-hand side of ([5], (8.17)) exceeds A 1 ( 1 1 / 6 1 / 9 ) 2 A2, which gives (99).

We will remove the log logk term in (99) via the following explicit estimate.

Theorem 40.

Let k≥2, and let c,T,τ>0 be parameters. Define the function g:[0,T] by

g(t):= 1 c + ( k 1 ) t
(100)

and the quantities

m 2 : = 0 T g ( t ) 2 dt
(101)
μ : = 1 m 2 0 T tg ( t ) 2 dt
(102)
σ 2 : = 1 m 2 0 T t 2 g ( t ) 2 dt μ 2 .
(103)

Assume the inequalities

1 τ
(104)
< 1 T
(105)
k σ 2 < ( 1 + τ ) 2 .
(106)

Then, one has

k k 1 logk M k [ T ] k k 1 Z + Z 3 + WX + VU ( 1 + τ / 2 ) 1 k σ 2 ( 1 + τ ) 2
(107)

where Z,Z3,W,X,V,U are the explicitly computable quantities

Z : = 1 τ 1 1 + τ r log r T + k σ 2 4 ( r ) 2 log r T + r 2 4 kT dr
(108)
Z 3 : = 1 m 2 0 T kt log 1 + t T g ( t ) 2 dt
(109)
W : = 1 m 2 0 T log 1 + τ kt g ( t ) 2 dt
(110)
X : = log k τ c 2
(111)
V : = c m 2 0 T 1 2 c + ( k 1 ) t g ( t ) 2 dt
(112)
U : = log k c 0 1 ( 1 + ( k 1 ) μ c ) 2 + ( k 1 ) σ 2 du.
(113)

Of course, since M k [ T ] M k , the bound (107) also holds with M k [ T ] replaced by M k .

Proof.

From (33), we have

i = 1 k J i ( F ) M k [ T ] I ( F )

whenever F:[0,+ ) k is square-integrable and supported on [ 0 , T ] k R k . By rescaling, we conclude that

i = 1 k J i ( F ) r M k [ T ] I ( F )

whenever r>0 and F:[0,+ ) k is square-integrable and supported on [ 0 , rT ] k r· R k . We apply this inequality with the function

F ( t 1 , , t k ) : = 1 t 1 + + t k r g ( t 1 ) g ( t k )

where r>1 is a parameter which we will eventually average over, and g is extended by zero to [ 0,+). We thus have

I ( F ) = m 2 k 0 0 1 t 1 + + t k r i = 1 k g ( t i ) 2 d t i m 2 .

We can interpret this probabilistically as

I ( F ) = m 2 k ( X 1 + + X k r )

where X1,…,X k are independent random variables taking values in [ 0,T] with probability distribution 1 m 2 g ( t ) 2 dt. In a similar fashion, we have

J k ( F ) = m 2 k 1 0 0 0 , r t 1 t k 1 g ( t ) dt 2 i = 1 k 1 g ( t i ) 2 d t i m 2 ,

where we adopt the convention that [ a , b ] vanishes when b<a. In probabilistic language, we thus have

J k ( F ) = m 2 k 1 𝔼 0 , r X 1 X k 1 g ( t ) dt 2 .

Also by symmetry, we see that J i (F)=J k (F) for all i=1,…,k. Putting all these together, we conclude that

𝔼 0 r X 1 X k 1 g ( t ) dt 2 m 2 M k [ T ] r k ( X 1 + + X k r )

for all r>1. Writing S i :=X1+⋯+X i , we abbreviate this as

𝔼 [ 0 , r S k 1 ] g ( t ) dt 2 m 2 M k [ T ] r k ( S k r).
(114)

Now we run a variant of the Cauchy-Schwarz argument used to prove Corollary 37. If, for fixed r>0, we introduce the random function h:(0,+) by the formula

h(t):= 1 r S k 1 + ( k 1 ) t 1 S k 1 < r
(115)

and observe that whenever Sk−1<r, we have

[ 0 , r S k 1 ] h(t)dt= log k k 1
(116)

and thus by the Legendre identity, we have

[ 0 , r S k 1 ] g ( t ) dt 2 = log k k 1 [ 0 , r S k 1 ] g ( t ) 2 h ( t ) dt 1 2 [ 0 , r S k 1 ] [ 0 , r S k 1 ] ( g ( s ) h ( t ) g ( t ) h ( s ) ) 2 h ( s ) h ( t ) dsdt

for Sk−1<r; but the claim also holds when rSk−1 since all integrals vanish in that case. On the other hand, we have

𝔼 [ 0 , r S k 1 ] g ( t ) 2 h ( t ) dt = m 2 𝔼 r S k 1 + ( k 1 ) X k 1 X k r S k 1 = m 2 𝔼 r S k + k X k 1 S k r = m 2 𝔼r 1 S k r = m 2 rℙ ( S k r )

where we have used symmetry to get the third equality. We conclude that

𝔼 [ 0 , r S k 1 ] g ( t ) dt 2 = log k k 1 m 2 rℙ ( S k r ) 1 2 𝔼 [ 0 , r S k 1 ] [ 0 , r S k 1 ] ( g ( s ) h ( t ) g ( t ) h ( s ) ) 2 h ( s ) h ( t ) dsdt.

Combining this with (114), we conclude that

Δrℙ ( S k r ) k 2 m 2 𝔼 [ 0 , r S k 1 ] [ 0 , r S k 1 ] g ( s ) h ( t ) g ( t ) h ( s ) 2 h ( s ) h ( t ) dsdt

where

Δ : = k k 1 log k − M k [ T ] .

Splitting into regions where s,t are less than T or greater than T, and noting that g(s) vanishes for s>T, we conclude that

Δrℙ ( S k r ) Y 1 ( r ) + Y 2 ( r )

where

Y 1 ( r ) : = k m 2 𝔼 [ 0 , T ] [ T , r S k 1 ] g ( t ) 2 h ( t ) h ( s ) dsdt

and

Y 2 ( r ) : = k 2 m 2 𝔼 [ 0 , min ( T , r S k 1 ) ] [ 0 , min ( T , r S k 1 ) ] g ( s ) h ( t ) g ( t ) h ( s ) 2 h ( s ) h ( t ) dsdt.

We average this from r=1 to r=1+τ, to conclude that

Δ 1 τ 1 1 + τ rℙ ( S k r ) dr 1 τ 1 1 + τ Y 1 ( r ) dr + 1 τ 1 1 + τ Y 2 ( r ) dr.

Thus, to prove (107), it suffices (by (106)) to establish the bounds

1 τ 1 1 + τ rℙ( S k r)dr(1+τ/2) 1 k σ 2 ( 1 + τ ) 2 ,
(117)
k k 1 Y 1 (r)Z+ Z 3
(118)

for all 1<r≤1+τ, and

1 τ 1 1 + τ Y 2 (r)dr k k 1 (WX+VU).
(119)

We begin with (117). Since

1 τ 1 1 + τ r dr = 1 + τ 2 ,

it suffices to show that

( S k > 1 + τ ) k σ 2 ( 1 + τ ) 2 .

But from (102) and (103), we see that each X i has mean μ and variance σ2, so S k has mean k μ and variance k σ2. The claim now follows from Chebyshev’s inequality and (104).

Now we show (118). The quantity Y1(r) vanishes unless r − S k−1 ≥ T. Using the crude bound h(s) ≤ 1/((k−1)s) from (115), we see that

[ T , r S k 1 ] h ( s ) ds 1 k 1 log + r S k 1 T

where log+(x):= max(logx,0). We conclude that

Y 1 ( r ) k k 1 1 m 2 𝔼 [ 0 , T ] g ( t ) 2 h ( t ) dt log + r S k 1 T .

We can rewrite this as

Y 1 ( r ) k k 1 𝔼 1 S k r h ( X k ) log + r S k 1 T .

By (115), we have

1 S k r h ( X k ) = r S k + k X k 1 S k r .

Also, from the elementary bound log+(x+y)≤ log+x+ log(1+y) for any x,y≥0, we see that

log + r S k 1 T log + r S k T + log 1 + X k T .

We conclude that

Y 1 ( r ) k k 1 𝔼 r S k + k X k log + r S k T + log 1 + X k T 1 S k r k k 1 𝔼 ( r S k + k X k ) log + r S k T + max ( r S k , 0 ) X k T + k X k log 1 + X k T

using the elementary bound log(1+y)≤y. Symmetrizing in the X1,…,X k , we conclude that

Y 1 (r) k k 1 ( Z 1 (r)+ Z 2 (r)+ Z 3 )
(120)

where

Z 1 ( r ) : = 𝔼r log + r S k T Z 2 ( r ) : = 𝔼 ( r S k ) 1 S k r S k kT

and Z3 was defined in (109).

For the minor error term Z2, we use the crude bound (r S k ) 1 S k r S k r 2 4 , so

Z 2 (r) r 2 4 kT .
(121)

For Z1, we upper bound log+x by a quadratic expression in x. More precisely, we observe the inequality

log + x x 2 a log a a 2 4 a 2 log a

for any a>1 and x, since the left-hand side is concave in x for x≥1, while the right-hand side is convex in x, non-negative, and tangent to the left-hand side at x=a. We conclude that

log + r S k T r S k 2 aT log a aT 2 4 a 2 T 2 log a .

On the other hand, from (102) and (103), we see that each X i has mean μ and variance σ2, so S k has mean k μ and variance k σ2. We conclude that

Z 1 ( r ) r r 2 aT log a aT 2 + k σ 2 4 a 2 T 2 log a

for any a>1.

From (105) and the assumption r>1, we may choose a:= r T here, leading to the simplified formula

Z 1 (r)r log r T + k σ 2 4 ( r ) 2 log r T .
(122)

From (120), (121), (122), and (108) we conclude (118).

Finally, we prove (119). Here, we finally use the specific form (100) of the function g. Indeed, from (100) and (115), we observe the identity

g ( t ) h ( t ) = ( r S k 1 c ) g ( t ) h ( t )

for t∈[0, min(rSk−1,T)]. Thus,

Y 2 ( r ) = k 2 m 2 𝔼 0 , min ( r S k 1 , T ) 0 , min ( r S k 1 , T ) ( g h ) ( s ) h ( t ) ( g h ) ( t ) h ( s ) 2 h ( s ) h ( t ) dsdt = k 2 m 2 𝔼 ( r S k 1 c ) 2 0 , min ( r S k 1 , T ) 0 , min ( r S k 1 , T ) g ( s ) g ( t ) 2 h ( s ) h ( t ) dsdt.

Using the crude bound (g(s)−g(t))2g(s)2+g(t)2 and using symmetry, we conclude

Y 2 ( r ) k m 2 𝔼 r S k 1 c 2 0 , min ( r S k 1 , T ) 0 , min ( r S k 1 , T ) g ( s ) 2 h ( s ) h ( t ) dsdt.

From (116) and (115), we conclude that

Y 2 ( r ) k k 1 Z 4 ( r )

where

Z 4 ( r ) : = log k m 2 𝔼 r S k 1 c 2 0 , min r S k 1 , T g ( s ) 2 r S k 1 + ( k 1 ) s ds .

To prove (119), it thus suffices (after making the change of variables r=1+u τ) to show that

0 1 Z 4 (1+uτ)du ≤ WX+VU.
(123)

We will exploit the averaging in u to deal with the singular nature of the factor 1 r S k 1 + ( k 1 ) s . By Fubini’s theorem, the left-hand side of (123) may be written as

log k m 2 𝔼 0 1 Q ( u ) du

where Q(u) is the random variable

Q ( u ) : = 1 + S k 1 c 2 0 , min 1 + S k 1 , T g ( s ) 2 1 + S k 1 + ( k 1 ) s ds.

Note that Q(u) vanishes unless 1+uτ−Sk−1>0. Consider first the contribution of those Q(u) for which

0 < 1 + S k 1 2 c.

In this regime, we may bound

1 + S k 1 c 2 c 2 ,

so this contribution to (123) may be bounded by

log k m 2 c 2 𝔼 0 , T g ( s ) 2 0 1 1 1 + S k 1 s 1 + S k 1 + ( k 1 ) s du ds.

Observe on making the change of variables v:=1+u τSk−1+(k−1)s that

0 1 1 1 + S k 1 s 1 + S k 1 + ( k 1 ) s du = 1 τ max ( ks , 1 S k 1 + ( k 1 ) s ) , 1 S k 1 + τ + ( k 1 ) s dv v 1 τ log ks + τ ks

and so this contribution to (123) is bounded by WX, where W,X are defined in (110) and (111).

Now we consider the contribution to (123) when

1 + S k 1 > 2 c.

In this regime, we bound

1 1 + S k 1 + ( k 1 ) s 1 2 c + ( k 1 ) t ,

and so this portion of 0 1 Z 4 (1+uτ)du may be bounded by

0 1 log k c 𝔼 1 + S k 1 c 2 V du = VU

where V,U are defined in (112) and (113). The proof of the theorem is now complete.

We can now perform an asymptotic analysis in the limit k→∞ to establish Theorem 23(xi) and Theorem 25(vi). For k sufficiently large, we select the parameters

c : = 1 log k + α log 2 k , T : = β log k , τ : = γ log k

for some real parameters α and β,γ>0 independent of k, to be optimized later. From (100) and (101), we have

m 2 = 1 k 1 1 c 1 c + ( k 1 ) T = log k k 1 α log k + o 1 log k

where we use o(f(k)) to denote a function g(k) of k with g(k)/f(k)→0 as k→∞. On the other hand, we have from (100) and (102) that

m 2 ( c + ( k 1 ) μ ) = 0 T c + ( k 1 ) t g ( t ) 2 dt = 1 k 1 log c + ( k 1 ) T c = log k k 1 + log β log k + o 1 log k

and thus

k μ = k k 1 1 + log β + α log k + o 1 log k − kc k 1 = 1 + log β + α log k + o 1 log k − ( 1 log k + o 1 log k ) = 1 + log β + α − 1 log k + o 1 log k .

Similarly, from (100), (102), and (103), we have

m 2 c 2 + 2 c ( k 1 ) μ + ( k 1 ) 2 μ 2 + σ 2 = 0 T c + ( k 1 ) t 2 g ( t ) 2 dt = T

and thus

k σ 2 = k ( k 1 ) 2 T m 2 c 2 2 c ( k 1 ) μ k μ 2 = β log 2 k + o 1 log 2 k .

We conclude that the hypotheses (104), (105), and (106) will be obeyed for sufficiently large k if we have

log β + α + γ < 1 , log β + α + β < 1 , β < ( 1 + γ − α − log β ) 2 .

These conditions can be simultaneously obeyed, for instance by setting β=γ=1 and α=−1.
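As a numerical sanity check (no part of the proof), one can evaluate the integrals (101)-(103) in closed form for this choice of c and T, since g is a rational function, and confirm that kμ and kσ2 behave as computed above. The sketch below does this for the illustrative choice β=γ=1, α=−1 just mentioned.

```python
import math

def moments(k, c, T):
    """Exact m2, mu, sigma^2 from (101)-(103) for g(t) = 1/(c + (k-1)t);
    the integrals are elementary since g is rational."""
    K = k - 1
    v = c + K * T                                       # c + (k-1)t at t = T
    m2 = (1 / c - 1 / v) / K                            # int g(t)^2 dt
    i1 = (math.log(v / c) - 1 + c / v) / K ** 2         # int t g(t)^2 dt
    i2 = (T - 2 * c * math.log(v / c) / K + c * (1 - c / v) / K) / K ** 2
    mu = i1 / m2
    sigma2 = i2 / m2 - mu ** 2
    return m2, mu, sigma2

alpha, beta, gamma = -1.0, 1.0, 1.0
for k in (10 ** 4, 10 ** 6, 10 ** 8):
    L = math.log(k)
    c, T = 1 / L + alpha / L ** 2, beta / L
    m2, mu, sigma2 = moments(k, c, T)
    # columns: k, k*mu, the asymptotic 1 + (log(beta)+alpha-1)/log k, k*sigma^2*log^2 k
    # (the last column should approach beta as k grows)
    print(k, round(k * mu, 5), round(1 + (math.log(beta) + alpha - 1) / L, 5),
          round(k * sigma2 * L ** 2, 4))
```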

Now we crudely estimate the quantities Z,Z3,W,X,V,U in (108)-(113). For 1≤r≤1+τ, we have r − kμ ∼ 1/ logk, and so

r T 1 ; k σ 2 ( r ) 2 1 ; r 2 4 kT = o ( 1 )

and so by (108) Z=O(1). Using the crude bound log 1 + t T =O(1) for 0≤tT, we see from (109) and (102) that Z3=O(k μ)=O(1). It is clear that X=O(1), and using the crude bound 1 2 c + ( k 1 ) t 1 c we see from (112) and (101) that V=O(1). For 0≤u≤1 we have 1+u τ−(k−1)μc=O(1/ logk), so from (113) we have U=O(1). Finally, from (110) and the change of variables t= s k log k , we have

W = log k k m 2 0 kT log k log 1 + γ s ds 1 + α log k + k 1 k s 2 = O 0 log 1 + γ s ds ( 1 + o ( 1 ) ) ( 1 + s ) 2 = O ( 1 ) .

Finally, we have

1 k σ 2 ( 1 + τ ) 2 1 .

Putting all these together, we see from (107) that

M k ≥ M k [ T ] ≥ k k 1 log k − O ( 1 )

giving Theorem 23(xi). Furthermore, if we set

ϖ : = 7 600 C log k

and

δ : = 1 4 + 7 600 β log k ,

then we will have 600ϖ+180δ<7 for C large enough, and Theorem 25(vi) also follows (as one can verify from inspection that all implied constants here are effective).

Finally, Theorem 23(viii), (ix), and (x) follow by setting

c : = θ log k T : = β log k τ = 1

with θ,β given by Table 2, with (107) then giving the bound M k [ T ] >M with M as given by the table, after verifying of course that the conditions (104), (105), and (106) are obeyed. Similarly, Theorem 25 (ii), (iii), (iv), and (v) follow with θ,β given by the same table, with ϖ chosen so that

M = m 1 4 + ϖ

with m=2,3,4,5 for (ii), (iii), (iv), (v), respectively, and δ chosen by the formula

δ : = T 1 4 + ϖ .
Table 2 Parameter choices for Theorems 23 and 25

The case of small and medium dimension

In this section, we establish lower bounds for M k (and related quantities, such as Mk,ε) both for small values of k (in particular, k=3 and k=4) and medium values of k (in particular, k=50 and k=54). Specifically, we will establish Theorem 23(vii), Theorem 27, and Theorem 29.

7.1 Bounding M k for medium k

We begin with the problem of lower bounding M k . We first formalize an observation of Maynard [5] that one may restrict without loss of generality to symmetric functions:

Lemma 41.

For any k≥2, one has

M k : = sup k J 1 ( F ) I ( F )

where F ranges over symmetric square-integrable functions on R k that are not identically zero.

Proof.

Firstly, observe that if one replaces a square-integrable function F:[0,+ ) k with its absolute value |F|, then I(|F|)=I(F) and J i (|F|)≥J i (F). Thus, one may restrict the supremum in (33) to non-negative functions without loss of generality. We may thus find a sequence F n of square-integrable non-negative functions on R k , normalized so that I(F n )=1, and such that i = 1 k J i ( F n ) → M k as n→∞.

Now let

F n ¯ ( t 1 , , t k ) : = 1 k ! σ S k F n ( t σ ( 1 ) , , t σ ( k ) )

be the symmetrisation of F n . Since the F n are non-negative with I(F n )=1, we see that

I ( F n ¯ ) ≥ I 1 k ! F n = 1 ( k ! ) 2

and so I( F n ¯ ) is bounded away from zero. Also, from (33), we know that the quadratic form

Q ( F ) : = M k I ( F ) − i = 1 k J i ( F )

is positive semi-definite and is also invariant with respect to symmetries, and so from the triangle inequality for inner product spaces, we conclude that

Q F n ¯ Q F n .

By construction, Q(F n ) goes to zero as n→∞, and thus Q( F n ¯ ) also goes to zero. We conclude that

k J 1 F n ¯ I F n ¯ = i = 1 k J i F n ¯ I F n ¯ → M k

as n→∞, and so

M k ≤ sup k J 1 ( F ) I ( F ) .

The reverse inequality is immediate from (33), and the claim follows.

To establish a lower bound of the form M k >C for some C>0, one thus seeks to locate a symmetric function F:[0,+ ) k supported on R k such that

k J 1 (F)>CI(F).
(124)

To do this numerically, we follow [5] (see also [2] for some related ideas) and can restrict attention to functions F that are linear combinations

F = i = 1 n a i b i

of some explicit finite set b 1 ,, b n :[0,+ ) k supported on R k and some real scalars a1,…,a n that we may optimize in. The condition (124) then may be rewritten as

a T M 2 a − C a T M 1 a>0
(125)

where a is the vector

a : = a 1 a n

and M1,M2 are the real symmetric and positive semi-definite n×n matrices

M 1 = k b i ( t 1 , , t k ) b j ( t 1 , , t k ) d t 1 d t k 1 i , j n
(126)
M 2 = k k + 1 b i ( t 1 , , t k ) b j ( t 1 , , t k 1 , t k ) d t 1 d t k d t k 1 i , j n .
(127)

If the b1,…,b n are linearly independent in L 2 ( R k ), then M1 is strictly positive definite, and (as observed in [5, Lemma 8.3]), one can find a obeying (125) if and only if the largest eigenvalue of M 2 M 1 −1 exceeds C. This is a criterion that can be numerically verified for medium-sized values of n, if the b1,…,b n are chosen so that the matrix coefficients of M1,M2 are explicitly computable.

In order to facilitate computations, it is natural to work with bases b1,…,b n of symmetric polynomials. We have the following basic integration identity:

Lemma 42(Beta function identity).

For any non-negative a,a1,…,a k , we have

R k ( 1 t 1 t k ) a t 1 a 1 t k a k d t 1 d t k = Γ ( a + 1 ) Γ ( a 1 + 1 ) Γ ( a k + 1 ) Γ ( a 1 + + a k + k + a + 1 )

where Γ(s):= 0 t s 1 e t dt is the Gamma function. In particular, if a1,…,a k are natural numbers, then

R k ( 1 t 1 t k ) a t 1 a 1 t k a k d t 1 d t k = a ! a 1 ! a k ! ( a 1 + + a k + k + a ) ! .

Proof.

Since

R k ( 1 t 1 t k ) a t 1 a 1 t k a k d t 1 d t k = a R k + 1 t 1 a 1 t k a k t k + 1 a 1 d t 1 d t k + 1 ,

we see that to establish the lemma, it suffices to do so in the case a=0.

If we write

X : = t 1 + + t k = 1 t 1 a 1 t k a k d t 1 d t k 1 ,

then by homogeneity we have

r a 1 + + a k + k 1 X = t 1 + + t k = r t 1 a 1 t k a k d t 1 d t k 1

for any r>0, and hence on integrating r from 0 to 1, we conclude that

X a 1 + + a k + k = R k t 1 a 1 t k a k d t 1 d t k .

On the other hand, if we multiply by er and integrate r from 0 to , we obtain instead

0 r a 1 + + a k + k 1 X e r dr = [ 0 , + ) k t 1 a 1 t k a k e t 1 t k d t 1 d t k .

Using the definition of the Gamma function, this becomes

Γ ( a 1 + + a k + k ) X = Γ ( a 1 + 1 ) Γ ( a k + 1 )

and the claim follows.
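The identity of Lemma 42 can be spot-checked by Monte Carlo integration over the simplex; the following numpy sketch compares a crude sample average with the closed form (the exponents below are arbitrary).

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def simplex_integral_mc(k, a, exps, n=200_000):
    """Monte Carlo estimate of int_{R_k} (1 - t_1 - ... - t_k)^a * prod t_i^{a_i} dt."""
    # spacings of sorted uniforms are uniform on {t_i >= 0, sum t_i <= 1},
    # a region of volume 1/k!
    t = np.sort(rng.random((n, k)), axis=1)
    t = np.diff(np.concatenate([np.zeros((n, 1)), t], axis=1), axis=1)
    vals = (1 - t.sum(axis=1)) ** a * np.prod(t ** np.array(exps), axis=1)
    return vals.mean() / math.factorial(k)

def simplex_integral_exact(k, a, exps):
    """Closed form from Lemma 42 for natural-number exponents."""
    num = math.factorial(a) * math.prod(math.factorial(e) for e in exps)
    return num / math.factorial(sum(exps) + k + a)

k, a, exps = 3, 2, (1, 0, 2)
print(simplex_integral_mc(k, a, exps), simplex_integral_exact(k, a, exps))
```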

Define a signature to be a non-increasing sequence α=(α1,α2,…,α k ) of natural numbers; for brevity, we omit zeroes; thus, for instance if k=6, then (2,2,1,1,0,0) will be abbreviated as (2,2,1,1). The number of non-zero elements of α will be called the length of the signature α, and as usual the degree of α will be α1+⋯+α k . For each signature α, we then define the symmetric polynomials P α = P α ( k ) by the formula

P α ( t 1 , , t k ) = a : s ( a ) = α t 1 a 1 t k a k

where the summation is over all tuples a=(a1,…,a k ) whose non-increasing rearrangement s(a) is equal to α. Thus, for instance

P ( 1 ) ( t 1 , , t k ) = t 1 + + t k P ( 2 ) ( t 1 , , t k ) = t 1 2 + + t k 2 P ( 1 , 1 ) ( t 1 , , t k ) = 1 i < j k t i t j P ( 2 , 1 ) ( t 1 , , t k ) = 1 i < j k t i 2 t j + t i t j 2

and so forth. Clearly, the P α form a linear basis for the symmetric polynomials of t1,…,t k . Observe that if α=(α′,1) is a signature containing 1, then one can express P α as P ( 1 ) P α′ minus a linear combination of polynomials P β with the length of β less than that of α. This implies that the functions P ( 1 ) a P α , with a≥0 and α avoiding 1, are also a basis for the symmetric polynomials. Equivalently, the functions (1−P(1))aP α with a≥0 and α avoiding 1 form a basis.

After extensive experimentation, we have discovered that a good basis b1,…,b n to use for the above problem comes by setting the b i to be all the symmetric polynomials of the form (1−P(1))aP α , where a≥0 and α consists entirely of even numbers, whose total degree a+α1+⋯+α k is less than or equal to some chosen threshold d. For such functions, the coefficients of M1,M2 can be computed exactly using Lemma 42.
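For concreteness, the size of this basis is easy to enumerate: one lists the pairs (a,α) with α a signature consisting entirely of even parts and a + deg(α) ≤ d. The short sketch below performs this count; the values of d and k are illustrative, and the printed numbers are simply the dimension n of the resulting quadratic program for those parameters.

```python
def partitions_even(n, max_part, max_len):
    """Partitions of n into at most max_len even parts, each <= max_part,
    listed in non-increasing order."""
    if n == 0:
        yield ()
        return
    if max_len == 0:
        return
    start = min(max_part, n)
    if start % 2 == 1:
        start -= 1
    for part in range(start, 1, -2):
        for rest in partitions_even(n - part, part, max_len - 1):
            yield (part,) + rest

def basis(d, k):
    """All pairs (a, alpha): alpha an even signature with at most k parts,
    a >= 0, a + deg(alpha) <= d, indexing the polynomials (1 - P_(1))^a P_alpha."""
    out = []
    for deg in range(0, d + 1, 2):
        for alpha in partitions_even(deg, deg, k):
            for a in range(0, d - deg + 1):
                out.append((a, alpha))
    return out

for d in (11, 15, 19, 23):
    print(d, len(basis(d, 54)))
```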

More explicitly, first we quickly compute a look-up table for the structure constants c α , β , γ derived from simple products of the form

P α P β = γ c α , β , γ P γ

where deg(α)+ deg(β)≤d. Using this look-up table, we rewrite the integrands of the entries of the matrices in (126) and (127) as integer linear combinations of nearly ‘pure’ monomials of the form ( 1 P ( 1 ) ) a t 1 a 1 t k a k . We then calculate the entries of M1 and M2, as exact rational numbers, using Lemma 42.

We next run a generalized eigenvector routine on (real approximations to) M1 and M2 to find a vector a which nearly maximizes the quantity C in (125). Taking a rational approximation to a, we then do the quick (and exact) arithmetic to verify that (125) holds for some constant C>4. This generalized eigenvector routine is time-intensive when the sizes of M1 and M2 are large (say, bigger than 1,500×1,500) and in practice is the most computationally intensive step of our calculation. When one does not need an exact arithmetic proof that C>4, one can instead run a test for positive-definiteness of the matrix C M1−M2, which is usually much faster and less RAM intensive.
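The following sketch mirrors this pipeline on small stand-in matrices: a floating-point generalized eigenvalue computation to locate a near-optimal vector a, a rational rounding of a, and then an exact verification of (125) in fraction arithmetic. The matrices M1, M2 and the constant C below are placeholders, not the actual data of the computation described here (in which the target is C>4).

```python
from fractions import Fraction
import numpy as np
from scipy.linalg import eigh

# Small stand-in matrices (symmetric, M1 positive definite); the real M1, M2
# are assembled exactly via Lemma 42 as described above.
M1 = np.array([[2.0, 0.5], [0.5, 1.0]])
M2 = np.array([[3.0, 1.0], [1.0, 2.5]])

# largest generalized eigenvalue of M2 v = lambda M1 v
vals, vecs = eigh(M2, M1)
lam, a = vals[-1], vecs[:, -1]
print("largest generalized eigenvalue:", lam)

# rational approximation of the eigenvector, then exact check of (125)
C = Fraction(2)                      # placeholder target constant
aQ = [Fraction(x).limit_denominator(10 ** 6) for x in a]
M1Q = [[Fraction(x).limit_denominator(10 ** 6) for x in row] for row in M1]
M2Q = [[Fraction(x).limit_denominator(10 ** 6) for x in row] for row in M2]

def quad_form(M, v):
    return sum(v[i] * M[i][j] * v[j] for i in range(len(v)) for j in range(len(v)))

gap = quad_form(M2Q, aQ) - C * quad_form(M1Q, aQ)
print("exact value of a^T M2 a - C a^T M1 a:", gap, "positive:", gap > 0)
```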

Using this method, we were able to demonstrate M54>4.00238, thus establishing Theorem 23(vii). We took d=23 and imposed the restriction on signatures α that they be composed only of even numbers. It is likely that d=22 would suffice in the absence of this restriction on signatures, but we found that the gain in M54 from lifting this restriction is typically only in the region of 0.005, whereas the execution time is increased by a large factor. We do not have a good understanding of why this particular restriction on signatures is so inexpensive in terms of the trade-off between the accuracy of M-values and computational complexity. The total run-time for this computation was under 1 h.

We now describe a second choice for the basis elements b1,…,b n , which uses the Krylov subspace method; it gives faster and more efficient numerical results than the previous basis, but does not seem to extend as well to more complicated variational problems such as Mk,ε. We introduce the linear operator : L 2 ( R k ) L 2 ( R k ) defined by

ℒf ( t 1 , , t k ) : = i = 1 k 0 1 t 1 t i 1 t i + 1 t k f ( t 1 , , t i 1 , t i , t i + 1 , , t k ) d t i .

This is a self-adjoint and positive semi-definite operator on L 2 ( R k ). For symmetric b 1 ,, b n L 2 ( R k ), one can then write

M 1 = b i , b j 1 i , j n M 2 = b i , b j 1 i , j n .

If we then choose

b i : = i 1 1

where 1 is the unit constant function on R k , then the matrices M1,M2 take the Hankel form

M 1 = i + j 2 1 , 1 1 i , j n M 2 = i + j 1 1 , 1 1 i , j n ,

and so can be computed entirely in terms of the 2n numbers i 1,1 for i=0,…,2n−1.

The operator maps symmetric polynomials to symmetric polynomials; for instance, one has

1 = k ( k 1 ) P ( 1 ) P ( 1 ) = k 2 k 1 2 P ( 2 ) ( k 2 ) P ( 1 , 1 )

and so forth. From this and Lemma 42, the quantities i 1,1 are explicitly computable rational numbers; for instance, one can calculate

1 , 1 = 1 k ! 1 , 1 = 2 k ( k + 1 ) ! 2 1 , 1 = k ( 5 k + 1 ) ( k + 2 ) ! 3 1 , 1 = 2 k 2 ( 7 k + 5 ) ( k + 3 ) !

and so forth.
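Even the four moments listed above already give a usable (if weak) lower bound: with n=2, the Hankel matrices are 2×2 and the largest generalized eigenvalue can be computed directly. The sketch below does this; for k=2 it returns about 1.383, consistent with (and slightly below) the exact value M 2 =1.38593… from Corollary 36, while approaching the values in Table 3 requires many more moments.

```python
from fractions import Fraction
from math import factorial
import numpy as np
from scipy.linalg import eigh

def moments(k):
    """The quantities <L^i 1, 1> for i = 0,...,3, as listed above."""
    return [Fraction(1, factorial(k)),
            Fraction(2 * k, factorial(k + 1)),
            Fraction(k * (5 * k + 1), factorial(k + 2)),
            Fraction(2 * k * k * (7 * k + 5), factorial(k + 3))]

def krylov_lower_bound(k):
    m = [float(x) for x in moments(k)]
    M1 = np.array([[m[0], m[1]], [m[1], m[2]]])   # entries <L^{i+j-2} 1, 1>
    M2 = np.array([[m[1], m[2]], [m[2], m[3]]])   # entries <L^{i+j-1} 1, 1>
    return eigh(M2, M1, eigvals_only=True)[-1]    # largest generalized eigenvalue

for k in (2, 3, 4, 5, 10, 54):
    print(k, round(krylov_lower_bound(k), 4))
```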

With Maple, we were able to compute i 1,1 for i≤50 and k≤100, leading to lower bounds on M k for these values of k, a selection of which is given in Table 3.

Table 3 Selected lower bounds on M k obtained from the Krylov subspace method, with k k 1 logk upper bound displayed for comparison

7.2 Bounding Mk,εfor medium k

When bounding Mk,ε, we have not been able to implement the Krylov method because the analogue of ℒ i 1 in this context is piecewise polynomial instead of polynomial, and we were only able to compute it explicitly for very small values of i, such as i=1,2,3, which are insufficient for good numerics. Thus, we rely on the previously discussed approach, in which symmetric polynomials are used for the basis functions. Instead of computing integrals over the region R k , we pass to the regions (1±ε) R k . In order to apply Lemma 42 over these regions, this necessitates working with a slightly different basis of polynomials. We chose to work with those polynomials of the form (1+ε−P(1)) a P α , where α is a signature with no 1’s. Over the region (1+ε) R k , a single change of variables converts the needed integrals into those of the form in Lemma 42, and we can then compute the entries of M1.

On the other hand, over the region (1−ε) R k , we instead want to work with polynomials of the form (1−ε−P(1)) a P α . Since (1+ε−P(1)) a =(2ε+(1−ε−P(1))) a , an expansion using the binomial theorem allows us to convert from our given basis to polynomials of the needed form.

With these modifications, and calculating as in the previous section, we find that M50,1/25>4.00124 if d=25 and M50,1/25>4.0043 if d=27, thus establishing Theorem 27(i). As before, we found it optimal to restrict signatures to contain only even entries, which greatly reduced execution time while only reducing M by a few thousandths.

One surprising additional computational difficulty introduced by allowing ε>0 is that the ‘complexity’ of ε as a rational number affects the run-time of the calculations. We found that choosing ε=1/m (where m has only small prime factors) reduces this effect.

A similar argument gives M51,1/50>4.00156, thus establishing Theorem 27(xiii). In this case, our polynomials were of maximum degree d=22.

Code and data for these calculations may be found at http://www.dropbox.com/sh/0xb4xrsx4qmua7u/WOhuo2Gx7f/Polymath8b.

7.3 Bounding M4,ε

We now prove Theorem 27(xii’), which can be established by a direct numerical calculation. We introduce the explicit function F:[0,+ ) 4 defined by

F ( t 1 , t 2 , t 3 , t 4 ) : = ( 1 α ( t 1 + t 2 + t 3 + t 4 ) ) 1 t 1 + t 2 + t 3 + t 4 1 + ε

with ε:=0.168 and α:=0.784. As F is symmetric in t1,t2,t3,t4, we have Ji,1−ε(F)=J1,1−ε(F), so to show Theorem 27(xii’) it will suffice to show that

4 J 1 , 1 ε ( F ) I ( F ) >2.00558.
(128)

By making the change of variables s=t1+t2+t3+t4, we see that

I ( F ) = t 1 + t 2 + t 3 + t 4 1 + ε 1 α ( t 1 + t 2 + t 3 + t 4 ) 2 d t 1 d t 2 d t 3 d t 4 = 0 1 + ε ( 1 αs ) 2 s 3 3 ! ds = α 2 ( 1 + ε ) 6 36 α ( 1 + ε ) 5 15 + ( 1 + ε ) 4 24 = 0.00728001347

and similarly by making the change of variables u=t1+t2+t3

J 1 , 1 ε ( F ) = t 1 + t 2 + t 3 1 ε 0 1 + ε t 1 t 2 t 3 ( 1 α ( t 1 + t 2 + t 3 + t 4 ) ) d t 4 2 d t 1 d t 2 d t 3 = 0 1 ε 0 1 + ε u ( 1 α ( u + t 4 ) ) d t 4 2 u 2 2 ! du = 0 1 ε ( 1 + ε u ) 2 1 α 1 + ε + u 2 2 u 2 2 du = 0.003650160667

and so (128) follows.
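
The two one-dimensional integrals above are elementary and can be checked independently; the following short sympy script (added here purely as an illustration, not part of the original computation) evaluates them with the exact rational parameters $\varepsilon=0.168$ and $\alpha=0.784$.

```python
from sympy import Rational, symbols, integrate

eps = Rational(168, 1000)    # epsilon = 0.168
alpha = Rational(784, 1000)  # alpha = 0.784
s, u = symbols('s u')

# I(F), after the change of variables s = t1 + t2 + t3 + t4
I_F = integrate((1 - alpha*s)**2 * s**3 / 6, (s, 0, 1 + eps))

# J_{1,1-eps}(F), after the change of variables u = t1 + t2 + t3;
# the inner t4-integral equals (1+eps-u) * (1 - alpha*(1+eps+u)/2).
inner = (1 + eps - u) * (1 - alpha*(1 + eps + u)/2)
J_F = integrate(inner**2 * u**2 / 2, (u, 0, 1 - eps))

print(float(I_F))        # approximately 0.0072800
print(float(J_F))        # approximately 0.0036502
print(float(4*J_F/I_F))  # approximately 2.0056
```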

Remark 43.

If we use the truncated function

$$\tilde F(t_1,t_2,t_3,t_4) := F(t_1,t_2,t_3,t_4)\, 1_{t_1,t_2,t_3,t_4\le 1}$$

in place of F and set ε to 0.18 instead of 0.168, one can compute that

$$\frac{4J_{1,1-\varepsilon}(\tilde F)}{I(\tilde F)} > 2.00235.$$

Thus, it is possible to establish Theorem 27(xii') using a cutoff function $F$ that is also supported in the unit cube $[0,1]^4$. This allows for a slight simplification to the proof of DHL[4;2] assuming GEH, as one can add the additional hypothesis $S(F_{i_0})+S(G_{i_0})<1$ to Theorem 20(ii) in that case.

Remark 44.

By optimizing in ε and taking F to be a symmetric polynomial of degree higher than 1, one can get slightly better lower bounds for M4,ε; for instance, setting ε=5/21 and choosing F to be a cubic polynomial, we were able to obtain the bound M4,ε≥2.05411. On the other hand, the best lower bound for M3,ε that we were able to obtain was 1.91726 (taking ε=56/113 and optimizing over cubic polynomials). Again, see http://www.dropbox.com/sh/0xb4xrsx4qmua7u/WOhuo2Gx7f/Polymath8b for the relevant code and data.

7.4 Three-dimensional cutoffs

In this section, we establish Theorem 29. We relabel the variables $(t_1,t_2,t_3)$ as $(x,y,z)$; thus, our task is to locate a piecewise polynomial function $F\colon[0,+\infty)^3\to\mathbb{R}$ supported on the simplex

$$R := \left\{(x,y,z)\in[0,+\infty)^3 : x+y+z\le \tfrac{3}{2}\right\}$$

and symmetric in the x,y,z variables, obeying the vanishing marginal condition

$$\int_0^\infty F(x,y,z)\,dz = 0$$
(129)

whenever x,y≥0 with x+y>1+ε, and such that

J(F)>2I(F)
(130)

where

$$J(F) := 3\int_{x+y\le 1-\varepsilon}\left(\int_0^\infty F(x,y,z)\,dz\right)^2 dx\,dy$$
(131)

and

$$I(F) := \int_R F(x,y,z)^2\, dx\,dy\,dz$$
(132)

and

$$\varepsilon := 1/4.$$

Our strategy will be as follows. We will decompose the simplex $R$ (up to null sets) into a carefully selected set of disjoint open polyhedra $P_1,\dots,P_m$ (in fact $m$ will be 60), and on each $P_i$ we will take $F(x,y,z)$ to be a low-degree polynomial $F_i(x,y,z)$ (indeed, the degree will never exceed 3). The left-hand and right-hand sides of (130) then become quadratic forms in the coefficients of the $F_i$. Meanwhile, the requirement of symmetry, as well as the marginal requirement (129), imposes some linear constraints on these coefficients. In principle, this creates a finite-dimensional quadratic program, which one can try to solve numerically. However, to make this strategy practical, one needs to keep the number of linear constraints imposed on the coefficients fairly small compared with the total number of coefficients. To achieve this, the following properties of the polytopes $P_i$ are desirable:

(Symmetry) If P i is a polytope in the partition, then every reflection of P i formed by permuting the x,y,z coordinates should also lie in the partition.

(Graph structure) Each polytope $P_i$ should be of the form

$$\left\{(x,y,z) : (x,y)\in Q_i;\ a_i(x,y) < z < b_i(x,y)\right\},$$
(133)

where a i (x,y),b i (x,y) are linear forms and Q i is a polygon.

(Epsilon splitting) Each Q i is contained in one of the regions {(x,y):x+y<1−ε}, {(x,y):1−ε<x+y<1+ε}, or {(x,y):1+ε<x+y<3/2}.

Observe that the vanishing marginal condition (129) now takes the form

$$\sum_{i : (x,y)\in Q_i}\int_{a_i(x,y)}^{b_i(x,y)} F_i(x,y,z)\,dz = 0$$
(134)

for every $x,y>0$ with $x+y>1+\varepsilon$. If the set $\{i : (x,y)\in Q_i\}$ is fixed, then the left-hand side of (134) is a polynomial in $x,y$ whose coefficients depend linearly on the coefficients of the $F_i$, and thus (134) imposes a set of linear conditions on these coefficients for each possible set $\{i : (x,y)\in Q_i\}$ with $x+y>1+\varepsilon$.
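
To illustrate how such an identity is converted into linear constraints, the following toy sympy sketch takes a single hypothetical polynomial piece with unknown coefficients $c_0,\dots,c_3$ and illustrative linear limits of the shape $a_i(x,y)$, $b_i(x,y)$, integrates in $z$, and reads off one linear equation in the $c_j$ for each monomial in $x,y$; the actual computation does this simultaneously for all pieces meeting a given region.

```python
from sympy import Rational, symbols, integrate, expand, Poly

x, y, z = symbols('x y z')
c0, c1, c2, c3 = symbols('c0 c1 c2 c3')
eps = Rational(1, 4)

# Hypothetical linear piece on a single polytope (for illustration only).
F_piece = c0 + c1*x + c2*y + c3*z

# Its contribution to the marginal, between illustrative linear limits
# a(x, y) = 1 + eps - x and b(x, y) = 3/2 - x - y.
marginal = integrate(F_piece, (z, 1 + eps - x, Rational(3, 2) - x - y))

# Requiring the marginal to vanish identically in x, y yields one linear
# equation in c0, ..., c3 per monomial of the resulting polynomial in x, y.
constraints = Poly(expand(marginal), x, y).coeffs()
print(constraints)
```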

Now we describe the partition we will use. This partition can in fact be used for all ε in the interval [ 1/4,1/3], but the endpoint ε=1/4 has some simplifications which allowed for reasonably good numerical results. To obtain the symmetry property, it is natural to split R (modulo null sets) into six polyhedra R xyz ,R xzy ,R yxz ,R yzx ,R zxy ,R zyx , where

$$R_{xyz} := \left\{(x,y,z)\in R : x+y < y+z < z+x\right\} = \left\{(x,y,z) : 0<y<x<z;\ x+y+z\le 3/2\right\}$$

and the other polyhedra are obtained by permuting the indices x,y,z, thus for instance

$$R_{yxz} := \left\{(x,y,z)\in R : y+x < x+z < z+y\right\} = \left\{(x,y,z) : 0<x<y<z;\ x+y+z\le 3/2\right\}.$$

To obtain the epsilon splitting property, we decompose R xyz (modulo null sets) into eight sub-polytopes

$$\begin{aligned}
A_{xyz} &= \{(x,y,z)\in R : x+y<y+z<z+x<1-\varepsilon\},\\
B_{xyz} &= \{(x,y,z)\in R : x+y<y+z<1-\varepsilon<z+x<1+\varepsilon\},\\
C_{xyz} &= \{(x,y,z)\in R : x+y<1-\varepsilon<y+z<z+x<1+\varepsilon\},\\
D_{xyz} &= \{(x,y,z)\in R : 1-\varepsilon<x+y<y+z<z+x<1+\varepsilon\},\\
E_{xyz} &= \{(x,y,z)\in R : x+y<y+z<1-\varepsilon<1+\varepsilon<z+x\},\\
F_{xyz} &= \{(x,y,z)\in R : x+y<1-\varepsilon<y+z<1+\varepsilon<z+x\},\\
G_{xyz} &= \{(x,y,z)\in R : x+y<1-\varepsilon<1+\varepsilon<y+z<z+x\},\\
H_{xyz} &= \{(x,y,z)\in R : 1-\varepsilon<x+y<y+z<1+\varepsilon<z+x\};
\end{aligned}$$

the other five polytopes R xzy ,R yxz ,R yzx ,R zxy ,R zyx are decomposed similarly, leading to a partition of R into 6×8=48 polytopes. This is almost the partition we will use; however, there is a technical difficulty arising from the fact that some of the permutations of F xyz do not obey the graph structure property. So we will split F xyz further into the three pieces

$$\begin{aligned}
S_{xyz} &= \{(x,y,z)\in F_{xyz} : z < 1/2+\varepsilon\},\\
T_{xyz} &= \{(x,y,z)\in F_{xyz} : z > 1/2+\varepsilon;\ x > 1/2-\varepsilon\},\\
U_{xyz} &= \{(x,y,z)\in F_{xyz} : x < 1/2-\varepsilon\}.
\end{aligned}$$

Thus, R xyz is now partitioned into ten polytopes A xyz ,B xyz ,C xyz ,D xyz , E xyz , S xyz , T xyz , U xyz , G xyz , H xyz , and similarly for permutations of R xyz , leading to a decomposition of R into 6×10=60 polytopes.

A symmetric piecewise polynomial function $F$ supported on $R$ can now be described (almost everywhere) by specifying a polynomial function $F_P\colon P\to\mathbb{R}$ for the ten polytopes $P = A_{xyz}, B_{xyz}, C_{xyz}, D_{xyz}, E_{xyz}, S_{xyz}, T_{xyz}, U_{xyz}, G_{xyz}, H_{xyz}$, and then extending by symmetry, thus for instance

$$F_{A_{yzx}}(x,y,z) = F_{A_{xyz}}(z,x,y).$$

As discussed earlier, the expressions I(F),J(F) can now be written as quadratic forms in the coefficients of the F P , and the vanishing marginal condition (129) imposes some linear constraints on these coefficients.

Observe that the polytope D xyz and all of its permutations make no contribution to either the functional J(F) or to the marginal condition (129), and give a non-negative contribution to I(F). Thus, without loss of generality we may assume that

$$F_{D_{xyz}} = 0.$$

However, the other nine polytopes A xyz ,B xyz ,C xyz ,E xyz ,S xyz ,T xyz ,U xyz ,G xyz ,H xyz have at least one permutation which gives a non-trivial contribution to either J(F) or to (129), and cannot be easily eliminated.

Now we compute I(F). By symmetry, we have

$$I(F) = 3!\, I\bigl(F|_{R_{xyz}}\bigr) = 6\sum_P I(F_P)$$

where P ranges over the nine polytopes A xyz ,B xyz ,C xyz ,E xyz ,S xyz ,T xyz ,U xyz ,G xyz ,H xyz . A tedious but straightforward computation shows that

$$\begin{aligned}
I(F_{A_{xyz}}) &= \int_{x=0}^{1/2-\varepsilon/2}\int_{y=0}^{x}\int_{z=x}^{1-\varepsilon-x} F_{A_{xyz}}^2\, dz\, dy\, dx\\
I(F_{B_{xyz}}) &= \left(\int_{z=1/2-\varepsilon/2}^{1/2+\varepsilon/2}\int_{x=1-\varepsilon-z}^{z} + \int_{z=1/2+\varepsilon/2}^{1-\varepsilon}\int_{x=1-\varepsilon-z}^{1+\varepsilon-z}\right)\int_{y=0}^{1-\varepsilon-z} F_{B_{xyz}}^2\, dy\, dx\, dz\\
I(F_{C_{xyz}}) &= \left(\int_{y=0}^{1/2-3\varepsilon/2}\int_{x=y}^{y+2\varepsilon} + \int_{y=1/2-3\varepsilon/2}^{1/2-\varepsilon}\int_{x=y}^{1-\varepsilon-y}\right)\int_{z=1-\varepsilon-y}^{1+\varepsilon-x} F_{C_{xyz}}^2\, dz\, dx\, dy + \int_{y=1/2-\varepsilon}^{1/2-\varepsilon/2}\int_{x=y}^{1-\varepsilon-y}\int_{z=1-\varepsilon-y}^{3/2-x-y} F_{C_{xyz}}^2\, dz\, dx\, dy\\
I(F_{E_{xyz}}) &= \int_{z=1/2+\varepsilon/2}^{1-\varepsilon}\int_{x=1+\varepsilon-z}^{z}\int_{y=0}^{1-\varepsilon-z} F_{E_{xyz}}^2\, dy\, dx\, dz\\
I(F_{S_{xyz}}) &= \left(\int_{y=0}^{1/2-3\varepsilon/2}\int_{z=1-\varepsilon-y}^{1/2+\varepsilon} + \int_{y=1/2-3\varepsilon/2}^{1/2-\varepsilon}\int_{z=y+2\varepsilon}^{1/2+\varepsilon}\right)\int_{x=1+\varepsilon-z}^{1-\varepsilon-y} F_{S_{xyz}}^2\, dx\, dz\, dy\\
I(F_{T_{xyz}}) &= \left(\int_{z=1/2+\varepsilon}^{1/2+2\varepsilon}\int_{x=1+\varepsilon-z}^{3/2-z} + \int_{z=1/2+2\varepsilon}^{1+\varepsilon}\int_{x=1/2-\varepsilon}^{3/2-z}\right)\int_{y=0}^{3/2-x-z} F_{T_{xyz}}^2\, dy\, dx\, dz\\
I(F_{U_{xyz}}) &= \int_{x=0}^{1/2-\varepsilon}\int_{y=0}^{x}\int_{z=1+\varepsilon-x}^{1+\varepsilon-y} F_{U_{xyz}}^2\, dz\, dy\, dx\\
I(F_{G_{xyz}}) &= \int_{x=0}^{1/2-\varepsilon}\int_{y=0}^{x}\int_{z=1+\varepsilon-y}^{3/2-x-y} F_{G_{xyz}}^2\, dz\, dy\, dx
\end{aligned}$$

and

$$I(F_{H_{xyz}}) = \left(\int_{x=1/2+\varepsilon/2}^{1-\varepsilon}\int_{y=1-\varepsilon-x}^{3/2-2x} + \int_{x=1-\varepsilon}^{3/4}\int_{y=0}^{3/2-2x}\right)\int_{z=x}^{3/2-x-y} F_{H_{xyz}}^2\, dz\, dy\, dx + \int_{x=1/2}^{1/2+\varepsilon/2}\int_{y=1-\varepsilon-x}^{1/2-\varepsilon}\int_{z=1+\varepsilon-x}^{3/2-x-y} F_{H_{xyz}}^2\, dz\, dy\, dx.$$

Now we consider the quantity J(F). Here we only have the symmetry of swapping x and y, so that

$$J(F) = 6\int_{0<y<x;\ x+y<1-\varepsilon}\left(\int_0^{3/2-x-y} F(x,y,z)\,dz\right)^2 dx\,dy.$$

The region of integration meets the polytopes A xyz , A yzx , A zyx , B xyz , B zyx , C xyz , E xyz , E zyx , S xyz , T xyz , U xyz , and G xyz .

Projecting these regions to the $(x,y)$-plane subdivides the triangle $\{(x,y) : 0<y<x;\ x+y<1-\varepsilon\}$ into eight regions. (The corresponding diagram is drawn to scale in the case $\varepsilon=1/4$; otherwise, there is a separation between the $J_5$ and $J_7$ regions.) For each of these eight regions, there is a corresponding integral, giving eight integrals $J_1,J_2,\dots,J_8$, and thus

$$J(F) = 6\left(J_1 + \cdots + J_8\right).$$

We have

$$J_1 = \int_{x=0}^{1/2-\varepsilon}\int_{y=0}^{x}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{x}F_{A_{zyx}} + \int_{z=x}^{1-\varepsilon-x}F_{A_{xyz}} + \int_{z=1-\varepsilon-x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{1+\varepsilon-x}F_{C_{xyz}} + \int_{z=1+\varepsilon-x}^{1+\varepsilon-y}F_{U_{xyz}} + \int_{z=1+\varepsilon-y}^{3/2-x-y}F_{G_{xyz}}\,dz\right)^2 dy\,dx.$$

Next comes

$$J_2 = \int_{x=1/2-\varepsilon}^{1/2-\varepsilon/2}\int_{y=1/2-\varepsilon}^{x}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{x}F_{A_{zyx}} + \int_{z=x}^{1-\varepsilon-x}F_{A_{xyz}} + \int_{z=1-\varepsilon-x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{3/2-x-y}F_{C_{xyz}}\,dz\right)^2 dy\,dx.$$

Third is the piece

$$J_3 = \int_{x=1/2-\varepsilon}^{1/2-\varepsilon/2}\int_{y=0}^{1/2-\varepsilon}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{x}F_{A_{zyx}} + \int_{z=x}^{1-\varepsilon-x}F_{A_{xyz}} + \int_{z=1-\varepsilon-x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{1+\varepsilon-x}F_{C_{xyz}} + \int_{z=1+\varepsilon-x}^{3/2-x-y}F_{T_{xyz}}\,dz\right)^2 dy\,dx.$$

We now have dealt with all integrals involving A xyz , and all remaining integrals pass through B zyx . Continuing, we have

$$J_4 = \int_{x=1/2-\varepsilon/2}^{1/2}\int_{y=1/2-\varepsilon}^{1-\varepsilon-x}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{1-\varepsilon-x}F_{A_{zyx}} + \int_{z=1-\varepsilon-x}^{x}F_{B_{zyx}} + \int_{z=x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{3/2-x-y}F_{C_{xyz}}\,dz\right)^2 dy\,dx.$$

Another component is

$$J_5 = \int_{x=1/2-\varepsilon/2}^{1/2}\int_{y=0}^{1/2-\varepsilon}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{1-\varepsilon-x}F_{A_{zyx}} + \int_{z=1-\varepsilon-x}^{x}F_{B_{zyx}} + \int_{z=x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{1+\varepsilon-x}F_{C_{xyz}} + \int_{z=1+\varepsilon-x}^{3/2-x-y}F_{T_{xyz}}\,dz\right)^2 dy\,dx.$$

The most complicated piece is

$$J_6 = \left(\int_{x=1/2}^{2\varepsilon}\int_{y=0}^{1-\varepsilon-x} + \int_{x=2\varepsilon}^{1/2+\varepsilon/2}\int_{y=x-2\varepsilon}^{1-\varepsilon-x}\right)\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{1-\varepsilon-x}F_{A_{zyx}} + \int_{z=1-\varepsilon-x}^{x}F_{B_{zyx}} + \int_{z=x}^{1-\varepsilon-y}F_{B_{xyz}} + \int_{z=1-\varepsilon-y}^{1+\varepsilon-x}F_{C_{xyz}} + \int_{z=1+\varepsilon-x}^{1/2+\varepsilon}F_{S_{xyz}} + \int_{z=1/2+\varepsilon}^{3/2-x-y}F_{T_{xyz}}\,dz\right)^2 dy\,dx.$$

Here we use $\left(\int_{x=1/2}^{2\varepsilon}\int_{y=0}^{1-\varepsilon-x} + \int_{x=2\varepsilon}^{1/2+\varepsilon/2}\int_{y=x-2\varepsilon}^{1-\varepsilon-x}\right) f(x,y)\,dy\,dx$ as an abbreviation for

$$\int_{x=1/2}^{2\varepsilon}\int_{y=0}^{1-\varepsilon-x} f(x,y)\,dy\,dx + \int_{x=2\varepsilon}^{1/2+\varepsilon/2}\int_{y=x-2\varepsilon}^{1-\varepsilon-x} f(x,y)\,dy\,dx.$$

We have now exhausted C xyz . The seventh piece is

$$J_7 = \int_{x=2\varepsilon}^{1/2+\varepsilon/2}\int_{y=0}^{x-2\varepsilon}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{1-\varepsilon-x}F_{A_{zyx}} + \int_{z=1-\varepsilon-x}^{x}F_{B_{zyx}} + \int_{z=x}^{1+\varepsilon-x}F_{B_{xyz}} + \int_{z=1+\varepsilon-x}^{1-\varepsilon-y}F_{E_{xyz}} + \int_{z=1-\varepsilon-y}^{1/2+\varepsilon}F_{S_{xyz}} + \int_{z=1/2+\varepsilon}^{3/2-x-y}F_{T_{xyz}}\,dz\right)^2 dy\,dx.$$

Finally, we have

$$J_8 = \int_{x=1/2+\varepsilon/2}^{1-\varepsilon}\int_{y=0}^{1-\varepsilon-x}\left(\int_{z=0}^{y}F_{A_{yzx}} + \int_{z=y}^{1-\varepsilon-x}F_{A_{zyx}} + \int_{z=1-\varepsilon-x}^{1+\varepsilon-x}F_{B_{zyx}} + \int_{z=1+\varepsilon-x}^{x}F_{E_{zyx}} + \int_{z=x}^{1-\varepsilon-y}F_{E_{xyz}} + \int_{z=1-\varepsilon-y}^{1/2+\varepsilon}F_{S_{xyz}} + \int_{z=1/2+\varepsilon}^{3/2-x-y}F_{T_{xyz}}\,dz\right)^2 dy\,dx.$$

In the case ε=1/4, the marginal conditions (129) reduce to requiring

$$\int_{z=0}^{3/2-x-y} F_{G_{yzx}}\,dz = 0$$
(135)
$$\int_{z=0}^{y} F_{G_{yzx}}\,dz + \int_{z=y}^{3/2-x-y} F_{G_{zyx}}\,dz = 0$$
(136)
$$\int_{z=0}^{1+\varepsilon-x} F_{U_{yzx}}\,dz + \int_{z=1+\varepsilon-x}^{y} F_{G_{yzx}}\,dz + \int_{z=y}^{3/2-x-y} F_{G_{zyx}}\,dz = 0$$
(137)
$$\int_{z=0}^{1+\varepsilon-x} F_{U_{yzx}}\,dz + \int_{z=1+\varepsilon-x}^{3/2-x-y} F_{G_{yzx}}\,dz = 0$$
(138)
$$\int_{z=0}^{3/2-x-y} F_{T_{yzx}}\,dz = 0$$
(139)
$$\int_{z=0}^{1-\varepsilon-x} F_{E_{yzx}}\,dz + \int_{z=1-\varepsilon-x}^{1-\varepsilon-y} F_{S_{yzx}}\,dz + \int_{z=1-\varepsilon-y}^{3/2-x-y} F_{H_{yzx}}\,dz = 0.$$
(140)

Each of these constraints is only required to hold for some portion of the parameter space $\{(x,y) : 1+\varepsilon < x+y\le 3/2\}$, but as the left-hand sides are all polynomial functions in $x,y$ (using the signed definite integral convention $\int_b^a = -\int_a^b$), it is equivalent to require that all coefficients of these polynomial functions vanish.

Now we specify F. After some numerical experimentation, we have found that the simplest choice of F which still achieves the desired goal comes by taking F(x,y,z) to be a polynomial of degree 1 on each of E xyz , S xyz , and H xyz ; degree 2 on T xyz , vanishing on D xyz ; and degree 3 on the remaining five relevant components of R xyz . After solving the quadratic program, rounding, and clearing denominators, we arrive at the choice

$$\begin{aligned}
F_{A_{xyz}} &:= 66 + 96x - 147x^2 + 125x^3 + 128y - 122xy + 104x^2y - 275y^2 + 394y^3 + 99z - 58xz + 63x^2z - 98yz + 51xyz + 41y^2z - 112z^2 + 24xz^2 + 72yz^2 + 50z^3\\
F_{B_{xyz}} &:= 41 + 52x - 73x^2 + 25x^3 + 108y - 66xy + 71x^2y - 294y^2 + 56xy^2 + 363y^3 + 33z + 15xz + 22x^2z - 40yz - 42xyz + 75y^2z - 36z^2 - 24xz^2 + 26yz^2 + 20z^3\\
F_{C_{xyz}} &:= 22 + 45x - 35x^2 + 63y - 99xy + 82x^2y - 140y^2 + 54xy^2 + 179y^3\\
F_{E_{xyz}} &:= 12 + 8x + 32y\\
F_{S_{xyz}} &:= 6 + 8x + 16y\\
F_{T_{xyz}} &:= 18 - 30x + 12x^2 + 42y - 20xy - 66y^2 - 45z + 34xz + 22z^2\\
F_{U_{xyz}} &:= 94 - 1,823x + 5,760x^2 - 5,128x^3 + 54y - 168x^2y + 105y^2 + 1,422xz - 2,340x^2z - 192y^2z - 128z^2 - 268xz^2 + 64z^3\\
F_{G_{xyz}} &:= 5,274 - 19,833x + 18,570x^2 - 5,128x^3 - 18,024y + 44,696xy - 20,664x^2y + 16,158y^2 - 19,056xy^2 - 4,592y^3 - 10,704z + 26,860xz - 12,588x^2z + 24,448yz - 30,352xyz - 10,980y^2z + 7,240z^2 - 9,092xz^2 - 8,288yz^2 - 1,632z^3\\
F_{H_{xyz}} &:= 8z.
\end{aligned}$$

One may compute that

$$I(F) = \frac{62,082,439,864,241}{507,343,011,840}$$

and

$$J(F) = \frac{9,933,190,664,926,733}{40,587,440,947,200}$$

with all the marginal conditions (135)-(140) obeyed, and thus

$$\frac{J(F)}{I(F)} = 2 + \frac{286,648,173}{4,966,595,189,139,280}$$

and (130) follows.

The parity problem

In this section, we argue why the ‘parity barrier’ of Selberg [7] prohibits sieve-theoretic methods, such as the ones in this paper, from obtaining any bound on H1 that is stronger than H1≤6, even on the assumption of strong distributional conjectures such as the generalized Elliott-Halberstam conjecture GEH[ 𝜗] and even if one uses sieves other than the Selberg sieve. Our discussion will be somewhat informal and heuristic in nature.

We begin by briefly recalling how the bound H1≤6 on GEH (i.e., Theorem 4(xii)) was proven. This was deduced from the claim DHL[ 3;2], or more specifically from the claim that the set

$$A := \{n : \text{at least two of } n,\, n+2,\, n+6 \text{ are prime}\}$$
(141)

was infinite.

To do this, we (implicitly) established a lower bound

$$\sum_n \nu(n)\, 1_A(n) > 0$$

for some non-negative weight $\nu\colon\mathbb{N}\to\mathbb{R}^+$ supported on $[x,2x]$ for a sufficiently large $x$. This bound was in turn established (after a lengthy sieve-theoretic analysis, and with a carefully chosen weight $\nu$) from upper bounds on various discrepancies. More precisely, one required good upper bounds (on average) for the expressions

$$\sum_{x\le n\le 2x:\ n = a\ (q)} f(n+h) - \frac{1}{\varphi(q)}\sum_{x\le n\le 2x:\ (n+h,q)=1} f(n+h)$$
(142)

for all $h\in\{0,2,6\}$ and various residue classes $a\ (q)$ with $q\le x^{1-\epsilon}$ and arithmetic functions $f$, such as the constant function $f=1$, the von Mangoldt function $f=\Lambda$, or Dirichlet convolutions $f=\alpha\star\beta$ of the type considered in Claim 12. (In the presentation of this argument in previous sections, the shift by $h$ was eliminated using the change of variables $n'=n+h$, but for the current discussion, it is important that we do not use this shift.) One also required good asymptotic control on the main terms

$$\sum_{x\le n\le 2x:\ (n+h,q)=1} f(n+h).$$
(143)

An inspection of these arguments (which no longer exploit changes of variables such as $n'=n+h$ in the $n$ variable) shows that they would be equally valid if one inserted a further non-negative weight $\omega\colon\mathbb{N}\to\mathbb{R}^+$ in the summation over $n$. More precisely, the above sieve-theoretic argument would also deduce the lower bound

$$\sum_n \nu(n)\, 1_A(n)\, \omega(n) > 0$$

if one had control on the weighted discrepancies

$$\sum_{x\le n\le 2x:\ n = a\ (q)} f(n+h)\,\omega(n) - \frac{1}{\varphi(q)}\sum_{x\le n\le 2x:\ (n+h,q)=1} f(n+h)\,\omega(n)$$
(144)

and on the weighted main terms

$$\sum_{x\le n\le 2x:\ (n+h,q)=1} f(n+h)\,\omega(n)$$
(145)

that were of the same form as in the unweighted case ω=1.

Now suppose for instance that one was trying to prove the bound H1≤4. A natural way to proceed here would be to replace the set A in (141) with the smaller set

$$A' := \{n : n,\, n+2 \text{ are both prime}\}\cup\{n : n+2,\, n+6 \text{ are both prime}\}$$
(146)

and hope to establish a bound of the form

$$\sum_n \nu(n)\, 1_{A'}(n) > 0$$

for a well-chosen function $\nu\colon\mathbb{N}\to\mathbb{R}^+$ supported on $[x,2x]$, by deriving this bound from suitable (averaged) upper bounds on the discrepancies (142) and control on the main terms (143). If the arguments were sieve-theoretic in nature, then (as in the $H_1\le 6$ case) one could then also deduce the lower bound

$$\sum_n \nu(n)\, 1_{A'}(n)\, \omega(n) > 0$$
(147)

for any non-negative weight $\omega\colon\mathbb{N}\to\mathbb{R}^+$, provided that one had the same control on the weighted discrepancies (144) and weighted main terms (145) that one did on (142) and (143).

We apply this observation to the weight

$$\omega(n) := \bigl(1-\lambda(n)\lambda(n+2)\bigr)\bigl(1-\lambda(n+2)\lambda(n+6)\bigr) = 1 - \lambda(n)\lambda(n+2) - \lambda(n+2)\lambda(n+6) + \lambda(n)\lambda(n+6)$$

where $\lambda(n):=(-1)^{\Omega(n)}$ is the Liouville function. Observe that $\omega$ vanishes for any $n\in A'$, and hence

$$\sum_n \nu(n)\, 1_{A'}(n)\, \omega(n) = 0$$
(148)

for any ν. On the other hand, the ‘Möbius randomness law’ (see, e.g. [33]) predicts a significant amount of cancellation for any non-trivial sum involving the Möbius function μ or the closely related Liouville function λ. For instance, the expression

$$\sum_{x\le n\le 2x:\ n=a\ (q)} \lambda(n+h)$$

is expected to be very small (of size^g $O\bigl(\frac{x}{q}\log^{-A}x\bigr)$ for any fixed $A$) for any residue class $a\ (q)$ with $q\le x^{1-\epsilon}$, and any $h\in\{0,2,6\}$; similarly for more complicated expressions such as

$$\sum_{x\le n\le 2x:\ n=a\ (q)} \lambda(n+2)\lambda(n+6)$$

or

$$\sum_{x\le n\le 2x:\ n=a\ (q)} \Lambda(n)\lambda(n+2)\lambda(n+6)$$

or more generally

$$\sum_{x\le n\le 2x:\ n=a\ (q)} f(n)\lambda(n+2)\lambda(n+6)$$

where $f$ is a Dirichlet convolution $\alpha\star\beta$ of the form considered in Claim 12. Similarly for expressions such as

$$\sum_{x\le n\le 2x:\ n=a\ (q)} f(n)\lambda(n)\lambda(n+2);$$

note from the complete multiplicativity of $\lambda$ that $(\alpha\star\beta)\lambda = (\alpha\lambda)\star(\beta\lambda)$, so if $f$ is of the form in Claim 12, then $f\lambda$ is also. In view of these observations (and similar observations arising from permutations of $\{0,2,6\}$), we conclude (heuristically, at least) that all the bounds that are believed to hold for (142) and (143) should also hold (up to minor changes in the implied constants) for (144) and (145). Thus, if the bound $H_1\le 4$ could be proven in a sieve-theoretic fashion, one should be able to conclude the bound (147), which is in direct contradiction to (148).
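
The vanishing of this weight on the set $A'$ is easy to verify numerically; the following small Python check (an illustration added here, using sympy for factorization, and not part of the argument) confirms it for all $n$ below $10^4$.

```python
from sympy import factorint, isprime

def liouville(n):
    """Liouville function lambda(n) = (-1)^Omega(n)."""
    return -1 if sum(factorint(n).values()) % 2 else 1

def parity_weight(n):
    """(1 - lambda(n)lambda(n+2)) * (1 - lambda(n+2)lambda(n+6))."""
    return (1 - liouville(n)*liouville(n+2)) * (1 - liouville(n+2)*liouville(n+6))

# The weight vanishes whenever n, n+2 are both prime or n+2, n+6 are both
# prime, since two primes have equal Liouville value -1.
for n in range(2, 10000):
    if (isprime(n) and isprime(n+2)) or (isprime(n+2) and isprime(n+6)):
        assert parity_weight(n) == 0
print("weight vanishes on A' for all n < 10000")
```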

Remark 45.

Similar arguments work for any set of the form

$$A_H := \{n : n\le p_1 < p_2\le n+H;\ p_1, p_2 \text{ both prime};\ p_2-p_1\le 4\}$$

and any fixed $H>0$, to prohibit any non-trivial lower bound on $\sum_n \nu(n)\,1_{A_H}(n)$ from sieve-theoretic methods. Indeed, one uses the weight

$$\omega(n) := \prod_{\substack{0\le i<i'\le H;\ (n+i,3)=(n+i',3)=1;\\ i'-i\le 4}}\bigl(1-\lambda(n+i)\lambda(n+i')\bigr);$$

we leave the details to the interested reader. This seems to block any attempt to use any argument based only on the distribution of the prime numbers and related expressions in arithmetic progressions to prove H1≤4.

The same arguments of course also prohibit a sieve-theoretic proof of the twin prime conjecture H1=2. In this case, one can use the simpler weight ω(n)=1−λ(n)λ(n+2) to rule out such a proof, and the argument is essentially due to Selberg [7].

Of course, the parity barrier could be circumvented if one were able to introduce stronger sieve-theoretic axioms than the ‘linear’ axioms currently available (which only control sums of the form (142) or (143)). For instance, if one were able to obtain non-trivial bounds for ‘bilinear’ expressions such as

$$\sum_{x\le n\le 2x} f(n)\Lambda(n+2) = \sum_d\sum_m \alpha(d)\beta(m)\, 1_{[x,2x]}(dm)\,\Lambda(dm+2)$$

for functions $f=\alpha\star\beta$ of the form in Claim 12, then (by a modification of the proof of Proposition 13) one would very likely obtain non-trivial bounds on

$$\sum_{x\le n\le 2x}\Lambda(n)\Lambda(n+2)$$

which would soon lead to a proof of the twin prime conjecture. Unfortunately, we do not know of any plausible way to control such bilinear expressions. (Note however that there are some other situations in which bilinear sieve axioms may be established, for instance in the argument of Friedlander and Iwaniec [40] establishing an infinitude of primes of the form $a^2+b^4$.)

Additional remarks

The proof of Theorem 16(xii) may be modified to establish the following variant:

Proposition 46.

Assume the generalized Elliott-Halberstam conjecture GEH[θ] for all 0<θ<1. Let 0<ε<1/2 be fixed. Then, if $x$ is a sufficiently large multiple of 6, there exists a natural number $n$ with $\varepsilon x\le n\le(1-\varepsilon)x$ such that at least two of $n$, $n-2$, $x-n$ are prime, and similarly if $n-2$ is replaced by $n+2$.

Note that if at least two of $n$, $n-2$, $x-n$ are prime, then either $n-2$, $n$ are twin primes or else at least one of $x$, $x-2$ is expressible as the sum of two primes, and Theorem 5 easily follows.

Proof.

(Sketch) We just discuss the case of $n-2$, as the $n+2$ case is similar. Observe from the Chinese remainder theorem (and the hypothesis that $x$ is divisible by 6) that one can find a residue class $b\ (W)$ such that $b$, $b-2$, $x-b$ are all coprime to $W$ (in particular, one has $b=1\ (6)$). By a routine modification of the proof of Lemma 18, it suffices to find a non-negative weight function $\nu\colon\mathbb{N}\to\mathbb{R}^+$ and fixed quantities $\alpha>0$ and $\beta_1,\beta_2,\beta_3\ge 0$, such that one has the asymptotic upper bound

$$\sum_{\substack{\varepsilon x\le n\le(1-\varepsilon)x\\ n = b\ (W)}}\nu(n) \le \mathfrak{S}\,(\alpha+o(1))\, B^{-k}\,\frac{(1-2\varepsilon)x}{W},$$

the asymptotic lower bounds

$$\begin{aligned}
\sum_{\substack{\varepsilon x\le n\le(1-\varepsilon)x\\ n = b\ (W)}}\nu(n)\theta(n) &\ge \mathfrak{S}\,(\beta_1-o(1))\, B^{1-k}\,\frac{(1-2\varepsilon)x}{\varphi(W)}\\
\sum_{\substack{\varepsilon x\le n\le(1-\varepsilon)x\\ n = b\ (W)}}\nu(n)\theta(n+2) &\ge \mathfrak{S}\,(\beta_2-o(1))\, B^{1-k}\,\frac{(1-2\varepsilon)x}{\varphi(W)}\\
\sum_{\substack{\varepsilon x\le n\le(1-\varepsilon)x\\ n = b\ (W)}}\nu(n)\theta(x-n) &\ge \mathfrak{S}\,(\beta_3-o(1))\, B^{1-k}\,\frac{(1-2\varepsilon)x}{\varphi(W)}
\end{aligned}$$

and the inequality

$$\beta_1+\beta_2+\beta_3 > 2\alpha,$$

where $\mathfrak{S}$ is the singular series

$$\mathfrak{S} := \prod_{p \mid x(x-2);\ p>w}\frac{p}{p-1}.$$

We select ν to be of the form

$$\nu(n) = \left(\sum_{j=1}^{J} c_j\, \lambda_{F_{j,1}}(n)\,\lambda_{F_{j,2}}(n+2)\,\lambda_{F_{j,3}}(x-n)\right)^2$$

for various fixed coefficients $c_1,\dots,c_J$ and fixed smooth compactly supported functions $F_{j,i}\colon[0,+\infty)\to\mathbb{R}$ with $j=1,\dots,J$ and $i=1,\dots,3$. It is then routine^h to verify that analogues of Theorem 19 and Theorem 20 hold for the various components of $\nu$, with the role of $x$ in the right-hand side replaced by $(1-2\varepsilon)x$, and the claim then follows by a suitable modification of Theorem 28, taking advantage of the function $F$ constructed in Theorem 29.

It is likely that the bounds in Theorem 4 can be improved further by refining the sieve-theoretic methods employed in this paper, with the exception of part (xii) for which the parity problem prevents further improvement, as discussed in the ‘The parity problem’ section. We list some possible avenues to such improvements as follows:

  1.

    In Theorem 27, the bound $M_{k,\varepsilon}>4$ was obtained for some $\varepsilon>0$ and $k=50$. It is possible that $k$ could be lowered slightly, for instance to $k=49$, by further numerical computations, but we were only barely able to establish the $k=50$ bound after 2 weeks of computation. However, there may be a more efficient way to solve the required variational problem (e.g. by selecting a more efficient basis than the symmetric monomial basis) that would allow one to advance in this direction; this would improve the bound $H_1\le 246$ slightly. Extrapolation of existing numerics also raises the possibility that $M_{53}$ exceeds 4, in which case the bound of 270 in Theorem 4(vii) could be lowered to 264.

  2.

    To reduce $k$ (and thus $H_1$) further, one could try to solve another variational problem, such as the one arising in Theorem 24 or in Theorem 28, rather than trying to lower bound $M_k$ or $M_{k,\varepsilon}$. It is also possible to use the more complicated versions of MPZ[$\varpi,\delta$] that have been established (in which the modulus $q$ is assumed to be densely divisible rather than smooth) to replace the truncated simplex appearing in Theorem 24 with a more complicated region (such regions also appear implicitly in [§4.5]). However, in the medium-dimensional setting $k\approx 50$, we were not able to accurately and rapidly evaluate the various integrals associated to these variational problems when applied to a suitable basis of functions. One key difficulty here is that whereas polynomials appear to be an adequate choice of basis for the $M_k$ problem, an analysis of the Euler-Lagrange equation reveals that one should use piecewise polynomial basis functions instead for more complicated variational problems such as the $M_{k,\varepsilon}$ problem (as was done in the three-dimensional case in the ‘Three-dimensional cutoffs’ section), and these are difficult to work with in medium dimensions. From our experience with the low $k$ problems, it looks like one should allow these piecewise polynomials to have relatively high degree on some polytopes and low degree on other polytopes, and vanish completely on yet further polytopes^i, but we do not have a systematic understanding of what the optimal placement of degrees should be.

  3.

    In Theorem 28, the function $F$ was required to be supported in the simplex $\frac{k}{k-1}\cdot\mathcal{R}_k$. However, one can consider functions $F$ supported in other regions $R$, subject to the constraint that all elements of the sumset $R+R$ lie in a region treatable by one of the cases of Theorem 20. This could potentially lead to other optimization problems that lead to superior numerology, although again it appears difficult to perform efficient numerics for such problems in the medium-$k$ regime $k\approx 50$. One possibility would be to adopt a ‘free boundary’ perspective, in which the support of $F$ is not fixed in advance, but is allowed to evolve by some iterative numerical scheme.

  4.

    To improve the bounds on $H_m$ for $m=2,3,4,5$, one could seek a better lower bound on $M_k$ than the one provided by Theorem 40; one could also try to lower bound more complicated quantities such as $M_{k,\varepsilon}$.

  5.

    One could attempt to improve the range of ϖ,δ for which estimates of the form MPZ[ ϖ,δ] are known to hold, which would improve the results of Theorem 4(ii)-(vi). For instance, we believe that the condition 600ϖ+180δ<7 in Theorem 11 could be improved slightly to 1,080ϖ+330δ<13 by refining the arguments, but this requires a hypothesis of square root cancellation in a certain four-dimensional exponential sum over finite fields, which we have thus far been unable to establish rigorously. Another direction to pursue would be to improve the δ parameter, or to otherwise relax the requirement of smoothness in the moduli, in order to reduce the need to pass to a truncation of the simplex R k , which is the primary reason why the m=1 results are currently unable to use the existing estimates of the form MPZ[ ϖ,δ]. Another speculative possibility is to seek MPZ[ ϖ,δ] type estimates which only control distribution for a positive proportion of smooth moduli, rather than for all moduli, and then to design a sieve ν adapted to just that proportion of moduli (cf. [41]). Finally, there may be a way to combine the arguments currently used to prove MPZ[ ϖ,δ] with the automorphic forms (or ‘Kloostermania’) methods used to prove nontrivial equidistribution results with respect to a fixed modulus, although we do not have any ideas on how to actually achieve such a combination.

  6.

    It is also possible that one could tighten the argument in Lemma 18, for instance by establishing a non-trivial lower bound on the portion of the sum $\sum_n \nu(n)$ coming from those $n$ for which $n+h_1,\dots,n+h_k$ are all composite, or a sufficiently strong upper bound on the pair correlations $\sum_n \theta(n+h_i)\theta(n+h_j)$ (see [9, §6] for a recent implementation of this latter idea). However, our preliminary attempts to exploit these adjustments suggested that the gain from the former idea would be exponentially small in $k$, whereas the gain from the latter would also be very slight (perhaps reducing $k$ by $O(1)$ in large $k$ regimes, e.g. $k\ge 5,000$).

  7.

    All of the sieves we use are essentially of Selberg type, being the square of a divisor sum. We have experimented with a number of non-Selberg type sieves (for instance trying to exploit the obvious positivity of $1-\sum_{p\le x:\ p\mid n}\frac{\log p}{\log x}$ when $n\le x$); however, none of these variants offered a numerical improvement over the Selberg sieve. Indeed, it appears that after optimizing the cutoff function $F$, the Selberg sieve is in some sense a ‘local maximum’ in the space of non-negative sieve functions, and one would need a radically different sieve to obtain numerically superior results.

  8.

    Our numerical bounds for the diameter $H(k)$ of the narrowest admissible $k$-tuple are known to be exact for $k\le 342$, but there is scope for some slight improvement for larger values of $k$, which would lead to some improvements in the bounds on $H_m$ for $m=2,3,4,5$. However, we believe that our bounds on $H_m$ are already fairly close (e.g. within 10%) to optimal, so there is only a limited amount of gain to be obtained solely from this component of the argument.

Narrow admissible tuples

In this section, we outline the methods used to obtain the numerical bounds on H(k) given by Theorem 17, which are reproduced below:

  1.

    H(3)=6,

  2.

    H(50)=246,

  3.

    H(51)=252,

  4.

    H(54)=270,

  5.

    H(5,511)≤52,116,

  6.

    H(35,410)≤398,130,

  7.

    H(41,588)≤474,266,

  8.

    H(309,661)≤4,137,854,

  9.

    H(1,649,821)≤24,797,814,

  10.

    H(75,845,707)≤1,431,556,072,

  11.

    H(3,473,955,908)≤80,550,202,480.

10.1 H(k) values for small k

The equalities in the first four bounds (1)-(4) were previously known. The case H(3)=6 is obvious: the admissible 3-tuples (0,2,6) and (0,4,6) have diameter 6 and no 3-tuple of smaller diameter is admissible. The cases H(50)=246, H(51)=252, and H(54)=270 follow from results of Clark and Jarvis [42]. They define ϱ(x) to be the largest integer k for which there exists an admissible k-tuple that lies in a half-open interval (y,y+x] of length x. For each integer k>1, the largest x for which ϱ(x)=k is precisely H(k+1). Table 1 of [42] lists these largest x values for 2≤k≤170, and we find that H(50)=246, H(51)=252, and H(54)=270. Admissible tuples that realize these bounds are shown in Subsubsections “Admissible 50-tuple realizing H(50) = 246”, “Admissible 51-tuple realizing H(51) = 252” and “Admissible 54-tuple realizing H(54) = 270”.

10.1.1 Admissible 50-tuple realizing H(50) = 246

10.1.2 Admissible 51-tuple realizing H(51) = 252

10.1.3 Admissible 54-tuple realizing H(54) = 270

10.2 H(k) bounds for mid-range k

As previously noted, exact values for H(k) are known only for k≤342. The upper bounds on H(k) for the five cases (5)-(9) were obtained by constructing admissible k-tuples using techniques developed during the first part of the Polymath8 project. These are described in detail in section 3 of [4], but for the sake of completeness, we summarize the most relevant methods here.

10.2.1 Fast admissibility testing

A key component of all our constructions is the ability to efficiently determine whether a given $k$-tuple $(h_1,\dots,h_k)$ is admissible. We say that the tuple is admissible modulo $p$ if its elements do not form a complete set of residues modulo $p$. Any $k$-tuple is automatically admissible modulo all primes $p>k$, since a $k$-tuple cannot occupy more than $k$ residue classes; thus, we only need to test admissibility modulo primes $p\le k$.

A simple way to test admissibility modulo $p$ is to enumerate the elements of the tuple modulo $p$ and keep track of which residue classes have been encountered in a table with $p$ boolean-valued entries. Assuming the elements of the tuple have absolute value bounded by $O(k\log k)$ (true of all the tuples we consider), this approach yields a total bit-complexity of $O\bigl(\frac{k^2}{\log k}M(\log k)\bigr)$, where $M(n)$ denotes the complexity of multiplying two $n$-bit integers, which, up to a constant factor, also bounds the complexity of division with remainder. Applying the Schönhage-Strassen bound $M(n)=O(n\log n\log\log n)$ from [43], this is $O(k^2\log\log k\log\log\log k)$, essentially quadratic in $k$.
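
A direct implementation of this straightforward method might look as follows (an illustrative Python sketch, not the code used for the large computations).

```python
from sympy import primerange

def is_admissible(h_tuple):
    """Test admissibility by the straightforward method: for each prime
    p <= k, check that the elements of the tuple miss at least one residue
    class modulo p.  Primes p > k never need to be checked."""
    k = len(h_tuple)
    for p in primerange(2, k + 1):
        if len({h % p for h in h_tuple}) == p:
            return False  # every residue class modulo p is occupied
    return True

print(is_admissible([0, 2, 6]))   # True
print(is_admissible([0, 2, 4]))   # False: hits all residue classes mod 3
```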

This approach can be improved by observing that for most of the primes $p<k$, there are likely to be many unoccupied residue classes modulo $p$. In order to verify admissibility at $p$, it is enough to find one of them, and we typically do not need to check them all in order to do so. Using a heuristic model that assumes the elements of the tuple are approximately equidistributed modulo $p$, one can determine a bound $m<p$ such that $k$ random elements of $\mathbb{Z}/p\mathbb{Z}$ are unlikely to occupy all of the residue classes in $[0,m]$. By representing the $k$-tuple as a boolean vector $(b_0,\dots,b_{h_k-h_1})$ in which $b_i=1$ if and only if $i=h_j-h_1$ for some $h_j$, we can efficiently test whether the tuple occupies every residue class in $[0,m]$ by examining the entries

$$b_0,\dots,b_m,\quad b_p,\dots,b_{p+m},\quad b_{2p},\dots,b_{2p+m},\quad \dots$$

of this vector. The key point is that when $p<k$ is large, say $p>(1+\epsilon)k/\log k$, we can choose $m$ so that we only need to examine a small subset of the entries. Indeed, for primes $p>k/c$ (for any constant $c$), we can take $m=O(1)$ and only need to examine $O(\log k)$ entries (assuming the total size of the vector is $O(k\log k)$, which applies to all the tuples we consider here).

Of course it may happen that the tuple occupies every residue class in $[0,m]$ modulo $p$. In this case, we revert to our original approach of enumerating the elements of the tuple modulo $p$, but we expect this to happen for only a small proportion of the primes $p<k$. Heuristically, this reduces the complexity of admissibility testing by a factor of $O(\log k)$, making it sub-quadratic. In practice, we find this approach to be much more efficient than the straightforward method when $k$ is large (see [4, §3.1] for further details).

10.2.2 Sieving methods

Our techniques for constructing admissible k-tuples all involve sieving an integer interval [ s,t] of residue classes modulo primes p<k and then selecting an admissible k-tuple from the survivors. There are various approaches one can take, depending on the choice of interval and the residue classes to sieve. We list four of these below, starting with the classical sieve of Eratosthenes and proceeding to more modern variations.

Sieve of Eratosthenes. We sieve an interval [ 2,x] to obtain admissible k-tuples

$$p_{m+1},\ \dots,\ p_{m+k}$$

with $m$ as small as possible. If we sieve the residue class $0\ (p)$ for all primes $p\le k$, we have $m=\pi(k)$ and $p_{m+1}>k$. In this case, no admissibility testing is required, since the residue class $0\ (p)$ is unoccupied for all $p\le k$. Applying the Prime Number Theorem in the forms

$$p_k = k\log k + k\log\log k - k + O\left(\frac{k\log\log k}{\log k}\right),\qquad \pi(x) = \frac{x}{\log x} + O\left(\frac{x}{\log^2 x}\right),$$

this construction yields the upper bound

$$H(k)\le k\log k + k\log\log k - k + o(k).$$
(149)

As an optimization, rather than sieving modulo every prime $p\le k$, we instead sieve modulo increasing primes $p$ and stop as soon as the first $k$ survivors form an admissible tuple. This will typically happen for some $p_m<k$.
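
As an illustration, the following Python sketch carries out the basic sieve-of-Eratosthenes construction for $k=50$ (taking $m=\pi(k)$, so that no admissibility testing is needed); it produces a tuple of diameter 260, compared with the optimal value $H(50)=246$.

```python
from sympy import prime, primerange

def eratosthenes_tuple(k):
    """Admissible k-tuple from the basic construction: sieve [2, x] at the
    class 0 (p) for every prime p <= k, i.e. take m = pi(k) and return the
    primes p_{m+1}, ..., p_{m+k}.  No admissibility test is needed, since
    the class 0 (p) is unoccupied for every p <= k."""
    m = len(list(primerange(2, k + 1)))   # m = pi(k)
    return [prime(m + i) for i in range(1, k + 1)]

H = eratosthenes_tuple(50)
print(H[0], H[-1], H[-1] - H[0])   # 53 313 260; the optimal H(50) is 246
```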

Hensley-Richards sieve. The bound in (149) was improved by Hensley and Richards [44]-[46], who observed that rather than sieving [ 2,x] it is better to sieve the interval [ −x/2,x/2] to obtain admissible k-tuples of the form

$$-p_{m+\lfloor k/2\rfloor-1},\ \dots,\ -p_{m+1},\ -1,\ 1,\ p_{m+1},\ \dots,\ p_{m+\lfloor (k+1)/2\rfloor-1},$$

where we again wish to make $m$ as small as possible. It follows from Lemma 5 of [45] that one can take $m=o(k/\log k)$, leading to the improved upper bound

$$H(k)\le k\log k + k\log\log k - (1+\log 2)k + o(k).$$
(150)

Shifted Schinzel sieve. As noted by Schinzel in [47], in the Hensley-Richards sieve, it is slightly better to sieve 1(2) rather than 0(2); this leaves unsieved powers of 2 near the center of the interval [ −x/2,x/2] that would otherwise be removed (more generally, one can sieve 1(p) for many small primes p, but we did not). Additionally, we find that shifting the interval [ −x/2,x/2] can yield significant improvements (one can also view this as changing the choices of residue classes).

This leads to the following approach: we sieve an interval $[s,s+x]$ of odd integers and multiples of odd primes $p\le p_m$, where $x$ is large enough to ensure at least $k$ survivors, and $m$ is large enough to ensure that the survivors form an admissible tuple, with $x$ and $m$ minimal subject to these constraints. A tuple of exactly $k$ survivors is then chosen to minimize the diameter. By varying $s$ and comparing the results, we can choose a starting point $s\in[-x/2,x/2]$ that yields the smallest final diameter. For large $k$, we typically find $s\approx k$ to be optimal, as opposed to $s\approx -(k/2)\log k$ in the Hensley-Richards sieve.

Shifted greedy sieve. As a further optimization, we can allow greater freedom in the choice of residue class to sieve. We begin as in the shifted Schinzel sieve, but for primes $p\le p_m$ that exceed $2\sqrt{k\log k}$, rather than sieving $0\ (p)$, we choose a minimally occupied residue class $a\ (p)$. As above, we sieve the interval $[s,s+x]$ for varying values of $s\in[-x/2,x/2]$ and select the best result, but unlike the shifted Schinzel sieve, for large $k$, we typically choose $s\approx -(k\log k-k)/2$.
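
The greedy step for a single prime can be sketched as follows (a Python illustration; the helper name is ours and hypothetical).

```python
def least_occupied_class(survivors, p):
    """Residue class modulo p hit by the fewest current survivors; this is
    the greedy choice made for the larger primes in the shifted greedy
    sieve."""
    counts = [0] * p
    for n in survivors:
        counts[n % p] += 1
    return min(range(p), key=counts.__getitem__)

# Sieving that class then removes as few survivors as possible:
# a = least_occupied_class(survivors, p)
# survivors = [n for n in survivors if n % p != a]
```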

We remark that while one might suppose that it would be better to choose a minimally occupied residue class at all primes, not just the larger ones, we find that this is generally not the case. Fixing a structured choice of residue classes for the small primes avoids the erratic behavior that can result from making greedy choices too soon (see [48, Fig. 1] for an illustration of this).

Table 4 lists the bounds obtained by applying each of these techniques (in the online version of this paper, each table entry includes a link to the constructed tuple). To the admissible tuples obtained using the shifted greedy sieve, we additionally applied various local optimizations that are detailed in [4, §3.6]. As can be seen in the table, the additional improvement due to these local optimizations is quite small compared to that gained by using better sieving algorithms, especially when $k$ is large.

Table 4 Upper bounds on H ( k ) for selected values of k

Table 4 also lists the value ⌊k logk+k⌋ that we conjecture as an upper bound on H(k) for all sufficiently large k.

10.3 H(k) bounds for large k

The upper bounds on H(k) for the last two cases (10) and (11) were obtained using modified versions of the techniques described above that are better suited for handling very large values of k. These entail three types of optimizations that are summarized in the subsections below.

10.3.1 Improved time complexity

As noted above, the complexity of admissibility testing is quasi-quadratic in $k$. Each of the techniques listed in the ‘H(k) bounds for mid-range k’ section involves optimizing over a parameter space whose size is at least quasi-linear in $k$, leading to an overall quasi-cubic time complexity for constructing a narrow admissible $k$-tuple; this makes it impractical to handle $k>10^9$. We can reduce this complexity in a number of ways.

First, we can combine parameter optimization and admissibility testing. In both the sieve of Eratosthenes and Hensley-Richards sieves, taking $m=k$ guarantees an admissible $k$-tuple. For $m<k$, if the corresponding $k$-tuple is inadmissible, it is typically because it is inadmissible modulo the smallest prime $p_{m+1}$ that appears in the tuple. This suggests a heuristic approach in which we start with $m=k$, and then iteratively reduce $m$, testing the admissibility of each $k$-tuple modulo $p_{m+1}$ as we go, until we can proceed no further. We then verify that the last $k$-tuple that was admissible modulo $p_{m+1}$ is also admissible modulo all primes $p>p_{m+1}$ (we know it is admissible at all primes $p\le p_m$ because we have sieved a residue class for each of these primes). We expect this to be the case, but if not we can increase $m$ as required. Heuristically, this yields a quasi-quadratic running time, and in practice, it takes less time to find the minimal $m$ than it does to verify the admissibility of the resulting $k$-tuple.

Second, we can avoid a complete search of the parameter space. In the case of the shifted Schinzel sieve, for example, we find empirically that taking s=k typically yields an admissible k-tuple whose diameter is not much larger than that achieved by an optimal choice of s; we can then simply focus on optimizing m using the strategy described above. Similar comments apply to the shifted greedy sieve.

10.3.2 Improved space complexity

We expect a narrow admissible k-tuple to have diameter d=(1+o(1))k logk. Whether we encode this tuple as a sequence of k integers, or as a bitmap of d+1 bits, as in the fast admissibility testing algorithm, we will need approximately k logk bits. For k>109, this may be too large to conveniently fit in memory. We can reduce the space to O(k log logk) bits by encoding the k-tuple as a sequence of k−1 gaps; the average gap between consecutive entries has size logk and can be encoded in O(log logk) bits. In practical terms, for the sequences we constructed, almost all gaps can be encoded using a single 8-bit byte for each gap.
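
The following Python sketch shows one possible gap encoding of this kind (an illustrative format with a simple two-byte escape for large gaps, not the exact encoding used in our computations).

```python
def encode_gaps(h_tuple):
    """Store a sorted tuple as its first element plus one byte per gap,
    with a 0-byte escape followed by two big-endian bytes for the rare
    gaps of 255 or more (illustrative format only)."""
    gaps = bytearray()
    for a, b in zip(h_tuple, h_tuple[1:]):
        g = b - a
        if g < 255:
            gaps.append(g)
        else:
            gaps += bytes([0]) + g.to_bytes(2, 'big')
    return h_tuple[0], bytes(gaps)

def decode_gaps(first, gaps):
    out, i, cur = [first], 0, first
    while i < len(gaps):
        if gaps[i]:
            cur += gaps[i]; i += 1
        else:
            cur += int.from_bytes(gaps[i+1:i+3], 'big'); i += 3
        out.append(cur)
    return out

first, enc = encode_gaps([0, 4, 10, 300])
assert decode_gaps(first, enc) == [0, 4, 10, 300]
print(len(enc), "bytes of gap data")
```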

One can further reduce space by partitioning the sieving interval into windows. For the construction of our largest tuples, we used windows of size $O(\sqrt{d})$ and converted to a gap-sequence representation only after sieving at all primes up to an $O(\sqrt{d})$ bound.

10.3.3 Parallelization

With the exception of the greedy sieve, all the techniques described above are easily parallelized. The greedy sieve is more difficult to parallelize because the choice of a minimally occupied residue class modulo $p$ depends on the set of survivors obtained after sieving modulo primes less than $p$. To address this issue, we modified the greedy approach to work with batches of consecutive primes of size $n$, where $n$ is a multiple of the number of parallel threads of execution. After sieving fixed residue classes modulo all small primes $p<2\sqrt{k\log k}$, we determine minimally occupied residue classes for the next $n$ primes in parallel, sieve these residue classes, and then proceed to the next batch of $n$ primes.

In addition to the techniques described above, we also considered a modified Schinzel sieve in which we check admissibility modulo each successive prime $p$ before sieving multiples of $p$, in order to verify that sieving modulo $p$ is actually necessary. For values of $p$ close to but slightly less than $p_m$, it will often be the case that the set of survivors is already admissible modulo $p$, even though it does contain multiples of $p$ (because some other residue class is unoccupied). As with the greedy sieve, when using this approach, we sieve residue classes in batches of size $n$ to facilitate parallelization.

10.3.4 Results for large k

Table 5 lists the bounds obtained for the two largest values of $k$. For $k=75,845,707$, the best results were obtained with a shifted greedy sieve that was modified for parallel execution as described above, using the fixed shift parameter $s=-(k\log k-k)/2$. A list of the sieved residue classes is available at http://math.mit.edu/~drew/n75845707_1431556072.txt. This file contains values of $k$, $s$, $d$, and $m$, along with a list of prime indices $n_i>m$ and residue classes $r_i$ such that sieving the interval $[s,s+d]$ of odd integers, multiples of $p_n$ for $1<n\le m$, and the class $r_i$ modulo $p_{n_i}$ yields an admissible $k$-tuple.

Table 5 Upper bounds on H ( k ) for selected values of k

For $k=3,473,955,908$, we did not attempt any form of greedy sieving due to practical limits on the time and computational resources available. The best results were obtained using a modified Schinzel sieve that avoids unnecessary sieving, as described above, using the fixed shift parameter $s=k$. A list of the sieved residue classes is available at http://math.mit.edu/~drew/n75845707_1431556072.txt.

This file contains values of k, s, d, and m, along with a list of prime indices n i >m such that sieving the interval [ s,s+d] of odd integers, multiples of p n for 1<nm, and multiples of p n i yields an admissible k-tuple.

Source code for our implementation is available at http://math.mit.edu/~drew/ompadm_v0.5.tar; this code can be used to verify the admissibility of both the tuples listed above.

Endnotes

a When a,b are real numbers, we will also need to use (a,b) and [ a,b] to denote the open and closed intervals, respectively, with endpoints a,b. Unfortunately, this notation conflicts with the notation given above, but it should be clear from the context which notation is in use.

b Actually, there are some differences between Conjecture 1 of [28] and the claim here. Firstly, we need an estimate that is uniform for all $a$, whereas in [28] only the case of a fixed modulus $a$ was asserted. On the other hand, $\alpha,\beta$ were assumed to be controlled in $\ell^2$ instead of via the pointwise bounds (6), and $Q$ was allowed to be as large as $x\log^{-C}x$ for some fixed $C$ (although, in view of the negative results in [23],[24], this latter strengthening may be too ambitious).

c One could also use the Heath-Brown identity [49] here if desired.

d In the $k=1$ case, we of course just have $q_{W,d_1,\dots,d_{k-1}}=W$.

e One could obtain a small improvement to the bounds here by replacing the threshold 2c with a parameter to be optimized over.

f The arguments in [5] are rigorous under the assumption of a positive eigenfunction as in Corollary 35, but the existence of such an eigenfunction remains open for k≥3.

g Indeed, one might be even more ambitious and conjecture a square-root cancellation of size about $\sqrt{x/q}$ for such sums (see [50] for some similar conjectures), although such stronger cancellations generally do not play an essential role in sieve-theoretic computations.

h One new technical difficulty here is that some of the various moduli $[d_j,d'_j]$ arising in these arguments are not required to be coprime at primes $p>w$ dividing $x$ or $x-2$; this requires some modification to Lemma 30 that ultimately leads to the appearance of the singular series $\mathfrak{S}$. However, these modifications are quite standard, and we do not give the details here.

i In particular, the optimal choice $F$ for $M_{k,\varepsilon}$ should vanish on the polytope $\{(t_1,\dots,t_k)\in(1+\varepsilon)\cdot\mathcal{R}_k : \sum_{i\ne i_0} t_i\ge 1-\varepsilon \text{ for all } i_0=1,\dots,k\}$.