
Scale-free Monte Carlo method for calculating the critical exponent γ of self-avoiding walks*

Published 6 June 2017 © 2017 IOP Publishing Ltd
Special issue: Combinatorics of lattice models
Citation: Nathan Clisby 2017 J. Phys. A: Math. Theor. 50 264003
DOI: 10.1088/1751-8121/aa7231

Abstract

We implement a scale-free version of the pivot algorithm and use it to sample pairs of three-dimensional self-avoiding walks, for the purpose of efficiently calculating an observable that corresponds to the probability that pairs of self-avoiding walks remain self-avoiding when they are concatenated. We study the properties of this Markov chain, and then use it to find the critical exponent γ for self-avoiding walks to unprecedented accuracy. Our final estimate for γ is $1.156\, 953\, 00(95)$ .


1. Introduction

An N-step self-avoiding walk (SAW) on the d-dimensional cubic lattice is a mapping $\omega : \{0, 1, \ldots, N\} \to {\mathbb Z}^d$ with $\vert \omega(i+1)-\omega(i)\vert =1$ for each i ($\vert x\vert $ denotes the Euclidean norm of x), with $\omega(0)$ at the origin, and with $\omega(i) \neq \omega(j)$ for all $i \neq j$ . It is of fundamental interest in the theory of critical phenomena as the $n \rightarrow 0$ limit of the n-vector model, and is the simplest model which captures the universal behaviour of polymers in a good solvent.
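For concreteness, the definition translates directly into a few lines of code; the following Python check (ours, purely illustrative) verifies each condition of the definition in turn.

    # Check the definition of an N-step SAW (illustrative only).
    def is_saw(walk):
        """walk: list of integer tuples in Z^d, with walk[0] at the origin."""
        if any(c != 0 for c in walk[0]):
            return False                        # omega(0) must be the origin
        for a, b in zip(walk, walk[1:]):
            if sum((x - y) ** 2 for x, y in zip(a, b)) != 1:
                return False                    # |omega(i+1) - omega(i)| = 1
        return len(set(walk)) == len(walk)      # omega(i) != omega(j)

    print(is_saw([(0, 0, 0), (1, 0, 0), (1, 1, 0)]))   # True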

The number of self-avoiding walks of length N on ${\mathbb Z}^3$ , which we denote $c_N$ , is believed to be given by

$c_N = A \mu^N N^{\gamma-1} \left[1 + \frac{a}{N^{\Delta_1}} + O(N^{-1})\right]. \qquad (1)$

The exponents γ and $\Delta_1$ are universal, i.e. they are dependent only on the dimensionality of the lattice, while the growth constant μ and amplitude A are not. The exponent $\Delta_1 = 0.528(8)$ [1], and next-to-leading correction terms with exponents $-1, -2\Delta_1, -\Delta_2$ are folded into the O(1/N) expression. For bipartite lattices there is an additional 'anti-ferromagnetic' term which has a factor of (−1)N. It is important to take this into account when studying series from exact enumeration [2], but it is negligible for the values of N that are accessible to the Monte Carlo computer experiments considered here and so we neglect it.

In two dimensions the critical exponent γ is known exactly, predicted to be 43/32 over thirty years ago via Coulomb gas arguments by Nienhuis [3]. This has been verified to extremely high precision via enumerations using the finite lattice method [4–8]; the most recent estimate confirms the exact result to more than five decimal places, $\gamma = 1.343\, 745(5)$ [8].

In three dimensions the finite lattice method is not as powerful, and the best estimates for γ come from other enumeration techniques [9] and Monte Carlo simulation [10, 11].

In this work, we will calculate the critical exponent γ and amplitude A for SAWs on ${\mathbb Z}^3$ to a high degree of accuracy via a Monte Carlo computer experiment. We will use an efficient implementation of the pivot algorithm [12, 13] which makes it feasible to rapidly sample self-avoiding walks of millions of steps. Our simulation framework is similar to an earlier calculation of the growth constant μ [14]; here we go into more depth and explicitly study the behaviour of the autocorrelation function of the Markov chain.

1.1. Outline of paper

We introduce our method to calculate γ in section 2, which includes a calculation of the autocorrelation function of the Markov chain for different choices of sampling scheme. We then present our results and analysis in section 3. Finally, we compare our estimates for γ and A to values from the literature, discuss the potential for scale-free moves as a paradigm for modeling polymers, and give a brief conclusion in section 4.

2. Method

2.1. An observable for calculating γ via the pivot algorithm

The pivot algorithm is the most efficient method known for sampling self-avoiding walks [15, 16], and recent improvements [12, 13, 17] have made it even more effective, especially in the large N limit. These improvements are highly beneficial as they allow one to obtain accurate data for large N, which reduces systematic errors due to corrections-to-scaling.

The method described here is very similar to that of a recent paper [14], but as we wish to emphasize different aspects of the method we will keep the description self-contained, even though this will result in a degree of repetition.

The principal difficulty in applying the pivot algorithm to the estimation of γ is that it samples walks in the fixed-length ensemble, whereas γ is intrinsically associated with the growth in the number of walks as a function of length. Caracciolo et al [10] overcame this difficulty by inventing the join-and-cut algorithm which samples pairs of self-avoiding walks of fixed total length. Another approach is the Berretti–Sokal algorithm [18] which naturally samples walks of different lengths.

We wish to use the pivot algorithm to sample self-avoiding walks, and so we must find an observable that allows us to estimate γ from the fixed-length ensemble.

To do this we sample the same observable as a previous paper [14]: the probability that two self-avoiding walks of length N can be concatenated to form a self-avoiding walk of length 2N  +  1. We note that the use of pairs of walks to estimate γ was suggested by Madras and Sokal [16], and that the join-and-cut algorithm [10] is also similar. We define an observable B on pairs of walks $\omega_1$ and $\omega_2$ via

$B(\omega_1, \omega_2) = \begin{cases} 1 & \text{if the concatenation of } \omega_1 \text{ and } \omega_2 \text{ is self-avoiding}, \\ 0 & \text{otherwise}. \end{cases} \qquad (2)$

The concatenation operation is illustrated in figure 1; it is not the standard operation because an additional bond is inserted between $\omega_1$ and $\omega_2$ . We adopt the convention that the sites of the walk which are incident to the concatenating bond are labeled 0, and increase in number going out to the free ends.

Figure 1. Examples of the concatenation of two walks of ten steps on the square lattice, with the labels of the innermost and outermost sites shown. For the pair of walks on the left $B(\omega_1, \omega_2) = 1$ , while on the right $B(\omega_1, \omega_2) = 0$ .
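To make the observable concrete, the following sketch (ours; the actual implementation [12] performs this test in roughly logarithmic time using a tree data structure rather than a hash set) evaluates $B(\omega_1, \omega_2)$ with the concatenating bond taken in a fixed direction e; by lattice symmetry, any fixed choice of e yields the same expectation.

    # Sketch of the observable B of equation (2). Each walk is a list of
    # sites beginning with its site labeled 0 (the site incident to the
    # concatenating bond).
    def B_observable(w1, w2, e=(1, 0, 0)):
        occupied = set(w1)
        # translate w2 so that its site 0 sits one bond (direction e)
        # beyond the site 0 of w1
        shift = tuple(a + b - c for a, b, c in zip(w1[0], e, w2[0]))
        for site in w2:
            s = tuple(x + dx for x, dx in zip(site, shift))
            if s in occupied:
                return 0               # walks intersect: not self-avoiding
            occupied.add(s)
        return 1                       # concatenation is a SAW of 2N+1 steps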

We define $B_N$ as the expectation of B on the set of all pairs of self-avoiding walks of N steps:

$B_N \equiv \langle B \rangle_N \qquad (3)$

$= \frac{1}{c_N^2} \sum_{\omega_1, \omega_2} B(\omega_1, \omega_2) \qquad (4)$

$= \frac{c_{2N+1}}{\Omega\, c_N^2}, \qquad (5)$

where $\Omega$ is the coordination number of the lattice ($\Omega = 6$ for the simple cubic lattice). Now we define ${\widetilde{B}_N} \equiv \Omega {B_N}$ , and use the asymptotic form of $c_N$ from (1) to obtain:

$\widetilde{B}_N = \frac{c_{2N+1}}{c_N^2} \qquad (6)$

$= \frac{\mu\, (2N+1)^{\gamma-1}}{A\, N^{2(\gamma-1)}} \left[1 + \frac{a}{(2N+1)^{\Delta_1}} + O(N^{-1})\right] \left[1 + \frac{a}{N^{\Delta_1}} + O(N^{-1})\right]^{-2} \qquad (7)$

$= \frac{2^{\gamma-1} \mu}{A}\, N^{1-\gamma} \left[1 + \frac{b}{N^{\Delta_1}} + O(N^{-1})\right], \qquad (8)$

where $b = (2^{-\Delta_1} - 2)\, a$.

2.2. The pivot algorithm for sampling self-avoiding walks

The pivot algorithm is a Markov chain Monte Carlo algorithm which samples walks of fixed length N. The elementary move is a pivot, where a lattice symmetry operation (rotation or reflection) is applied to part of a walk, and it generates a correlated sequence of self-avoiding walks as follows:

1. Select a pivot site of the current SAW according to some prescription (usually uniformly at random; here we will use a non-uniform distribution).
2. Randomly choose a lattice symmetry (rotation or reflection) which is not the identity.
3. Apply this symmetry to one of the two sub-walks created by splitting the walk at the pivot site.
4. If the resulting walk is self-avoiding: accept the pivot and update the configuration.
5. If the resulting walk is not self-avoiding: reject the pivot and keep the old configuration.
6. Repeat.

The pivot algorithm is ergodic, and satisfies the detailed balance condition which ensures that self-avoiding walks are sampled uniformly at random [16].
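For reference, a single iteration can be sketched as follows in Python, using a hash set in the spirit of the Madras–Sokal implementation discussed below (an illustration only, not the $O(\log N)$ implementation of [12]); the step numbers refer to the list above.

    import random

    # The 48 symmetries of Z^3 as signed permutations, minus the identity.
    PERMS = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
    SIGNS = [(i, j, k) for i in (1, -1) for j in (1, -1) for k in (1, -1)]
    SYMS = [(p, s) for p in PERMS for s in SIGNS]
    SYMS.remove(((0, 1, 2), (1, 1, 1)))

    def pivot_attempt(walk):
        k = random.randrange(len(walk))          # step 1 (uniform here)
        perm, sign = random.choice(SYMS)         # step 2
        piv = walk[k]
        occupied = set(walk[:k + 1])
        new_tail = []
        for site in walk[k + 1:]:                # step 3
            d = tuple(site[i] - piv[i] for i in range(3))
            s = tuple(piv[i] + sign[i] * d[perm[i]] for i in range(3))
            if s in occupied:
                return walk                      # step 5: reject
            occupied.add(s)
            new_tail.append(s)
        return walk[:k + 1] + new_tail           # step 4: accept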

Successful pivot moves make large changes to global observables which measure the size of a walk, and Madras and Sokal [16] argued that the integrated autocorrelation time, $\tau_{{\rm int}}$ , for such observables is in fact of the same order as the mean time between successful pivots. For the simple cubic lattice the probability of a pivot move being successful is $O(N^{-p})$ with $p \approx 0.11$ , which leads to $\tau_{{\rm int}} = O(N^{\,p})$ for global observables.

Madras and Sokal [16] gave a hash table implementation of the pivot algorithm which resulted in mean CPU time of $O(N^{1-p})=O(N^{0.89})$ per pivot attempt (alternatively, CPU time O(N) per successful pivot). This has since been improved by Kennedy [17] to roughly mean CPU time of $O(N^{0.74})$ per pivot attempt, and further still by the present author [12] to $O(\log N)$ . This makes the pivot algorithm extremely efficient for sampling global observables for self-avoiding walks, but it is not obvious how efficient it is for sampling our observable B.

We note that the pivot algorithm can be applied to sampling self-avoiding walks on other Bravais lattices. There are no significant difficulties to extending the fast implementation of the pivot algorithm [12] to the face-centered cubic and body-centered cubic lattices (this is work in progress), but the situation is more complicated for lattices such as the diamond and hydrogen peroxide lattices, where the orientations of bonds incident to a particular site depend on the sublattice on which the site resides.

2.3. Autocorrelation functions for uniform and scale-free pivot moves

We now proceed to calculate the autocorrelation function for the Markov chain sampling of the observable B for different choices of pivot site distribution.

As B is either 0 or 1, we can write down a closed form expression for its variance in terms of its expectation:

$\operatorname{var}(B) = \langle B^2 \rangle - \langle B \rangle^2 = B_N (1 - B_N). \qquad (9)$

Then, following [19], we define the autocorrelation function for the time series measurement of our observable B as

$\rho_B(t) = \frac{\langle B_s B_{s+t} \rangle - \langle B \rangle^2}{\operatorname{var}(B)}. \qquad (10)$

The integrated autocorrelation time for B, $\tau_{\rm int}(B)$ , is given in terms of $\rho_B(t)$ as

$\tau_{\rm int}(B) = \frac{1}{2} + \sum_{t=1}^{\infty} \rho_B(t), \qquad (11)$

which then enters the expression for the standard deviation of the estimate of the expectation of B after $n_{{\rm{sample}}}$ Markov chain time steps:

$\sigma\big(\overline{B}\big) = \sqrt{\frac{2\, \tau_{\rm int}(B)\, \operatorname{var}(B)}{n_{\rm sample}}}. \qquad (12)$

$\tau_{\rm int}(B)$ may be thought of as the number of Markov chain time steps to reach an effectively new configuration with respect to B.
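As an illustration of how $\rho_B(t)$ and $\tau_{\rm int}(B)$ can be estimated from a simulated time series, the following sketch (ours) uses a simple truncation window; in practice more careful estimators are required [19].

    import numpy as np

    def rho_B(b, t):
        """Empirical autocorrelation of equation (10) at lag t, for a
        0/1 time series b[0], ..., b[n-1]."""
        b = np.asarray(b, dtype=float)
        mean, var = b.mean(), b.var()
        return ((b[:len(b) - t] * b[t:]).mean() - mean ** 2) / var

    def tau_int(b, window=1000):
        """Windowed estimate of equation (11)."""
        return 0.5 + sum(rho_B(b, t) for t in range(1, window))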

It is clear from figure 1 that the shape of each of the walks close to the joint is crucially important with respect to the probability of intersection, whereas the shape of the walks at their far ends will have almost negligible effect on the intersection probability. In fact, if on the square lattice either walk is like that of figure 2, then an intersection must occur regardless of the shapes of the remainders of the walks. In [14] we argued that sampling pivot sites uniformly at random would lead to configurations like that in figure 2 being frozen for O(N) Markov chain time steps, and this in turn would lead to $\tau_{\rm int}(B) = O(N)$ .

Figure 2. Minimal trapped walk of seven steps on the square lattice (solid line) with a possible extension (dashed line).

In [14] we argued that sampling pivot sites uniformly at all length scales with respect to the distance to the concatenated ends would dramatically reduce the integrated autocorrelation time, and conjectured that in this case $\tau_{{\rm int}}(B) = O(N^{\,p} \log N)$ . We will further test this assumption that scale-free moves drastically reduce the integrated autocorrelation time by directly calculating the autocorrelation function, and also by estimating the integrated autocorrelation time.

We calculated the autocorrelation function for three separate choices of pivot site distribution. In each case we initialized the system as follows:

1. Use the pseudo_dimerize procedure of [12] to generate two initial N-step SAW configurations.
2. Initialize the Markov chain by performing at least 20N successful pivots on each SAW, with pivot sites sampled uniformly at random. (The stopping criterion must be based on the number of attempted pivots so as not to introduce bias.)

Our sampling procedure for the uniform pivot site distribution case was:

1. Select one of the two walks uniformly at random.
2. Select a pivot site on this walk uniformly at random from the interval [0, N − 1].
3. Attempt the pivot move, applied to the free end of the walk; update the walk if the result is self-avoiding.
4. Calculate $B(\omega_1, \omega_2)$ , and update our estimate of ${\widetilde{B}_N}$ .
5. Repeat.

The procedure with log uniform sampling was:

1. Select one of the two walks uniformly at random.
2. Select a pivot site on this walk by generating a pseudorandom number x uniformly between 0 and $\log (N+1)$ , and letting the pivot site be $j = \lfloor e^x - 1 \rfloor$ , so that $j \in [0, N-1]$ .
3. Attempt the pivot move, applied to the free end of the walk; update the walk if the result is self-avoiding.
4. Calculate $B(\omega_1, \omega_2)$ , and update our estimate of ${\widetilde{B}_N}$ .
5. Repeat.

Finally, the procedure with log uniform sampling plus global rotations (which we denote log+) was:

1. Select one of the two walks uniformly at random.
2. Randomly pivot each of the walks around the innermost sites, i.e. those with label 0. (These pivot moves are always successful.)
3. Select a pivot site on this walk by generating a pseudorandom number x uniformly between 0 and $\log N$ , and letting the pivot site be $j = \lfloor e^x \rfloor$ , so that $j \in [1, N-1]$ .
4. Attempt the pivot move, applied to the free end of the walk; update the walk if the result is self-avoiding.
5. Calculate $B(\omega_1, \omega_2)$ , and update our estimate of ${\widetilde{B}_N}$ .
6. Repeat.

We refer to the log and log+  sampling distributions as 'scale-free' because pivot sites are sampled uniformly at all possible length scales with respect to the distance to the concatenation sites.
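In code, the three pivot-site distributions read as follows (our transcription of the steps above, with a guard against floating-point edge cases):

    import math
    import random

    def site_uniform(N):
        return random.randrange(N)                  # j uniform in [0, N-1]

    def site_log(N):
        x = random.uniform(0.0, math.log(N + 1))    # x uniform in [0, log(N+1)]
        return min(int(math.exp(x)) - 1, N - 1)     # j = floor(e^x - 1)

    def site_log_plus(N):
        x = random.uniform(0.0, math.log(N))        # x uniform in [0, log N]
        return min(int(math.exp(x)), N - 1)         # j = floor(e^x)

Under the log distribution each octave of distances $[2^k, 2^{k+1})$ from the concatenation sites receives approximately equal probability, $\log 2 / \log(N+1)$ , which makes the scale-free character explicit.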

We calculated the autocorrelation function $\rho_B(t)$ for the uniform, log, and log+  procedures, for walks of length N  =  999 (1000 sites) and $N=99\;999$ ($100\;000$ sites), by running simulations of the Markov chains and collecting information about correlations at 40 different time intervals between 1 and 1 048 576. We make log-log plots of $\rho_B(t)$ against t for N  =  999 in figure 3 and for $N=99\;999$ in figure 4, so that we can see the behaviour over many time scales simultaneously. In regimes where $\rho_B(t)$ decays as a power law $t^s$ with s  <  0, we expect the plot to be linear with slope s, whereas when $\rho_B(t)$ decays exponentially we expect the plot to turn sharply downwards, since $\log \rho_B(t) = -O(t) = -O({\rm e}^{\log t})$ , i.e. $\log \rho_B(t)$ falls towards negative infinity exponentially rapidly as a function of $\log t$ .

Figure 3. Autocorrelation function $\rho_B(t)$ for uniform and logarithmic choices of pivot site distribution for N  =  999.

Figure 4. Autocorrelation function $\rho_B(t)$ for uniform and logarithmic choices of pivot site distribution for $N=99\;999$ .

In figures 3 and 4 it can be seen that when pivot sites are selected uniformly the autocorrelation function decays slowly until t is of the same order as N (i.e. to within a constant factor), and then decays exponentially. It is possible that t  =  O(N) is the only important timescale for this Markov chain. For the log sampling procedure, we see rapid decay which appears to be approaching a straight line, and so is consistent with a power law. Decay in the autocorrelation function is dramatically faster than for uniform sampling, as expected. Finally, for the log+  sampling scheme we see a dramatic drop for the first Markov chain time step, due to the use of global rotations which causes initially rapid decorrelation, and thereafter it decays in a similar manner to the log sampling scheme. In fact, for large t we expect that $\rho_B(t)$ will be the same for log and log+  , as for long times it becomes increasingly likely that global rotations have occurred for each walk under the log sampling procedure. Thus log and log+  will behave similarly in terms of asymptotic performance, but the steep initial drop in the autocorrelation function makes it clear that log+  will be better by a not-insignificant constant factor for lengths which are accessible to computer experiments.

2.4. Details of computer experiment

To extract information about γ from (8) we must estimate ${\widetilde{B}_N}$ in the large N limit in order to reduce the influence of corrections-to-scaling. We sample pairs of self-avoiding walks using the pivot algorithm, and we invest computational resources approximately uniformly over a wide range of length scales, from N  =  1023 to $N = 33\, 554\, 431$ . The situation is quite different for the calculation of μ in [14], for which a near-optimal design for the computer experiment required almost all computational effort to be expended on sampling short walks.

The procedure used for the main computer experiment was very similar to the log+  procedure described above. However, it was slightly sub-optimal in two ways: (a) it was possible for the log uniform sampling to select the sites labeled 0, and (b) one of the two global pivot moves allowed for the identity symmetry. Each of these differences results in slightly worse performance, and for future numerical experiments the log+  procedure will be used (unless a procedure that is better still can be devised).

The computer experiment was run for 200 thousand CPU hours on Dell PowerEdge FC630 machines with Intel Xeon E5-2680 CPUs (these were run in hyperthreaded mode, which gave a modest performance boost; 400 thousand CPU thread hours were used). In total there were $1.60 \times 10^6$ batches of $10^8$ attempted pivots, and thus a grand total of $1.60 \times 10^{14}$ attempted pivots across all walk sizes.

3. Results and analysis

We report our results for ${\widetilde{B}_N}$ in table A1 of the appendix.

In figure 5 we plot estimates for $\tau_{{\rm int}}(B)$ which we obtain indirectly from (12), inferring it from batch estimates of the error in table A1. The accuracy of this technique relies on the assumption that the batch error estimate is accurate, which in turn relies upon the degree of correlation between successive batches being negligible. For the batch sizes of $10^8$ used in this computer experiment this condition is undoubtedly satisfied. In the plot of $\tau_{{\rm int}}(B)$ we see, remarkably, that over the range of N plotted $\tau_{{\rm int}}(B)$ grows less quickly than $O(\log N)$ ! This is significantly slower than the $O(N^{\,p} \log N)$ behaviour postulated in our earlier paper [14]. It may be that figure 5 does not capture the asymptotic regime, perhaps due to the steep initial decline in $\rho_B(t)$ which is apparent for the log+  procedure in figures 3 and 4. However, it is possible, perhaps even plausible, that $\tau_{{\rm int}}(B) = O(\log N)$ , and it certainly seems highly probable that $\tau_{{\rm int}}(B) = o(N^{\,p} \log N)$ .
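For definiteness, the batch estimate of $\tau_{\rm int}(B)$ is obtained by inverting (12); a sketch (ours) of the computation:

    import numpy as np

    def tau_int_from_batches(batch_means, n_per_batch):
        """Infer tau_int(B) from the means of independent batches of
        n_per_batch consecutive measurements, via equations (9) and (12)."""
        m = np.asarray(batch_means, dtype=float)
        B_hat = m.mean()
        var_B = B_hat * (1.0 - B_hat)       # equation (9)
        err_sq = m.var(ddof=1) / len(m)     # squared error of the overall mean
        n_sample = len(m) * n_per_batch
        return err_sq * n_sample / (2.0 * var_B)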

Figure 5. Integrated autocorrelation time, $\tau_{{\rm int}}(B)$ . These data are from the full Monte Carlo computer experiment and are calculated via the batch method.

We now proceed to analyze our data for ${\widetilde{B}_N}$ to extract estimates for the critical exponent γ and amplitude A via (1). We utilize an improved observable, similarly to [20, 21], and more recently [1]. The idea is to combine our estimates for ${\widetilde{B}_N}$ with estimates from another observable, so as to create a new improved observable for which the amplitude of the leading correction-to-scaling term is negligible. For this purpose we use the estimates of the ratio of the mean-squared end-to-end distance and the mean-squared radius of gyration, $\langle R_{{\rm E}}^2 \rangle_N / \langle R_{{\rm G}}^2 \rangle_N$ , from table IV of appendix B of [1]. Note that N in that table refers to the number of sites, whereas here our N refers to the number of steps, which is one fewer, and so these lengths are in one-to-one correspondence.

The expected asymptotic form of this ratio is

$\frac{\langle R_{\rm E}^2 \rangle_N}{\langle R_{\rm G}^2 \rangle_N} = \frac{D_{\rm E}}{D_{\rm G}} \left[1 + \frac{d}{N^{\Delta_1}} + O(N^{-1})\right]. \qquad (13)$

Note that asymptotically this ratio of observables is a pure number, namely the universal amplitude ratio $D_{{\rm E}}/D_{{\rm G}}$ . We now form the observable ${\widetilde{B}_N} (\langle R_{{\rm E}}^2 \rangle_N / \langle R_{{\rm G}}^2 \rangle_N){\hspace{0pt}}^\kappa$ , which involves an arbitrary constant κ whose value we will choose shortly. From (8) and (13) we determine the asymptotic form of our new observable:

$\widetilde{B}_N \left(\frac{\langle R_{\rm E}^2 \rangle_N}{\langle R_{\rm G}^2 \rangle_N}\right)^{\kappa} = \frac{2^{\gamma-1}\mu}{A} \left(\frac{D_{\rm E}}{D_{\rm G}}\right)^{\kappa} N^{1-\gamma} \left[1 + \frac{b}{N^{\Delta_1}} + O(N^{-1})\right] \left[1 + \frac{d}{N^{\Delta_1}} + O(N^{-1})\right]^{\kappa} \qquad (14)$

$= \frac{2^{\gamma-1}\mu}{A} \left(\frac{D_{\rm E}}{D_{\rm G}}\right)^{\kappa} N^{1-\gamma} \left[1 + \frac{b + d\kappa}{N^{\Delta_1}} + O(N^{-1})\right] \qquad (15)$

$= K N^{1-\gamma} \left[1 + \frac{b + d\kappa}{N^{\Delta_1}} + O(N^{-1})\right], \qquad (16)$

taking $K = (2^{\gamma-1}\mu / A) (D_{{\rm E}}/D_{{\rm G}}){\hspace{0pt}}^\kappa$ . Thus it becomes apparent that if we choose κ judiciously so that $b + d\kappa \approx 0$ , then our observable will have negligible leading-order correction-to-scaling. In this case we say that the new observable is 'improved' with respect to the original observable ${\widetilde{B}_N}$ .

Our analysis proceeded as follows. We fixed κ at an arbitrary value (initially 0), and calculated estimates of the new observable ${\widetilde{B}_N} (\langle R_{{\rm E}}^2 \rangle_N / \langle R_{{\rm G}}^2 \rangle_N){\hspace{0pt}}^\kappa$ , with confidence intervals, from our data for ${\widetilde{B}_N}$ in table A1 of the appendix, and the data for $\langle R_{{\rm E}}^2 \rangle_N / \langle R_{{\rm G}}^2 \rangle_N$ in table IV of appendix B of [1]. We then performed weighted non-linear fits of these data using the statistical programming language R, where our statistical model was a single power law of the form ${\rm{const.}}\, N^x$ . We truncated our data by only fitting values with $N \geqslant N_{\rm min}$ , varying $N_{\rm min}$ to obtain a sequence of estimates for which we expect the systematic error due to unfitted corrections-to-scaling to decrease. To determine a near-optimal choice of κ, we varied κ and calculated the reduced $\chi^2$ for these fits, eventually settling on the value $\kappa = -0.585$ , for which the reduced $\chi^2$ was approximately one for all $N_{{\rm min}} \geqslant 2895$ . These fits with $\kappa = -0.585$ gave a sequence of estimates for $1-\gamma$ (which we converted to estimates of γ) and K from (16). We plot these estimates in figures 6 and 7 respectively, against $N_{{\rm min}}^{-1}$ , as this is the expected order of magnitude of the systematic error. This choice of variable for the x-axis should result in linear convergence as the asymptotic regime is reached; we extrapolate the fits from the right to where they intersect the y-axis, which corresponds to the $N_{{\rm min}} \rightarrow \infty$ limit.
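The fits themselves were performed in R; the following Python sketch (ours, using scipy as an equivalent substitute) illustrates a single fit at a given $N_{\rm min}$ .

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_power_law(N, y, sigma_y, N_min):
        """Weighted fit of const. * N^x; y and sigma_y are the values and
        standard errors of the improved observable, and x estimates
        1 - gamma."""
        N, y, sigma_y = map(np.asarray, (N, y, sigma_y))
        m = N >= N_min
        popt, pcov = curve_fit(lambda n, K, x: K * n ** x,
                               N[m], y[m], p0=(1.5, -0.157),
                               sigma=sigma_y[m], absolute_sigma=True)
        K, x = popt
        return K, 1.0 - x, np.sqrt(np.diag(pcov))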

Figure 6. Estimates of the critical exponent γ, with the weighted least squares linear fit of the last six values shown. Our best estimate $\gamma=1.156\, 953\, 00(95)$ is shown in bold on the y-axis.

Figure 7. Estimates of the amplitude K, with the weighted least squares linear fit of the last six values shown. Our best estimate $K=1.469\, 869(16)$ is shown in bold on the y-axis.

We have extrapolated these sequences of estimates to obtain $\gamma = 1.156\, 953\, 00(95)$ and $K=1.469\, 869(16)$ . Using our estimates for K and γ, together with estimates of $\mu=4.684\, 039\, 931(27)$ [14] and $D_{{\rm E}}/D_{{\rm G}} = 6.253\, 531(10)$ [1], we obtain $A = 1.215\, 783(14)$ . The dominant contribution to the error of this estimate comes from K.
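As a consistency check, A can be recovered by inverting the definition of K given below (16), using the quoted estimates (our sketch):

    import math

    gamma, K, kappa = 1.15695300, 1.469869, -0.585
    mu, ratio = 4.684039931, 6.253531        # mu from [14], D_E/D_G from [1]
    A = 2 ** (gamma - 1) * mu * ratio ** kappa / K
    print(A)                                 # ~ 1.21578, cf. 1.215 783(14)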

We note that analysis of results from a previous computer experiment with poorer statistics gave $\gamma = 1.156\, 957(9)$ , where the method of analysis used the non-improved observable ${\widetilde{B}_N}$ . This is consistent with but much less precise than the final estimate reported here. This unpublished value was used in the estimation of critical exponents $\gamma_1$ , for self-avoiding walks tethered to a surface, and $\gamma_b$ , for bridges [22].

4. Discussion and conclusion

We compare our estimates for γ and A with previous estimates in table 1, and see that the new estimates significantly improve on the state of the art. We observe that estimates for γ have trended downwards over time, both for the series and Monte Carlo estimates, which is perhaps symptomatic of the fact that the systematic influence of corrections-to-scaling has diminished as data for larger N have become available. The most recent series estimates extend to N  =  36, while this paper provides Monte Carlo data up to $N = 33\, 554\, 431$ .

Table 1. Comparison of parameter estimates.

Source γ A
This work 1.156 953 00(95) 1.215 783(14)
Unpublished^a 1.156 957(9) 1.215 72(18)
[23] MC (2004) 1.1573(2)  
[11] MC (1998) 1.1575(6)  
[24] Series $N \leqslant 36$ (2011) 1.156 98(34) 1.2150(22)
[2]^b Series $N \leqslant 30$ (2007) 1.1569(6) 1.2154(28)
[25] Series $N \leqslant 26$ (2000) 1.1585  
[26] Series $N \leqslant 23$ (1992) 1.161 93(10)  
[27] Series $N \leqslant 21$ (1989) 1.161(2)  
[28] FT d  =  3 (1998) 1.1596(20)  
[28] FT epsilon (1998) 1.1575(60)  
[28] FT epsilon bc (1998) 1.1571(30)  

^a Result of an earlier computer experiment which used similar methodology, but with poorer statistics and no use of an improved observable.
^b Using equations (74) and (75) with $0.516 \leqslant \Delta_1 \leqslant 0.54$ .

Besides the estimates for γ and A, our other main results are the striking evidence in figures 3 and 4 of the efficiency gain of scale-free sampling versus uniform sampling, and evidence from figure 5 which suggests that $\tau_{{\rm int}} = O(\log N)$ for the log+  Markov chain algorithm.

Besides ν and γ, the critical exponent associated with the growth in the number of self-avoiding polygons, α, is also of interest. If it could be calculated to a degree of accuracy comparable to that of ν and γ, this would provide a strong test of the hyperscaling relation $d \nu = 2-\alpha$ . Unfortunately, there is no operation for polygons which parallels concatenation for walks, and we have not been able to devise an appropriate observable, and associated computer experiment, which would allow for an efficient calculation of α. This is a topic that is well worth further study.

The scale-free move framework described here could be applied equally as well to other global Monte Carlo moves besides the pivot move, in particular to cut-and-paste moves [29, 30]. We expect scale-free moves to also be useful when simulating polymers which satisfy a geometric restriction, as has already proved to be the case for self-avoiding walks tethered to a hard surface [22]. Equally, it could be useful for the sampling of branched polymers where the distances to internal joints introduce additional internal length scales.

One major advantage of the scale-free approach is that it is not necessary to decide in advance which length scale is important. Suppose, for the sake of argument, that for a given system optimal efficiency is attained by performing moves at one particular length scale. Since the scale-free framework performs moves at all length scales, including the important one, the penalty of using the scale-free algorithm is at most $\log N$ in the integrated autocorrelation time, and hence (as the statistical error scales as the square root of $\tau_{\rm int}$ ) $\sqrt{\log N}$ in the error.

Acknowledgments

Support from the Australian Research Council under the Future Fellowship scheme (project number FT130100972) and Discovery scheme (project number DP140101110) is gratefully acknowledged.

Appendix. Numerical data

Table A1. Estimates of ${\widetilde{B}_N}$ .

N ${\widetilde{B}_N}$ N ${\widetilde{B}_N}$
1023 1.450 7968(16) 65 535 0.753 6518(22)
1447 1.373 4488(17) 92 671 0.713 7264(22)
2047 1.300 2643(17) 131 071 0.675 9013(22)
2895 1.231 0935(18) 185 343 0.640 1084(22)
4095 1.165 6136(19) 262 143 0.606 1940(23)
5791 1.103 7063(19) 524 287 0.543 6837(23)
8191 1.045 0800(20) 1 048 575 0.487 6280(23)
11 583 0.989 6313(20) 2 097 151 0.437 3552(23)
16 383 0.937 1139(20) 4 194 303 0.392 2662(23)
23 167 0.887 4326(21) 8 388 607 0.351 8267(23)
32 767 0.840 3684(21) 16 777 215 0.315 5514(23)
46 335 0.795 8358(22) 33 554 431 0.283 0274(22)

Footnotes

  • Dedicated to Tony Guttmann on the occasion of his 70th birthday.
