1 Introduction

Let \(d\ge 2\) and let \( E_d\) be the set of all non-oriented edges in the \(d\)-dimensional integer lattice, that is, \(E_d = \{e = \{x,y\}: x,y \in {\mathbb Z}^d, |x-y|=1\}\). Let \(\{\mu _e\}_{e\in E_d}\) be a random process with non-negative values, defined on some probability space \((\Omega , \mathcal {F}, \mathrm{\mathbb {P} })\). The process \(\{\mu _e\}_{e\in E_d}\) represents random conductances. We write \(\mu _{xy} = \mu _{yx} = \mu _{\{x,y\}}\) and set \(\mu _{xy}=0\) if \(\{x,y\} \notin E_d\). Set

$$\begin{aligned} \mu _x = \sum _y \mu _{xy}, \qquad P(x,y) = \frac{\mu _{xy}}{\mu _x}, \end{aligned}$$

with the convention that \(0/0=0\) and \(P(x,y)=0\) if \(\{x,y\} \notin E_d\). For a fixed \(\omega \in \Omega \), let \(X = \{X_t, t\ge 0, P^x_\omega , x \in {\mathbb Z}^d\}\) be the continuous time random walk on \({\mathbb Z}^d\), with transition probabilities \(P(x,y) = P_\omega (x,y)\), and exponential waiting times with mean \(1/\mu _x\). The corresponding expectation will be denoted \(E_\omega ^x\). For a fixed \(\omega \in \Omega \), the generator \(\mathcal {L}\) of \(X\) is given by

$$\begin{aligned} \mathcal {L}f(x) = \tfrac{1}{2} \sum _y \mu _{xy} (f(y) - f(x)). \end{aligned}$$
(1.1)

In [4] this is called the variable speed random walk (VSRW) among the conductances \(\mu _e\). (We have inserted here a factor of \(\frac{1}{2}\)—see Remark 1.5(5).) This model, of a reversible (or symmetric) random walk in a random environment, is often called the random conductance model (RCM).
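As an illustration (not part of the paper), the dynamics generated by (1.1) can be simulated directly: each incident edge \(\{x,y\}\) carries an independent exponential clock of rate \(\mu_{xy}/2\), and the walk crosses the first edge whose clock rings, so the total jump rate at \(x\) is \(\mu_x/2\). The helper `vsrw_path` and the finite test box below are our own names and assumptions, chosen only to make the sketch self-contained.

```python
import random

def vsrw_path(mu, x0, t_max, rng):
    """Simulate the VSRW with generator (1.1): each incident edge {x, y}
    rings at rate mu_xy / 2, and the walk crosses the first edge that
    rings, so the total jump rate at x is mu_x / 2."""
    x, t, path = x0, 0.0, [(0.0, x0)]
    while True:
        nbrs = [(x[0] + dx, x[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        rates = [mu.get(frozenset((x, y)), 0.0) / 2.0 for y in nbrs]
        total = sum(rates)
        if total == 0.0:
            return path                  # isolated vertex: the walk is stuck
        t += rng.expovariate(total)      # holding time ~ Exp(mu_x / 2)
        if t > t_max:
            return path
        x = rng.choices(nbrs, weights=rates)[0]
        path.append((t, x))

# unit conductances on the box [-5, 5]^2; edges leaving the box are absent
# (conductance 0), so the walk never exits -- a toy stand-in for an environment
mu = {}
for i in range(-5, 6):
    for j in range(-5, 6):
        if i < 5:
            mu[frozenset(((i, j), (i + 1, j)))] = 1.0
        if j < 5:
            mu[frozenset(((i, j), (i, j + 1)))] = 1.0

path = vsrw_path(mu, (0, 0), 3.0, random.Random(7))
```

Rescaling such a trajectory as \(\varepsilon X_{t/\varepsilon^2}\) is what the functional CLTs below concern.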

We are interested in functional Central Limit Theorems (CLTs) for the process \(X\). Given any process \(X\), for \(\varepsilon >0\), set \(X^{(\varepsilon )}_t = \varepsilon X_{t /\varepsilon ^2}, \,t\ge 0\). Let \(\mathcal {D}_T = D([0,T], \mathbb {R}^d)\) denote the Skorokhod space, and let \(\mathcal {D}_\infty =D([0,\infty ), \mathbb {R}^d)\). Write \(d_S\) for the Skorokhod metric and \(\mathcal {B}(\mathcal {D}_T)\) for the \(\sigma \)-field of Borel sets in the corresponding topology. Let \(X\) be the canonical process on \(\mathcal {D}_\infty \) or \(\mathcal {D}_T, \,P_{\text {BM}}\) be Wiener measure on \((\mathcal {D}_\infty , \mathcal {B}(\mathcal {D}_\infty ))\) and let \(E_{\text {BM}}\) be the corresponding expectation. We will write \(W\) for a standard Brownian motion. It will be convenient to assume that \(\{\mu _e\}_{e\in E_d}\) are defined on a probability space \((\Omega , \mathcal {F}, \mathrm{\mathbb {P} })\), and that \(X\) is defined on \((\Omega , \mathcal {F}) \times (\mathcal {D}_\infty , \mathcal {B}(\mathcal {D}_\infty ))\) or \((\Omega , \mathcal {F}) \times (\mathcal {D}_T, \mathcal {B}(\mathcal {D}_T))\). We also define the averaged or annealed measure \(\mathbf{P}\) on \((\mathcal {D}_\infty , \mathcal {B}(\mathcal {D}_\infty ))\) or \((\mathcal {D}_T, \mathcal {B}(\mathcal {D}_T))\) by

$$\begin{aligned} \mathbf{P}(G) = \mathrm{\mathbb {E} }P^0_{\omega }(G). \end{aligned}$$

Definition 1.1

For a bounded function \(F\) on \(\mathcal {D}_T\) and a constant matrix \(\Sigma \), let \(\Psi ^F_\varepsilon = E^0_\omega F(X^{(\varepsilon )})\) and \(\Psi ^F_\Sigma = E_{\text {BM}}F(\Sigma W)\). In the remaining part of the definition we assume that \(\Sigma \) is not identically zero.

  1. (i)

    We say that the Quenched Functional CLT (QFCLT) holds for \(X\) with limit \(\Sigma W\) if for every \(T>0\) and every bounded continuous function \(F\) on \(\mathcal {D}_T\) we have \(\Psi ^F_\varepsilon \rightarrow \Psi ^F_\Sigma \) as \(\varepsilon \rightarrow 0\), with \(\mathrm{\mathbb {P} }\)-probability 1.

  2. (ii)

    We say that the Weak Functional CLT (WFCLT) holds for \(X\) with limit \(\Sigma W\) if for every \(T>0\) and every bounded continuous function \(F\) on \(\mathcal {D}_T\) we have \(\Psi ^F_\varepsilon \rightarrow \Psi ^F_\Sigma \) as \(\varepsilon \rightarrow 0\), in \(\mathrm{\mathbb {P} }\)-probability.

  3. (iii)

    We say that the Averaged (or Annealed) Functional CLT (AFCLT) holds for \(X\) with limit \(\Sigma W\) if for every \(T>0\) and every bounded continuous function \(F\) on \(\mathcal {D}_T\) we have \( \mathrm{\mathbb {E} }\Psi ^F_\varepsilon \rightarrow \Psi _{\Sigma }^F\). This is the same as standard weak convergence with respect to the probability measure \(\mathbf{P}\).

Since the functions \(F\) in this definition are bounded, it is immediate that QFCLT \(\Rightarrow \) WFCLT \(\Rightarrow \) AFCLT. One could consider a more general form of the WFCLT and QFCLT in which one allows the matrix \(\Sigma \) to depend on the environment \(\mu _\cdot ({\omega })\). However, if the environment is stationary and ergodic, then \(\Sigma \) is a shift-invariant function of the environment, so must be \(\mathrm{\mathbb {P} }\)-a.s. constant.

In [12] it is proved that if \(\mu _e\) is a stationary ergodic environment with \(\mathrm{\mathbb {E} }\mu _e<\infty \) then the WFCLT holds (here \(\Sigma \equiv 0\) is allowed). It is an open question whether the QFCLT holds under these hypotheses. For the QFCLT in the case of percolation see [7, 15, 18], and for the random conductance model with \(\mu _e\) i.i.d. see [2, 4, 10, 16]. In the i.i.d. case the QFCLT holds (with \(\Sigma \not \equiv 0\)) for any distribution of \(\mu _e\) provided \(p_+=\mathrm{\mathbb {P} }(\mu _e>0) > p_c\), where \(p_c\) is the critical probability for bond percolation in \({\mathbb Z}^d\).

Definition 1.2

For \(1\le i < j \le d\) let \(T_{ij}\) be the isometry of \({\mathbb Z}^d\) defined by interchanging the \(i\)th and \(j\)th coordinates, and \(T_i\) be the isometry defined by \(T_i(x_1, \dots , x_i, \dots , x_d) = (x_1, \dots , - x_i, \dots , x_d)\). We say that an environment \((\mu _e)\) on \({\mathbb Z}^d\) is symmetric if the law of \((\mu _e)\) is invariant under \(T_i, 1\le i\le d\) and \(\{ T_{ij}, 1\le i < j \le d\}\).

If \((\mu _e)\) is stationary, ergodic and symmetric, and the WFCLT holds with limit \(\Sigma W\) then the limiting covariance matrix \(\Sigma ^T \Sigma \) must also be invariant under symmetries of \({\mathbb Z}^d\), so must be a constant \(\sigma \ge 0\) times the identity.
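The reduction to a scalar matrix can be checked numerically: averaging a covariance matrix over the group generated by the symmetries \(T_i, T_{ij}\) of Definition 1.2 (signed permutations, acting on matrices by \(A \mapsto RAR^T\)) always yields a scalar multiple of the identity, so a matrix invariant under all these symmetries must already be of that form. The helper `symmetrize` below is our own illustration, not from the paper.

```python
import itertools
import numpy as np

def symmetrize(A):
    """Average A over the signed-permutation group generated by the
    symmetries T_i and T_ij of Definition 1.2, acting by A -> R A R^T."""
    d = A.shape[0]
    out, count = np.zeros_like(A, dtype=float), 0
    for perm in itertools.permutations(range(d)):
        P = np.eye(d)[list(perm)]                 # coordinate permutation
        for signs in itertools.product((1.0, -1.0), repeat=d):
            R = np.diag(signs) @ P                # signed permutation
            out += R @ A @ R.T
            count += 1
    return out / count

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = symmetrize(A)   # sign flips kill off-diagonal terms, permutations
                    # average the diagonal, so B = (trace(A)/2) * I
```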

Our first result concerns the relation between the weak and averaged FCLT. In general, of course, for a sequence of random variables \(\xi _n\), convergence of \(\mathrm{\mathbb {E} }\xi _n\) does not imply convergence in probability. However, in the context of the RCM, the AFCLT and WFCLT are equivalent.

Theorem 1.3

Suppose the AFCLT holds. Then the WFCLT holds.

A slightly more general result is given in Theorem 2.14 below. Our second result concerns the relation between the weak and quenched FCLT.

Theorem 1.4

Let \(d=2\) and \(p<1\). There exists a symmetric stationary ergodic environment \(\{\mu _e\}_{e\in E_2}\) with \(\mathrm{\mathbb {E} }(\mu _e^p \vee \mu _e^{-p})<\infty \) and a sequence \(\varepsilon _n \rightarrow 0\) such that

  1. (a)

    the WFCLT holds for \(X^{(\varepsilon _n)}\) with limit \(W\), but

  2. (b)

    the QFCLT does not hold for \(X^{(\varepsilon _n)}\) with limit \( \Sigma W\) for any \(\Sigma \).

Remark 1.5

  1. 1.

    Under the weaker condition that \(\mathrm{\mathbb {E} }\mu _e^p<\infty \) and \(\mathrm{\mathbb {E} }\mu _e^{-q}<\infty \) with \(p<1, \,q<1/2\) we have the full WFCLT for \(X^{(\varepsilon )}\) as \(\varepsilon \rightarrow 0\), i.e., not just along a sequence \(\varepsilon _n\). However, the proof of this is very much harder and longer than that of Theorem 1.4(a)—see [5]. (Since our environment has \(\mathrm{\mathbb {E} }\mu _e = \infty \) we cannot use the results of [12].) We have chosen to use in this paper essentially the same environment as in [5], although for Theorem 1.4 a slightly simpler environment would have been sufficient.

  2. 2.

    Biskup [9] has proved that the QFCLT holds with \(\sigma >0\) if \(d=2\) and \((\mu _e)\) are symmetric and ergodic with \(\mathrm{\mathbb {E} }( \mu _e \vee \mu _e^{-1})<\infty \).

  3. 3.

    See Remark 6.4 for how our example can be adapted to \({\mathbb Z}^d\) with \(d\ge 3\); in that case we have the same moment conditions as in Theorem 1.4.

  4. 4.

    In [1] it is proved that the QFCLT holds (in \({\mathbb Z}^d, \,d\ge 2\)) for stationary symmetric ergodic environments \((\mu _e)\) under the conditions \(\mathrm{\mathbb {E} }\mu _e^p <\infty , \,\mathrm{\mathbb {E} }\mu _e^{-q}<\infty \), with \(p^{-1}+q^{-1} <2/d\).

  5. 5.

    If \(\mu _e \equiv 1\) then due to the normalisation factor \(\frac{1}{2}\) in (1.1), the jumps of \(X\) in each coordinate direction occur at rate 1, and the FCLT holds for \(X\) with limit \(W\).

The remainder of the paper after Sect. 2 constitutes the proof of Theorem 1.4. The argument is split into several sections. In the proof, we will discuss the conditions listed in Definition 1.1 for \(T=1\) only, as it is clear that the same argument works for general \(T>0\).

2 Averaged and weak invariance principles

The basic setup will be slightly more general in this section than in the introduction. As in the Introduction, let \((\Omega , \mathcal {F}, \mathrm{\mathbb {P} })\) be a probability space, fix some \(T>0\) and let \(\mathcal {D}=\mathcal {D}_T\) in this section (although we will also use \(\mathcal {D}_{2T}\)). Recall that \(X\) is the coordinate/identity process on \(\mathcal {D}\). Let \(C(\mathcal {D})\) be the family of all functions \(F: \mathcal {D}\rightarrow \mathbb {R}\) which are continuous in the Skorokhod topology. In the following definition, \(P^{\omega }_n\) will stand for a probability measure (not necessarily arising from an RCM) on \(\mathcal {D}\) for \(\omega \in \Omega \) and \(n\ge 1\). We will also refer to a probability measure \(P_0\) on \(\mathcal {D}\). The corresponding expectations will be denoted \(E^{\omega }_n \) and \( E_0\). The following definition was first introduced in [14], see also [12].

Definition 2.1

We will say that \(P^{\omega }_n\) converge weakly in measure to \(P_0\) if for each bounded \(F\in C(\mathcal {D})\),

$$\begin{aligned} E^{\omega }_n F(X) \rightarrow E_0 F(X)\,\, \hbox { in }\,\, \mathrm{\mathbb {P} }\hbox {-probability}. \end{aligned}$$
(2.1)

Let \(\delta _n \rightarrow 0\), let \(\Lambda _n = \delta _n {\mathbb Z}^d\), and let \(\lambda _n\) be counting measure on \(\Lambda _n\) normalized so that \(\lambda _n \rightarrow dx\) weakly, where \(dx\) is Lebesgue measure on \(\mathbb {R}^d\). Suppose that for each \({\omega }\) and \(n \ge 1\) we have Markov processes \(X^{(n)}=(X_t, t\ge 0, P^x_{{\omega },n}, x \in \Lambda _n)\) with values in \(\Lambda _n\). The corresponding expectations will be denoted \(E^x_{{\omega },n}\). Write

$$\begin{aligned} T^{({\omega }, n)}_t f(x) = E^x_{{\omega },n} f( X_t) \end{aligned}$$

for the semigroup of \(X^{(n)}\). Since we are discussing weak convergence, it is natural to put the index \(n\) in the probability measures \(P^x_{{\omega },n}\) rather than the process; however we will sometimes abuse notation and refer to \(X^{(n)}\) rather than \(X\) under the laws \((P^x_{{\omega },n})\). Recall that \(W\) denotes a standard Brownian motion.

For the remainder of this section, we will suppose that the following Assumption holds.

Assumption 2.2

  1. 1.

    For each \({\omega }\), the semigroup \(T^{({\omega }, n)}_t\) is self adjoint on \(L^2( \Lambda _n, \lambda _n)\).

  2. 2.

    The \(\mathrm{\mathbb {P} }\) law of the ‘environment’ for \(X^{(n)}\) is stationary. More precisely, for \(x \in \Lambda _n\) there exist measure preserving maps \(T_x : \Omega \rightarrow \Omega \) such that for all bounded measurable \(F\) on \(\mathcal {D}_T\),

    $$\begin{aligned} E^x_{{\omega },n} F( X)&= E^0_{T_x {\omega },n} F( X+x) , \end{aligned}$$
    (2.2)
    $$\begin{aligned} \mathrm{\mathbb {E} }E^0_{T_x {\omega },n} F( X)&= \mathrm{\mathbb {E} }E^0_{{\omega },n} F( X). \end{aligned}$$
    (2.3)
  3. 3.

    The AFCLT holds, that is for all \(T>0\) and bounded continuous \(F\) on \(\mathcal {D}_T\),

    $$\begin{aligned} \mathrm{\mathbb {E} }E^0_{{\omega },n} F(X) \rightarrow E_{\text {BM}}F(X). \end{aligned}$$

Given a function \(F \) from \(\mathcal {D}_T\) to \(\mathbb {R}\) set

$$\begin{aligned} F_x(w) = F(x+w), \quad x \in \mathbb {R}^d,\,\, w \in \mathcal {D}_T. \end{aligned}$$

Note that combining (2.2) and (2.3) we obtain

$$\begin{aligned} \mathrm{\mathbb {E} }E^x_{{\omega },n} F( X) = \mathrm{\mathbb {E} }E^0_{{\omega },n} F_x( X), \quad x \in \Lambda _n. \end{aligned}$$

Set

$$\begin{aligned} \mathcal {T}^{(n)}_t f(x) = \mathrm{\mathbb {E} }T^{({\omega },n)}_t f(x). \end{aligned}$$

Note that \(\mathcal {T}^{(n)}_t\) is not in general a semigroup. Write \(K_t\) for the semigroup of Brownian motion on \(\mathbb {R}^d\). We also need notation for expectation of general functions \(F\) on \(\mathcal {D}_T\), so we define

$$\begin{aligned} T^{({\omega },n)} F(x)&= E^x_{{\omega },n} F(X), \\ \mathcal {T}^{(n)} F(x)&= \mathrm{\mathbb {E} }E^x_{{\omega },n} F(X), \\ \mathcal {K}F(x)&= E_{\text {BM}} F(x+W), \\ U^{({\omega },n)}F(x)&= T^{({\omega },n)}F(x) - \mathcal {K}F(x). \end{aligned}$$

Using this notation, the AFCLT states that for \(F \in C(\mathcal {D}_T)\)

$$\begin{aligned} \mathcal {T}^{(n)} F(0) \rightarrow \mathcal {K}F(0). \end{aligned}$$
(2.4)

Definition 2.3

Fix \(T>0\) and recall that \(\mathcal {D}=\mathcal {D}_T\). Write \(d_U\) for the uniform norm, i.e.,

$$\begin{aligned} d_U(w,w') = \sup _{0\le s\le T} | w(s)-w'(s)|. \end{aligned}$$

Recall that we defined \(d_S(w,w')\) to be the usual Skorokhod metric on \(\mathcal {D}\). We have \(d_S(w,w') \le d_U(w,w')\), but the topologies given by the two metrics are distinct. Let \(\mathcal {M}(\mathcal {D})\) be the set of measurable \(F\) on \(\mathcal {D}\). A function \(F\in \mathcal {M}(\mathcal {D})\) is uniformly continuous in the uniform norm on \(\mathcal {D}\) if there exists \(\rho (\varepsilon )\) with \(\lim _{\varepsilon \rightarrow 0} \rho (\varepsilon ) =0\) such that if \(w, w' \in \mathcal {D}_T\) with \(d_U(w,w')\le \varepsilon \) then

$$\begin{aligned} |F(w) -F(w') | \le \rho (\varepsilon ). \end{aligned}$$
(2.5)

Write \(C_U(\mathcal {D})\) for the set of \(F\) in \(\mathcal {M}(\mathcal {D})\) which are uniformly continuous in the uniform norm. Note that we do not have \(C_U(\mathcal {D}) \subset C(\mathcal {D})\).

Let \(C^1_0(\mathbb {R}^d)\) denote the set of continuously differentiable functions with compact support. Let \(\mathcal {A}_m\) be the set of \(F\) such that

$$\begin{aligned} F(w) = \prod _{i=1}^m f_i(w(t_i)), \end{aligned}$$
(2.6)

where \(0 \le t_1 \le \dots \le t_m \le T, \,f_i \in C^1_0(\mathbb {R}^d)\), and let \(\mathcal {A}= \bigcup _m \mathcal {A}_m\).
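The distinction between \(d_U\) and \(d_S\) noted in Definition 2.3 can be seen already on indicator paths: steps at \(1/2\) and \(1/2+1/n\) stay at uniform distance \(1\), yet a time change of size \(1/n\) matches their jumps exactly, so their Skorokhod distance is at most \(1/n\). The following numerical sketch (grid evaluation and all names are our own illustration) checks this on \([0,1]\).

```python
n = 10
w  = lambda t: 1.0 if t >= 0.5 else 0.0            # step at 1/2
wn = lambda t: 1.0 if t >= 0.5 + 1.0 / n else 0.0  # step at 1/2 + 1/n

def lam(t):
    """Increasing piecewise-linear bijection of [0,1] with lam(1/2) = 1/2 + 1/n."""
    a = 0.5 + 1.0 / n
    return t * a / 0.5 if t <= 0.5 else a + (t - 0.5) * (1.0 - a) / 0.5

grid = [k / 1000.0 for k in range(1001)]
d_U_val   = max(abs(w(t) - wn(t)) for t in grid)       # uniform distance: 1
time_dist = max(abs(lam(t) - t) for t in grid)         # size of time change: 1/n
match     = max(abs(w(t) - wn(lam(t))) for t in grid)  # after time change: 0
```

Since `match` vanishes and `time_dist` is \(1/n\), the Skorokhod distance is at most \(1/n\) while the uniform distance stays \(1\).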

Lemma 2.4

Let \(F \in \mathcal {A}\). Then \(F \in C_U(\mathcal {D})\), and \(\mathcal {K}F \in C_b(\mathbb {R}^d) \cap L^1(\mathbb {R}^d)\).

Proof

Let \(F \in \mathcal {A}_m\). Choose \(C\ge 2\) so that \(||f_i||_\infty \le C\) and \(|f_i(x)-f_i(y)| \le C|x-y|\) for all \(x,y ,i\). Then

$$\begin{aligned} |F(w) - F(w')| \le m C^{m} d_U(w,w'). \end{aligned}$$

Since \(f_i\) are bounded and continuous, so is \(\mathcal {K}F\). Also, \(|F| \le C^{m-1} |f_1(w(t_1))|\), so

$$\begin{aligned} \left| \int \mathcal {K}F(x) dx \right|&\le \int \mathcal {K}|F|(x) dx \le C^{m-1} {\langle K_{t_1} |f_1| , 1 \rangle } \\&= C^{m-1} {\langle | f_1|, 1 \rangle } = C^{m-1} ||f_1||_1 < \infty . \end{aligned}$$

\(\square \)

Lemma 2.5

For all \(F \in \mathcal {M}(\mathcal {D})\),

$$\begin{aligned} \begin{aligned} T^{({\omega },n)} F(x) \,&{\buildrel (d) \over =}\, T^{({\omega },n)}F_x(0), \\ U^{({\omega },n)} F(x) \,&{\buildrel (d) \over =}\, U^{({\omega },n)}F_x(0). \end{aligned} \end{aligned}$$
(2.7)

Proof

By the stationarity of the environment,

$$\begin{aligned} T^{({\omega },n)} F(x) = E^x_{{\omega },n } F(X) = E^0_{T_x {\omega },n } F(X+x) =^{(d)} E^0_{{\omega },n} F(X +x) = T^{({\omega },n)} F_x(0). \end{aligned}$$

The result for \(U^{({\omega },n)}\) is then immediate. \(\square \)

Lemma 2.6

Let \(F \in C_U(\mathcal {D}_T)\). Then \( T^{({\omega },n)} F_x(0), \, U^{({\omega },n)} F_x(0)\), and \(\mathcal {T}^{(n)} F(x)\) are uniformly continuous on \(\Lambda _n\) for every \(n \in {\mathbb {N}}\), with a modulus of continuity which is independent of \(n\).

Proof

If \(|x-y| \le \varepsilon \) then \(d_U(w+x,w+y)\le \varepsilon \), so if \(F\in C_U(\mathcal {D}_T)\) and \(\rho \) is such that (2.5) holds, then \(|F_x(w)-F_y(w)| \le \rho (\varepsilon )\), and hence

$$\begin{aligned} | T^{({\omega },n)} F_x(0) - T^{({\omega },n)} F_y(0)|&= | E^0_{{\omega },n} F( x + X) - E^0_{{\omega },n} F( y + X) | \\&\le E^0_{{\omega },n} | F( x + X) - F( y + X) | \le \rho (\varepsilon ). \end{aligned}$$

This implies the uniform continuity of \( T^{({\omega },n)} F_x(0)\) and \( U^{({\omega },n)} F_x(0)\). By (2.7),

$$\begin{aligned} \mathcal {T}^{(n)} F (x) = \mathrm{\mathbb {E} }T^{({\omega },n)} F(x) = \mathrm{\mathbb {E} }T^{({\omega },n)}F_x(0), \end{aligned}$$

so the uniform continuity of \(\mathcal {T}^{(n)} F(x)\) follows from that of \(T^{({\omega },n)} F_x(0)\). \(\square \)

Lemma 2.7

Let \(F \in \mathcal {A}\). Then

$$\begin{aligned} \mathcal {T}^{(n)} F(x) \rightarrow \mathcal {K}F(x)\quad \hbox { for all } x \in \mathbb {R}^d. \end{aligned}$$
(2.8)

Proof

The AFCLT (Assumption 2.2(3)) implies that \(\mathrm{\mathbb {E} }P^0_{{\omega }, n}\) converge weakly to \(P_{\text {BM}}\). Hence the finite-dimensional distributions of \(X^{(n)}\) converge to those of \(W\), and this is equivalent to (2.8). \(\square \)

Let \(C_b(\mathbb {R}^d)\) denote the space of bounded continuous functions on \(\mathbb {R}^d\).

Lemma 2.8

Let \(F \in \mathcal {A}\), and \(h \in C_b(\mathbb {R}^d)\cap L^1(\mathbb {R}^d)\). Then

$$\begin{aligned} \int h(x) \mathcal {T}^{(n)} F(x) \lambda _n(dx) \rightarrow \int h(x) \mathcal {K}F(x) dx. \end{aligned}$$
(2.9)

Proof

This is immediate from (2.8) and the uniform continuity proved in Lemma 2.6. \(\square \)

The next Lemma gives the key construction in this section: using the self-adjointness of \(T^{({\omega },n)}_t\) we can linearise expectations of products. A similar idea is used in [19] in the context of transition densities.

Let \(F \in \mathcal {A}_m\) be given by (2.6). Set \(s_j=t_m-t_{m-j}\), and let

$$\begin{aligned} \widehat{F}(w) = \prod _{j=1}^{m-1} f_{m-j}( w_{s_j}) \prod _{j=1}^m f_j(w_{t_m +t_j}). \end{aligned}$$

Note that \(\widehat{F}\) is defined on functions \(w \in \mathcal {D}_{2T}\) (not \(\mathcal {D}_T\)). Write \({\langle f,g \rangle }_n\) for the inner product in \(L^2(\lambda _n)\) and \({\langle f,g \rangle }\) for the inner product in \(L^2(\mathbb {R}^d)\).

Lemma 2.9

With \(F\) and \(\widehat{F}\) as above,

$$\begin{aligned} \int ( T^{({\omega },n)} F(x) )^2 \lambda _n(dx)&= \int ( T^{({\omega },n)}\widehat{F}(x) ) f_m(x) \lambda _n(dx), \end{aligned}$$
(2.10)
$$\begin{aligned} \int (\mathcal {K}F(x) )^2 dx&= \int ( \mathcal {K}\widehat{F}(x) ) f_m(x) dx. \end{aligned}$$
(2.11)

Proof

Using the Markov property of \(X^{(n)}\)

$$\begin{aligned} T^{({\omega },n)}F(x) = E^x_{{\omega },n} \prod _{j=1}^m f_j(X_{t_j}) = E^x_{{\omega },n} \left( \prod _{j=1}^{m-1} f_j(X_{t_j}) T^{({\omega },n)}_{t_m-t_{m-1}} f_m(X_{t_{m-1}} )\right) . \end{aligned}$$

Hence we obtain

$$\begin{aligned} T^{({\omega },n)}F(x) = T^{({\omega },n)}_{t_1} \left( f_1 T^{({\omega },n)}_{t_2-t_1} \left( f_2 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m(x) \dots \right) \right) . \end{aligned}$$

Using the self-adjointness of \(T^{({\omega },n)}_t\) gives

$$\begin{aligned}&{\langle T^{({\omega },n)}F, T^{({\omega },n)}F \rangle }_n \\&\quad = {\langle T^{({\omega },n)}_{t_1} f_1 T^{({\omega },n)}_{t_2-t_1} f_2 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m, T^{({\omega },n)}_{t_1} f_1 T^{({\omega },n)}_{t_2-t_1} f_2 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m \rangle }_n \\&\quad = {\langle f_1 T^{({\omega },n)}_{t_1} T^{({\omega },n)}_{t_1} f_1 T^{({\omega },n)}_{t_2-t_1} f_2 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m, T^{({\omega },n)}_{t_2-t_1} f_2 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m \rangle }_n. \end{aligned}$$

Continuing in this way we obtain

$$\begin{aligned}&{\langle T^{({\omega },n)}F, T^{({\omega },n)}F \rangle }_n \\&\quad = {\langle T^{({\omega },n)}_{t_m-t_{m-1}} f_{m-1} T^{({\omega },n)}_{t_{m-1}-t_{m-2}} f_{m-2} \dots f_1 T^{({\omega },n)}_{t_1} T^{({\omega },n)}_{t_1} f_1 \dots T^{({\omega },n)}_{t_m-t_{m-1}} f_m, f_m \rangle }_n \\&\quad = {\langle T^{({\omega },n)}\widehat{F}, f_m \rangle }_n. \end{aligned}$$

The proof for \(\mathcal {K}\) is exactly the same. \(\square \)
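The identity (2.10) can be verified numerically on a finite state space, where the operator chain from the proof becomes a product of symmetric matrices and diagonal multiplications. The sketch below (with \(m=3\), a randomly generated symmetric generator, and our own helper `chain`, none of which are from the paper) checks \(\langle T F, T F\rangle = \langle T\widehat F, f_m\rangle\) for the counting measure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, (t1, t2, t3) = 6, (0.2, 0.5, 0.9)
C = rng.random((N, N)); C = (C + C.T) / 2.0; np.fill_diagonal(C, 0.0)
Q = C - np.diag(C.sum(axis=1))        # symmetric generator: rows sum to 0
lam, V = np.linalg.eigh(Q)
S = lambda t: V @ np.diag(np.exp(t * lam)) @ V.T   # S_t = e^{tQ}, self-adjoint

f1, f2, f3 = (rng.standard_normal(N) for _ in range(3))

def chain(funcs, gaps):
    """S(g_1)(h_1 * S(g_2)(h_2 * ... * S(g_m) h_m)): the operator form of
    E^x prod_j f_j(X_{t_j}) derived in the proof of Lemma 2.9."""
    v = funcs[-1]
    for h, g in zip(funcs[-2::-1], gaps[:0:-1]):
        v = h * (S(g) @ v)
    return S(gaps[0]) @ v

TF    = chain([f1, f2, f3], [t1, t2 - t1, t3 - t2])
# Fhat samples f2, f1 at s_1 = t3-t2, s_2 = t3-t1 and f1, f2, f3 at
# t3+t1, t3+t2, 2*t3, so the successive gaps are as listed below
TFhat = chain([f2, f1, f1, f2, f3],
              [t3 - t2, t2 - t1, 2 * t1, t2 - t1, t3 - t2])
lhs, rhs = TF @ TF, TFhat @ f3        # <TF, TF> and <T Fhat, f_3>
```

The two sides agree to machine precision, exactly because `S` is symmetric and \(S_{t_1}S_{t_1} = S_{2t_1}\), which is the whole content of the linearisation step.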

Lemma 2.10

Let \(F \in \mathcal {A}\). Then

$$\begin{aligned} \mathrm{\mathbb {E} }\int ( T^{({\omega },n)} F(x) - \mathcal {K}F(x) )^2 \lambda _n(dx) \rightarrow 0. \end{aligned}$$
(2.12)

Proof

We have

$$\begin{aligned}&\int ( T^{({\omega },n)}F (x)- \mathcal {K}F(x) )^2 \lambda _n(dx) = {\langle ( T^{({\omega },n)}F - \mathcal {K}F), ( T^{({\omega },n)}F - \mathcal {K}F) \rangle }_n \\&\quad = {\langle T^{({\omega },n)}F, T^{({\omega },n)}F \rangle }_n - 2 {\langle T^{({\omega },n)}F, \mathcal {K}F \rangle }_n + {\langle \mathcal {K}F , \mathcal {K}F \rangle }_n. \end{aligned}$$

Thus

$$\begin{aligned}&\mathrm{\mathbb {E} }\int ( T^{({\omega },n)}F(x) - \mathcal {K}F(x) )^2 \lambda _n(dx) \nonumber \\&\quad = \mathrm{\mathbb {E} }{\langle T^{({\omega },n)}F, T^{({\omega },n)}F \rangle }_n - 2 {\langle \mathcal {T}^{(n)}F, \mathcal {K}F \rangle }_n + {\langle \mathcal {K}F, \mathcal {K}F \rangle }_n. \end{aligned}$$
(2.13)

Since \(\mathcal {K}F\) is continuous we have

$$\begin{aligned} {\langle \mathcal {K}F, \mathcal {K}F \rangle }_n \rightarrow {\langle \mathcal {K}F , \mathcal {K}F \rangle }. \end{aligned}$$

Taking \(h= \mathcal {K}F\), which lies in \(C_b(\mathbb {R}^d) \cap L^1(\mathbb {R}^d)\) by Lemma 2.4, Lemma 2.8 gives

$$\begin{aligned} {\langle \mathcal {T}^{(n)} F, \mathcal {K}F \rangle }_n \rightarrow {\langle \mathcal {K}F , \mathcal {K}F \rangle }. \end{aligned}$$

Let \(f_m\) and \(\widehat{F}\) be as in the previous lemma. Then

$$\begin{aligned} \mathrm{\mathbb {E} }{\langle T^{({\omega },n)}F, T^{({\omega },n)}F \rangle }_n =\mathrm{\mathbb {E} }{\langle T^{({\omega },n)}\widehat{F}, f_m \rangle }_n = {\langle \mathcal {T}^{(n)} \widehat{F}, f_m \rangle }_n. \end{aligned}$$

Again by Lemma 2.8 and (2.11),

$$\begin{aligned} {\langle \mathcal {T}^{(n)} \widehat{F}, f_m \rangle }_n \rightarrow {\langle \mathcal {K}\widehat{F}, f_m \rangle } = {\langle \mathcal {K}F, \mathcal {K}F \rangle }. \end{aligned}$$

Adding the limits of the three terms in (2.13), we obtain (2.12). \(\square \)

Lemma 2.11

Let \(F \in \mathcal {A}\). Then

$$\begin{aligned} T^{({\omega },n)} F(0) \rightarrow \mathcal {K}F(0)\,\, \hbox { in }\,\, \mathrm{\mathbb {P} }\hbox {-probability}. \end{aligned}$$
(2.14)

Proof

The previous lemma gives

$$\begin{aligned} \mathrm{\mathbb {E} }\int ( U^{({\omega },n)}F(x))^2 \lambda _n(dx) \rightarrow 0. \end{aligned}$$

Using Lemma 2.5 we have

$$\begin{aligned} \mathrm{\mathbb {E} }\int ( U^{({\omega },n)}F_x(0) )^2 \lambda _n(dx) \rightarrow 0, \end{aligned}$$
(2.15)

and since \(U^{({\omega },n)}F_x(0)\) is uniformly continuous in \(x\) with a modulus independent of \(n\) (Lemma 2.6), (2.15) forces \(U^{({\omega },n)}F(0) \rightarrow 0\) in \(\mathrm{\mathbb {P} }\)-probability, which is (2.14). \(\square \)

Write \(\mathrm{\mathbb {D} }\) for the set of dyadic rationals.

Proposition 2.12

Given any subsequence \((n_k)\) there exists a subsequence \((n'_k)\) of \((n_k)\) and a set \(\Omega _0\) with \(\mathrm{\mathbb {P} }(\Omega _0)=1\), such that for any \({\omega }\in \Omega _0\) and \(q_1 \le q_2 \le \cdots \le q_m\) with \(q_i \in \mathrm{\mathbb {D} }\), the r.v. \((X_{q_i}, i=1, \dots ,m)\) under \(P^0_{{\omega },n'_k}\) converge in distribution to \((W_{q_i}, i=1, \dots , m)\).

Proof

Let \(\mathrm{\mathbb {D} }_T=[0,T] \cap \mathrm{\mathbb {D} }\). Fix a finite set \(q_1 \le \dots \le q_m\) with \(q_i \in \mathrm{\mathbb {D} }_T\). Then convergence of \((X_{q_i}, i=1, \dots ,m, P^0_{{\omega },n} )\) is determined by a countable set of functions \(F_i \in \mathcal {A}_m\). So by Lemma 2.11 we can find nested subsequences \((n^{(i)}_k)\) of \((n_k)\) such that for each \(i\)

$$\begin{aligned} \lim _{k \rightarrow \infty } T^{({\omega },n^{(i)}_k)} F_j(0) = \mathcal {K}F_j(0) \qquad \mathrm{\mathbb {P} }\hbox {-a.s.},\quad \hbox { for } 1 \le j \le i. \end{aligned}$$

A diagonalization argument then implies that there exists a subsequence \((n''_k)\) such that \((X_{q_i}, i=1, \dots , m, P^0_{{\omega },n''_k} )\) converge in distribution to \((W_{q_i}, i=1, \dots , m)\). Since the collection of finite sets \(\{q_1, \dots , q_m\} \subset \mathrm{\mathbb {D} }_T\) is countable, a further diagonalization argument implies that there exists a subsequence \((n'_k)\) such that this convergence holds for all such finite sets. \(\square \)
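The diagonalization used above follows the standard pattern: refine the sequence once per requirement and take the diagonal, which is eventually a subsequence of every level. A toy sketch, with divisibility conditions standing in for the a.s. convergence requirements (all names are our own):

```python
def diagonal(base, refine, levels):
    """Nested-subsequence diagonalisation: refine(seq, i) returns the
    subsequence of seq along which requirement i holds.  The diagonal
    entry diag[i] = seqs[i+1][i] lies in every level up to i+1, so for
    each i every requirement j <= i holds from index i on."""
    seqs = [list(base)]
    for i in range(levels):
        seqs.append(refine(seqs[-1], i))
    return [seqs[i + 1][i] for i in range(levels)]

# toy requirement i: "divisible by i + 2"
refine = lambda seq, i: [n for n in seq if n % (i + 2) == 0]
diag = diagonal(range(1, 1000), refine, 4)   # -> [2, 12, 36, 240]
```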

Lemma 2.13

If AFCLT holds then “tightness in probability” holds, i.e., for any \(\delta >0\) there exist \(\delta _1>0\) and \(n_1\) such that for \(n \ge n_1\), there is a set \(A_n\) of \(\omega \) with \(\mathrm{\mathbb {P} }(A_n) \ge 1- \delta \), such that for \(\omega \in A_n\),

$$\begin{aligned} P^0_{\omega ,n}\left( \sup _{0\le s \le t \le T, t-s \le \delta _1} |X^{(n)}_s - X^{(n)}_t | \ge \delta \right) < \delta . \end{aligned}$$
(2.16)

Proof

If AFCLT holds then, by the Skorokhod Lemma, we can construct \(X^{(n)}\) and \(W\) on a common probability space, in such a way that each \(X^{(n)}\) has the distribution \(\mathrm{\mathbb {E} }P^0_{\omega ,n}\) and \(X^{(n)} \rightarrow W\) in the Skorokhod topology, a.s.

Fix any \(\delta >0\). By continuity of Brownian motion there exists \(\delta _1>0\) such that

$$\begin{aligned} P_{BM}\left( \sup _{0\le s \le t \le T, t-s \le \delta _1 } |W_s - W_t | \ge \delta \right) < \delta . \end{aligned}$$
(2.17)

If a sequence of processes converges in the Skorokhod topology to a continuous process then it converges also in the uniform sense. Hence, in view of (2.17), there exists \(n_1\) such that for \(n\ge n_1\),

$$\begin{aligned} \mathrm{\mathbb {E} }P^0_{\omega ,n}\left( \sup _{0\le s \le t \le T, t-s \le \delta _1 } |X^{(n)}_s - X^{(n)}_t | \ge 2\delta \right) < 2\delta . \end{aligned}$$

By Markov's inequality applied to the random variable \(\omega \mapsto P^0_{\omega ,n}( \cdot )\), this implies that for \(n\ge n_1\), there is a set \(A_n\) of \(\omega \) with \(\mathrm{\mathbb {P} }(A_n) \ge 1- \sqrt{2\delta }\), such that for \(\omega \in A_n\),

$$\begin{aligned} P^0_{\omega ,n}\left( \sup _{0\le s \le t \le T, t-s \le \delta _1 } |X^{(n)}_s - X^{(n)}_t | \ge 2\delta \right) < \sqrt{2\delta }. \end{aligned}$$

It is elementary to convert the form of this estimate to the form given in the lemma. \(\square \)

Theorem 2.14

If Assumption 2.2 holds then \( P^0_{{\omega },n}\) converge weakly in measure to \(P_{\text {BM}}\).

Proof

Fix any \(T>0\), an arbitrarily small \(\varepsilon >0\) and any bounded function \(F \in C(\mathcal {D}_T)\). Let \(W\) denote Brownian motion and suppose that processes \(Y\) and \(W\) are defined on the same probability space, for which we use the generic notation \(P\) and \(E\). It is easy to see that one can find \(\delta \in (0, \varepsilon /2)\) so small that if the process \(Y\) satisfies

$$\begin{aligned} P\left( \sup _{0\le t \le T } |Y_t - W_t | \ge 3\delta \right) < 3\delta , \end{aligned}$$
(2.18)

then

$$\begin{aligned} |E F(Y) - E F(W) | < \varepsilon . \end{aligned}$$
(2.19)

Let \(\delta _1>0\) be so small that (2.16) and (2.17) hold with the present choice of \(\delta \). Suppose that \(0 = q_1 \le q_2 \le \dots \le q_m = T\) are dyadic rationals and \(q_k - q_{k-1} \le \delta _1\) for all \(k\) (note that we can assume that \(T\) is a dyadic rational without loss of generality). By Proposition 2.12, we can find a sequence \(n_k\) such that the joint distributions of the random variables \((X_{q_i}, i=1, \dots ,m)\) under \(P^0_{{\omega },n_k}\) converge to the distribution of \((W_{q_i}, i=1, \dots , m)\), as \(k\rightarrow \infty , \,\mathrm{\mathbb {P} }\)-a.s. By the Skorokhod Lemma, we can construct \((X^{\omega ,n_k}_{q_i}, i=1, \dots ,m)\) and \((W^{\omega ,n_k}_{q_i}, i=1, \dots , m)\) on the same probability space \((\Omega _\omega , \mathcal {F}_\omega , P_\omega )\) so that

$$\begin{aligned} (X^{\omega ,n_k}_{q_i}, i=1, \dots ,m) \rightarrow (W^{\omega ,n_k}_{q_i}, i=1, \dots , m), \qquad P_\omega \text {-a.s.,}\,\, \mathrm{\mathbb {P} }\text {-a.s.,} \end{aligned}$$
(2.20)

\((X^{\omega ,n_k}_{q_i}, i=1, \dots ,m)\) has the same distribution under \(P_\omega \) as \((X_{q_i}, i=1, \dots ,m)\) under \(P^0_{{\omega },n_k}\), and \((W^{\omega ,n_k}_{q_i}, i=1, \dots , m)\) has the same distribution under \(P_\omega \) as Brownian motion (sampled at a finite number of times).

Using conditional probabilities and enlarging the probability space, if necessary, we can assume that there exist processes \((X^{\omega ,n_k}_t, 0\le t \le T)\) and \((W^{\omega ,n_k}_t, 0\le t \le T)\) on the same probability space \((\Omega _\omega , \mathcal {F}_\omega , P_\omega )\) such that \((X^{\omega ,n_k}_t, 0\le t \le T)\) has the same distribution under \(P_\omega \) as \((X_t, 0\le t \le T)\) under \(P^0_{{\omega },n_k}, \,(W^{\omega ,n_k}_t, 0\le t \le T)\) is Brownian motion, and all the conditions stated in the previous paragraph hold for these processes sampled at \(q_i, i=1, \dots ,m\); in particular, (2.20) holds.

It follows from (2.20) that there exist an event \(H\) with \(\mathrm{\mathbb {P} }(H) > 1-\delta \) and \(k_1\) such that for \(k\ge k_1\) and each \(\omega \in H\),

$$\begin{aligned} P_\omega (|X^{\omega ,n_k}_{q_i} - W^{\omega ,n_k}_{q_i}| < \delta \ \hbox { for all } i=1,\dots ,m) \ge 1-\delta . \end{aligned}$$
(2.21)

By Lemma 2.13, for \(k\ge k_2\), there is a set \(A_k\) of \(\omega \) with \(\mathrm{\mathbb {P} }(A_k) \ge 1- \delta \), such that for \(\omega \in A_k\),

$$\begin{aligned} P^0_{\omega ,n_k}\left( \sup _{0\le s \le t \le T, t-s \le \delta _1 } |X^{(n_k)}_s - X^{(n_k)}_t | \ge \delta \right) < \delta . \end{aligned}$$
(2.22)

Since \((X^{\omega ,n_k}_t, 0\le t \le T)\) has the same distribution under \(P_\omega \) as \((X^{(n_k)}_t, 0\le t \le T)\) under \(P^0_{{\omega },n_k}\), it follows from (2.22) that for \(k\ge k_2\), there is a set \(A_k\) of \(\omega \) with \(\mathrm{\mathbb {P} }(A_k) \ge 1- \delta \), such that for \(\omega \in A_k\),

$$\begin{aligned} P_\omega \left( \sup _{0\le s \le t \le T, t-s \le \delta _1} |X^{\omega ,n_k}_s - X^{\omega ,n_k}_t | \ge \delta \right) < \delta . \end{aligned}$$
(2.23)

For similar reasons, (2.17) implies that

$$\begin{aligned} P_\omega \left( \sup _{0\le s \le t \le T, t-s \le \delta _1 } |W^{\omega ,n_k}_s - W^{\omega ,n_k}_t | \ge \delta \right) < \delta . \end{aligned}$$
(2.24)

We now combine (2.21), (2.23) and (2.24) to conclude that for \(k \ge k_1 \vee k_2\), there is a set \(H\cap A_{k}\) of \(\omega \) with \(\mathrm{\mathbb {P} }(H\cap A_{k}) \ge 1- 2\delta \), such that for \(\omega \in H\cap A_{k}\),

$$\begin{aligned} P_\omega \left( \sup _{0\le t \le T } |X^{\omega ,n_k}_t - W^{\omega ,n_k}_t | \ge 3\delta \right) < 3\delta . \end{aligned}$$

In view of (2.18)–(2.19) this implies that for \(k \ge k_1 \vee k_2\), there is a set \(H\cap A_{k}\) of \(\omega \) with \(\mathrm{\mathbb {P} }(H\cap A_{k}) \ge 1- 2\delta \), such that for \(\omega \in H\cap A_{k}\),

$$\begin{aligned} |E^0_{\omega ,n_k} F(X) - E F(W) | = | E_\omega F(X^{\omega ,n_k}) - E F(W^{\omega ,n_k}) | < \varepsilon . \end{aligned}$$
(2.25)

Set \(\xi _n = |E^0_{\omega ,n} F(X) - E F(W) |\); since \(\delta < \varepsilon /2\), (2.25) implies that

$$\begin{aligned} \mathrm{\mathbb {P} }( \xi _{n_k} > \varepsilon ) < \varepsilon \quad \hbox { for }\,\, k \ge k_1 \vee k_2. \end{aligned}$$
(2.26)

We now extend this result to the whole sequence, and claim that there exists \(n_1\) such that

$$\begin{aligned} \mathrm{\mathbb {P} }( \xi _{n} > \varepsilon ) < \varepsilon \quad \hbox { for }\,\, n \ge n_1. \end{aligned}$$
(2.27)

Suppose not: then there exists a subsequence \(n^*_k\) with \(\mathrm{\mathbb {P} }( \xi _{n^*_k} > \varepsilon ) \ge \varepsilon \) for all \(k\). However, by Proposition 2.12, we can find a subsequence \(n_k\) of \(n^*_k\) such that the joint distributions of the random variables \((X_{q_i}, i=1, \dots ,m)\) under \(P^0_{{\omega },n_k}\) converge to the distribution of \((W_{q_i}, i=1, \dots , m)\), as \(k\rightarrow \infty , \,\mathrm{\mathbb {P} }\)-a.s. Applying the argument above to this subsequence, we have a contradiction to (2.26). Thus (2.27) holds, and this completes the proof of the theorem. \(\square \)

3 Construction of the environment

The remainder of this paper is concerned with the proof of Theorem 1.4. The main idea of the proof is as follows. We choose a sequence \(a_n\) of integers, with \(a_n \gg a_{n-1}\), and \(a_n/a_{n-1} = m_n \in {\mathbb Z}\). For each \(n\) we define an ergodic tiling of \({\mathbb Z}^2\) into (disjoint) squares, each with \(a_n^2\) points. Write \(\mathcal {S}_n\) for the collection of these squares; they are defined so that each square in \(\mathcal {S}_n\) is the union of \(m_n^2\) squares in \(\mathcal {S}_{n-1}\). In each square in \(\mathcal {S}_n\) we place 4 obstacles of diameter \(O(b_n)\), where \(b_n \simeq n^{-1/2} a_n\). The obstacles are chosen so that the resulting environment is symmetric. Let \(F_n\) be the event that \(0\) is within a distance \(O(b_n)\) of an obstacle at scale \(n\). The obstacles are such that if \(F_n\) holds then the rescaled process \(Z_n=(b_n^{-1} X_{b_n^2 t}, 0\le t \le 1)\) will be far from a Brownian motion. Thus if \(F_n\) holds i.o. then the QFCLT will fail. On the other hand, if \(\mathrm{\mathbb {P} }(F_n) \rightarrow 0\) then with high probability \(Z_n\) will be close to BM, and (after some work) we do have the WFCLT.

We now begin by giving the construction of the sets \(\mathcal {S}_n\) and the associated environment. Let \(\Omega = (0,\infty )^{E_2}\), and \(\mathcal {F}\) be the Borel \(\sigma \)-algebra defined using the usual product topology. Then every \(t\in {\mathbb Z}^2\) defines a translation \(T_t \) of the environment by \(t\). Stationarity and ergodicity of the measures defined below will be understood with respect to these transformations.

All constants (often denoted \(c_1, c_2\), etc.) are assumed to be strictly positive and finite. For a set \(A \subset {\mathbb Z}^2\) let \(E(A)\) be the set of edges in \(A\) when \(A\) is regarded as a subgraph of \({\mathbb Z}^2\). Let \(E_h(A)\) and \(E_v(A)\) respectively be the set of horizontal and vertical edges in \(E(A)\). Write \(x \sim y\) if \(\{x,y\}\) is an edge in \({\mathbb Z}^2\). Define the exterior boundary of \(A\) by

$$\begin{aligned} {\partial }A =\{ y \in {\mathbb Z}^2 -A: y \sim x\quad \text { for some }\,\, x \in A \}. \end{aligned}$$

Let also

$$\begin{aligned} {\partial }_i A = {\partial }({\mathbb Z}^2 -A). \end{aligned}$$

Finally define balls in the \(\ell ^\infty \) norm by \(B_\infty (x,r)= \{y: ||x-y||_\infty \le r\}\); of course this is just the square with center \(x\) and side \(2r\).

Let \(\{a_n\}_{n\ge 0}\), \(\{ \beta _n\}_{n \ge 1}\) and \(\{b_n\}_{n\ge 1}\) be strictly increasing sequences of positive integers growing to infinity with \(n\), with

$$\begin{aligned} 1=a_0 < b_1 < \beta _1 < a_1 \ll b_2 < \beta _2< a_2 \ll b_3 \dots \end{aligned}$$

We will impose a number of conditions on these sequences in the course of the paper. We collect these conditions here so that the reader can check that they can all be satisfied simultaneously; for ease of reference we allow some redundancy among them. (Some additional conditions on \(b_n/a_{n-1}\) are needed for the proof in [5] of the full WFCLT for \((X^{(\varepsilon )})\).)

(i) \(a_n\) is even for all \(n\).

(ii) For each \(n \ge 1\), \(a_{n-1}\) divides \(b_n\), and \(b_n\) divides \(\beta _n\) and \(a_n\).

(iii) \(b_1 \ge 10^{10}\).

(iv) \(a_n/\sqrt{2n} \le b_n \le a_n / \sqrt{n} \) for all \(n\), and \(b_n \sim a_n/\sqrt{n}\).

(v) \(b_{n+1} \ge 2^n b_n\) for all \(n\).

(vi) \(b_n > 40 a_{n-1}\) for all \(n\).

(vii) \(b_n\) is large enough so that (5.1) and (6.1) hold.

(viii) \(100b_n < \beta _n \le b_n n^{1/4} < 3 \beta _n < a_n/10\) for all \(n\).

These conditions do not determine the sequences uniquely. It is easy to check that all the conditions can be satisfied simultaneously: if \(a_i,b_i,\beta _i\) have been chosen for all \(i\in \{1,\ldots , n-1\}\), then choosing \(b_n\) large enough [taking care to respect the divisibility condition in (ii)] satisfies every condition relating it to terms of smaller index, and one can then choose \(a_n\) and \(\beta _n\) so that the remaining conditions hold.

We set

$$\begin{aligned} m_n = \frac{a_n}{a_{n-1}}, \qquad \ell _n = \frac{a_n}{b_n}. \end{aligned}$$
(3.1)

We begin our construction by defining a collection of squares in \({\mathbb Z}^2\). Let

$$\begin{aligned} B_n&= [0, a_n]^2, \\ B_n'&= [0, a_n-1]^2 \cap {\mathbb Z}^2,\\ \mathcal {S}_n(x)&= \{ x + a_n y + B_n': \, y \in {\mathbb Z}^2 \}. \end{aligned}$$

Thus \(\mathcal {S}_n(x)\) gives a tiling of \({\mathbb Z}^2\) by disjoint squares of side \(a_n-1\) and period \(a_n\). We say that the tiling \(\mathcal {S}_{n-1}(x_{n-1})\) is a refinement of \(\mathcal {S}_n(x_n)\) if every square \(Q \in \mathcal {S}_n(x_n)\) is a finite union of squares in \(\mathcal {S}_{n-1}(x_{n-1})\). It is clear that \(\mathcal {S}_{n-1}(x_{n-1})\) is a refinement of \(\mathcal {S}_n(x_n)\) if and only if \(x_n = x_{n-1}+ a_{n-1}y\) for some \(y \in {\mathbb Z}^2\).
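Since everything here is finite and explicit, the refinement criterion can be checked mechanically. A minimal sketch (the side lengths \(a_{n-1}=4\), \(a_n=8\) and the offsets are illustrative values chosen only for this test, not from the paper):

```python
# Tilings S_n(x): squares x + a_n*y + [0, a_n-1]^2, y in Z^2.
# Claim: S_{n-1}(x_{n-1}) refines S_n(x_n) iff x_n - x_{n-1} lies in a_{n-1}*Z^2.

def tile_corner(p, a, origin):
    """Lower-left corner of the tile of the period-a tiling with offset
    `origin` that contains the point p (floor division handles negatives)."""
    return tuple(o + a * ((c - o) // a) for c, o in zip(p, origin))

def refines(a_small, x_small, a_big, x_big, span=32):
    """Check on a finite window that every big tile is a union of small tiles:
    the small tile of each point must lie inside that point's big tile."""
    for px in range(-span, span):
        for py in range(-span, span):
            cs = tile_corner((px, py), a_small, x_small)
            cb = tile_corner((px, py), a_big, x_big)
            # small tile [cs, cs+a_small-1]^2 must fit in [cb, cb+a_big-1]^2
            if not all(cb[i] <= cs[i] and
                       cs[i] + a_small - 1 <= cb[i] + a_big - 1
                       for i in range(2)):
                return False
    return True

# aligned offsets: x_2 = x_1 + a_1 * y  ->  refinement holds
assert refines(4, (1, 2), 8, (1 + 4, 2 - 8))
# misaligned offsets -> some small tile straddles a big-tile boundary
assert not refines(4, (1, 2), 8, (2, 2))
```

The failing case is exactly the situation ruled out by the condition \(x_n = x_{n-1}+a_{n-1}y\): a level-\((n-1)\) tile straddles a level-\(n\) tile boundary.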

Take \(\mathcal {O}_1\) uniform in \(B'_1\), and for \(n\ge 2\) take \(\mathcal {O}_n\), conditional on \((\mathcal {O}_1, \dots , \mathcal {O}_{n-1})\), to be uniform in \(B'_n \cap ( \mathcal {O}_{n-1} + a_{n-1}{\mathbb Z}^2)\). We now define random tilings by letting

$$\begin{aligned} \mathcal {S}_n = \mathcal {S}_n( \mathcal {O}_n), \quad n \ge 1. \end{aligned}$$

Let \(\eta _n, \,K_n\) be positive constants; we will have \(\eta _n \ll 1 \ll K_n\). We define conductances on \(E_2\) as a limit of conductances for \(n=1,2\ldots \), as follows. For each \(n\), conductances on a tile of \(\mathcal {S}_n\) will be the same for each tile. Recall that \(a_n\) is even, and let \(a_n' = \frac{1}{2} a_n\). Let

$$\begin{aligned} C_n = \{ (x,y) \in B_n \cap {\mathbb Z}^2: y \ge x, x+y \le a_n \}. \end{aligned}$$

We first define conductances \(\nu ^{n,0}_e\) for \(e \in E(C_n)\). Let

$$\begin{aligned} D_n^{00}&= \{ (a'_n - \beta _n,y), a'_n - 10 b_n \le y \le a'_n + 10 b_n \}, \\ D_n^{01}&= \{ (x, a'_n + 10 b_n), (x, a'_n + 10 b_n + 1), (x, a'_n - 10 b_n), (x, a'_n - 10 b_n -1), \\&\quad \quad \quad a'_n -\beta _n -b_n \le x \le a'_n -\beta _n + b_n \}. \end{aligned}$$

Thus the set \(D^{00}_n \cup D_n^{01}\) resembles the letter I (see Fig. 1).

For an edge \(e \in E(C_n)\) we set

$$\begin{aligned} \begin{array}{lll} \nu ^{n,0}_{e} = \eta _n \quad &{}\quad \text {if }\, e \in E_v(D^{01}_n), \\ \nu ^{n,0}_{e} = K_n \quad &{}\quad \text {if }\, e \in E(D^{00}_n), \\ \nu ^{n,0}_{e} = 1 \quad &{}\quad \text {otherwise.} \end{array} \end{aligned}$$
Fig. 1

The set \(D^{00}_n \cup D_n^{01}\) resembles the letter I. The short vertical (blue) edges at the top and bottom of the I have very low conductance. The central (red) line represents edges with very high conductance. Drawing not to scale (color figure online)

Fig. 2

The obstacle set \(D_n^0\). Each obstacle is a copy, in some cases a rotated one, of the obstacle set given in Fig. 1

We then extend \(\nu ^{n,0}\) by symmetry to \(E(B_n)\). More precisely, for \(z =(x,y) \in B_n\), let \(R_1 z=( y,x)\) and \(R_2z = (a_n-y,a_n-x)\), so that \(R_1\) and \(R_2\) are reflections in the lines \(y=x\) and \(x+y=a_n\). We define \(R_i\) on edges by \(R_i (\{x,y\}) = \{R_i x, R_i y \}\) for \(x,y \in B_n\). We then extend \(\nu ^{n,0}\) to \(E( B_n)\) so that \(\nu ^{n,0}_e = \nu ^{n,0}_{R_1 e }=\nu ^{n,0}_{R_2 e }\) for \(e \in E(B_n)\). We define the obstacle set \(D_n^0\) by setting (see Fig. 2),

$$\begin{aligned} D_{n}^{0} = \bigcup _{i=0}^1 ( D_n^{0i} \cup R_1(D_n^{0i}) \cup R_2(D_n^{0i}) \cup R_1R_2 (D_n^{0i} ) ). \end{aligned}$$

Note that \(\nu ^{n,0}_e=1\) for every edge adjacent to the boundary of \(B_n\), or indeed within a distance \( a_n/4\) of this boundary. If \(e=(x,y)\), we will write \(e-z = (x-z,y-z)\). Next we extend \(\nu ^{n,0}\) to \(E_2\) by periodicity, i.e., \(\nu ^{n,0}_e = \nu ^{n,0}_{e+ a_n x}\) for all \(x\in {\mathbb Z}^2\). Finally, we define the conductances \(\nu ^n\) by translation by \(\mathcal {O}_n\), so that

$$\begin{aligned} \nu ^n_e =\nu ^{n,0}_{e-\mathcal {O}_n}, \quad e \in E_2. \end{aligned}$$

We also define the obstacle set at scale \(n\) by

$$\begin{aligned} D_n = \bigcup _{ x \in {\mathbb Z}^2} (a_n x + \mathcal {O}_n + D^0_n ). \end{aligned}$$

We illustrate two levels of construction in Fig. 3.

Fig. 3

Two levels of the obstacle set. Drawing not to scale
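The construction is concrete enough to count directly how many edges per tile receive a conductance other than 1, a count used in the proof of Theorem 3.1(b) below. A sketch with illustrative values of \(a_n, b_n, \beta _n\) (chosen freely so that the four obstacle copies are disjoint; they are not required to satisfy the conditions of this section):

```python
# Count the edges of B_n with conductance != 1: per tile there are four
# I-shaped obstacles, each built from E(D^00) (conductance K_n) and
# E_v(D^01) (conductance eta_n).
a, b, beta = 1000, 10, 150        # illustrative a_n, b_n, beta_n
ap = a // 2                       # a'_n = a_n / 2

def vedges(points):
    """Vertical edges with both endpoints in `points`."""
    pts = set(points)
    return {frozenset({(x, y), (x, y + 1)}) for (x, y) in pts
            if (x, y + 1) in pts}

def hedges(points):
    pts = set(points)
    return {frozenset({(x, y), (x + 1, y)}) for (x, y) in pts
            if (x + 1, y) in pts}

# the base obstacle: vertical bar D^00 and the crossbar rows D^01
D00 = [(ap - beta, y) for y in range(ap - 10 * b, ap + 10 * b + 1)]
D01 = [(x, y) for x in range(ap - beta - b, ap - beta + b + 1)
       for y in (ap + 10 * b, ap + 10 * b + 1, ap - 10 * b, ap - 10 * b - 1)]

K_edges = vedges(D00) | hedges(D00)   # high-conductance edges (20*b of them)
eta_edges = vedges(D01)               # low-conductance edges (2*(2*b+1))

def reflect(E, r):
    return {frozenset({r(p) for p in e}) for e in E}

def R1(p):            # reflection in y = x
    return (p[1], p[0])

def R2(p):            # reflection in x + y = a
    return (a - p[1], a - p[0])

bad = set()
for E in (K_edges, eta_edges):
    bad |= E | reflect(E, R1) | reflect(E, R2) | reflect(reflect(E, R1), R2)

# 4 disjoint copies of 20*b + 2*(2*b+1) edges: fewer than 100*b per tile,
# as used in the proof of Theorem 3.1(b)
assert len(bad) == 4 * (20 * b + 2 * (2 * b + 1))
assert len(bad) < 100 * b
```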

We define the environment \(\mu ^n_e\) inductively by

$$\begin{aligned} \mu ^n_e&= \nu ^{n}_e \qquad \ \text { if }\,\, \nu ^n_e \ne 1, \\ \mu ^n_e&= \mu ^{n-1}_e \quad \text { if }\,\, \nu ^n_e=1. \end{aligned}$$

Once we have proved the limit exists, we will set

$$\begin{aligned} \mu _e = \lim _n \mu ^n_e. \end{aligned}$$
(3.2)

Theorem 3.1

(a) For each \(n\) the environments \((\nu ^n_e, e\in E_2), \,(\mu ^n_e, e\in E_2)\) are stationary, symmetric and ergodic.

(b) The limit (3.2) exists \(\mathrm{\mathbb {P} }\)-a.s.

(c) The environment \((\mu _e, e \in E_2)\) is stationary, symmetric in the sense of Definition 1.2, and ergodic with respect to the group of translations of \({\mathbb Z}^2\).

Proof

(a) For \(x=(x_1,x_2) \in {\mathbb Z}^2\) define the modulo \(a\) value of \(x\) as the unique \((y_1,y_2)\in [0,a-1]^2\) such that \(x_1\equiv y_1\) (mod \(a\)) and \(x_2\equiv y_2\) (mod \(a\)). We say that \(x,y\in {\mathbb Z}^2\) are equivalent modulo \(a\) if their modulo \(a\) values are the same, and denote it by \(x\equiv y\) mod \(a\).

Let \(\mathcal {K}_n\) be the set of \(n\)-tuples \((x_1,\ldots , x_n)\) with \(x_i\in (x_{i-1}+a_{i-1}{\mathbb Z}^2)\cap [0,a_i-1]^2\) (with the convention \(a_0=1, x_0=0\)). Denote the uniform measure on \(\mathcal {K}_n\) by \(\mathrm{\mathbb {P} }_n\). Note that \((\mathcal {O}_1,\ldots ,\mathcal {O}_n)\) is distributed according to \(\mathrm{\mathbb {P} }_n\).

Let \(U_n\) be a uniformly chosen element of \([0,a_n-1]^2\cap {\mathbb Z}^2\). Then since each \(a_{i-1}\) divides \(a_i\), the distribution of \((U_n+a_1 {\mathbb Z}^2,\ldots , U_n +a_n {\mathbb Z}^2)\) is stationary, symmetric and ergodic with respect to the isometries \((\hat{T}_t, t \in {\mathbb Z}^2)\) defined by

$$\begin{aligned} \hat{T}_t: (U_n+a_1 {\mathbb Z}^2,\ldots , U_n +a_n {\mathbb Z}^2) \rightarrow (t+U_n+a_1 {\mathbb Z}^2,\ldots , t+ U_n +a_n {\mathbb Z}^2). \end{aligned}$$

Let \(\beta \) be the bijection between \([0,a_n-1]^2\,\cap \,{\mathbb Z}^2\) and \(\mathcal {K}_n\) defined as \(\beta (t)=(x_1,\ldots , x_n)\), where \(x_i\) is the mod \(a_i\) value of \(t\). The push-forward of the uniform measure for \(U_n\) is then the uniform measure on \(\mathcal {K}_n\). Furthermore, \(\beta \) commutes with translations in the sense that if \(\beta (t)=(x_1,\ldots , x_n)\) and \(\tau \in {\mathbb Z}^2\), then \(\beta (t+\tau ) =(x_1+\tau ,\ldots , x_n+\tau )\), where addition in the \(i\)’th coordinate is understood modulo \(a_i\). Similarly, \(\beta \) commutes with rotations and reflections. Hence symmetry, stationarity and ergodicity of \((\mathcal {O}_1+a_1{\mathbb Z}^2, \ldots , \mathcal {O}_n +a_n{\mathbb Z}^2)\) follow from those of \((U_n+a_1{\mathbb Z}^2, \ldots , U_n+a_n{\mathbb Z}^2)\). Symmetry, stationarity and ergodicity of \((\nu ^n_e, e\in E_2)\) and \((\mu ^n_e, e\in E_2)\) follow from the fact that \((\nu ^n_e, e\in E_2)\) and \((\mu ^n_e, e\in E_2)\) are deterministic functions of \((\mathcal {O}_1+a_1{\mathbb Z}^2, \ldots , \mathcal {O}_n +a_n{\mathbb Z}^2)\), and these functions commute with graph isomorphisms of \({\mathbb Z}^2\).

(b) \(B_n\) contains more than \(2a_n^2\) edges, of which fewer than \(100 b_n\) are such that \(\nu ^{n,0}_e\ne 1\). So by the stationarity of \(\nu ^n\),

$$\begin{aligned} \mathrm{\mathbb {P} }( \nu ^n_e \ne 1) \le \frac{50 b_n}{a_n^2} \le \frac{c}{2^n}. \end{aligned}$$

The convergence in (3.2) then follows by the Borel–Cantelli lemma.

(c) The definition (3.2) and (a) show that \((\mu _e, e\in E_2)\) is stationary and symmetric, so all that remains to be proved is ergodicity.

Denote by \(\mathcal {K}_\infty \) the family of sequences \((x_1,x_2,\ldots )\), satisfying \(x_i\in (x_{i-1}+a_{i-1}{\mathbb Z}^2)\,\cap \,[0,a_i-1]^2\) for every \(i\). Let \(\mathcal {G}_\infty \) be the \(\sigma \)-field generated by \((\mathcal {O}_1,\mathcal {O}_2,\ldots )\), and (by a slight abuse of notation) for the rest of this proof let \(\mathrm{\mathbb {P} }\) be the law of \((\mathcal {O}_1,\mathcal {O}_2,\ldots )\). Let \(\mathcal {G}_n\) be the sub-\(\sigma \)-field of \(\mathcal {G}_\infty \) generated by \((\mathcal {O}_1,\ldots ,\mathcal {O}_n)\).

If \((x_1,x_2,\ldots )\in \mathcal {K}_\infty , \,t\in {\mathbb Z}^2\), define the \(\mathrm{\mathbb {P} }\)-preserving transformation \(t+(x_1,x_2,\ldots )\) as \((t+x_1,t+x_2,\ldots )\), where addition in the \(i\)’th coordinate is understood modulo \(a_i\). To show ergodicity of \((\mu _e, e \in E_2)\), it is enough to prove ergodicity of \((\mathcal {O}_1,\mathcal {O}_2,\ldots )\), because \((\mu _e, e \in E_2)\) is a deterministic function of it, and this function commutes with graph isomorphisms of \({\mathbb Z}^2\).

Now let \(A\in \mathcal {G}_\infty \) be invariant, and suppose by contradiction that there is some \(\varepsilon >0\) such that \(\varepsilon < \mathrm{\mathbb {P} }(A)< 1-\varepsilon \). There exists some \(n\) and \(B\in \mathcal {G}_n\) with the property that \(\mathrm{\mathbb {P} }(A\triangle B)<\varepsilon /4\) (where \(\triangle \) is the symmetric difference operator). This also implies that \(3\varepsilon /4<\mathrm{\mathbb {P} }(B)<1- 3\varepsilon /4\). We have for \(t \in {\mathbb Z}^2\)

$$\begin{aligned} \mathrm{\mathbb {P} }(B\triangle (B+t))&\le \mathrm{\mathbb {P} }(A\triangle B) + \mathrm{\mathbb {P} }(A\triangle (B+t)) = \mathrm{\mathbb {P} }(A\triangle B) + \mathrm{\mathbb {P} }((A+t)\triangle (B+t)) \\&= \mathrm{\mathbb {P} }(A\triangle B) + \mathrm{\mathbb {P} }((A\triangle B)+t) =2\mathrm{\mathbb {P} }(A\triangle B)< \varepsilon /2. \end{aligned}$$

We now show that we can choose \(t\) so that \(\mathrm{\mathbb {P} }(B\triangle (B+t)) \ge 2\mathrm{\mathbb {P} }(B)\mathrm{\mathbb {P} }(\mathcal {K}_\infty {\setminus }B)\ge \varepsilon /2\), giving a contradiction.

For an \(E\in \mathcal {G}_n\) denote by \(E_n\) the subset of \(\mathcal {K}_n\) such that \((\mathcal {O}_1,\mathcal {O}_2,\ldots )\in E\) if and only if \((\mathcal {O}_1,\ldots , \mathcal {O}_n)\in E_n\). Note that \(\mathrm{\mathbb {P} }(E)=\mathrm{\mathbb {P} }_n (E_n)\). So we want to show that for any \(B\in \mathcal {G}_n\) there exists a \(t\) such that \(\mathrm{\mathbb {P} }_n (B_n\triangle (B_n+t)) \ge 2\mathrm{\mathbb {P} }_n(B_n)\mathrm{\mathbb {P} }_n(\mathcal {K}_n{\setminus }B_n)\).

Consider the following average:

$$\begin{aligned} \frac{1}{a_n^2} \sum _{t\in [0,a_n-1]^2} \mathrm{\mathbb {P} }_n (B_n\triangle (B_n+t))&= \frac{2}{a_n^2} \sum _{t\in [0,a_n-1]^2} \mathrm{\mathbb {P} }_n (B_n{\setminus }(B_n+t)) \nonumber \\&=\frac{2}{a_n^4} \sum _{t\in [0,a_n-1]^2}\sum _{x\in \mathcal {K}_n} 1\!\!1(x\in B_n{\setminus }(B_n+t)). \end{aligned}$$
(3.3)

Use

$$\begin{aligned} \sum _{x\in \mathcal {K}_n} 1\!\!1(x\in B_n{\setminus }(B_n+t))= \sum _{x\in B_n} 1\!\!1(x\in B_n{\setminus }(B_n+t))= \sum _{x\in B_n} 1\!\!1(x-t\not \in B_n) \end{aligned}$$

and change the order of summation to obtain

$$\begin{aligned}&\frac{2}{a_n^4}\sum _{t\in [0,a_n-1]^2}\sum _{x\in \mathcal {K}_n} 1\!\!1(x\in B_n{\setminus }(B_n+t))= \frac{2}{a_n^4}\sum _{x\in B_n}\sum _{t\in [0,a_n-1]^2} 1\!\!1(x-t\not \in B_n) \nonumber \\&\quad = \frac{2}{a_n^4}\sum _{x\in B_n} (a_n^2-|B_n|) =\frac{2}{a_n^4}|B_n|(a_n^2-|B_n|)=2\mathrm{\mathbb {P} }_n(B_n)\mathrm{\mathbb {P} }_n(\mathcal {K}_n{\setminus }B_n). \end{aligned}$$
(3.4)

It follows from (3.3)–(3.4) that there exists a \(t\in [0,a_n-1]^2\) such that \(\mathrm{\mathbb {P} }_n (B_n\triangle (B_n+t)) \ge 2\mathrm{\mathbb {P} }_n(B_n)\mathrm{\mathbb {P} }_n(\mathcal {K}_n{\setminus }B_n)\). \(\square \)
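The averaging identity (3.3)–(3.4) is a finite statement about subsets of a discrete torus, so it can be verified mechanically. A minimal sketch on the torus \(({\mathbb Z}/a{\mathbb Z})^2\) with a random subset \(B\) (the torus size and subset are illustrative):

```python
# Check: the average over translations t of |B symm-diff (B+t)| / a^2 equals
# 2 * P(B) * P(B^c), so some t attains at least this value, as in the proof.
import itertools
import random

a = 6
torus = list(itertools.product(range(a), repeat=2))
random.seed(0)
B = {p for p in torus if random.random() < 0.4}   # an arbitrary event

def translate(S, t):
    return {((x + t[0]) % a, (y + t[1]) % a) for (x, y) in S}

# mean over all a^2 translations of the symmetric-difference measure
avg = sum(len(B ^ translate(B, t)) for t in torus) / a**2
p = len(B) / a**2
assert abs(avg / a**2 - 2 * p * (1 - p)) < 1e-12

# hence the maximum over t is at least 2 * P(B) * P(B^c)
best = max(len(B ^ translate(B, t)) for t in torus)
assert best / a**2 >= 2 * p * (1 - p)
```

The first assertion is exactly (3.3)–(3.4); the second is the existence of the translation \(t\) used to reach the contradiction.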

4 Choice of \(K_n\) and \(\eta _n\)

Let

$$\begin{aligned} \mathcal {L}_n f(x) = \tfrac{1}{2} \sum _{y} \mu ^n_{xy} (f(y)-f(x)), \end{aligned}$$
(4.1)

and \(X^n\) be the associated Markov process.

Proposition 4.1

For each \(n \ge 1\) there exists a constant \(\sigma _n\), depending only on \(\eta _i, \,K_i, \,1\le i \le n\), such that the QFCLT holds for \(X^n\) with limit \(\sigma _n W\).

Proof

Since \(\mu _e^n\) is stationary, symmetric and ergodic, and \(\mu ^n_e\) is uniformly bounded and bounded away from 0, the result follows from [4, Theorem 6.1]; see also Remarks 6.2 and 6.5 in that paper. (In fact, while [18, Theorem 1.1] is stated for the i.i.d. case, the argument there also works in the ergodic case.) \(\square \)

Next, we recall (from [6] for example) how \(\sigma _n\) is connected with the electrical conductivity across a square of side \(a_n\). Let \(k \in \{a_{n-1}, b_n, a_n\}\), and let

$$\begin{aligned} \mathcal {Q}_k = \{ [0,k]^2 + z, z \in k{\mathbb Z}^2 \}. \end{aligned}$$

Thus \(\mathcal {Q}_k\) gives a tiling of \({\mathbb Z}^2\) by squares of side \(k\) which are disjoint except for their boundaries. Given \(Q \in \mathcal {Q}_k\) and \(m \in \{n-1,n\}\) set

$$\begin{aligned} \widetilde{\mu }^{Q,m}_{xy} = {\left\{ \begin{array}{ll} \frac{1}{2}\mu ^m_{xy} &{} \hbox { if }\,\, x,y \in {\partial }_i(Q), \\ \mu ^m_{xy} &{} \hbox { otherwise}. \end{array}\right. } \end{aligned}$$

For \(f: Q \rightarrow \mathbb {R}\) set

$$\begin{aligned} {\tilde{\mathcal {E}}^{m}_{Q}} (f,f)&= \frac{1}{2}\sum _{x,y \in Q} \widetilde{\mu }^{Q,m}_{xy} (f(y)-f(x))^2, \nonumber \\ \mathcal {H}_n&= \{ f:B_n \rightarrow \mathbb {R}\,\, \text { s.t. }\,\, f(x,0)=0, f(x,a_n)=1,\quad 0\le x\le a_n\}, \nonumber \\ \kappa _n&= \inf \{ \tilde{\mathcal {E}}^n_{B_n}(f,f): f \in \mathcal {H}_n \}. \end{aligned}$$
(4.2)

Thus \(\kappa _n^{-1}\) is just the effective resistance across the square \(B_n\) when bonds are assigned conductivities \(\tilde{\mu }^{B_n,n}\).
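One can check that with unit conductances the \(\frac12\) weights on the edges of \({\partial }_i(Q)\) make the effective conductance across the square exactly 1 for every side length: the linear profile \(f(x,y)=y/k\) is harmonic for the halved boundary weights and attains the infimum by uniqueness. A sketch of this uniform-case computation (the side length \(k\) is an illustrative choice):

```python
# Verify: with tilde-mu weights (conductance 1, halved when both endpoints
# lie on the boundary of [0,k]^2), the profile f(x,y) = y/k is harmonic at
# every free node and its energy -- the effective conductance -- equals 1.
k = 6
nodes = [(x, y) for x in range(k + 1) for y in range(k + 1)]

def on_boundary(p):
    return p[0] in (0, k) or p[1] in (0, k)

def weight(p, q):
    """tilde-mu weight of edge {p,q}: halved on boundary-boundary edges."""
    return 0.5 if on_boundary(p) and on_boundary(q) else 1.0

def nbrs(p):
    x, y = p
    return [q for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= q[0] <= k and 0 <= q[1] <= k]

f = {(x, y): y / k for (x, y) in nodes}

# harmonic at every node not on the top/bottom rows, where f is prescribed
for p in nodes:
    if 0 < p[1] < k:
        assert abs(sum(weight(p, q) * (f[q] - f[p]) for q in nbrs(p))) < 1e-12

# energy as in (4.2); the 1/2 undoes the double count over ordered pairs
energy = 0.5 * sum(weight(p, q) * (f[q] - f[p]) ** 2
                   for p in nodes for q in nbrs(p))
assert abs(energy - 1.0) < 1e-12
```

This is the normalization behind (4.5): for the homogeneous environment \(\sigma _n^2 = \kappa _n = 1\), and the obstacles of Sect. 3 perturb this value.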

Fix \(n \ge 1\) and for simplicity consider the environment \(\mu ^n\) in the case when \(\mathcal {O}_n=0\). Then \(\mu ^n\) has period \(a_n\) (in both coordinate directions), and \(\mu ^n_{xy}\) for \(x,y \in B_n\) is symmetric with respect to all the symmetries of the square \(B_n\). Because of this symmetry, the limiting conductance matrix will be a multiple \(\sigma _n\) of the identity, and it is sufficient to calculate the variance of \(X^n\) in one coordinate direction.

We wish to construct an \(\mathcal {L}_n\)-harmonic function \(h_n: {\mathbb Z}^2 \rightarrow \mathbb {R}\) so that for all \((x_1, x_2)\in {\mathbb Z}^2\) we have:

$$\begin{aligned} h_n(x_1, j a_n ) = j a_n ,\,\, j \in {\mathbb Z}, \quad h_n(x_1+a_n, x_2) = h_n(x_1, x_2+a_n)-a_n = h_n(x_1,x_2). \end{aligned}$$
(4.3)

It is easy to see by the maximum principle that if such a function exists it is unique. Given such a function \(h_n\), writing \(X^n_t=(X^{n,1}_t, X^{n,2}_t)\) we have \(| h_n(X^{n}_t) - X^{n,2}_t| \le a_n\) and \(h_n(X^n)\) is a martingale. Set

$$\begin{aligned} g_n(x) =\frac{1}{2}\sum _{y \in {\mathbb Z}^2} \mu ^n_{xy} (h_n(x)-h_n(y))^2. \end{aligned}$$

The function \(g_n\) also has period \(a_n\) on \({\mathbb Z}^2\), i.e. \(g_n(x) = g_n(x')\) if \(x-x' \in a_n {\mathbb Z}^2\).

Recall that \(B'_n=[0,a_n-1]^2 \cap {\mathbb Z}^2\), and let \(\psi : {\mathbb Z}^2 \rightarrow B'_n\) be the natural function which maps \({\mathbb Z}^2\) onto the torus \(B'_n\). So \(\psi \) is the identity on \(B'_n\) and has period \(a_n\). Let \(Y_t = \psi (X^n_t)\); then \(Y\) is a Markov process on \(B'_n\) with stationary measure \(\nu _x = a_n^{-2}\) for each \(x \in B'_n\). Then

$$\begin{aligned} \langle h_n(X^n)\rangle _t = \int _0^t g_n(X^n_s) ds = \int _0^t g_n(Y_s) ds. \end{aligned}$$

So, by the ergodic theorem for \(Y\),

$$\begin{aligned} \sigma _n^2 = \lim _{t \rightarrow \infty } \frac{ \langle h_n(X^n)\rangle _t }{t}= a_n^{-2} \sum _{y \in B'_n} g_n(y) = \tfrac{1}{2} a_n^{-2} \sum _{y \in B'_n} \sum _{x \in {\mathbb Z}^2} \mu ^n_{xy} (h_n(x)-h_n(y))^2. \end{aligned}$$
(4.4)

To construct \(h_n\) we use the resistance problem (4.2) in the square \(Q=B_n\). Let \(f_n\) be the minimising function for (4.2). By the maximum principle \(f_n\) is unique, and so using the symmetry of \(\mu ^n\) with respect to reflections in the lines \(x_1=a_n/2\) and \(x_2=a_n/2\) we deduce that for \((x_1,x_2) \in B_n\),

$$\begin{aligned} f_n( a_n- x_1,x_2) = f_n(x_1,x_2), \quad f_n(x_1, a_n-x_2) = 1 - f_n(x_1,x_2). \end{aligned}$$

Given this function \(f_n\) we construct \(h_n\) by setting

$$\begin{aligned}&h_n(x) = a_n f_n(x),\quad \, x \in B_n, \\&h_n(x + i a_n e_1 +j a_n e_2) = h_n(x) + j a_n,\quad \, x \in B_n,\ i, j \in {\mathbb Z}, \end{aligned}$$

where \(e_1=(1,0)\) and \(e_2=(0,1)\). The function \(h_n\) satisfies (4.3) and is clearly \(\mathcal {L}_n\)-harmonic in the interior of \(B_n\). Some straightforward calculations show that it is also harmonic at points \(x \in {\partial }_i B_n\), and consequently it is harmonic on \({\mathbb Z}^2\). Since \(h_n\) is constant on the lines \(\{(i,j a_n ), 0\le i \le a_n\}\) for \(j=0,1\) we have, using the symmetries of \(h_n\), that

$$\begin{aligned} \sum _{y \in B'_n} \sum _{x \in {\mathbb Z}^2} \mu ^n_{xy} (h_n(x)-h_n(y))^2= 2a_n^2 \tilde{\mathcal {E}}^n_{B_n}(f_n,f_n). \end{aligned}$$

Thus from (4.4)

$$\begin{aligned} \sigma _n^2 = \tilde{\mathcal {E}}^{n}_{B_n}(f_n,f_n) = \inf \{ \tilde{\mathcal {E}}^{n}_{B_n}(f,f): f \in \mathcal {H}_n \} = \kappa _n. \end{aligned}$$
(4.5)

We now set

$$\begin{aligned} \eta _n = b_n^{-(1+1/n)},\quad \, n \ge 1. \end{aligned}$$
(4.6)

Theorem 4.2

There exist constants \(K_n \in [1, 50b_n]\) such that \(\sigma _n=1\) for all \(n\).

Proof

Let \(n \ge 1\); we can assume that \(K_i\), \(1\le i \le n-1\), have been chosen so that \(\sigma _i=1\) for \(i \le n-1\).

Since \(\sigma _n\) is non-random, we can simplify our notation and avoid the need for translations by assuming that \(\mathcal {O}_k=0\) for \(k=1, \dots , n\); note that this event has strictly positive probability. For \(K \in [0,\infty )\) let \(\kappa ^{2}_n(K)\) be the effective conductance across \(B_n\) as given by (4.2) if we take \(K_n=K\). Since \(B_n\) is finite, \(\kappa _n^2(K)\) is a continuous non-decreasing function of \(K\). We will show that \(\kappa _n^2(1) \le 1\) and \(\kappa ^2_n(K)>1\) for sufficiently large \(K\); by continuity it follows that there exists a \(K_n\) such that \(\kappa ^2_n(K_n)=1\), and thus \(\sigma ^2_n(K_n)=1\).

If \(K=1\) then we have \(\mu ^n_e \le \mu ^{n-1}_e\), with strict inequality for the edges in \(D_n\). We thus have \(\kappa ^2_n(1) \le 1\). To obtain a lower bound on \(\kappa _n^2(K)\), we use the dual characterization of effective resistance in terms of flows of minimal energy—see [13], and [3] for use in a similar context to the one here.

Let \(Q\) be a square in \(\mathcal {Q}_k\), with lower left corner \(w=(w_1,w_2)\). Let \(Q'\) be the rectangle obtained by removing the top and bottom rows of \(Q\):

$$\begin{aligned} Q'= \{ (x_1,x_2): w_1 \le x_1 \le w_1+ k, w_2+1 \le x_2 \le w_2 + k-1\}. \end{aligned}$$

A flow on \(Q\) is an antisymmetric function \(I\) on \(Q \times Q\) which satisfies \(I(x,y)=0\) if \(x \not \sim y, \,I(x,y)=-I(y,x)\), and

$$\begin{aligned} \sum _{y \sim x} I(x,y)=0 \quad \text { if }\,\, x \in Q'. \end{aligned}$$

Let \({\partial }^+ Q =\{ (x_1, w_2+k): w_1 \le x_1 \le w_1+k \}\) be the top of \(Q\). The flux of a flow \(I\) is

$$\begin{aligned} F(I) = \sum _{x\in {\partial }^+ Q} \sum _{y \sim x} I(x,y). \end{aligned}$$

For a flow \(I\) and \(m \in \{n-1,n\}\) set

$$\begin{aligned} E^m_Q(I,I) = \frac{1}{2}\sum _{x\in Q} \sum _{y\in Q} (\widetilde{\mu }^{Q,m}_{xy})^{-1} I(x,y)^2. \end{aligned}$$

This is the energy of the flow \(I\) in the electrical network given by \(Q\) with conductances \((\widetilde{\mu }^{m,Q}_e)\). If \(\mathcal {I}(Q)\) is the set of flows on \(Q\) with flux 1, then

$$\begin{aligned} \kappa _n(K)^{-2} = \inf \{ E^n_{B_n}(I,I): I \in \mathcal {I}(B_n) \}. \end{aligned}$$

Let \(I_{n-1}\) be the optimal flow for \(\kappa ^{-2}_{n-1}\). The square \(B_n\) consists of \(m_n^2 = a_n^2/a_{n-1}^2\) copies of \(B_{n-1}\); define a preliminary flow \(I'\) by placing a replica of \(m_n^{-1} I_{n-1}\) in each of these copies. For each square \(Q \in \mathcal {Q}_{a_{n-1}}\) with \(Q\subset B_n\) we have \(E^{n-1}_Q(I',I')=m_n^{-2}\), and since there are \(m_n^2\) of these squares we have \(E^{n-1}_{B_n}(I',I')=1\).

We now look at the tiling of \(B_n\) by squares in \(\mathcal {Q}_{b_n}\); recall that \( \ell _n = a_n/b_n\) and that \(\ell _n\) is an integer. For each \(Q \in \mathcal {Q}_{b_n}\) we have \(E^{n-1}_Q(I',I')= \ell _n^{-2}\). Label these squares by \((i,j)\) with \(1\le i,j\le \ell _n\).

We now describe modifications to the flow \(I'\) in a square \(Q\) of side \(b_n\). For simplicity, take first \(Q=[0, b_n]^2\). Set \(A_1 = \{ x =(x_1, x_2) \in Q: x_1 \ge x_2\}\), and \(A_2 = \{ x =(x_1, x_2) \in Q: x_2 \ge x_1\}\). Given any edge \(e=(x,y)\) in \(E(Q)\), either \(x,y \in A_1\) or else \(x,y \in A_2\). For \(x =(x_1,x_2) \in Q\) set \(r(x)=(x_2,x_1)\). Define a new flow by

$$\begin{aligned} I^*(x,y) = {\left\{ \begin{array}{ll} I'(x,y) &{} \hbox { if }\,\, x,y \in A_1, \\ I'(r(x),r(y)) &{} \hbox { if }\,\, x,y \in A_2. \end{array}\right. } \end{aligned}$$
(4.7)

The flow \(I'\) runs from bottom to top of the square, and the modified flow \(I^*\) begins at the bottom, and emerges on the left side of the square. As in [3, Proposition 3.2] we have \(E_Q(I^*,I^*)\le E_Q(I',I')=\ell _n^{-2}\). Thus ‘making a flow turn a corner’ costs no more, in terms of energy, than letting it run straight.

Suppose we now consider the flow \(I'\) in a column \((i_1, j), 1\le j \le \ell _n\), and we wish to make the flow avoid an obstacle square \((i_1, j_1)\). Then we can have the flow take a left turn in \((i_1, j_1-1)\), and then a right turn in \((i_1-1, j_1-1)\), so that it resumes its overall vertical direction. This then gives rise to two flows in \((i_1-1, j_1-1)\): the original flow \(I'\) plus the new flow: as in [3] the combined flow in the square \((i_1-1, j_1-1)\) has energy less than \(4 \ell _n^{-2}\). If we carry the combined flow vertically through the square \((i_1-1,j_1)\), and make the similar modifications above the obstacle, then we obtain overall a new flow \(J'\) which matches \(I'\) except on the 6 squares \((i,j), i_1-1\le i \le i_1, j_1-1\le j \le j_1+1\). The energy of the original flow in these 6 squares is \(6\ell _n^{-2}\), while the new flow will have energy less than \(14\ell _n^{-2}\): we have a ‘cost’ of at most \(4\ell _n^{-2}\) in the 3 squares \((i_1-1,j), j_1-1\le j \le j_1+1\), zero in \((i_1,j_1)\) and at most \(\ell _n^{-2}\) in the two remaining squares. Thus the overall energy cost of the diversion is at most \(8 \ell _n^{-2}\) (see Fig. 4).

Fig. 4

Diversion of current around an obstacle square

We now use a similar procedure to construct a modification of \(I'\) in \(B_n\) with conductances \((\tilde{\mu }_e^{B_n,n})\). We have four obstacles, two oriented vertically and resembling an \(I\), and two horizontal ones. The crossbars on the \(I\), that is the sets \(D^{01}\), contain vertical edges with conductance \(\eta _n \ll 1\). We therefore modify \(I'\) to avoid these edges, and the squares with side \(b_n\) which contain them.

Consider the left vertical \(I\), which has center \((a_n'-\beta _n, a_n')\). Let \((i_1,j_1)\) be the square whose top edge contains the bottom left branch of the \(I\), so that this square has top right corner \((a'_n-\beta _n, a'_n-10b_n)\). The top of this square contains vertical edges with conductance \(\eta _n\), so we need to build a flow which avoids these. We therefore (as above) make the flow in the column \(i_1\) take a left turn in square \((i_1,j_1-1)\), a right turn in \((i_1-1,j_1-1)\), carry it vertically through \((i_1-1,j_1)\), take a right turn in \((i_1-1,j_1+1)\) and carry it horizontally through \((i_1,j_1+1)\) into the edges of high conductance at the right side of \((i_1,j_1+1)\). The same pattern is then repeated on the other 3 branches of the left obstacle \(I\), and on the other vertical obstacle.

We now bound the energy of the new flow \(J\), and initially will make the calculations just for the change in columns \(i_1-1\) and \(i_1\) below and to the left of the point \((a_n'-\beta _n, a_n')\). Write \(M=10\), so that \(Mb_n\) is half the overall height of the obstacle. There are \(2(M+2)\) squares in this region where \(I'\) and \(J\) differ; these have labels \((i,j)\) with \(i=i_1-1, i_1\) and \(j_1-1\le j \le j_1+ M\). We begin by calculating the energy if \(K=\infty \). In 3 of these squares the new flow \(J\) has energy at most \(4 \ell _n^{-2}\), in \(M+1\) of them it has energy at most \(\ell _n^{-2}\), and in the remaining \(M\) it has zero energy. So writing \(R\) for this region we have \(E_R(I',I')= (2M+4)\ell _n^{-2} \), while

$$\begin{aligned} E_R(J,J) \le (3\cdot 4 + M+1 )\ell _n^{-2} = (13+M)\ell _n^{-2}. \end{aligned}$$

So

$$\begin{aligned} E_R(J,J) -E_R(I',I') \le (9-M) \ell _n^{-2} = - \ell _n^{-2}<0. \end{aligned}$$
(4.8)

This was for \(K=\infty \). Now suppose that \(K<\infty \). The vertical edge in the obstacle carries a current \(2 /\ell _n\) and has height \(M b_n\), so the energy of \(J\) on these edges is at most

$$\begin{aligned} E'= \frac{4 \ell _n^{-2} M b_n}{K}\le \frac{4 M b_n}{Kn }. \end{aligned}$$
(4.9)

The last inequality holds because \(\ell _n \ge \sqrt{n}\). Finally it is necessary to modify \(I'\) near the 4 ends of the two horizontal obstacles. For this, we just modify \(I'\) in squares of side \(a_{n-1}\), and arguments similar to the above show that for the new flow \(J\) in this region \(R'\), which consists of \(4+ 2 b_n/a_{n-1}\) squares of side \(a_{n-1}\), we have

$$\begin{aligned} E_{R'}(J,J) - E_{R'}(I',I') \le \frac{9 b_n }{ a_{n-1} m_n^2} = \frac{ 9 a_{n-1} }{ b_n } \ell _n^{-2}. \end{aligned}$$
(4.10)

The new flow \(J\) avoids the edges where \(\mu ^n_e=\eta _n\). Combining these terms we obtain for the whole square \(B_n\), using (4.8)–(4.10),

$$\begin{aligned} E^n_{B_n}(J,J) - E^{n-1}_{B_n}(I',I')&\le - 8 \ell _n^{-2} + \frac{ 16 M b_n}{n K} + \frac{ 40 a_{n-1} }{ b_n } \ell _n^{-2} \\&\le - 7 \ell _n^{-2} + \frac{ 16 M b_n}{n K} < -\frac{7}{2n} + \frac{160 b_n}{nK}. \end{aligned}$$

So if \(K' = 50 b_n\), we have

$$\begin{aligned} \kappa _n^{-2}(K') \le E^n_{B_n}(J,J) \le 1 - c n^{-1}< 1. \end{aligned}$$

Hence there exists \(K_n \in [1,50 b_n)\) such that \(\kappa _n^2(K_n)=1\). \(\square \)
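The final two displays involve only elementary arithmetic, which can be checked exactly: with \(M=10\) and \(K' = 50 b_n\), the factor \(b_n\) cancels and the surplus \(-7/(2n) + 160 b_n/(nK')\) equals \(-3/(10n)\). A sketch of this check with exact rational arithmetic:

```python
# With M = 10 and K = 50*b_n, the bound -7/(2n) + 16*M*b_n/(n*K) reduces to
# -7/(2n) + 160/(50n) = -3/(10n) < 0, so kappa_n^{-2}(50 b_n) <= 1 - c/n.
from fractions import Fraction

M = 10
for n in range(1, 200):
    # 160*b_n / (n * 50*b_n) = 160 / (50*n), independently of b_n
    surplus = Fraction(-7, 2 * n) + Fraction(16 * M, 50 * n)
    assert surplus == Fraction(-3, 10 * n) < 0
```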

Lemma 4.3

Let \(p<1\). Then \(\mathrm{\mathbb {E} }\mu _e^p < \infty \), and \(\mathrm{\mathbb {E} }\mu _e^{-p} <\infty \).

Proof

Since \(\mu _e^n = \eta _n =b_n^{-1-1/n}\) on a proportion \(cb_n/a_n^2\) of the edges in \(B_n\), we have

$$\begin{aligned} \mathrm{\mathbb {E} }\mu _e^{-p} \le c \sum _n b_n^{p(1+1/n)} \frac{b_n}{a_n^2} \le c \sum _n b_n^{p+p/n -1} < \infty . \end{aligned}$$

Here we used the fact that \(b_n \ge 2^{n}\). Similarly,

$$\begin{aligned} \mathrm{\mathbb {E} }\mu _e^p \le c \sum _n K_n^p \frac{b_n}{a_n^2} \le c \sum _n \frac{b_n^{1+p}}{a_n^2} < \infty . \end{aligned}$$

\(\square \)

Remark 4.4

Using (4.5) and the methods of [3], one can show that for small enough \(\delta \) we have \(\kappa _n^2(\delta b_n) < 1\), so that \(K_n \asymp b_n\) and consequently \(\mathrm{\mathbb {E} }\mu _e =\infty \). Note that we also have

$$\begin{aligned} \limsup _{n \rightarrow \infty } n \mathrm{\mathbb {P} }( \mu _e > n) = \limsup _{k \rightarrow \infty } b_k \mathrm{\mathbb {P} }( \mu _e > c b_k) = \lim _{k \rightarrow \infty } \frac{b_k^2}{a_k^2} =0. \end{aligned}$$
(4.11)

From now on we take \(K_n\) to be such that \(\sigma _n=1\) for all \(n\).

5 Weak invariance principle

Let \(X=(X_t, t \in \mathbb {R}_+, P^x_{\omega }, x \in {\mathbb Z}^d)\) be the process with generator (1.1) associated with the environment \((\mu _e)\). Recall (4.1) and the definition of \(X^n\), and define \(X^{(n,\varepsilon )}\) by

$$\begin{aligned} X^{(n,\varepsilon )}_t = \varepsilon X^n_{\varepsilon ^{-2} t}, \quad \, t \ge 0. \end{aligned}$$

Let \(P^{\omega }_n(\varepsilon )\) be the law of \(X^{(n,\varepsilon )}\) on \(\mathcal {D}=\mathcal {D}_1\), and \(P^{\omega }(\varepsilon )\) be the law of \(X^{(\varepsilon )}\).

Recall that the Prokhorov distance \({d_P}\) between probability measures on \(\mathcal {D}_1\) is defined as follows (see [8, p. 238]). For \(A \subset \mathcal {D}\), let \(\mathcal {B}(A,\varepsilon ) = \{x\in \mathcal {D}: d_S (x, A) < \varepsilon \}\). For probability measures \(P\) and \(Q\) on \(\mathcal {D}\), \({d_P}(P,Q)\) is the infimum of \(\varepsilon >0\) such that \(P(A) \le Q(\mathcal {B}(A,\varepsilon )) + \varepsilon \) and \(Q(A) \le P(\mathcal {B}(A,\varepsilon )) + \varepsilon \) for all Borel sets \(A \subset \mathcal {D}\). Recall that convergence in the metric \({d_P}\) is equivalent to the weak convergence of measures.

To prove the WFCLT it is sufficient to prove:

Theorem 5.1

There exists a sequence \((b_n)\) such that if \(\varepsilon _n = 1/b_n\) then \( \lim _{n \rightarrow \infty } {d_P}( P^{\omega }(\varepsilon _n), P_{\text {BM}}) =0\) in \(\mathrm{\mathbb {P} }\)-probability.

Proof

Let \(n \ge 1\) and suppose that \(a_k, b_k\) have been chosen for \(k \le n-1\). By Proposition 4.1 we have for each \({\omega }\) that \({d_P}( P^{\omega }_{n-1}(\varepsilon ), P_{\text {BM}}) \rightarrow 0\). Note that the environment \(\mu ^{n-1}\) takes only finitely many values. So we can choose \(b_n\) large enough so that

$$\begin{aligned} {d_P}( P^{\omega }_{n-1}(\varepsilon ), P_{\text {BM}}) < n^{-1} \quad \hbox { for }\,\, 0< \varepsilon \le b_n^{-1}\,\, \hbox { and all }\,\, {\omega }. \end{aligned}$$
(5.1)

Now for \(\lambda >1\) set

$$\begin{aligned} G(\lambda ) = \left\{ w \in \mathcal {D}_1: \sup _{0\le s\le 1} |w(s)| \le \lambda \right\} . \end{aligned}$$

We have

$$\begin{aligned} P_{\text {BM}}(G(\lambda )^c) \le \exp ( - c' \lambda ^2 ). \end{aligned}$$

We can couple the processes \(X^{n-1}\) and \(X\) so that the two processes agree up to the first time \(X^{n-1}\) hits the obstacle set \(\bigcup _{k=n}^\infty D_k\). Let \(\xi _n({\omega }) = \min \{ |x| : x \in \bigcup _{k=n}^\infty D_k({\omega })\} \), and

$$\begin{aligned} F_n =\{ \xi _n > \lambda b_n \}. \end{aligned}$$

Let \(m \ge n\), and consider the probability that 0 is within a distance \(\lambda b_n\) of \(D_m\). Then \(\mathcal {O}_m\) has to lie in a set of area \(c \lambda b_n b_m\), and so

$$\begin{aligned} \mathrm{\mathbb {P} }\left( \min _{x \in D_m} |x| \le \lambda b_n \right) \le \frac{c b_n b_m}{a_m^2} \le \frac{ c b_n}{ m b_m}. \end{aligned}$$

Thus

$$\begin{aligned} \mathrm{\mathbb {P} }(F^c_n) \le c \sum _{m=n}^\infty \frac{b_n}{ m b_m} \le \frac{c}{n}\left( 1 + \sum _{m=n+1}^\infty \frac{b_n}{b_m} \right) \le \frac{c'}{n}. \end{aligned}$$
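The tail of the sum here is geometric; spelling it out, using \(b_m \ge 2^m b_{m-1}\) (so that \(b_m \ge 2^{m-n} b_n\) for \(m > n\)):

```latex
\[
  \sum_{m=n}^{\infty} \frac{b_n}{m\,b_m}
  \;\le\; \frac{1}{n}\Bigl(1 + \sum_{m=n+1}^{\infty} \frac{b_n}{b_m}\Bigr)
  \;\le\; \frac{1}{n}\Bigl(1 + \sum_{m=n+1}^{\infty} 2^{-(m-n)}\Bigr)
  \;\le\; \frac{2}{n}.
\]
```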

Suppose that \({\omega }\in F_n\) and \(n\ge 2\) so that \(n^{-1} < \lambda /2\). Then using the coupling above, we have

$$\begin{aligned} {d_P}( P^{\omega }(\varepsilon _n), P^{\omega }_{n-1}(\varepsilon _n) )&\le P^0_{\omega }\left( \sup _{0\le s \le b_n^2} |X^{n-1}_s| > \lambda b_n \right) \\&\le {d_P}( P^{\omega }_{n-1}(\varepsilon _n), P_{\text {BM}}) + P_{\text {BM}}(G(\lambda /2)^c). \end{aligned}$$

If now \(\delta >0\), choose \(\lambda >1\) such that \(P_{\text {BM}}(G(\lambda /2)^c) < \delta /2\), and then \(N> 2/ \delta \) large enough so that \(\mathrm{\mathbb {P} }(F_n^c) < \delta \) for \(n \ge N\). Combining the estimates above, if \(n \ge N\) and \({\omega }\in F_n\), then \({d_P}( P^{\omega }(\varepsilon _n), P_{\text {BM}}) < \delta \). Hence for \(n \ge N\), \(\mathrm{\mathbb {P} }( {d_P}( P^{\omega }(\varepsilon _n), P_{\text {BM}}) > \delta ) \le \mathrm{\mathbb {P} }(F_n^c) < \delta , \) which proves the convergence in probability. \(\square \)

6 Quenched invariance principle does not hold

We will prove that the QFCLT does not hold for the processes \(X^{(\varepsilon _n)}\), and will argue by contradiction. If the QFCLT holds for \(X\) with limit \(\Sigma W\) then since the WFCLT holds for \(X^{(\varepsilon _n)}\) with diffusion constant 1 in every direction (by isotropy of the environment), \(\Sigma \) must be the identity matrix.

Let \(w^0_n=( a'_n - 10 b_n -1, a'_n-\beta _n)\) be the centre point on the left edge of the lowest of the four \(n\)th level obstacles in the set \(D^0_n\), and let \(z^0_n=w^0_n - \left( \tfrac{1}{2} b_n,0\right) \). Thus \(z^0_n\) is situated a distance \(\frac{1}{2}b_n\) to the left of \(w^0_n\)—see Fig. 5. Let

$$\begin{aligned} H_n^0(\lambda ) = B_\infty ( z^0_n, \lambda b_n) , \quad H_n(\lambda ) = \bigcup _{x\in a_n{\mathbb Z}^2} ( x+ \mathcal {O}_n + H_n^0(\lambda )). \end{aligned}$$
Fig. 5
figure 5

The square represents \(H_n^0(\frac{1}{4})\)

Lemma 6.1

For \(\lambda >0\) the event \(\{0 \in H_n(\lambda ) \}\) occurs for infinitely many \(n\), \(\mathrm{\mathbb {P} }\)-a.s.

Proof

Let \(\mathcal {G}_k=\sigma (\mathcal {O}_1, \dots , \mathcal {O}_k)\). Given the values of \(\mathcal {O}_1, \dots , \mathcal {O}_{n-1}\), the r.v. \(\mathcal {O}_n\) is uniformly distributed over \(m_n^2\) points, with spacing \(a_{n-1}\), and has to lie in a square with side \(2 \lambda b_n \) in order for the event \(\{0 \in H_n(\lambda ) \}\) to occur. Thus approximately \(( 2\lambda b_n/a_{n-1})^2\) of these values of \(\mathcal {O}_n\) will cause \(\{0 \in H_n(\lambda ) \}\) to occur. So

$$\begin{aligned} \mathrm{\mathbb {P} }( 0 \in H_n(\lambda ) \mid \mathcal {G}_{n-1} ) \ge c \frac{ ( 2\lambda b_n/ a_{n-1})^2 }{ (a_n/a_{n-1})^2} = c' \frac{ b_n^2}{a_n^2} \ge \frac{c''}{n}. \end{aligned}$$

The conclusion then follows from an extension of the second Borel–Cantelli Lemma. \(\square \)
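The extension in question is the conditional (Lévy) form of the second Borel–Cantelli lemma, which we recall:

```latex
% Lévy's conditional Borel–Cantelli lemma: if (\mathcal G_n) is a filtration
% and A_n \in \mathcal G_n for each n, then
\[
  \{A_n \text{ i.o.}\}
  \;=\;
  \Bigl\{ \sum_{n} \mathrm{\mathbb P}(A_n \mid \mathcal G_{n-1}) = \infty \Bigr\}
  \qquad \mathrm{\mathbb P}\text{-a.s.}
\]
% Here A_n = \{0 \in H_n(\lambda)\} \in \mathcal G_n, and the conditional
% probabilities are bounded below by c''/n, whose sum diverges.
```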

Lemma 6.2

With \(\mathrm{\mathbb {P} }\)-probability 1, the event \(G_n(\lambda ) = \{ H_n(\lambda ) \cap (\bigcup _{m=n+1}^\infty D_m ) \ne \emptyset \}\) occurs for only finitely many \(n\).

Proof

Let \(m>n\). Then as in the previous lemma, by considering possible positions of \(\mathcal {O}_m\), we have

$$\begin{aligned} \mathrm{\mathbb {P} }( H_n(\lambda ) \cap D_m \ne \emptyset ) \le c \frac{ b_m b_n}{a_m^2} \le c \frac{ b_n}{b_m}. \end{aligned}$$

Since \(b_{m} \ge 2^m b_{m-1} > 2^m b_n\),

$$\begin{aligned} \mathrm{\mathbb {P} }\left( H_n(\lambda ) \cap \bigcup _{m=n+1}^\infty D_m \ne \emptyset \right) \le \sum _{m=n+1}^\infty c \frac{ b_n}{b_m} \le c 2^{-n}, \end{aligned}$$

and the conclusion follows by Borel–Cantelli. \(\square \)

The first two lemmas have shown, first, that 0 is close to an \(n\)th level obstacle infinitely often, and next, that higher level obstacles do not interfere. Our final task is to show that in this situation the process \(X\) is unlikely to cross the strip of low conductance edges.

Lemma 6.3

Suppose that \(0 \in H_n(1/8)\) and \(H_n(4) \cap \big (\bigcup _{m=n+1}^\infty D_m\big ) =\emptyset \). Write \(X_t=(X^1_t, X^2_t)\), and let

$$\begin{aligned} F =\{ |X^2_t| \le 3b_n/4, |X^1_t| \le 2b_n, 0\le t\le b_n^2, X^1_{b_n^2} > 3b_n/4 \}. \end{aligned}$$

Then there exists a constant \(A_{n-1}=A_{n-1}(\eta _1, K_1, \dots , \eta _{n-1}, K_{n-1})\) such that

$$\begin{aligned} P^0_{\omega }( F) \le c b_n^{-1/n} A_{n-1} \log A_{n-1}. \end{aligned}$$

Proof

Let \(w_n=(x_n,y_n)\) be the element of \(\{ w^0_n + \mathcal {O}_n + a_n x, x\in {\mathbb Z}^2\}\) which is closest to 0. Then, under the hypotheses of the Lemma, we have \(3 b_n/8 \le x_n \le 5 b_n/8\), and \(|y_n| \le b_n/8\). Thus the square \(B_\infty (0,2b_n)\) intersects the obstacle set \(D_n\), but does not intersect \(D_m\) for any \(m>n\). Hence if \(F\) holds then we can couple \(X^n\) and \(X\) so that \(X^n_t=X_t\) for \(0 \le t\le b_n^2\).

Let \(\mathbb {H}=\{ (x,y): x \le x_n \}\), and \(J=B_\infty (0,2b_n) \cap {\partial }_i \mathbb {H}\). If \(F\) holds then \(X^n\) has to cross the line \(J\), and therefore has to cross an edge of conductance \(\eta _n\). Let \(Y\) be the process with edge conductances \(\mu '_e\), where \(\mu '_e=\mu ^{n-1}_e\) except that \(\mu '_e=0\) if \(e=\{ (x_n,y), (x_n+1,y)\}\) for \(y \in {\mathbb Z}\). Thus the line \({\partial }_i \mathbb {H}\) is a reflecting barrier for \(Y\). Let

$$\begin{aligned} L_t = \int _0^t 1_{ (Y_s \in J) }ds \end{aligned}$$

be the amount of time spent by \(Y\) in \(J\), and

$$\begin{aligned} G= \{ |Y^2_t| \le 3b_n/4, |Y^1_t| \le 2b_n, 0\le t\le b_n^2 \}. \end{aligned}$$

Assuming that \(G\) holds, let \(\xi _1\) be a standard exponential (mean 1) random variable, set \(T= \inf \{s: L_s > \xi _1/\eta _n \}\), and let \(X^n_t=Y_t\) on \([0,T)\), and \(X^n_T = Y_T + (1,0)\). Note that one can complete the definition of \(X^n_t\) for \(t\ge T\) in such a way that the process \(X^n\) has the same distribution as the process defined by (4.1). We have

$$\begin{aligned} P^0_{\omega }( G \cap \{ X^n_s = Y_s, 0\le s \le b_n^2 \}) = E^0_{\omega }( 1_G \exp ( - \eta _n L_{b_n^2} ) ). \end{aligned}$$

So

$$\begin{aligned} P^0_{\omega }( G \cap \{ T \le b_n^2 \}) = E^0_{\omega }( 1_G (1-\exp ( - \eta _n L_{b_n^2} ) )) \le E^0_{\omega }( 1_G \eta _n L_{b_n^2}) \le \eta _n E^0_{\omega }L_{b_n^2}. \end{aligned}$$

The process \(Y\) has conductances bounded away from 0 and infinity on \(\mathbb {H}\), so by [11] \(Y\) has a transition probability \(p_t(w,z)\) which satisfies

$$\begin{aligned} p_t(w,z) \le A t^{-1} \exp (- A^{-1} |w-z|^2/t ), \quad \, w,z \in \mathbb {H},\quad \, t \ge |w-z|. \end{aligned}$$

In addition if \(r=|w-z|\ge A\) then \(p_t(w,z) \le p_r(w,z)\). Here \(A=A_{n-1}\) is a possibly large constant which depends on \((\eta _i, K_i, 1\le i \le n-1)\). We can take \(A \ge 10\). For \(w\in J\) we have \(|w| \ge b_n/4\) and so provided \(b_n \ge 8A\),

$$\begin{aligned} E^0_{\omega }\int _0^{b_n^2} 1_{( Y_s =w )} ds&= \int _0^{b_n^2} p_t(0,w) dt \le b_n p_{b_n} (0,w) + \int _{b_n}^{b_n^2} p_t(0,w) dt \\&\le c A e^{- b_n/A} + A \int _{0}^{b_n^2} t^{-1} \exp ( -b_n^2/(16 At) )dt \le c A \log (A). \end{aligned}$$
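The second term in the last line can be evaluated by the substitution \(u = b_n^2/(16At)\), under which \(t^{-1}\,dt = -u^{-1}\,du\):

```latex
\[
  \int_0^{b_n^2} t^{-1} \exp\!\Bigl(-\frac{b_n^2}{16At}\Bigr)\,dt
  \;=\; \int_{1/(16A)}^{\infty} u^{-1} e^{-u}\,du
  \;\le\; \int_{1/(16A)}^{1} \frac{du}{u} + \int_1^\infty e^{-u}\,du
  \;\le\; \log(16A) + 1 \;\le\; c \log A,
\]
% using A \ge 10 in the last step.
```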

So since \(|J| \le 2b_n\),

$$\begin{aligned} P^0_{\omega }( G \cap \{ T \le b_n^2 \} )\le c \eta _n b_n A \log A \le c b_n^{-1/n} A \log A. \end{aligned}$$

Finally, the construction of \(X^n\) from \(Y\) gives that \(P^0_{\omega }(F) \le P^0_{\omega }( G \cap \{ T \le b_n^2 \} )\). \(\square \)

Proof of Theorem 1.4(b)

We now choose \(b_n\) large enough so that for all \(n\ge 2\),

$$\begin{aligned} b_n^{-1/n} A_{n-1} \log A_{n-1} < n^{-1}. \end{aligned}$$
(6.1)

Let \(W_t = (W^1_t, W^2_t)\) denote two-dimensional Brownian motion with \(W_0=0\), and let \(P_{\text {BM}}\) denote its distribution. For a two-dimensional process \(Z=(Z^1, Z^2)\), define the event

$$\begin{aligned} F(Z) = \{ |Z^2_s| < 3/4, |Z^1_s| \le 2, 0\le s \le 1, Z^1_1 > 1 \}. \end{aligned}$$

The support theorem implies that \(p_1 := P_{\text {BM}}(F(W)) >0\). Write \(F_n = F(X^{(\varepsilon _n)})\).

Let \(N_1 = N_1({\omega })\) be such that the event \(G_n(4)\) defined in Lemma 6.2 does not occur for \(n \ge N_1\). Let \(\Lambda =\Lambda ({\omega })\) be the set of \(n > N_1\) such that \(0 \in H_n\left( \tfrac{1}{8}\right) \). Then \(\mathrm{\mathbb {P} }(\Lambda \hbox { is infinite})=1\) by Lemma 6.1. By Lemma 6.3 and the choice of \(b_n\) in (6.1) we have \( P^0_{\omega }( F_n) < cn^{-1}\) for \(n\in \Lambda \). So

$$\begin{aligned} P^0_{\omega }( F_n ) \rightarrow 0\quad \hbox { as }\,\, n \rightarrow \infty \,\, \hbox { with }\,\, n \in \Lambda . \end{aligned}$$

Thus whenever \(\Lambda ({\omega })\) is infinite the sequence of processes \( (X^{(\varepsilon _n)}_t, t \in [0,1], P^0_{\omega }), \, n \ge 1, \) cannot converge to \(W\), and the QFCLT therefore fails. \(\square \)

Remark 6.4

We can construct similar obstacle sets in \({\mathbb Z}^d\) with \(d \ge 3\), and we now outline briefly the main differences from the \(d=2\) case.

We take \(b_n = a_n n^{-1/d}\), so that \(\sum b_n^d/a_n^d =\infty \), and the analogue of Lemma 6.1 holds. In a cube of side \(a_n\) we take \(2d\) obstacle sets, arranged in symmetric fashion around the centre of the cube. Each obstacle has an associated ‘direction’ \(i \in \{1, \dots , d\}\). An obstacle of direction \(i\) consists of \(2 b_n^{d-1}\) edges of low conductance \(\eta _n\), arranged in two \((d-1)\)-dimensional ‘plates’ a distance \(M b_n\) apart, with each edge in the direction \(i\). The two plates are connected by \((d-1)\)-dimensional plates of high conductance \(K_n\). Thus the total number of edges in the obstacles is \(c b_n^{d-1}\), so taking \(a_n/a_{n-1}\) large enough, we have \(\sum b_n^{d-1}/a_n^d<\infty \), and the same arguments as in Sect. 3 show that the environment is well defined, stationary and ergodic.

The conductivity across a cube of side \(N\) in \({\mathbb Z}^d\) is of order \(N^{d-2}\). Thus if we write \(\sigma ^2_n(\eta _n, K_n)\) for the limiting diffusion constant of the process \(X^n\), and \(R_n=R_n(\eta _n,K_n)\) for the effective resistance across a cube of side \(a_n\), then (4.5) is replaced by:

$$\begin{aligned} \sigma _n^2(\eta _n, K_n) = a_n^{2-d} R_n^{-1}. \end{aligned}$$
(6.2)

For the QFCLT to fail, we need \(\eta _n = o(b_n^{-1})\), as in the two-dimensional case. With this choice we have \(R_n(\eta _n, 0)^{-1} < a_n^{d-2}\), and as in Theorem 4.2 we need to show that if \(K_n\) is large enough then \(R_n(\eta _n, K_n)^{-1} > a_n^{d-2}\).

Recall that \(\ell _n=a_n/b_n\). Let \(I'\) be as in Theorem 4.2; then \(I'\) has flux \(\ell _n^{-d+1}\) across each sub-cube \(Q'\) of side \(b_n\). If the sub-cube does not intersect the obstacles at level \(n\), then \(E_{Q'}(I',I')= \ell _n^{-d} a_n^{2-d}\). The ‘cost’ of diverting \(I'\) around a low conductance obstacle is therefore of order \(c \ell _n^{-d} a_n^{2-d}= c b_n^{-d+2} \ell _n^{-2d+2}\)—see [17]. As in Theorem 4.2 we divert the flow onto the regions of high conductance, so as to obtain some cubes in which the new flow has zero energy. To estimate the energy in the high conductance bonds, note that we have \(2(d-1)b_n^{d-2}\) sets of parallel paths of edges of high conductance, and each path is of length \(M b_n\), so the flow in each edge is \(F_n= \ell _n^{-d+1}/ ((2d-2) b_n^{d-2})\). Hence the total energy dissipation in the high conductance edges is

$$\begin{aligned} 2(d-1) M b_n^{d-1} K^{-1} F_n^2= \frac{ c'K^{-1} M b_n^{d-1}}{ \ell _n^{2d-2} b_n^{2d-4}} = \frac{c'K^{-1} M }{ \ell _n^{2d-2} b_n^{d-3}}. \end{aligned}$$

We therefore need

$$\begin{aligned} \frac{c'K^{-1} M }{ \ell _n^{2d-2} b_n^{d-3}} < \frac{c}{ b_n^{d-2} \ell _n^{2d-2}}, \end{aligned}$$

that is we need to choose \(K_n > c M b_n\) for some constant \(c\). Since

$$\begin{aligned} \mathrm{\mathbb {E} }\mu _e^p \asymp \sum _n \frac{K_n^p b_n^{d-1}}{a_n^d} \asymp M^p \sum _n \frac{b_n^{d-1+p}}{a_n^d}, \end{aligned}$$

we find that in \(d\ge 3\) our example also has \(\mathrm{\mathbb {E} }\mu _e^{\pm p}<\infty \) if and only if \(p<1\).