1 Introduction

Let \((\Omega ,(\mathcal{F}_t)_{t\ge 0},(X_t)_{t\ge 0},(P_t)_{t\ge 0},(\mathbb {P}_x)_{x\in E\cup \{\partial \}})\) be a time-homogeneous Markov process with state space \(E\cup \{\partial \}\) [31, Definition III.1.1], where \((E,\mathcal{E})\) is a measurable space and \(\partial \not \in E\). We recall that \(\mathbb {P}_x(X_0=x)=1\) and that \((P_t)_{t\ge 0}\) is the transition function of the process, satisfying the usual measurability assumptions and the Chapman–Kolmogorov equation. The family \((P_t)_{t\ge 0}\) defines a semi-group of operators on the set \(\mathcal{B}(E\cup \{\partial \})\) of bounded Borel functions on \(E\cup \{\partial \}\) endowed with the uniform norm. We will also denote by \(p(x;t,dy)\) its transition kernel, i.e. \(P_t f(x)=\int _{E\cup \{\partial \}} f(y) p(x;t,dy)\) for all \(f\in \mathcal{B}(E\cup \{\partial \})\). For every probability measure \(\mu \) on \(E\cup \{\partial \}\), we will use the notation

$$\begin{aligned} \mathbb {P}_\mu (\cdot ):=\int _{E\cup \{\partial \}}\mathbb {P}_x(\cdot )\mu (dx). \end{aligned}$$

We shall denote by \(\mathbb {E}_x\) (resp. \(\mathbb {E}_\mu \)) the expectation corresponding to \(\mathbb {P}_x\) (resp. \(\mathbb {P}_\mu \)).

We consider Markov processes absorbed at \(\partial \). More precisely, we assume that \(X_s=\partial \) implies \(X_t=\partial \) for all \(t\ge s\). This implies that

$$\begin{aligned} \tau _\partial :=\inf \{t\ge 0,X_t=\partial \} \end{aligned}$$

is a stopping time. We also assume that \(\tau _\partial <\infty \) \(\mathbb {P}_x\)-a.s. for all \(x\in E\), and that \(\mathbb {P}_x(t<\tau _\partial )>0\) for all \(x\in E\) and \(t\ge 0\).

Our first goal is to prove that Assumption (A) below is a necessary and sufficient criterion for the existence of a unique quasi-limiting distribution \(\alpha \) on \(E\) for the process \((X_t,t\ge 0)\), i.e. a probability measure \(\alpha \) such that, for every probability measure \(\mu \) on \(E\) and all \(A\in \mathcal{E}\),

$$\begin{aligned} \lim _{t\rightarrow +\infty }\mathbb {P}_\mu (X_t\in A\mid t<\tau _\partial )=\alpha (A), \end{aligned}$$
(1.1)

where, in addition, the convergence is exponential and uniform with respect to \(\mu \) and \(A\). In particular, \(\alpha \) is also the unique quasi-stationary distribution [28], i.e. the unique probability measure \(\alpha \) such that \(\mathbb {P}_\alpha (X_t\in \cdot \mid t<\tau _\partial )=\alpha (\cdot )\) for all \(t\ge 0\).

Assumption (A)

There exists a probability measure \(\nu \) on \(E\) such that

  1. (A1)

    there exist \(t_0,c_1>0\) such that for all \(x\in E\),

    $$\begin{aligned} \mathbb {P}_x(X_{t_0}\in \cdot \mid t_0<\tau _\partial )\ge c_1\nu (\cdot ); \end{aligned}$$
  2. (A2)

    there exists \(c_2>0\) such that for all \(x\in E\) and \(t\ge 0\),

    $$\begin{aligned} \mathbb {P}_\nu (t<\tau _\partial )\ge c_2\mathbb {P}_x(t<\tau _\partial ). \end{aligned}$$

Theorem 1.1

Assumption (A) implies the existence of a probability measure \(\alpha \) on \(E\) such that, for any initial distribution \(\mu \),

$$\begin{aligned} \left\| \mathbb {P}_\mu (X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot )\right\| _{TV}\le 2(1-c_1c_2)^{\lfloor t/t_0\rfloor }, \end{aligned}$$
(1.2)

where \(\lfloor \cdot \rfloor \) is the integer part function and \(\Vert \cdot \Vert _{TV}\) is the total variation norm.

Conversely, if there is uniform exponential convergence for the total variation norm in (1.1), then Assumption (A) holds true.

Stronger versions of this theorem and of the other results presented in the introduction will be given in the next sections.
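To make the statement concrete, here is a minimal numerical sketch (not taken from the paper) on an assumed three-state example: a continuous-time chain on \(E=\{1,2\}\) with a cemetery state \(\partial \), for which the conditioned laws in (1.1) and the total variation decay in (1.2) can be computed exactly by matrix exponentials. All rates below are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Generator on E ∪ {∂}; the last coordinate is the absorbing state ∂.
# All rates are assumptions made for this illustration only.
Q = np.array([[-3.0,  2.0, 1.0],   # from 1: to 2 at rate 2, to ∂ at rate 1
              [ 1.0, -3.5, 2.5],   # from 2: to 1 at rate 1, to ∂ at rate 2.5
              [ 0.0,  0.0, 0.0]])  # ∂ is absorbing

def conditioned_law(x, t):
    """Law of X_t under P_x given t < tau_partial."""
    p = expm(t * Q)[x, :2]         # mass on the non-absorbed states
    return p / p.sum()

# The conditioned laws started from 1 and from 2 merge exponentially fast,
# as (1.2) predicts; their common limit is the quasi-stationary law alpha.
for t in [1.0, 2.0, 4.0, 8.0]:
    tv = 0.5 * np.abs(conditioned_law(0, t) - conditioned_law(1, t)).sum()
    print(f"t = {t:3.0f}   TV distance between conditioned laws = {tv:.2e}")
```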

The quasi-stationary distribution describes the distribution of the process on the event of non-absorption. It is well known (see [28]) that when \(\alpha \) is a quasi-stationary distribution, there exists \(\lambda _0>0\) such that, for all \(t\ge 0\),

$$\begin{aligned} \mathbb {P}_{\alpha }(t<\tau _{\partial })=e^{-\lambda _0 t}. \end{aligned}$$
(1.3)

The following proposition characterizes the limiting behaviour of the absorption probability for other initial distributions.

Proposition 1.2

There exists a non-negative function \(\eta \) on \(E\cup \{\partial \}\), positive on \(E\) and vanishing on \(\partial \), such that

$$\begin{aligned} \mu (\eta )=\lim _{t\rightarrow \infty } e^{\lambda _0 t}\mathbb {P}_\mu (t<\tau _\partial ), \end{aligned}$$

where the convergence is uniform on the set of probability measures \(\mu \) on \(E\).
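On a finite state space this limit can be read off a spectral decomposition, which gives a quick way to visualize Proposition 1.2. The sketch below (the same assumed toy rates as above, not the paper's method) uses the sub-Markovian generator restricted to \(E\): \(\lambda _0\) is minus its leading eigenvalue, \(\alpha \) the associated normalized left eigenvector, and \(\eta \) the right eigenvector normalized so that \(\alpha (\eta )=1\).

```python
import numpy as np
from scipy.linalg import eig, expm

# Sub-Markovian generator restricted to E = {1, 2} (assumed toy rates).
Qsub = np.array([[-3.0,  2.0],
                 [ 1.0, -3.5]])

w, vl, vr = eig(Qsub, left=True, right=True)
i = np.argmax(w.real)                         # leading eigenvalue is -lambda_0
lam0 = -w[i].real
alpha = vl[:, i].real; alpha /= alpha.sum()   # quasi-stationary distribution
eta = vr[:, i].real;   eta /= alpha @ eta     # normalization alpha(eta) = 1

# Check eta(x) = lim_t e^{lambda_0 t} P_x(t < tau_partial):
t = 10.0
print("e^(lam0 t) P_x(t < tau):", np.exp(lam0 * t) * expm(t * Qsub).sum(axis=1))
print("eta =", eta, "  lambda_0 =", lam0)
```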

Our second goal is to study consequences of Assumption (A) on the behavior of the process \(X\) conditioned to never be absorbed, usually referred to as the \(Q\)-process (see [1] in discrete time and for example [5] in continuous time).

Theorem 1.3

Assumption (A) implies that the family \((\mathbb {Q}_x)_{x\in E}\) of probability measures on \(\Omega \) defined by

$$\begin{aligned} \mathbb {Q}_x(A)=\lim _{t\rightarrow +\infty }\mathbb {P}_x(A\mid t<\tau _\partial ),\ \forall A\in \mathcal{F}_s,\ \forall s\ge 0, \end{aligned}$$

is well defined and the process \((\Omega ,(\mathcal{F}_t)_{t\ge 0},(X_t)_{t\ge 0},(\mathbb {Q}_x)_{x\in E})\) is an \(E\)-valued homogeneous Markov process. In addition, this process admits the unique invariant distribution

$$\begin{aligned} \beta (dx)=\frac{\eta (x)\alpha (dx)}{\int _E \eta (y)\alpha (dy)} \end{aligned}$$

and, for any \(x\in E\),

$$\begin{aligned} \left\| \mathbb {Q}_{x}(X_t\in \cdot )-\beta \right\| _{TV}\le 2(1-c_1c_2)^{\lfloor t/t_0\rfloor }. \end{aligned}$$

The study of quasi-stationary distributions goes back to [38] for branching processes and [12, 13, 33] for Markov chains in finite or denumerable state spaces, satisfying irreducibility assumptions. In these works, the existence and the convergence to a quasi-stationary distribution are proved using spectral properties of the generator of the absorbed Markov process. This is also the case for most subsequent works. For example, an extensively developed tool to study birth and death processes is based on the orthogonal polynomials techniques of [23], applied to quasi-stationary distributions in [6, 22, 34]. For diffusion processes, we can refer to [30] and more recently [4, 5, 26], all based on the spectral decomposition of the generator. Most of these works only study one-dimensional processes, whose reversibility helps for the spectral decomposition. Processes in higher dimensions were studied either assuming a self-adjoint generator in [5], or using abstract criteria from spectral theory as in [10, 30] (the latter in infinite dimension). Other formulations in terms of abstract spectral theoretical criteria were also studied in [24]. The reader can refer to [11, 28, 36] for introductory presentations of the topic.

Most of the previously cited works provide neither convergence results nor estimates on the speed of convergence. The articles studying these questions either assume abstract conditions which are very difficult to check in practice [24, 33], or prove exponential convergence for very weak norms [4, 5, 26].

More probabilistic methods were also developed. The oldest reference is based on a renewal technique [21] and proves the existence of, and convergence to, a quasi-stationary distribution for discrete processes for which Assumption (A1) is not satisfied. More recently, one-dimensional birth and death processes with a unique quasi-stationary distribution have been shown to satisfy (1.1) with uniform convergence in total variation [27]. Convergence in total variation for processes in discrete state space satisfying strong mixing conditions was obtained in [9] using Fleming–Viot particle systems whose empirical distribution approximates conditional distributions [37]. Sufficient conditions for exponential convergence of conditioned systems in discrete time can be found in [15] with applications of discrete generation particle techniques in signal processing, statistical machine learning, and quantum physics. We also refer the reader to [16–18] for approximation techniques of non-absorbed trajectories in terms of genealogical trees.

In this work, we obtain in Sect. 2 necessary and sufficient conditions for exponential convergence to a unique quasi-stationary distribution for general (virtually any) Markov processes (we state a stronger form of Theorem 1.1). We also obtain spectral properties of the infinitesimal generator as a corollary of our main result. Our non-spectral approach and results fundamentally differ from all the previously cited references, except [9, 27] which only focus on very specific cases. In Sect. 3, we show, using penalisation techniques [32], that the same conditions are sufficient to prove the existence of the \(Q\)-process and its exponential ergodicity, uniformly in total variation. This is the first general result showing the link between quasi-stationary distributions and \(Q\)-processes, since we actually prove that, for general Markov processes, the uniform exponential convergence to a quasi-stationary distribution implies the existence and ergodicity of the \(Q\)-process.

Section 4 is devoted to applications of the previous results to specific examples of processes. Our goal is not to obtain the most general criteria, but to show how Assumption (A) can be checked in different practical situations. We first obtain necessary and sufficient conditions for one-dimensional birth and death processes with catastrophe in Sect. 4.1.1. We show next how the method of the proof can be extended to treat several multi-dimensional examples in Sect. 4.1.2. One of these examples is infinite-dimensional (as in [10]) and assumes Brownian mutations in a continuous type space. Our last example is the neutron transport process in a bounded domain, absorbed at the boundary (Sect. 4.2). This example belongs to the class of piecewise-deterministic Markov processes, for which, to our knowledge, no results on quasi-stationary distributions are known. In this case, the absorption rate is unbounded, in the sense that the absorption time cannot be stochastically dominated by an exponential random variable with constant parameter. Other examples of Markov processes with unbounded absorption rate can be studied thanks to Theorem 2.1. For example, the study of diffusion processes on \(\mathbb {R}^+\) absorbed at 0, or on \(\mathbb {R}_+^d\), absorbed at 0 or at \(\mathbb {R}_+^d{\setminus }(\mathbb {R}_+^*)^d\), is relevant for population dynamics (see for example [4, 5]) and is studied in [8]. More generally, the great diversity of applications of the very similar probabilistic criterion for processes without absorption (see all the works building on [29]) indicates the wide range of applications and extensions of our criteria that can be expected.

The paper ends with the proof of the main results of Sects. 2 and 3 in Sects. 5 and 6.

2 Existence and uniqueness of a quasi-stationary distribution

2.1 Assumptions

We begin with some comments on Assumption (A).

When \(E\) is a Polish space, Assumption (A1) implies that \(X_t\) returns quickly to compact sets from any initial condition. Indeed, there exists a compact set \(K\subset E\) such that \(\nu (K)>0\), and therefore \(\inf _{x\in E}\mathbb {P}_x(\tau _{K\cup \{\partial \}}<t_0)>0\), where \(\tau _{K\cup \{\partial \}}\) is the first hitting time of \(K\cup \{\partial \}\) by \(X_t\). When \(E=(0,+\infty )\) or \(\mathbb {N}\) and \(\partial =0\), this is implied by the fact that the process \(X\) comes down from infinity [4] (see Sect. 4.1 for the discrete case).

Assumption (A2) means that the highest non-absorption probability among all initial points in \(E\) has the same order of magnitude as the non-absorption probability starting from distribution \(\nu \). Note also that (A2) holds true when, for some \(A\in \mathcal{E}\) such that \(\nu (A)>0\) and some \(c'_2>0\),

$$\begin{aligned} \inf _{y\in A}\mathbb {P}_y\left( t<\tau _\partial \right) \ge c'_2 \sup _{x\in E}\mathbb {P}_x\left( t<\tau _\partial \right) . \end{aligned}$$

We now introduce the apparently weaker assumption (A\('\)) and the stronger assumption (A\(''\)), proved to be equivalent in Theorem 2.1 below.

Assumption

(A\('\)) There exists a family of probability measures \((\nu _{x_1,x_2})_{x_1,x_2\in E}\) on \(E\) such that

(\(\hbox {A}1'\)):

there exist \(t_0,c_1>0\) such that, for all \(x_1,x_2\in E\),

$$\begin{aligned} \mathbb {P}_{x_i}(X_{t_0}\in \cdot \mid t_0<\tau _\partial )\ge c_1\nu _{x_1,x_2}(\cdot )\quad \text {for }i=1,2; \end{aligned}$$
(\(\hbox {A}2'\)):

there exists a constant \(c_2>0\) such that for all \(x_1,x_2\in E\) and \(t\ge 0\),

$$\begin{aligned} \mathbb {P}_{\nu _{x_1,x_2}}(t<\tau _\partial )\ge c_2\sup _{x\in E}\mathbb {P}_x(t<\tau _\partial ). \end{aligned}$$

Assumption

(A\(''\)) Assumption (A1) is satisfied and

(\(\hbox {A}2''\)):

for any probability measure \(\mu \) on \(E\), the constant \(c_2(\mu )\) defined by

$$\begin{aligned} c_2(\mu ):= \inf _{t\ge 0,\,\rho \in \mathcal{M}_1(E)}\frac{\mathbb {P}_{\mu }(t<\tau _\partial )}{\mathbb {P}_\rho (t<\tau _\partial )} \end{aligned}$$

is positive, where \(\mathcal{M}_1(E)\) is the set of probability measures on \(E\).

2.2 Results

The next result is a detailed version of Theorem 1.1.

Theorem 2.1

The following conditions (i)–(vi) are equivalent.

(i):

Assumption \((A)\).

(ii):

Assumption \((A')\).

(iii):

Assumption \((A'')\).

(iv):

There exist a probability measure \(\alpha \) on \(E\) and two constants \(C,\gamma >0\) such that, for every initial distribution \(\mu \) on \(E\),

$$\begin{aligned} \left\| \mathbb {P}_\mu (X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot ) \right\| _{TV}\le C e^{-\gamma t},\,\forall t\ge 0. \end{aligned}$$
(2.1)
(v):

There exist a probability measure \(\alpha \) on \(E\) and two constants \(C,\gamma >0\) such that, for all \(x\in E\),

$$\begin{aligned} \left\| \mathbb {P}_x(X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot )\right\| _{TV}\le C e^{-\gamma t},\,\forall t\ge 0. \end{aligned}$$
(vi):

There exists a probability measure \(\alpha \) on \(E\) such that

$$\begin{aligned} \int _0^\infty \sup _{x\in E}\left\| \mathbb {P}_x(X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot )\right\| _{TV}dt<\infty . \end{aligned}$$
(2.2)

In this case, \(\alpha \) is the unique quasi-stationary distribution for the process. In addition, if Assumption \((A')\) is satisfied, then (iv) holds with the explicit bound

$$\begin{aligned} \left\| \mathbb {P}_\mu (X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot )\right\| _{TV}\le 2(1-c_1c_2)^{\lfloor t/t_0\rfloor }. \end{aligned}$$
(2.3)

This result and the others of this section are proved in Sect. 5.

One might expect the constant \(C\) in (iv) to depend on \(\mu \) proportionally to \(\Vert \mu -\alpha \Vert _{TV}\). This is indeed the case, but with a constant of proportionality depending on \(c_2(\mu )\), defined in Assumption (A2\(''\)), as shown by the following result.

Corollary 2.2

Conditions (i–vi) imply that, for all probability measures \(\mu _1,\mu _2\) on \(E\) and all \(t>0\),

$$\begin{aligned} \left\| \mathbb {P}_{\mu _1}(X_t\in \cdot \mid t<\tau _\partial )\!-\!\mathbb {P}_{\mu _2}(X_t\in \cdot \mid t<\tau _\partial )\right\| _{TV}\le \frac{(1-c_1 c_2)^{\lfloor t/t_0\rfloor }}{c_2(\mu _1)\wedge c_2(\mu _2)}\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

Remark 1

It immediately follows from (1.3) and (A\(''\)) that

$$\begin{aligned} e^{-\lambda _0 t}\le \sup _{\rho \in \mathcal{M}_1(E)}\mathbb {P}_{\rho }(t<\tau _\partial ) \le \frac{e^{-\lambda _0 t}}{c_2(\alpha )}. \end{aligned}$$
(2.4)

In the proof of Theorem 2.1, we actually prove that one can take

$$\begin{aligned} c_2(\alpha )= \sup _{s>0} \exp \left( -\lambda _0s-\frac{Ce^{(\lambda _0-\gamma )s}}{1-e^{-\gamma s}}\right) \!\!, \end{aligned}$$
(2.5)

where \(C\) and \(\gamma \) satisfy (2.1).

Remark 2

In the case of Markov processes without absorption, Meyn and Tweedie [29, Chapter 16] give several equivalent criteria for exponential ergodicity with respect to \(\Vert \cdot \Vert _{TV}\), among which are unconditioned versions of (iv) and (v). The results above can thus be interpreted as an extension of these criteria to conditioned processes. Several differences remain.

  1.

    In the case without absorption, the equivalence between the unconditioned versions of criteria (iv) and (v) is obvious. In our case, the proof is not immediate.

  2.

    In the case without absorption, the unconditioned version of criterion (vi) can be replaced by the weaker assumption \(\sup _{x\in E}\Vert \mathbb {P}_x(X_t\in \cdot )-\alpha \Vert _{TV}\rightarrow 0\) when \(t\rightarrow +\infty \). Whether (vi) can be improved in such a way remains an open problem in general. However, if one assumes that there exists \(c'_2>0\) such that for all \(t\ge 0\),

    $$\begin{aligned} \inf _{\rho \in \mathcal{M}_1(E)}\mathbb {P}_\rho (t<\tau _\partial )\ge c'_2 \sup _{\rho \in \mathcal{M}_1(E)}\mathbb {P}_\rho (t<\tau _\partial ), \end{aligned}$$

    then one can adapt the arguments of Corollary 2.2 to prove that

    $$\begin{aligned} \sup _{x\in E}\left\| \mathbb {P}_x(X_t\in \cdot \mid t<\tau _\partial )-\alpha (\cdot )\right\| _{TV}\xrightarrow [t\rightarrow \infty ]{}0 \end{aligned}$$

    implies (i–vi).

  3.

    The extension to quasi-stationary distributions of criteria based on Lyapunov functions as in [29] requires a different approach, because the survival probability and conditional expectations cannot easily be expressed in terms of the infinitesimal generator.

  4.

    In the irreducible case, there is a weaker alternative to the Dobrushin-type criterion of Hypothesis (A1), known as Doeblin’s condition: there exist \(\mu \in \mathcal{M}_1(E)\), \(\varepsilon <1\) and \(t_0,\delta >0\) such that, for every measurable set \(A\) satisfying \(\mu (A)>\varepsilon \),

    $$\begin{aligned} \inf _{x\in E}\mathbb {P}_x(X_{t_0}\in A)\ge \delta . \end{aligned}$$

    It is possible to check that the conditional version of this criterion implies the existence of a probability measure \(\nu \ne \mu \) such that (A1) is satisfied. Unfortunately, \(\nu \) is far from explicit in this case and (A2), which must involve the measure \(\nu \), is no longer a tractable condition, unless one can prove (A2\(''\)) directly instead of (A2).

It is well known (see [28]) that when \(\alpha \) is a quasi-stationary distribution, there exists \(\lambda _0>0\) such that, for all \(t\ge 0\),

$$\begin{aligned} \mathbb {P}_{\alpha }(t<\tau _{\partial })=e^{-\lambda _0 t}\quad \text {and}\quad e^{\lambda _0 t}\alpha P_t=\alpha . \end{aligned}$$
(2.6)

The next result is a detailed version of Proposition 1.2.

Proposition 2.3

There exists a non-negative function \(\eta \) on \(E\cup \{\partial \}\), positive on \(E\) and vanishing on \(\partial \), defined by

$$\begin{aligned} \eta (x)=\lim _{t\rightarrow \infty } \frac{\mathbb {P}_x(t<\tau _\partial )}{\mathbb {P}_\alpha (t<\tau _\partial )}=\lim _{t\rightarrow +\infty } e^{\lambda _0 t}\mathbb {P}_x(t<\tau _\partial ), \end{aligned}$$

where the convergence holds for the uniform norm on \(E\cup \{\partial \}\) and \(\alpha (\eta )=1\). Moreover, the function \(\eta \) is bounded, belongs to the domain of the infinitesimal generator \(L\) of the semi-group \((P_t)_{t\ge 0}\) on \((\mathcal{B}(E\cup \{\partial \}),\Vert \cdot \Vert _\infty )\) and

$$\begin{aligned} L\eta =-\lambda _0\eta . \end{aligned}$$

In the irreducible case, exponential ergodicity is known to be related to a spectral gap property (see for instance [25]). Our results imply a similar property under the assumptions (i–vi) for the infinitesimal generator \(L\) of the semi-group on \((\mathcal{B}(E\cup \{\partial \}),\Vert \cdot \Vert _{\infty })\).

Corollary 2.4

If \(f\in \mathcal{B}(E\cup \{\partial \})\) is a right eigenfunction for \(L\) for an eigenvalue \(\lambda \), then either

  1.

    \(\lambda =0\) and \(f\) is constant,

  2.

    or \(\lambda =-\lambda _0\) and \(f=\alpha (f)\eta \),

  3.

    or \(\lambda \le -\lambda _0-\gamma \), \(\alpha (f)=0\) and \(f(\partial )=0\).
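On the toy generator used above, this trichotomy can be observed directly. The following numerical sketch (same assumed rates, not from the paper) lists the right eigenpairs of the full generator on \(E\cup \{\partial \}\): the constants for \(\lambda =0\), a multiple of \(\eta \) (vanishing at \(\partial \)) for \(\lambda =-\lambda _0\), and a remaining eigenfunction with \(\alpha (f)=0\), \(f(\partial )=0\) and eigenvalue below \(-\lambda _0-\gamma \).

```python
import numpy as np

# Full generator on E ∪ {∂} (same assumed toy rates as before).
Q = np.array([[-3.0,  2.0, 1.0],
              [ 1.0, -3.5, 2.5],
              [ 0.0,  0.0, 0.0]])

w, V = np.linalg.eig(Q)
for i in np.argsort(-w.real):
    f = V[:, i].real
    f = f / np.abs(f).max()           # normalize for readability
    print(f"lambda = {w[i].real:7.3f}   f = {np.round(f, 3)}")
# Expected pattern: lambda = 0 with constant f; lambda = -lambda_0 ~ -1.814
# with f proportional to (eta, 0); a third eigenvalue <= -lambda_0 - gamma
# with f(∂) = 0 and alpha(f) = 0.
```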

3 Existence and exponential ergodicity of the \(Q\)-process

We now study the behavior of the \(Q\)-process. The next result is a detailed version of Theorem 1.3.

Theorem 3.1

Assumption (A) implies the three following properties.

  1. (i)

    Existence of the Q-process. There exists a family \((\mathbb {Q}_x)_{x\in E}\) of probability measures on \(\Omega \) defined by

    $$\begin{aligned} \lim _{t\rightarrow +\infty }\mathbb {P}_x(A\mid t<\tau _\partial )=\mathbb {Q}_x(A) \end{aligned}$$

    for all \(\mathcal{F}_s\)-measurable set \(A\). The process \((\Omega ,(\mathcal{F}_t)_{t\ge 0},(X_t)_{t\ge 0},(\mathbb {Q}_x)_{x\in E})\) is an \(E\)-valued homogeneous Markov process. In addition, if \(X\) is a strong Markov process under \(\mathbb {P}\), then so is \(X\) under \(\mathbb {Q}\).

  2. (ii)

    Transition kernel. The transition kernel of the Markov process \(X\) under \((\mathbb {Q}_x)_{x\in E}\) is given by

    $$\begin{aligned} \tilde{p}(x;t,dy)=e^{\lambda _0 t}\frac{\eta (y)}{\eta (x)}p(x;t,dy). \end{aligned}$$

    In other words, for all \(\varphi \in \mathcal{B}(E)\) and \(t\ge 0\),

    $$\begin{aligned} \tilde{P}_t\varphi (x)=\frac{e^{\lambda _0 t}}{\eta (x)}P_t(\eta \varphi )(x) \end{aligned}$$
    (3.1)

    where \((\tilde{P}_t)_{t\ge 0}\) is the semi-group of \(X\) under \(\mathbb {Q}\).

  3. (iii)

    Exponential ergodicity. The probability measure \(\beta \) on \(E\) defined by

    $$\begin{aligned} \beta (dx) =\eta (x)\alpha (dx). \end{aligned}$$

    is the unique invariant distribution of \(X\) under \(\mathbb {Q}\). Moreover, for any initial distributions \(\mu _1,\mu _2\) on \(E\),

    $$\begin{aligned} \left\| \mathbb {Q}_{\mu _1}(X_t\in \cdot )-\mathbb {Q}_{\mu _2}(X_t\in \cdot )\right\| _{TV}\le (1-c_1c_2)^{\lfloor t/t_0\rfloor }\Vert \mu _1-\mu _2\Vert _{TV}, \end{aligned}$$

    where \(\mathbb {Q}_\mu =\int _E \mathbb {Q}_x\,\mu (dx)\).

Note that, as an immediate consequence of Theorem 2.1, the uniform exponential convergence to a quasi-stationary distribution implies points (i–iii) of Theorem 3.1.
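For intuition, the \(Q\)-process of Theorem 3.1 reduces, on a finite state space, to a Doob \(h\)-transform of the sub-Markovian semi-group by \(\eta \). A sketch on the same assumed toy chain: the kernel (3.1) is checked to be a genuine Markov kernel, and \(\beta \propto \eta \,\alpha \) is checked to be invariant.

```python
import numpy as np
from scipy.linalg import eig, expm

Qsub = np.array([[-3.0,  2.0],
                 [ 1.0, -3.5]])             # assumed toy rates, as above
w, vl, vr = eig(Qsub, left=True, right=True)
i = np.argmax(w.real); lam0 = -w[i].real
alpha = vl[:, i].real; alpha /= alpha.sum()
eta = vr[:, i].real;   eta /= alpha @ eta

t = 1.0
Pt = expm(t * Qsub)                          # sub-Markovian semi-group P_t
Pt_tilde = np.exp(lam0 * t) * Pt * eta[None, :] / eta[:, None]   # kernel (3.1)
print("row sums of P~_t:", Pt_tilde.sum(axis=1))    # = 1: Markov kernel
beta = alpha * eta / (alpha @ eta)                  # invariant law of Theorem 3.1
print("beta P~_t - beta:", beta @ Pt_tilde - beta)  # ~ 0: invariance
```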

We now investigate the characterization of the \(Q\)-process in terms of its weak infinitesimal generator (see [19, Ch. I.6]). Let us recall the definition of bounded pointwise convergence: for all \(f_n, f\) in \(\mathcal{B}(E\cup \{\partial \})\), we say that

$$\begin{aligned} \text {b.p.-}\lim _{n\rightarrow \infty } f_n=f \end{aligned}$$

if and only if \(\sup _{n} \Vert f_n\Vert _{\infty }<\infty \) and for all \(x\in E\cup \{\partial \},\,f_n(x)\rightarrow f(x)\).

The weak infinitesimal generator \(L^w\) of \((P_t)\) is defined as

$$\begin{aligned} L^w f= \text {b.p.}-\lim _{h\rightarrow 0} \frac{P_h f-f}{h}, \end{aligned}$$

for all \(f\in \mathcal{B}(E\cup \{\partial \})\) such that the above b.p.–limit exists and

$$\begin{aligned} \text {b.p.}-\lim _{h\rightarrow 0} P_h L^w f=L^w f. \end{aligned}$$

We call weak domain and denote by \(\mathcal{D}(L^w)\) the set of such functions \(f\). We define similarly the b.p.–limit in \(\mathcal{B}(E)\) and the weak infinitesimal generator \(\tilde{L}^w\) of \((\tilde{P}_t)\) and its weak domain \(\mathcal{D}(\tilde{L}^w)\).

Theorem 3.2

Assume that (A) is satisfied. Then

$$\begin{aligned} \mathcal{D}(\tilde{L}^w)=\left\{ f\in \mathcal{B}(E),\;\eta f\in \mathcal{D}(L^w)\text { and }\frac{L^w(\eta f)}{\eta }\text { is bounded}\right\} \end{aligned}$$
(3.2)

and, for all \(f\in \mathcal{D}(\tilde{L}^w)\),

$$\begin{aligned} \tilde{L}^wf=\lambda _0 f+\frac{L^w(\eta f)}{\eta }. \end{aligned}$$

If in addition \(E\) is a topological space and \(\mathcal E\) is the Borel \(\sigma \)-field, and if for all open set \(U\subset E\) and \(x\in U\),

$$\begin{aligned} \lim _{h\rightarrow 0} p(x;h,U) = \lim _{h\rightarrow 0} P_h 1\!\!1_{U}(x)=1, \end{aligned}$$
(3.3)

then the semi-group \((\tilde{P}_t)\) is uniquely determined by its weak infinitesimal generator \(\tilde{L}^w\).

Let us emphasize that (3.3) is obviously satisfied if the process \(X\) is almost surely càdlàg.
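On a finite space, the generator formula of Theorem 3.2 becomes a plain matrix identity, \(\tilde{L}=\lambda _0 I+\mathrm{diag}(\eta )^{-1} L\,\mathrm{diag}(\eta )\). The following sketch (same assumed toy rates as before) checks that the transformed generator is conservative, i.e. \(\tilde{L}\mathbf {1}=0\).

```python
import numpy as np

L = np.array([[-3.0,  2.0],
              [ 1.0, -3.5]])               # generator restricted to E (assumed)
w, vr = np.linalg.eig(L)
i = np.argmax(w.real); lam0 = -w[i].real
eta = np.abs(vr[:, i].real)                # positive right eigenvector

# L~ f = lambda_0 f + L(eta f)/eta, i.e. L~ = lam0*I + diag(1/eta) L diag(eta)
L_tilde = lam0 * np.eye(2) + np.diag(1.0 / eta) @ L @ np.diag(eta)
print("row sums of L~:", L_tilde.sum(axis=1))   # ~ 0: conservative generator
```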

Remark 3

One can wonder whether the weak infinitesimal generator can be replaced in the previous result by the standard one. The Hille–Yosida theorem would then give necessary and sufficient conditions for a strongly continuous contraction semi-group on a Banach space \(B\) to be characterized by its standard infinitesimal generator (see for example [20, Thm 1.2.6, Prop 1.2.9]). However, this is an open question that we could not solve. To understand the difficulty, observe that even the strong continuity of \(\tilde{P}\) cannot be easily deduced from the strong continuity of \(P\): in view of (3.1), if \(\eta f\in B\), we have

$$\begin{aligned} \left\| \tilde{P}_t f-f\right\| _\infty \xrightarrow [t\rightarrow 0]{} 0 \quad \Leftrightarrow \quad \left\| \frac{1}{\eta }\left( P_t(\eta f)-\eta f\right) \right\| _\infty \xrightarrow [t\rightarrow 0]{} 0. \end{aligned}$$

We do not know whether the last convergence can be deduced from the strong continuity of \(P\), or whether counterexamples exist.

4 Applications

This section is devoted to the application of Theorems 2.1 and 3.1 to discrete and continuous examples. Our goal is to show how Assumption (A) can be checked in different practical situations.

4.1 Generalized birth and death processes

Our goal is to apply our results to generalized birth and death processes. In Sect. 4.1.1, we extend known criteria to one dimensional birth and death processes with catastrophe. In Sect. 4.1.2, we apply a similar method to multi-dimensional and infinite dimensional birth and death processes.

4.1.1 Birth and death processes with catastrophe

We consider an extension of classical birth and death processes with possible mass extinction. Our goal is to extend the recent result from [27] on the characterisation of exponential convergence to a unique quasi-stationary distribution. The existence of quasi-stationary distributions for similar processes was studied in [35].

Let \(X\) be a birth and death process on \(\mathbb {Z}_+\) with birth rates \((b_n)_{n\ge 0}\) and death rates \((d_n)_{n\ge 0}\), with \(b_0=d_0=0\) and \(b_k,d_k>0\) for all \(k\ge 1\). We also allow the process to jump to \(0\) from any state \(n\ge 1\) at rate \(a_n\ge 0\). In particular, the jump rate from \(1\) to \(0\) is \(a_1+d_1\). This process is absorbed at \(\partial =0\).

Theorem 4.1

Assume that \(\sup _{n\ge 1} a_n<\infty \). Then conditions (i–vi) of Theorem 2.1 are equivalent to

$$\begin{aligned} S:=\sum _{k\ge 1}\frac{1}{d_k\alpha _k}\sum _{l\ge k} \alpha _l <\infty , \end{aligned}$$
(4.1)

with \( \alpha _k=\left( \prod _{i=1}^{k-1} b_i\right) \!/\! \left( \prod _{i=1}^{k} d_i\right) .\)

Moreover, there exist constants \(C,\gamma >0\) such that

$$\begin{aligned} \left\| \mathbb {P}_{\mu _1}(X_t\in \cdot \mid t<\tau _\partial )-\mathbb {P}_{\mu _2}(X_t\in \cdot \mid t<\tau _\partial )\right\| _{TV}\le Ce^{-\gamma t}\Vert \mu _1-\mu _2\Vert _{TV} \end{aligned}$$
(4.2)

for all \(\mu _1,\mu _2\in \mathcal {M}_1(E)\) and \(t\ge 0\).
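As a sanity check, the series \(S\) in (4.1) can be evaluated numerically. The sketch below uses illustrative logistic rates \(b_n=2n\) and \(d_n=n+0.1\,n^2\) (our choice, not from the paper) and the recursion \(r_k=1+(b_k/d_{k+1})r_{k+1}\) for \(r_k=\sum _{l\ge k}\alpha _l/\alpha _k\), which avoids under- and overflow of the \(\alpha _k\).

```python
import numpy as np

N = 5000                                   # truncation level, assumed large enough
n = np.arange(1, N + 1, dtype=float)
b = 2.0 * n                                # illustrative birth rates
d = n + 0.1 * n**2                         # illustrative death rates

# r_k = (sum_{l >= k} alpha_l) / alpha_k, computed backwards; the tail
# beyond N is neglected (r_N ~ 1), which is harmless since d_n ~ n^2.
r = np.ones(N)
for k in range(N - 2, -1, -1):
    r[k] = 1.0 + (b[k] / d[k + 1]) * r[k + 1]

S = np.sum(r / d)                          # S = sum_k r_k / d_k, as in (4.1)
print(f"S ~ {S:.4f} < infinity: conditions (i-vi) hold for these rates")
```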

The last inequality and the following corollary of Theorem 3.1 are original results, even in the simpler case of birth and death processes without catastrophes.

Corollary 4.2

Under the assumption that \(\sup _{n\ge 1} a_n<\infty \) and \(S<\infty \), the family \((\mathbb {Q}_x)_{x\in E}\) of probability measures on \(\Omega \) defined by

$$\begin{aligned} \lim _{t\rightarrow +\infty }\mathbb {P}_x(A\mid t<\tau _\partial )=\mathbb {Q}_x(A),\ \forall A\in \mathcal{F}_s,\ \forall s\ge 0, \end{aligned}$$
(4.3)

is well defined. In addition, the process \(X\) under \((\mathbb {Q}_x)\) admits the unique invariant distribution

$$\begin{aligned} \beta (dx)=\eta (x)\alpha (dx) \end{aligned}$$

and there exist constants \(C,\gamma >0\) such that, for any \(x\in E\),

$$\begin{aligned} \left\| \mathbb {Q}_{x}(X_t\in \cdot )-\beta \right\| _{TV}\le Ce^{-\gamma t}. \end{aligned}$$
(4.4)

Remark 4

In view of Point 2 of Remark 2, we actually also have the following property: conditionally on non-extinction, \(X\) converges uniformly in total variation to some probability measure \(\alpha \) if and only if it satisfies (i–vi).

Proof of Theorem 4.1

Let \(Y\) be the birth and death process on \(\mathbb {Z}_+\) (without catastrophe) with birth and death rates \(b_n\) and \(d_n\) from state \(n\). The processes \(X\) and \(Y\) can be coupled in such a way that \(X_t=Y_t\) for all \(t<\tau _\partial \), almost surely.

We recall (see [34]) that \(S<\infty \) if and only if the birth and death process \(Y\) comes down from infinity, in the sense that

$$\begin{aligned} \sup _{n\ge 0}\mathbb {E}_n(\tau '_\partial )<\infty , \end{aligned}$$

where \(\tau '_\partial =\inf \{t\ge 0,\ Y_t=0\}\). More precisely, for any \(z\ge 0\),

$$\begin{aligned} \sup _{n\ge z}\mathbb {E}_n(T'_z)= \sum _{k\ge z+1}\frac{1}{d_k\alpha _k}\sum _{l\ge k} \alpha _l <\infty , \end{aligned}$$
(4.5)

where \(T'_z=\inf \{t\ge 0,\ Y_t\le z\}\). Note that this equality remains true even when the sum is infinite.

Let us first assume that (A1) is satisfied. This will be sufficient to prove that \(S<\infty \). Let \(z\) be large enough so that \(\nu (\{1,\ldots ,z\})>0\). Then, for all \(n\ge 1\),

$$\begin{aligned} \mathbb {P}_n(Y_{t_0}\le z)&\ge \mathbb {P}_n(X_{t_0}\le z\,\text { and }\,t_0\le \tau _\partial )\\&\ge c_1 \nu (\{1,\ldots ,z\})\mathbb {P}_n(t_0\le \tau _\partial ). \end{aligned}$$

Since the jump rate of \(X\) to \(0\) from any state is always smaller than \(\overline{q}=d_1+\sup _{n\ge 1} {a_n}\), the absorption time stochastically dominates an exponential r.v. of parameter \(\overline{q}\). Thus \(\mathbb {P}_n(t_0\le \tau _\partial )\ge e^{-\overline{q} t_0}\) and hence

$$\begin{aligned} \inf _{n\ge 1}\mathbb {P}_n(Y_{t_0}\le z)\ge ce^{-\overline{q} t_0}, \end{aligned}$$

for some \(c>0\). Defining \(\theta =\inf \{n\ge 0,\ Y_{nt_0}\le z\}\), we deduce from the Markov property that, for all \(n\ge 1\) and \(k\ge 0\),

$$\begin{aligned} \mathbb {P}_n(\theta > k+1\mid \theta > k)\le \sup _{m>z}\mathbb {P}_m(Y_{t_0}> z)\le 1-ce^{-\overline{q} t_0}. \end{aligned}$$

Thus \(\mathbb {P}_n(\theta > k)\le (1-ce^{-\overline{q} t_0})^k\) and \(\sup _{n\ge z}\mathbb {E}_n(T'_z)\le \sup _{n\ge z}\mathbb {E}_n(t_0\theta )<\infty \). By (4.5), this entails \(S<\infty \).

Conversely, let us assume that \(S<\infty \). For all \(\varepsilon >0\) there exists \(z\) such that

$$\begin{aligned} \sup _{n\ge z}\mathbb {E}_n(T'_z)\le \varepsilon . \end{aligned}$$

Therefore, \(\sup _{n\ge z}\mathbb {P}_n(T'_z\ge 1)\le \varepsilon \) and, applying the Markov property recursively, \(\sup _{n\ge z}\mathbb {P}_n(T'_z\ge k)\le \varepsilon ^k\). Then, for all \(\lambda >0\), there exists \(z\ge 1\) such that

$$\begin{aligned} \sup _{n\ge 1}\mathbb {E}_n(e^{\lambda T'_z})<+\infty . \end{aligned}$$
(4.6)

Fix \(x_0\in E\) and let us check that this exponential moment implies (A2) and then (A1) with \(\nu =\delta _{x_0}\). We choose \(\lambda =1+\overline{q}\) and apply the previous construction of \(z\). Defining the finite set \(K=\{1,2,\ldots ,z\}\cup \{x_0\}\) and \(\tau _K=\inf \{t\ge 0,\,X_t\in K\}\), we thus have

$$\begin{aligned} A:=\sup _{x\in E}\mathbb {E}_x(e^{\lambda \tau _K\wedge \tau _\partial })<\infty . \end{aligned}$$
(4.7)

Let us first observe that, for all \(y,z\in K\), \(\mathbb {P}_y(X_1=z)\mathbb {P}_z(t<\tau _\partial )\le \mathbb {P}_y(t+1<\tau _\partial )\le \mathbb {P}_y(t<\tau _\partial )\). Therefore, the constant \(C^{-1}:=\inf _{y,z\in K}\mathbb {P}_y(X_1=z)>0\) satisfies the following inequality:

$$\begin{aligned} \sup _{x\in K}\mathbb {P}_x(t<\tau _\partial )\le C\inf _{x\in K}\mathbb {P}_x(t<\tau _\partial ),\quad \forall t\ge 0. \end{aligned}$$
(4.8)

Moreover, since \(\lambda \) is larger than the maximum absorption rate \(\overline{q}\), for \(t\ge s\),

$$\begin{aligned} e^{-\lambda s}\mathbb {P}_{x_0}(t-s<\tau _\partial )\le \mathbb {P}_{x_0}(t-s<\tau _\partial )\inf _{x\ge 1}\mathbb {P}_{x}(s<\tau _\partial )\le \mathbb {P}_{x_0}(t<\tau _\partial ). \end{aligned}$$

For all \(x\in E\), we deduce from Chebyshev’s inequality and (4.7) that

$$\begin{aligned} \mathbb {P}_x(t<\tau _K\wedge \tau _\partial )\le Ae^{-\lambda t}. \end{aligned}$$

Using the last three inequalities and the strong Markov property, we have

$$\begin{aligned} \mathbb {P}_x(t<\tau _\partial )&=\mathbb {P}_x(t<\tau _K\wedge \tau _\partial )+\mathbb {P}_x(\tau _K\wedge \tau _\partial \le t<\tau _\partial )\\&\le Ae^{-\lambda t}+\int _0^t \sup _{y\in K\cup \{\partial \}}\mathbb {P}_y(t-s<\tau _\partial )\mathbb {P}_x(\tau _K\wedge \tau _\partial \in ds) \\&\le A\mathbb {P}_{x_0}(t<\tau _\partial )+ C\int _0^t \mathbb {P}_{x_0}(t-s<\tau _\partial )\mathbb {P}_x(\tau _K\wedge \tau _\partial \in ds) \\&\le A\mathbb {P}_{x_0}(t<\tau _\partial )+C\,\mathbb {P}_{x_0}(t<\tau _\partial )\int _0^t e^{\lambda s}\,\mathbb {P}_x(\tau _K\wedge \tau _\partial \in ds)\\&\le A(1+C)\mathbb {P}_{x_0}(t<\tau _\partial ). \end{aligned}$$

This shows (A2) for \(\nu =\delta _{x_0}\).

Let us now show that \((A1)\) is satisfied. We have, for all \(x\in E\),

$$\begin{aligned} \mathbb {P}_x(\tau _K<t)&= \mathbb {P}_x(\tau _K<t\wedge \tau _\partial )\ge \mathbb {P}_x(t<\tau _\partial )-\mathbb {P}_x(t<\tau _K\wedge \tau _\partial )\\&\ge e^{-\overline{q}t}-Ae^{-\lambda t}. \end{aligned}$$

Since \(\lambda >\overline{q}\), there exists \(t_0>0\) such that

$$\begin{aligned} \inf _{x\in E} \mathbb {P}_x(\tau _K<t_0-1) >0. \end{aligned}$$

But the irreducibility of \(X\) and the finiteness of \(K\) imply that \(\inf _{y\in K}\mathbb {P}_y(X_1=x_0)>0\), thus the Markov property entails

$$\begin{aligned} \inf _{x\in E}\mathbb {P}_x(X_{t_0}=x_0)\ge \inf _{x\in E}\mathbb {E}_x[1\!\!1_{\tau _K<t_0-1}\inf _{y\in K}\mathbb {P}_y(X_1=x_0)e^{-q_{x_0}(t_0-1-\tau _K)}]>0, \end{aligned}$$

where \(q_{x_0}=a_{x_0}+b_{x_0}+d_{x_0}\) is the jump rate from state \(x_0\), which implies (A1) for \(\nu =\delta _{x_0}\). Finally, using Theorem 2.1, we have proved that (i–vi) hold.

In order to conclude the proof, we use Corollary 2.2 and the fact that

$$\begin{aligned} \inf _{x\in E} c_2(\delta _x)\ge \inf _{x\in E}\mathbb {P}_x(X_{t_0}=x_0)c_2(\delta _{x_0})>0. \end{aligned}$$

This justifies the last part of Theorem 4.1. \(\square \)

4.1.2 Extensions to multi-dimensional birth and death processes

In this section, our goal is to illustrate how the previous result and proof apply in various multi-dimensional cases, using comparison arguments. We focus here on a few instructive examples in order to illustrate the tools of our method and its applicability to a wider range of models. We will consider three models of multi-specific populations, with competitive or cooperative interaction within and among species.

Our first example deals with a birth and death process in \(\mathbb {Z}_+^d\), where each coordinate represents the number of individuals of a distinct type (or in a different geographical patch). We will assume that mutations (or migrations) from each type (or patch) to any other are possible, at birth or during the life of individuals. In this example, the absorbing state \(\partial =0\) corresponds to the total extinction of the population.

We consider in our second example a cooperative birth and death process without mutation (or migration), where extinct types remain extinct forever. In this case, the absorbing states are \(\partial =\mathbb {Z}_+^d{\setminus }\mathbb {N}^d\), where \(\mathbb {N}=\{1,2,\ldots \}\).

Our last example shows how these techniques apply to discrete populations with continuous type space and Brownian genetic drift. Such multitype birth and death processes in continuous type space naturally arise in evolutionary biology [7], and the existence of a quasi-stationary distribution for similar processes has been studied in [10].

Example 1

(Birth and death processes with mutation or migration). We consider a \(d\)-dimensional birth and death process \(X\) with type-dependent individual birth and death rates, where individuals compete with each other with type-dependent coefficients. We denote by \(\lambda _{ij}> 0\) the birth rate of an individual of type \(j\) from an individual of type \(i\), by \(\mu _i> 0\) the death rate of an individual of type \(i\) and by \(c_{ij}>0\) the death rate of an individual of type \(i\) from competition with an individual of type \(j\). More precisely, if \(x\in \mathbb {Z}_+^d\), denoting by \(b^i(x)\) (resp. \(d^i(x)\)) the birth (resp. death) rate of an individual of type \(i\) in the population \(x\), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} b^i(x)=\sum _{j=1}^d \lambda _{ji}x_j, \\ d^i(x)=\mu _i x_i + \sum _{j=1}^d c_{ij} x_ix_j. \end{array}\right. } \end{aligned}$$

Note that \(\partial =0\) is the only absorbing state for this process.

The process \(X\) can be coupled with a birth and death process \(Y\) such that \(|X_t|\le Y_t\) for all \(t\ge 0\), where \(Y\) has birth and death rates

$$\begin{aligned} b_n&:= nd\sup _{i,j}\lambda _{ij} \ge \sup _{x\in \mathbb {Z}_+^d,\ |x|=n} \sum _{i=1}^d b^i(x) \end{aligned}$$
(4.9)
$$\begin{aligned} d_n&:= n\inf _{i}\mu _i+n^2\inf _{i,j}c_{ij}\le \inf _{x\in \mathbb {Z}_+^d,\ |x|=n} \sum _{i=1}^d d^i(x), \end{aligned}$$
(4.10)

where \(|x|=x_1+\cdots +x_d\). We can check that \(S\), defined in (4.1), is finite, and hence one obtains (4.6) and (4.7) exactly as in the proof of Theorem 4.1. From these inequalities, the proof of (A2) and (A1) is the same as for Theorem 4.1, with \(K=\{x\in E,\,|x|\le z\}\cup \{x_0\}\) and \(\overline{q}=\max _{i}(\mu _i+c_{ii})<\infty \).

Hence there exists a unique quasi-stationary distribution. Moreover, conditions (i–vi) hold as well as (4.2), (4.3) and (4.4).
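The domination argument can be checked numerically as well. In the sketch below, all parameter matrices are assumed illustrative choices: the rates (4.9)–(4.10) of the dominating one-dimensional process are formed and the corresponding series \(S\) is evaluated by the same recursion as in the previous block.

```python
import numpy as np

dim = 3                                            # number of types (assumed)
lam = np.full((dim, dim), 0.2) + np.diag([1.0, 1.2, 0.8])  # lambda_ij (assumed)
mu = np.array([1.0, 0.5, 0.7])                     # mu_i (assumed)
c = np.full((dim, dim), 0.05) + 0.1 * np.eye(dim)  # c_ij (assumed)

N = 5000
n = np.arange(1, N + 1, dtype=float)
b = n * dim * lam.max()                            # (4.9):  b_n = n d sup lambda_ij
d_ = n * mu.min() + n**2 * c.min()                 # (4.10): d_n = n inf mu + n^2 inf c

r = np.ones(N)                                     # r_k = sum_{l>=k} alpha_l / alpha_k
for k in range(N - 2, -1, -1):
    r[k] = 1.0 + (b[k] / d_[k + 1]) * r[k + 1]
print("S for the dominating process:", np.sum(r / d_))   # finite
```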

Example 2

(Weak cooperative birth and death process without mutation) We consider a \(d\)-dimensional birth and death process with type-dependent individual birth and death rates, where individuals of the same type compete with each other and where individuals of different types cooperate with each other. Denoting by \(\lambda _i\ge 0\) and \(\mu _i\ge 0\) the individual birth and death rates, by \(c_{ii}>0\) the intra-type individual competition rate and by \(c_{ij}\ge 0\) the inter-type individual cooperation rate for any types \(i\ne j\), we thus have the following multi-dimensional birth and death rates:

$$\begin{aligned} {\left\{ \begin{array}{ll} b^i(x)=\lambda _i x_i + \sum _{j\ne i} c_{ij} x_ix_j\\ d^i(x)=\mu _i x_i + c_{ii} x_i^2. \end{array}\right. } \end{aligned}$$

This process is absorbed in \(\partial =\mathbb {Z}_+^d{\setminus }\mathbb {N}^d\).

We assume that the inter-type cooperation is weak relative to the competition between individuals of the same type. More formally, we assume that

$$\begin{aligned} \left( 1-\frac{1}{d}\right) \max _{i\ne j} \frac{c_{ij}+c_{ji}}{2} < \frac{1}{\beta } \end{aligned}$$
(4.11)

where \(\beta =\sum _{j=1}^d \frac{1}{c_{jj}}\).

We claim that there exists a unique quasi-stationary distribution and that conditions (i–vi) hold as well as (4.2), (4.3) and (4.4).

Indeed, the same coupling argument as in the previous example can be used with \(b_n\) and \(d_n\) defined as

$$\begin{aligned} b_n&:= n\max _{i\in \{1,\ldots ,d\}} \lambda _i + n^2\left( 1-\frac{1}{d}\right) \max _{i< j}\frac{c_{ij}+c_{ji}}{2},\\ d_n&:= n \min _{i\in \{1,\ldots ,d\}}\mu _i + \frac{n^2}{\beta }. \end{aligned}$$

From this definition, one can check that

$$\begin{aligned} \sup _{x\in \mathbb {Z}_+^d,\,|x|=n}&\sum _{i=1}^d b^i(x)\le \max _{i\in \{1,\ldots ,d\}} \lambda _i \sum _{i=1}^d x_i + \max _{i< j} \frac{c_{ij}+c_{ji}}{2}\sum _{i\ne j}x_ix_j\\&\le n \max _{i\in \{1,\ldots ,d\}} \lambda _i+ \max _{i< j} \frac{c_{ij}+c_{ji}}{2} \left( n^2-n^2\min _{y\in \mathbb {R}_+^d, |y|=1} \sum _{i=1}^d y_i^2\right) = b_n \end{aligned}$$

and

$$\begin{aligned} \inf _{x\in \mathbb {Z}_+^d,\,|x|=n}\sum _{i=1}^d d^i(x) \ge n \min _{i\in \{1,\ldots ,d\}}\mu _i + n^2 \min _{y\in \mathbb {R}_+^d, |y|=1} \sum _{i=1}^d c_{ii}y_i^2. \end{aligned}$$

Since the function \(y\mapsto \sum _{i=1}^d c_{ii}y_i^2\) on \(\{y\in \mathbb {R}_+^d, |y|=1\}\) reaches its minimum at \((1/c_{11},\ldots ,1/c_{dd})/\beta \), we have

$$\begin{aligned} \inf _{x\in \mathbb {Z}_+^d,\,|x|=n}\sum _{i=1}^d d^i(x) \ge d_n. \end{aligned}$$

Now we deduce from (4.11) that \(b_n/d_{n+1}\) converges to a limit smaller than 1. Hence,

$$\begin{aligned} S':=\sum _{k\ge d}\frac{1}{d_k\alpha '_k}\sum _{l\ge k} \alpha '_l<\infty , \end{aligned}$$

with \(\alpha '_k=\left( \prod _{i=d}^{k-1}b_i\right) \!/\! \left( \prod _{i=d}^{k} d_i\right) \). This implies that, for any \(\lambda >0\), there exists \(z\) such that (4.6) holds. Because of the cooperative assumption, the maximum absorption rate is given by \(\overline{q}=\max _{i} (\mu _i+c_{ii})<\infty \), and we can conclude by following the proof of Theorem 4.1, as in Example 1.
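Condition (4.11) itself is a finite check on the interaction matrix; a short sketch with assumed rates:

```python
import numpy as np

# Interaction matrix: c[i][i] = competition, c[i][j] = cooperation (assumed).
c = np.array([[2.0, 0.3, 0.1],
              [0.2, 1.5, 0.2],
              [0.1, 0.4, 1.8]])
d = c.shape[0]
beta = np.sum(1.0 / np.diag(c))                  # beta = sum_j 1/c_jj

sym = (c + c.T) / 2.0                            # (c_ij + c_ji)/2
np.fill_diagonal(sym, -np.inf)                   # exclude the diagonal from the max
lhs = (1.0 - 1.0 / d) * sym.max()
print(f"(4.11): {lhs:.4f} < {1.0 / beta:.4f} ?", lhs < 1.0 / beta)
```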

Example 3

(Infinite dimensional birth and death processes) We consider a birth and death process \(X\) evolving in the set \(\mathcal{M}\) of finite point measures on \(\mathbb {T}\), where \(\mathbb {T}\) is the set of individual types. We refer to [10] for a study of the existence of quasi-stationary distributions for similar processes.

If \(X_t=\sum _{i=1}^n \delta _{x_i}\), the population at time \(t\) is composed of \(n\) individuals of types \(x_1,\ldots ,x_n\). For simplicity, we assume that \(\mathbb {T}\) is the unit torus of \(\mathbb {R}^d\), \(d\ge 1\), and that each individual’s type is subject to mutation during its life according to independent standard Brownian motions in \(\mathbb {T}\).

We denote by \(\lambda (x)> 0\) the birth rate of an individual of type \(x\in \mathbb {T}\), by \(\mu (x)> 0\) its death rate and by \(c(x,y)>0\) the death rate of an individual of type \(x\) from competition with an individual of type \(y\), where \(\lambda \), \(\mu \) and \(c\) are continuous functions. More precisely, if \(\xi =\sum _{i=1}^n \delta _{x_i}\in \mathcal{M}{\setminus }\{0\}\), denoting by \(b(x_i,\xi )\) (resp. \(d(x_i,\xi )\)) the birth (resp. death) rate of an individual of type \(x_i\) in the population \(\xi \), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} b(x_i,\xi )=\lambda (x_i), \\ d(x_i,\xi )=\mu (x_i) + \int _{\mathbb {T}}c(x_i,y)d\xi (y). \end{array}\right. } \end{aligned}$$

This corresponds to clonal reproduction. Similarly to Example 1, we assume that the process is absorbed at \(\partial =0\) (see [7] for the construction of this process).

We claim that there exists a unique quasi-stationary distribution and that conditions (i–vi) hold as well as (4.2), (4.3) and (4.4).

Indeed, one can check as in Example 1 that there exists \(t_0\ge 0\) such that

$$\begin{aligned} \inf _{\xi \in \mathcal{M}}\mathbb {P}_{\xi }(|\text {Supp }X_{t_0}|=1)>0. \end{aligned}$$

Observe that for all measurable set \(\Gamma \subset \mathbb {T}\),

$$\begin{aligned} \mathbb {P}_{\delta _x}(X_{1}=\delta _y,\text { with }y\in \Gamma )\ge e^{-\sup _{x\in \mathbb {T}}(b(x,\delta _x)+d(x,\delta _x))}\mathbb {P}_x(\tilde{B}_1\in \Gamma ), \end{aligned}$$

where \(\tilde{B}\) is a standard Brownian motion in \(\mathbb {T}\). Hence, defining \(\nu \) as the law of \(\delta _U\), where \(U\) is uniform on \(\mathbb {T}\), there exists \(c_1>0\) such that, for all measurable set \(A\subset \mathcal{M}\),

$$\begin{aligned} \inf _{\xi \in \mathcal{M}}\mathbb {P}_{\xi }(X_{t_0+1}\in A)\ge c_1\nu (A). \end{aligned}$$

This entails (A1) for the measure \(\nu \). As in the two previous examples, (A2) follows from similar computations as in the proof of Theorem 4.1. In particular, there exists \(n_0\) such that \(\sup _{\xi \in \mathcal {M}}\mathbb {E}_\xi (e^{\lambda \tau _{K_{n_0}}\wedge \tau _\partial })<\infty \), where \(\lambda =1+\sup _{x\in \mathbb {T}}[\mu (x)+c(x,x)]\) and

$$\begin{aligned} K_{n_0}=\{\xi \in \mathcal {M},\ |\text {Supp}(\xi )|\le n_0\}. \end{aligned}$$

The new difficulty is to prove (4.8). Since absorption occurs only from states with one individual, this is equivalent to: there exists a constant \(C>0\) such that, for all \(t\ge 0, x_0\in \mathbb {T}\),

$$\begin{aligned} \mathbb {P}_{\delta _{x_0}}(t<\tau _\partial )\ge C\sup _{\xi \in K_{n_0}}\mathbb {P}_\xi (t<\tau _\partial ). \end{aligned}$$
(4.12)

If this holds, we conclude the proof as for Theorem 4.1.

To prove (4.12), let us first observe that the jump rate from any state of \(K_{n_0}\) is uniformly bounded from above by a constant \(\rho <\infty \). Hence, we can couple the process \(X_t\) with an exponential r.v. \(\tau \) with parameter \(\rho \), independent of the Brownian motions driving the mutations, in such a way that \(X\) does not jump in the time interval \([0,\tau ]\). For any \(x\in \mathbb {T}^n\) and any Brownian motion \(B\) on \(\mathbb {T}^n\) independent of \(\tau \), we have

$$\begin{aligned} \mathbb {P}(x+B_{\tau \wedge 1}\in \Gamma )\le \mathbb {P}(x+B_{\tau }\in \Gamma )+\mathbb {P}(x+B_{1}\in \Gamma )\le C\text {Leb}(\Gamma ),\quad \forall \Gamma \in \mathcal {B}(\mathbb {T}^n), \end{aligned}$$

where the last inequality follows from the explicit density of \(B_{\tau }\) [3, Eq. 1.0.5]. From this it is easy to deduce that there exists \(C'<\infty \) such that, for all \(1\le n\le n_0, A\in \mathcal {B}(K_n{\setminus }K_{n-1})\) and \(\xi \in K_n{\setminus }K_{n-1}\),

$$\begin{aligned} \mathbb {P}_\xi (X_{\tau \wedge 1}\in A)\le C'\mathcal {U}_n(A), \end{aligned}$$

where \(\mathcal {U}_n\) is the law of \(\sum _{i=1}^n\delta _{U_i}\), with \(U_1,\ldots ,U_n\) i.i.d. uniform r.v. on \(\mathbb {T}\). Since one also has

$$\begin{aligned} \mathbb {P}_{\delta _{x_0}}(X_1\in A)&\ge \mathbb {P}_{\delta _{x_0}}(|\text {Supp}(X_{1/2}) |=n,\, X_1\in A)\\&\ge C''\mathcal {U}_n(A),\quad \forall A\in \mathcal {B}(K_n{\setminus } K_{n-1}) \end{aligned}$$

for a constant \(C''\) independent of \(A\), we have proved that

$$\begin{aligned} \mathbb {P}_{\delta _{x_0}}(X_1\in A)\ge C \sup _{\xi \in K_n{\setminus } K_{n-1}}\mathbb {P}_\xi (X_{\tau \wedge 1}\in A),\quad \forall A\in \mathcal {B}(K_n{\setminus } K_{n-1}) \end{aligned}$$

for a constant \(C\) independent of \(A\) and \(n\le n_0\). We can now prove (4.12): for all fixed \(\xi \in K_n{\setminus } K_{n-1}\),

$$\begin{aligned} \mathbb {P}_{\delta _{x_0}}(t+1<\tau _\partial )&\ge \int _{K_n{\setminus } K_{n-1}}\mathbb {P}_{\zeta }(t<\tau _\partial )\mathbb {P}_{\delta _{x_0}}(X_1\in d\zeta ) \\&\ge C\int _{K_n{\setminus }K_{n-1}}\mathbb {P}_{\zeta }(t<\tau _\partial )\mathbb {P}_\xi (X_{\tau \wedge 1}\in d\zeta ) \\&= C\mathbb {P}_\xi (\tau \wedge 1+t<\tau _\partial )\ge C\mathbb {P}_\xi (t+1<\tau _\partial ). \end{aligned}$$
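For readers who wish to experiment, here is a minimal simulation sketch of the process of this example (all parameter choices are ours and purely illustrative): a finite point configuration on the torus \([0,1)^2\) whose atoms give birth clonally, die with logistic competition, and diffuse as Brownian motions between jump events.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = lambda x: 1.5                        # birth rate lambda(x)  (assumed constant)
mu  = lambda x: 0.5                        # death rate mu(x)      (assumed constant)
c   = lambda x, y: 0.1                     # competition c(x, y)   (assumed constant)

def step(pts):
    """One jump event; pts is an (n, 2) array of types with n >= 1."""
    n = len(pts)
    birth = np.array([lam(x) for x in pts])
    death = np.array([mu(x) + sum(c(x, y) for y in pts) for x in pts])
    total = birth.sum() + death.sum()
    dt = rng.exponential(1.0 / total)
    pts = (pts + rng.normal(0.0, np.sqrt(dt), pts.shape)) % 1.0  # Brownian mutation
    i = rng.choice(2 * n, p=np.concatenate([birth, death]) / total)
    if i < n:
        return np.vstack([pts, pts[i]])    # birth: clonal copy of individual i
    return np.delete(pts, i - n, axis=0)   # death of individual i - n

pts = rng.random((5, 2))                   # initial population of 5 individuals
for _ in range(20_000):                    # run until absorption at xi = 0, or give up
    pts = step(pts)
    if len(pts) == 0:
        break
print("final population size:", len(pts))
```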

4.2 Absorbed neutron transport process

The propagation of neutrons in fissile media is typically modeled by neutron transport systems, where the trajectory of the particle is composed of straight exponential paths between random changes of direction [14, 39]. An important problem in the design of nuclear devices is that of the shielding structure, which aims to protect humans from ionizing particles. It is in particular crucial to compute the probability that a neutron exits the shielding structure \(D\) before its absorption by the medium [2]. This question is of course related to the quasi-stationary behavior of neutron transport, where absorption corresponds to the exit of a neutron from \(D\).

We consider a piecewise-deterministic process of neutron transport with constant velocity. Let \(D\) be an open connected bounded domain of \(\mathbb {R}^2\), let \(S^2\) be the unit sphere of \(\mathbb {R}^2\) and \(\sigma (du)\) be the uniform probability measure on \(S^2\). We consider the Markov process \((X_t,V_t)_{t\ge 0}\) in \(D\times S^2\) constructed as follows: \(X_t=X_0+\int _0^t V_s\,ds\) and the velocity \(V_t\in S^2\) is a pure jump Markov process, with constant jump rate \(\lambda >0\) and uniform jump probability distribution \(\sigma \). In other words, \(V_t\) jumps to i.i.d. uniform values in \(S^2\) at the jump times of a Poisson process. At the first time where \(X_t\not \in D\), the process immediately jumps to the cemetery point \(\partial \), meaning that the process is absorbed at the boundary of \(D\). A sample path of the process \((X,V)\) is shown in Fig. 1. For all \(x\in D\) and \(u\in S^2\), we denote by \(\mathbb {P}_{x,u}\) (resp. \(\mathbb {E}_{x,u}\)) the distribution of \((X,V)\) conditioned on \((X_0,V_0)=(x,u)\) (resp. the expectation with respect to \(\mathbb {P}_{x,u}\)).

Fig. 1 A sample path of the neutron transport process \((X,V)\). The times \(J_1<J_2<\ldots \) are the successive jump times of \(V\)

Remark 5

The assumptions of constant velocity, uniform jump distribution and uniform jump rate, as well as the restriction on the dimension of the process, can be relaxed, but we restrict ourselves here to the simplest case in order to illustrate how conditions (A1) and (A2) can be checked. In particular, it is easy to extend our results to variants of the process where, for instance, the jump measure for \(V\) may depend on the state of the process, provided this measure is absolutely continuous w.r.t. \(\sigma \) with density uniformly bounded from above and below.
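A crude Monte Carlo sketch of this process (our illustrative choices: \(D\) the open unit disc and \(\lambda =2\); not part of the paper) can be used to estimate the survival probability and the conditioned law in (1.1). Since the disc is convex, a straight flight stays in \(D\) if and only if its endpoint does, so absorption can be tested flight by flight.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t_max, n_particles = 2.0, 5.0, 20_000   # assumed illustrative parameters

def run(x, u):
    """Simulate (X, V) from (x, u) up to t_max; return X_{t_max}, or None if absorbed."""
    t = 0.0
    while True:
        s = min(rng.exponential(1.0 / lam), t_max - t)
        x = x + s * u                        # straight flight at unit speed
        if np.linalg.norm(x) >= 1.0:         # D convex: endpoint test is exact
            return None                      # absorbed at the boundary
        t += s
        if t >= t_max:
            return x
        theta = rng.uniform(0.0, 2.0 * np.pi)
        u = np.array([np.cos(theta), np.sin(theta)])   # uniform new direction

final = [run(np.zeros(2), np.array([1.0, 0.0])) for _ in range(n_particles)]
alive = np.array([x for x in final if x is not None])
print(f"P(t_max < tau_partial) ~ {len(alive) / n_particles:.4f}")
print("mean position of surviving particles ~", alive.mean(axis=0))
```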

We denote by \(\partial D\) the boundary of the domain \(D\), by \(\text {diam}(D)\) its diameter and, for all \(A\subset \mathbb {R}^2\) and \(x\in \mathbb {R}^2\), by \(d(x,A)\) the distance from \(x\) to the set \(A\): \(d(x,A)=\inf _{y\in A}|x-y|\). We also denote by \(B(x,r)\) the open ball of \(\mathbb {R}^2\) centered at \(x\) with radius \(r\). We assume that the domain \(D\) is connected and smooth enough, in the following sense.

Assumption

(B) We assume that there exists \(\varepsilon >0\) such that

  1. (B1)

    \(D_{\varepsilon }:=\{x\in D:d(x,\partial D)>\varepsilon \}\) is non-empty and connected;

  2. (B2)

    there exist \(0<s_\varepsilon <t_\varepsilon \) and \(\underline{\sigma }>0\) such that, for all \(x\in D{\setminus }D_\varepsilon \), there exists a measurable \(K_x\subset S^2\) such that \(\sigma (K_x)\ge \underline{\sigma }\) and, for all \(u\in K_x\), \(x+su\in D_\varepsilon \) for all \(s\in [s_\varepsilon ,t_\varepsilon ]\) and \(x+su\not \in \partial D\) for all \(s\in [0,s_\varepsilon ]\).

As illustrated by Fig. 2, assumption (B2) means that, for all \(x \in D{\setminus }D_\varepsilon \), the set

$$\begin{aligned} L_x:=\Bigg \{y\in \mathbb {R}^2:|y-x|\in [s_\varepsilon ,t_\varepsilon ]\, \text { and }\,\frac{y-x}{|y-x|}\in K_x\Bigg \} \end{aligned}$$

is included in \(D_\varepsilon \) and has Lebesgue measure larger than \(\frac{\underline{\sigma }}{2}(t_\varepsilon ^2-s_\varepsilon ^2)>0\).

Fig. 2 The sets \(K_x\) and \(L_x\) of Assumption (B2)

These assumptions are true for example if \(\partial D\) is a \(C^2\) connected compact manifold, since then the so-called interior sphere condition entails the existence of a cone \(K_x\) satisfying (B2) provided \(\varepsilon \) is small enough compared to the maximum curvature of the manifold.

Theorem 4.3

Assumption \((B)\) implies (i–vi) in Theorem 2.1.

Proof

Throughout the proof, we will make use of the following notation: for all \(k\ge 1\), let \(J_k\) be the \(k\)-th jump time of \(V_t\) (the absorption time is not considered as a jump, so \(J_{k+1}=\infty \) if \(J_k<\infty \) and \(X_t\) hits \(\partial D\) after \(J_k\) and before the \((k+1)\)-th jump of \(V_t\)).

Let us first prove (A1). The following properties are easy consequences of the boundedness of \(D\) and Assumption (B).

Lemma 4.4

(i):

There exist \(n\ge 1\) and \(x_1,\ldots ,x_n\in D_{\varepsilon }\) such that \(D_{\varepsilon }\subset \bigcup _{i=1}^n B(x_i,\varepsilon /16)\).

(ii):

For all \(x,y\in D_{\varepsilon }\), there exist \(m\le n\) and \(i_1,\ldots , i_m\) distinct in \(\{1,\ldots ,n\}\) such that \(x\in B(x_{i_1},\varepsilon /16)\), \(y\in B(x_{i_m},\varepsilon /16)\) and, for all \(1\le j\le m-1\), \(B(x_{i_j},\varepsilon /16)\cap B(x_{i_{j+1}},\varepsilon /16)\not =\emptyset \).

The next lemma is proved just after the current proof.

Lemma 4.5

For all \(x\in D, u\in S^2\) and \(t>0\) such that \(d(x,\partial D)>t\), the law of \((X_t,V_t)\) under \(\mathbb {P}_{x,u}\) satisfies

$$\begin{aligned} \mathbb {P}_{x,u}(X_t\in dz,\,V_t\in dv)\ge \frac{\lambda ^2 e^{-\lambda t}}{4\pi t}\,\frac{(t-|z-x|)^2}{t+|z-x|}1\!\!1_{z\in B(x,t)}\Lambda (dz)\sigma (dv), \end{aligned}$$

where \(\Lambda \) is Lebesgue’s measure on \(\mathbb {R}^2\).

This lemma has the following immediate consequence. Fix \(i\not =j\) in \(\{1,\ldots ,n\}\) such that \(B(x_i,\varepsilon /16)\cap B(x_j,\varepsilon /16)\not =\emptyset \). Then, for all \(x\in B(x_i,\varepsilon /16)\) and \(u\in S^2\),

$$\begin{aligned} \mathbb {P}_{x,u}(X_{\varepsilon /2}\in dz,\,V_{\varepsilon /2}\in dv)\ge C_\varepsilon 1\!\!1_{B(x_j,\varepsilon /8)\cup B(x_i,\varepsilon /8)}(z)\Lambda (dz)\,\sigma (dv), \end{aligned}$$

for a constant \(C_\varepsilon >0\) independent of \(x, i\) and \(j\).

Combining this result with Lemma 4.4, one easily deduces that for all \(x\in D_\varepsilon , u\in S^2\) and \(m\ge n\),

$$\begin{aligned} \mathbb {P}_{x,u}(X_{m\varepsilon /2}\in dz,\,V_{m\varepsilon /2}\in dv)\ge C_\varepsilon c^{m-1}_\varepsilon 1\!\!1_{D_\varepsilon }(z) \Lambda (dz)\,\sigma (dv), \end{aligned}$$

where \(c_\varepsilon =C_\varepsilon \Lambda (B(\varepsilon /16)) =C_\varepsilon \pi \varepsilon ^2/256\). Proceeding similarly, but with a first time step of length in \([\varepsilon /2,\varepsilon )\), we can also deduce from Lemma 4.5 that, for all \(t\ge n\varepsilon /2\),

$$\begin{aligned} \mathbb {P}_{x,u}(X_{t}\in dz,\,V_{t}\in dv)\ge C'_\varepsilon c^{\lfloor 2t/\varepsilon \rfloor -1}_\varepsilon 1\!\!1_{D_\varepsilon }(z) \Lambda (dz)\,\sigma (dv) \end{aligned}$$
(4.13)

for a constant \(C'_\varepsilon >0\).

This entails (A1) with \(\nu \) the uniform probability measure on \(D_\varepsilon \times S^2\) and any \(t_0\ge n\varepsilon /2\), but only for initial conditions in \(D_\varepsilon \times S^2\).

Now, assume that \(x\in D{\setminus }D_\varepsilon \) and \(u\in S^2\). Let

$$\begin{aligned} s=\inf \{t\ge 0: x+tu\in D_\varepsilon \cup \partial D\}. \end{aligned}$$

If \(x+su\in \partial D_\varepsilon \), then \(\mathbb {P}_{x,u}(X_{s}\in \partial D_\varepsilon )\ge e^{-\lambda s}\ge e^{-\lambda \,\text {diam}(D)}\), and thus, combining this with (4.13), for all \(t>n\varepsilon /2\),

$$\begin{aligned} \mathbb {P}_{x,u}(X_{s+t}\in dz,\,V_{s+t}\in dv\mid s+t<\tau _\partial )\ge \mathbb {P}_{x,u}(X_{s+t}\in dz,\,V_{s+t}\in dv) \\ \ge e^{-\lambda \,\text {diam}(D)} C'_\varepsilon c^{\lfloor 2t/\varepsilon \rfloor -1}_\varepsilon 1\!\!1_{D_\varepsilon }(z)\Lambda (dz)\,\sigma (dv). \end{aligned}$$

If \(x+su\in \partial D\),

$$\begin{aligned} \mathbb {P}_{x,u}(X_{t_\varepsilon }\in D_\varepsilon ,\,t_\varepsilon <\tau _\partial )&\ge \mathbb {P}(J_1<s\wedge (t_\varepsilon -s_\varepsilon ),\,V_{J_1}\in K_{x+uJ_1},\,J_2>t_\varepsilon ) \\&\ge \underline{\sigma }e^{-\lambda t_\varepsilon } \mathbb {P}(J_1<s\wedge (t_\varepsilon -s_\varepsilon )). \end{aligned}$$

Hence (4.13) entails, for all \(t\ge n\varepsilon /2\) such that \(t+t_\varepsilon \ge s\),

$$\begin{aligned}&\mathbb {P}_{x,u}(X_{t_\varepsilon +t}\in dz,\,V_{t_\varepsilon +t}\in dv\mid t_\varepsilon +t<\tau _\partial ) \\&\qquad \ge \frac{\mathbb {P}_{x,u}(X_{t_\varepsilon }\in D_\varepsilon ,\,t_\varepsilon <\tau _\partial ,\,X_{t_\varepsilon +t}\in dz,\,V_{t_\varepsilon +t}\in dv)}{\mathbb {P}_{x,u}(t+t_\varepsilon <\tau _\partial )}\\&\qquad \ge \frac{\mathbb {P}(J_1<s\wedge (t_\varepsilon -s_\varepsilon ))}{\mathbb {P}(J_1<s)} \underline{\sigma }e^{-\lambda t_\varepsilon } c^{\lfloor 2t/\varepsilon \rfloor +1}_\varepsilon 1\!\!1_{D_\varepsilon }(z)\Lambda (dz)\,\sigma (dv). \end{aligned}$$

Since \(t_\varepsilon \le \text {diam}(D)\), we have for all \(0<s\le \text {diam}(D)\)

$$\begin{aligned} \frac{\mathbb {P}(J_1<s\wedge (t_\varepsilon -s_\varepsilon ))}{\mathbb {P}(J_1<s)}\ge \frac{1-e^{-\lambda (t_\varepsilon -s_\varepsilon )}}{1-e^{-\lambda \,\text {diam}(D)}}>0. \end{aligned}$$

Hence, we have proved (A1) with \(\nu \) the uniform probability measure on \(D_\varepsilon \times S^2\) and \(t_0=\frac{n\varepsilon }{2}+\text {diam}(D)\).

Now we come to the proof of (A2). This can be done in two steps: first, we prove that for all \(x\in D\) and \(u\in S^2\),

$$\begin{aligned} \mathbb {P}_{x,u}(J_4<\infty ,\,X_{J_4}\in dz)\le C1\!\!1_D(z)\Lambda (dz) \end{aligned}$$
(4.14)

for some constant \(C\) independent of \(x\) and \(u\); second

$$\begin{aligned} \mathbb {P}_\nu (J_1<\infty ,\,X_{J_1}\in dz)\ge c1\!\!1_D(z)\Lambda (dz) \end{aligned}$$
(4.15)

for some constant \(c>0\).

Since, for all \(k\ge 1\), conditionally on \(\{J_k<\infty \}\), \(V_{J_k}\) is uniformly distributed on \(S^2\) and independent of \(X_{J_k}\), this is enough to conclude as follows: by (4.14) and the inequality \(J_4\le 4\,\text {diam}(D)\) a.s. on \(\{t<\tau _\partial \}\), for all \(t\ge 4\,\text {diam}(D)\),

$$\begin{aligned} \mathbb {P}_{x,u}(t<\tau _\partial )&\le \mathbb {E}_{x,u}[\mathbb {P}_{X_{J_4},V_{J_4}}(t-4\, \text {diam}(D)<\tau _\partial )]\\&\le C\iint _D\int _{S^2}\mathbb {P}_{z,v}(t-4\,\text {diam}(D)<\tau _\partial ) \sigma (dv)\Lambda (dz). \end{aligned}$$

Similarly, (4.15) entails that, for all \(t\ge \text {diam}(D)\),

$$\begin{aligned} \mathbb {P}_\nu (t<\tau _\partial )\ge c\iint _D\int _{S^2}\mathbb {P}_{z,v}(t<\tau _\partial )\sigma (dv)\Lambda (dz), \end{aligned}$$

and thus, for all \(t\ge 5\,\text {diam}(D)\),

$$\begin{aligned} \mathbb {P}_{x,u}(t<\tau _\partial )\le \frac{C}{c}\mathbb {P}_\nu (t-4\,\text {diam}(D)<\tau _\partial ). \end{aligned}$$

Now, it follows from (A1) that \(\mathbb {P}_\nu ((X_{t_0},V_{t_0})\in \cdot )\ge c_1\mathbb {P}_\nu (t_0<\tau _\partial )\nu (\cdot )\) and thus

$$\begin{aligned} \mathbb {P}_\nu (t-4\,\text {diam}(D)+t_0<\tau _\partial )&=\mathbb {E}_\nu [\mathbb {P}_{X_{t_0},V_{t_0}}(t-4\,\text {diam}(D)<\tau _\partial )] \\&\ge c_1\mathbb {P}_\nu (t_0<\tau _\partial )\,\mathbb {P}_\nu (t-4\,\text {diam}(D)<\tau _\partial ). \end{aligned}$$

Iterating this inequality as needed completes the proof of (A2).
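Explicitly, choose \(k\ge 1\) with \(kt_0\ge 4\,\text {diam}(D)\) and set \(\tilde{c}:=c_1\mathbb {P}_\nu (t_0<\tau _\partial )>0\). Since \(s\mapsto \mathbb {P}_\nu (s<\tau _\partial )\) is non-increasing, iterating the last inequality \(k\) times gives, for all \(t\ge 5\,\text {diam}(D)\),

$$\begin{aligned} \mathbb {P}_\nu (t<\tau _\partial )\ge \mathbb {P}_\nu (t-4\,\text {diam}(D)+kt_0<\tau _\partial )\ge \tilde{c}^{\,k}\,\mathbb {P}_\nu (t-4\,\text {diam}(D)<\tau _\partial )\ge \frac{c\,\tilde{c}^{\,k}}{C}\,\mathbb {P}_{x,u}(t<\tau _\partial ), \end{aligned}$$

that is, (A2) holds with \(c_2=c\,\tilde{c}^{\,k}/C\) for \(t\ge 5\,\text {diam}(D)\); for \(t\le 5\,\text {diam}(D)\) it suffices to bound \(\mathbb {P}_\nu (t<\tau _\partial )\ge \mathbb {P}_\nu (5\,\text {diam}(D)<\tau _\partial )>0\) and \(\mathbb {P}_{x,u}(t<\tau _\partial )\le 1\).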

So it only remains to prove (4.14) and (4.15). We start with (4.14). We denote by \((\hat{X}_t,\hat{V}_t)_{t\ge 0}\) the neutron transport process in \(\hat{D}=\mathbb {R}^2\), coupled with \((X,V)\) such that \(\hat{X}_t=X_t\) and \(\hat{V}_t=V_t\) for all \(t<\tau _\partial \). We denote by \(\hat{J}_1<\hat{J}_2<\ldots \) the jumping times of \(\hat{V}\). It is clear that \(\hat{J}_k=J_k\) for all \(k\ge 1\) such that \(J_k<\infty \).

Now, \(\hat{X}_{\hat{J}_4}=Y_1+Y_2+Y_3+Y_4\), where the r.v. \(Y_1,\ldots ,Y_4\) are independent, \(Y_1=x+u Z\), where \(Z\) is an exponential r.v. of parameter \(\lambda \), and \(Y_2,Y_3,Y_4\) are all distributed as \(V Z\), where \(V\) is uniform on \(S^2\) and independent of \(Z\). Using the change of variable from polar to Cartesian coordinates, one checks that \(VZ\) has density \(g(z)=\frac{\lambda e^{-\lambda |z|}}{2\pi |z|}\) w.r.t. \(\Lambda (dz)\), so that \(g\in L^{3/2}(\mathbb {R}^2)\). Applying Young’s inequality twice (first \(\tfrac{2}{3}+\tfrac{2}{3}-1=\tfrac{1}{3}\), so \(g*g\in L^{3}(\mathbb {R}^2)\); then \(\tfrac{1}{3}+\tfrac{2}{3}-1=0\)), one has \(g*g*g\in L^\infty (\mathbb {R}^2)\). Hence, for all non-negative \(f\in \mathcal {B}(\mathbb {R}^2)\),

$$\begin{aligned} \mathbb {E}_{x,u}[f(X_{J_4});\,J_4<\tau _\partial ]\le \mathbb {E}[f(\hat{X}_{\hat{J}_4})]\le \Vert g*g*g\Vert _{\infty }\iint _{\mathbb {R}^2}f(z)\Lambda (dz). \end{aligned}$$

Hence (4.14) is proved.

We finally prove (4.15). For all \(x,y\in \mathbb {R}^2\), we denote by \([x,y]\) the segment delimited by \(x\) and \(y\). For all \(f\in \mathcal {B}(\mathbb {R}^2)\),

$$\begin{aligned} \mathbb {E}_\nu [f(X_{J_1});\,J_1<\infty ]&=\iint _{D_\varepsilon }\frac{\Lambda (dx)}{\Lambda (D_\varepsilon )}\int _{S^2}\sigma (du)\int _0^\infty 1\!\!1_{[x,x+su]\subset D}\,\lambda e^{-\lambda s}f(x+su)\,ds \\&=\iint _{D_\varepsilon }\frac{\Lambda (dx)}{\Lambda (D_\varepsilon )}\iint _{D}\Lambda (dz) 1\!\!1_{[x,z]\subset D}\,\frac{\lambda e^{-\lambda |z-x|}}{2\pi |z-x|}f(z) \\&\ge \frac{\lambda e^{-\lambda \,\text {diam}(D)}}{2\pi \text {diam}(D) \Lambda (D_\varepsilon )}\iint _{D}\Lambda (dz) f(z)\iint _{D_\varepsilon }\Lambda (dx)1\!\!1_{[x,z]\subset D}. \end{aligned}$$

Now, for all \(z\in D{\setminus }D_\varepsilon \), using assumption (B2),

$$\begin{aligned} \iint _{D_\varepsilon }1\!\!1_{[x,z]\subset D}\,\Lambda (dx)\ge \Lambda (L_z) \ge \frac{\underline{\sigma }}{2}(t_\varepsilon ^2-s_\varepsilon ^2), \end{aligned}$$

and for all \(z\in D_\varepsilon \),

$$\begin{aligned} \iint _{D_\varepsilon }1\!\!1_{[x,z]\subset D}\,\Lambda (dx)\ge \Lambda (D_\varepsilon \cap B(z,\varepsilon )). \end{aligned}$$

Since the map \(z\mapsto \Lambda (D_\varepsilon \cap B(z,\varepsilon ))\) is continuous and positive on the compact set \(\overline{D_\varepsilon }\), we have proved (4.15).
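Although not needed for the proof, the minorization mechanism behind (A1) for this model can be explored by simulation. The following Python sketch is purely illustrative (the unit disk as \(D\), the rate \(\lambda =1\) and the time horizon are arbitrary choices, not taken from the text): it simulates the absorbed neutron transport process and estimates the survival probability; the empirical distribution of the surviving positions approximates the conditional law that (4.13) bounds from below on \(D_\varepsilon \).

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t_final, n_paths = 1.0, 3.0, 20_000  # hypothetical illustrative parameters

def run_path():
    """One trajectory of the absorbed process in the unit disk D."""
    x = np.zeros(2)                                  # start at the centre of D
    theta = rng.uniform(0.0, 2.0 * np.pi)
    u = np.array([np.cos(theta), np.sin(theta)])     # initial direction on S^1
    t = 0.0
    while True:
        dt = rng.exponential(1.0 / lam)              # time to next scattering
        if t + dt >= t_final:
            x = x + (t_final - t) * u                # free flight at unit speed
            # the disk is convex, so the endpoint test detects any crossing
            return x if x @ x < 1.0 else None
        x = x + dt * u
        if x @ x >= 1.0:                             # left D: absorbed
            return None
        t += dt
        theta = rng.uniform(0.0, 2.0 * np.pi)        # uniform scattering
        u = np.array([np.cos(theta), np.sin(theta)])

survivors = [z for z in (run_path() for _ in range(n_paths)) if z is not None]
print("estimated P(t < tau):", len(survivors) / n_paths)
# Histogramming `survivors` approximates P_{x,u}(X_t in . | t < tau_partial).
```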

Proof of Lemma 4.5

Using twice the following relation, valid for all bounded measurable \(f\), \(k\ge 1\), \(t\ge 0\), \(x\in D\) and \(u\in S^2\):

$$\begin{aligned}&\mathbb {E}_{x,u}[f(X_t,V_t);J_k\le t<J_{k+1}] \\&\quad =\int _0^t ds\lambda e^{-\lambda s}\int _{S^2}\sigma (dv) \mathbb {E}_{x+su,v}[f(X_{t-s},V_{t-s});J_{k-1}\le t-s< J_k], \end{aligned}$$

we obtain

$$\begin{aligned}&\mathbb {E}_{x,u}[f(X_t,V_t);J_2\le t< J_3]=\lambda ^2 e^{-\lambda t}\\&\quad \int _{S^2}\sigma (dv) \int _{S^2}\sigma (dw) \int _0^t ds\int _0^{t-s} d\theta f(x+su+\theta v+(t-s-\theta )w,w). \end{aligned}$$

For all \(x,y,z\in \mathbb {R}^2\), we denote by \([x,y,z]\) the triangle of \(\mathbb {R}^2\) delimited by \(x, y\) and \(z\). Using the well-known fact that a point in \([x,y,z]\) with barycentric coordinates distributed uniformly on the simplex is distributed uniformly on \([x,y,z]\), we deduce that

$$\begin{aligned}&\mathbb {E}_{x,u}[f(X_t,V_t);J_2\le t< J_3] \\&\quad =\frac{\lambda ^2t^2}{2} e^{-\lambda t} \int _{S^2}\sigma (dv) \int _{S^2}\sigma (dw)\iint _{[u,v,w]} f(x+tz,w)\,\frac{\Lambda (dz)}{\Lambda ([u,v,w])}. \end{aligned}$$

Now, for all \(u,v,w\in S^2\),

$$\begin{aligned} \Lambda ([u,v,w])=\frac{1}{2}|u-w|\,|v-v'|\le |u-w|, \end{aligned}$$

where \(v'\) is the orthogonal projection of \(v\) on the line \((u,w)\) (see Fig. 3), and where we used the fact that \(|v-v'|\le 2\).

Fig. 3: The triangle \([u,v,w]\) and the point \(v'\)

Moreover, for fixed \(t\ge 0\), \(x\in D\) and \(u,w\in S^2\), we have for all \(z\in B(1)\)

$$\begin{aligned} \int _{S^2}\sigma (dv)\,1\!\!1_{[u,v,w]}(z)=\frac{1}{2\pi }\int _0^{2\pi }1\!\!1_{[u,v_\theta ,w]}(z)\,d\theta =\frac{\widehat{u_zw_z}}{2\pi }\ge \frac{|u_z-w_z|}{2\pi }, \end{aligned}$$

where \(v_\theta =(\cos \theta ,\sin \theta )\in \mathbb {R}^2\), \(B(r)\) is the ball of \(\mathbb {R}^2\) centered at 0 of radius \(r\), \(u_z\) (resp. \(w_z\)) is the symmetric of \(u\) (resp. \(w\)) with respect to \(z\) in \(S^2\) (see Fig. 4) and \(\widehat{uv}\) denotes the length of the arc between \(u\) and \(v\) in \(S^2\).

Fig. 4: Definition of \(u_z\), \(w_z\), \(\widehat{u_zw_z}\), \(z_0\), \(u_0\) and \(w_0\). The angle \(\widehat{uzw}\) is larger than the angle \(\widehat{uz_0w}\)

Fix \(0<\delta <1\), and let \(z_0\) be the farthest point in \(B(\delta )\) from \(\{u,w\}\) (this point is unique except when the segment \([u,w]\) between \(u\) and \(w\) is a diameter of \(S^2\)). We set \(u_0=u_{z_0}\) and \(w_0=w_{z_0}\) (see Fig. 4). Note that Thales’ theorem implies that

$$\begin{aligned} |u_0-w_0|=\frac{1-\delta }{1+\delta }|u-w|. \end{aligned}$$

Then, for any \(z\in B(\delta )\), we have \(\widehat{uz_0w}\le \widehat{uzw}\), where \(\widehat{xyz}\) is the measure of the angle formed by the segments \([y,x]\) and \([y,z]\). Since in addition \(|z_0-u_0|=|z_0-w_0|\le 1\) and \(|z-u_z|\wedge |z-w_z|\ge 1-\delta \), Thales’ theorem yields

$$\begin{aligned} \frac{|u_z-w_z|}{1-\delta }\ge |u_0-w_0|. \end{aligned}$$

Putting everything together, we deduce that

$$\begin{aligned}&\mathbb {E}_{x,u}[f(X_t,V_t); J_2\le t< J_3] \\&\quad \ge \frac{\lambda ^2t^2}{4\pi } e^{-\lambda t} \int _{S^2}\sigma (dw)\iint _{B(1)}f(x+tz,w)\frac{|u_z-w_z|}{|u-w|}\Lambda (dz)\\&\quad \ge \frac{\lambda ^2t^2}{4\pi } e^{-\lambda t} \int _{S^2}\sigma (dw)\iint _{B(1)}f(x+tz,w)\frac{(1-|z|)^2}{1+|z|}\Lambda (dz). \end{aligned}$$

This ends the proof of Lemma 4.5. \(\square \)

5 Quasi-stationary distribution: proofs of the results of Sect. 2

This section is devoted to the proofs of Theorem 2.1 (Sects. 5.1–5.2), Corollary 2.2 (Sect. 5.3), Proposition 2.3 (Sect. 5.4) and Corollary 2.4 (Sect. 5.5). In Theorem 2.1, the implications (iii)\(\Rightarrow \)(i)\(\Rightarrow \)(ii) and (iv)\(\Rightarrow \)(v)\(\Rightarrow \)(vi) are obvious, so we only need to prove (ii)\(\Rightarrow \)(iv) and (vi)\(\Rightarrow \)(iii).

5.1 (ii) implies (iv)

Assume that \(X\) satisfies Assumption (A\('\)). We shall prove the result assuming (A1\('\)) holds for \(t_0=1\). The extension to any \(t_0\) is immediate.

Step 1: Control of the distribution at time 1, conditionally on non-absorption at a later time.

Let us show that, for all \(t\ge 1\) and for all \(x_1,x_2\in E\), there exists a probability measure \(\nu ^t_{x_1,x_2}\) on \(E\) such that, for all measurable set \(A\subset E\),

$$\begin{aligned} \mathbb {P}_{x_i}\left( X_1\in A\mid t<\tau _\partial \right) \ge c_1c_2\nu ^t_{x_1,x_2}(A),\quad \text {for }i=1,2. \end{aligned}$$
(5.1)

Fix \(x_1,x_2\in E, i\in \{1,2\}, t\ge 1\) and a measurable subset \(A\subset E\). Using the Markov property, we have

$$\begin{aligned} \mathbb {P}_{x_i}\left( X_1\in A\text { and }t<\tau _\partial \right)&=\mathbb {E}_{x_i}\left[ 1\!\!1_A(X_1)\mathbb {P}_{X_1}\left( t-1<\tau _{\partial }\right) \right] \\&=\mathbb {E}_{x_i}\left[ 1\!\!1_A(X_1)\mathbb {P}_{X_1}\left( t-1<\tau _{\partial }\right) \mid 1<\tau _{\partial }\right] \mathbb {P}_{x_i}\left( 1<\tau _{\partial }\right) \\&\ge c_1\nu _{x_1,x_2}\left( 1\!\!1_A(\cdot )\mathbb {P}_{\cdot }\left( t-1< \tau _{\partial }\right) \right) \mathbb {P}_{x_i}\left( 1<\tau _{\partial }\right) \!, \end{aligned}$$

by Assumption (A1’). Dividing both sides by \(\mathbb {P}_{x_i}\left( t<\tau _{\partial }\right) \), we deduce that

$$\begin{aligned} \mathbb {P}_{x_i}\left( X_1\in A\mid t<\tau _\partial \right)&\ge c_1\nu _{x_1,x_2}\left( 1\!\!1_A(\cdot )\mathbb {P}_{\cdot }\left( t-1<\tau _{\partial }\right) \right) \frac{\mathbb {P}_{x_i}\left( 1<\tau _{\partial }\right) }{\mathbb {P}_{x_i}\left( t<\tau _{\partial }\right) }. \end{aligned}$$

Using again the Markov property, we have

$$\begin{aligned} \mathbb {P}_{x_i}\left( t<\tau _{\partial }\right) \le \mathbb {P}_{x_i}\left( 1<\tau _{\partial }\right) \sup _{y\in E}\mathbb {P}_y\left( t-1<\tau _{\partial }\right) , \end{aligned}$$

so that

$$\begin{aligned} \mathbb {P}_{x_i}\left( X_1\in A\mid t<\tau _\partial \right)&\ge c_1 \frac{\nu _{x_1,x_2}\left( 1\!\!1_A(\cdot )\mathbb {P}_{\cdot }\left( t-1<\tau _{\partial }\right) \right) }{\sup _{y\in E}\mathbb {P}_y\left( t-1<\tau _{\partial }\right) }. \end{aligned}$$

Now Assumption (A2’) implies that the non-negative measure

$$\begin{aligned} B\mapsto \frac{\nu _{x_1,x_2}\left( 1\!\!1_B(\cdot )\mathbb {P}_{\cdot } \left( t-1<\tau _{\partial }\right) \right) }{\sup _{y\in E}\mathbb {P}_y\left( t-1<\tau _{\partial }\right) } \end{aligned}$$

has total mass at least \(c_2\). Therefore (5.1) holds with

$$\begin{aligned} \nu ^t_{x_1,x_2}(B)=\frac{\nu _{x_1,x_2}\left( 1\!\!1_B(\cdot )\mathbb {P}_{\cdot } \left( t-1<\tau _{\partial }\right) \right) }{\mathbb {P}_{\nu _{x_1,x_2}}\left( t-1<\tau _{\partial }\right) }. \end{aligned}$$

Step 2: Exponential contraction for Dirac initial distributions

We now prove that, for all \(x,y\in E\) and \(T\ge 0\),

$$\begin{aligned} \left\| \mathbb {P}_x\left( X_T\in \cdot \mid T<\tau _{\partial }\right) -\mathbb {P}_y\left( X_T\in \cdot \mid T<\tau _{\partial }\right) \right\| _{TV}\le 2(1-c_1c_2)^{\lfloor T \rfloor }. \end{aligned}$$
(5.2)

Let us define, for all \(0\le s\le t\le T\), the linear operator \(R_{s,t}^T\) by

$$\begin{aligned} R_{s,t}^T f(x)&= \mathbb {E}_x(f(X_{t-s})\mid T-s< \tau _{\partial })\\&= \mathbb {E}(f(X_{t})\mid X_s=x,\ T< \tau _{\partial }), \end{aligned}$$

by the Markov property. For any \(T>0\), the family \((R_{s,t}^T)_{0\le s\le t\le T}\) is a Markov semi-group: we have, for all \(0\le u\le s\le t\le T\) and all bounded measurable function \(f\),

$$\begin{aligned} R_{u,s}^T (R_{s,t}^T f)(x) = R_{u,t}^T f(x). \end{aligned}$$

This can be proved as in Lemma 12.2.2 of [15], or by observing that a Markov process conditioned on an event of its tail \(\sigma \)-field remains Markovian (but is, in general, no longer time-homogeneous).

For any \(x_1,x_2\in E\), we have by (5.1) that \(\delta _{x_i} R_{s,s+1}^T-c_1 c_2\nu ^{T-s}_{x_1,x_2}\) is a positive measure whose mass is \(1-c_1 c_2\), for \(i=1,2\). We deduce that

$$\begin{aligned}&\left\| \delta _{x_1} R_{s,s+1}^T-\delta _{x_2} R_{s,s+1}^T \right\| _{TV}\\&\quad \le \Vert \delta _{x_1} R_{s,s+1}^T - c_1 c_2\nu ^{T-s}_{x_1,x_2}\Vert _{TV} +\Vert \delta _{x_2} R_{s,s+1}^T - c_1 c_2 \nu ^{T-s}_{x_1,x_2}\Vert _{TV}\\&\quad \le 2(1-c_1 c_2). \end{aligned}$$

Let \(\mu _1,\mu _2\) be two mutually singular probability measures on \(E\); then \(\Vert \mu _1-\mu _2\Vert _{TV}=2\) and we have

$$\begin{aligned} \left\| \mu _1 R_{s,s+1}^T-\mu _2 R_{s,s+1}^T \right\| _{TV}&\le \iint _{E^2} \left\| \delta _{x} R_{s,s+1}^T-\delta _{y} R_{s,s+1}^T \right\| _{TV} d\mu _1\otimes d\mu _2(x,y)\\&\le 2(1-c_1c_2)=(1-c_1c_2)\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

Now if \(\mu _1\) and \(\mu _2\) are any two different probability measures (not necessarily mutually singular), one can apply the previous result to the mutually singular probability measures \(\frac{(\mu _1-\mu _2)_+}{(\mu _1-\mu _2)_+(E)}\) and \(\frac{(\mu _1-\mu _2)_-}{(\mu _1-\mu _2)_-(E)}\). Then

$$\begin{aligned}&\left\| \frac{(\mu _1-\mu _2)_+}{(\mu _1-\mu _2)_+(E)} R_{s,s+1}^T-\frac{(\mu _1-\mu _2)_-}{(\mu _1-\mu _2)_-(E)} R_{s,s+1}^T \right\| _{TV}\\&\quad \le (1-c_1 c_2)\left\| \frac{(\mu _1-\mu _2)_+}{(\mu _1-\mu _2)_+(E)} -\frac{(\mu _1-\mu _2)_-}{(\mu _1-\mu _2)_-(E)}\right\| _{TV}. \end{aligned}$$

Since \(\mu _1(E)=\mu _2(E)=1\), we have \((\mu _1-\mu _2)_+(E)=(\mu _1-\mu _2)_-(E)\). So multiplying the last inequality by \((\mu _1-\mu _2)_+(E)\), we deduce that

$$\begin{aligned}&\Vert (\mu _1-\mu _2)_+ R_{s,s+1}^T-(\mu _1-\mu _2)_- R_{s,s+1}^T \Vert _{TV}\\&\quad \le (1-c_1c_2) \Vert (\mu _1-\mu _2)_+-(\mu _1-\mu _2)_-\Vert _{TV}. \end{aligned}$$

Since \((\mu _1-\mu _2)_+-(\mu _1-\mu _2)_-=\mu _1-\mu _2\), we obtain

$$\begin{aligned} \Vert \mu _1 R_{s,s+1}^T-\mu _2 R_{s,s+1}^T \Vert _{TV}\le (1-c_1c_2)\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

Using the semi-group property of \((R_{s,t}^T)_{s,t}\), we deduce that, for any \(x,y\in E\),

$$\begin{aligned} \Vert \delta _x R_{0,T}^T - \delta _y R_{0,T}^T\Vert _{TV}&=\Vert \delta _x R^T_{0,T-1} R_{T-1,T}^T - \delta _y R_{0,T-1}^T R_{T-1,T}^T\Vert _{TV}\\&\le \left( 1-c_1c_2\right) \Vert \delta _x R_{0,T-1}^T - \delta _y R_{0,T-1}^T\Vert _{TV}\\&\le \ \ldots \ \le 2 \left( 1-c_1c_2\right) ^{\lfloor T\rfloor }. \end{aligned}$$

By definition of \(R_{0,T}^T\), this inequality immediately leads to (5.2).
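To illustrate (5.2) numerically, one can consider a toy discrete-time analogue: a finite chain whose states \(\{0,1\}\) play the role of \(E\) and whose third state is the cemetery. The Python sketch below (the transition matrix is a hypothetical choice for illustration only) computes the conditional laws from the two starting points and prints their total variation distance, which decays geometrically as (5.2) predicts.

```python
import numpy as np

# Toy absorbed chain: states 0,1 form E, state 2 is the cemetery.
# The matrix is an arbitrary illustrative choice, not from the paper.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])

def conditioned(x, T):
    """Law of X_T given T < tau_partial, starting from state x."""
    row = np.linalg.matrix_power(P, T)[x, :2]  # mass remaining on E
    return row / row.sum()

for T in range(1, 11):
    # total variation distance, with the mass convention of the paper (<= 2)
    d = np.abs(conditioned(0, T) - conditioned(1, T)).sum()
    print(T, d)
```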

Step 3: Exponential contraction for general initial distributions

We prove now that inequality (5.2) extends to any pair of initial probability measures \(\mu _1,\mu _2\) on \(E\), that is, for all \(T\ge 0\),

$$\begin{aligned} \left\| \mathbb {P}_{\mu _1}\left( X_T\in \cdot \mid T<\tau _{\partial }\right) -\mathbb {P}_{\mu _2}\left( X_T\in \cdot \mid T<\tau _{\partial }\right) \right\| _{TV}\le 2(1-c_1c_2)^{\lfloor T \rfloor }. \end{aligned}$$
(5.3)

Let \(\mu _1\) be a probability measure on \(E\) and \(x\in E\). We have

$$\begin{aligned}&\Vert \mathbb {P}_{\mu _1}(X_T\in \cdot \mid T<\tau _\partial )-\mathbb {P}_{x}(X_T\in \cdot \mid T<\tau _\partial )\Vert _{TV}\\&\quad =\frac{1}{\mathbb {P}_{\mu _1}(T<\tau _\partial )}\Vert \mathbb {P}_{\mu _1}(X_T\in \cdot ) -\mathbb {P}_{\mu _1}(T<\tau _\partial )\mathbb {P}_{x}(X_T\in \cdot \mid T<\tau _\partial )\Vert _{TV}\\&\quad \le \frac{1}{\mathbb {P}_{\mu _1}(T<\tau _\partial )} \int _{y\in E} \Vert \mathbb {P}_{y}(X_T\in \cdot )\!-\!\mathbb {P}_{y}(T<\tau _\partial )\mathbb {P}_x(X_T\in \cdot \mid T<\tau _\partial )\Vert _{TV}d\mu _1(y)\\&\quad \le \frac{1}{\mathbb {P}_{\mu _1}(T<\tau _\partial )} \int _{y\in E} \mathbb {P}_{y}(T<\tau _\partial )\Vert \mathbb {P}_{y}(X_T\in \cdot \mid T<\tau _\partial )\\&\quad \quad -\mathbb {P}_x(X_T\in \cdot \mid T<\tau _\partial )\Vert _{TV}d\mu _1(y)\\&\quad \le \frac{1}{\mathbb {P}_{\mu _1}(T<\tau _\partial )} \int _{y\in E} \mathbb {P}_{y}(T<\tau _\partial ) 2 (1-c_1c_2)^{\lfloor T\rfloor }d\mu _1(y)\\&\quad \le 2 (1-c_1c_2)^{\lfloor T\rfloor }. \end{aligned}$$

The same computation, replacing \(\delta _x\) by any probability measure, leads to (5.3).

Step 4: Existence and uniqueness of a quasi-stationary distribution for \(X\).

Let us first prove the uniqueness of the quasi-stationary distribution. If \(\alpha _1\) and \(\alpha _2\) are two quasi-stationary distributions, then we have \(\mathbb {P}_{\alpha _i}(X_t\in \cdot |t< \tau _{\partial })=\alpha _i\) for \(i=1,2\) and any \(t\ge 0\). Thus, we deduce from inequality (5.3) that

$$\begin{aligned} \Vert \alpha _1-\alpha _2\Vert _{TV}\le 2 (1-c_1c_2)^{\lfloor t\rfloor },\, \forall t\ge 0, \end{aligned}$$

which yields \(\alpha _1=\alpha _2\).

Let us now prove the existence of a QSD. By [28, Proposition 1], this is equivalent to proving the existence of a quasi-limiting distribution for \(X\). So we only need to prove that \(\mathbb {P}_x(X_t\in \cdot \mid t<\tau _{\partial })\) converges when \(t\) goes to infinity, for some \(x\in E\). We have, for all \(s,t\ge 0\) and \(x\in E\),

$$\begin{aligned} \mathbb {P}_x\left( X_{t+s}\in \cdot \mid t+s<\tau _{\partial }\right)&=\frac{\delta _x P_{t+s}}{\delta _x P_{t+s}1\!\!1_E} =\frac{\delta _x P_tP_s}{\delta _x P_t P_s1\!\!1_E}= \frac{\delta _x R_{0,s}^s P_t}{\delta _x R_{0,s}^sP_t1\!\!1_E}\nonumber \\&=\mathbb {P}_{\delta _x R_{0,s}^s}\left( X_t\in \cdot \mid t<\tau _{\partial }\right) , \end{aligned}$$
(5.4)

where we use the identity \(R_{0,s}^sf(x)=\frac{P_s f(x)}{P_s1\!\!1_E(x)}\) for the third equality. Hence

$$\begin{aligned}&\Vert \mathbb {P}_x(X_t\in \cdot |t<\tau _{\partial }) -\mathbb {P}_x(X_{t+s}\in \cdot |t+s<\tau _{\partial })\Vert _{TV}\\&\quad =\Vert \mathbb {P}_x(X_t\in \cdot |t<\tau _{\partial })-\mathbb {P}_{\delta _x R_{0,s}^{s}}(X_{t} \in \cdot |t<\tau _{\partial })\Vert _{TV}\\&\quad \le 2 \left( 1-c_1c_2\right) ^{\lfloor t\rfloor } \xrightarrow [s,t\rightarrow +\infty ]{} 0. \end{aligned}$$

In particular, \((\mathbb {P}_x(X_t\in \cdot \mid t<\tau _{\partial }))_{t\ge 0}\) is a Cauchy family for the total variation norm. The space of probability measures on \(E\) equipped with the total variation norm is complete, so \(\mathbb {P}_x(X_t\in \cdot \mid t<\tau _{\partial })\) converges, when \(t\) goes to infinity, to some probability measure \(\alpha \) on \(E\).

Finally Equation (2.3) follows from (5.3) with \(\mu _1=\mu \) and \(\mu _2=\alpha \). Therefore we have proved (iv) and the last statement of Theorem 2.1 concerning existence and uniqueness of a quasi-stationary distribution and the explicit expression for \(C\) and \(\gamma \).
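In the same toy finite-state setting as above, the limit \(\alpha \) can be computed directly: for a sub-Markovian kernel \(Q\) (the restriction of the transition matrix to \(E\)), the quasi-stationary distribution is the normalized left Perron eigenvector of \(Q\), and the Perron eigenvalue is \(\mathbb {P}_\alpha (1<\tau _\partial )\). A minimal self-contained sketch, with the same hypothetical kernel:

```python
import numpy as np

Q = np.array([[0.6, 0.2],          # restriction to E of the toy kernel above
              [0.3, 0.4]])
vals, vecs = np.linalg.eig(Q.T)    # left eigenvectors of Q
i = np.argmax(vals.real)
alpha = np.abs(vecs[:, i].real)
alpha /= alpha.sum()               # QSD: alpha Q = theta * alpha
theta = vals.real[i]               # = P_alpha(1 < tau_partial)
row = np.linalg.matrix_power(Q, 50)[0]   # unnormalized law of X_50 from state 0
print("alpha:", alpha, " theta:", theta)
print("conditional law at T=50:", row / row.sum())   # close to alpha
```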

5.2 (vi) implies (iii)

Assume that (2.2) holds with some probability measure \(\alpha \) on \(E\). Let us define

$$\begin{aligned} \varepsilon (t)=\sup _{x\in E}\left\| \mathbb {P}_x(X_t\in \cdot \mid t<\tau _\partial )-\alpha \right\| _{TV}. \end{aligned}$$

Step 1: \(\varepsilon (\cdot )\) is non-increasing and \(\alpha \) is a quasi-stationary distribution

For all \(s,t\ge 0, x\in E\) and \(A\in \mathcal{E}\),

$$\begin{aligned} \left| \mathbb {P}_x(X_{t+s}\in A\mid t+s<\tau _\partial )-\alpha (A)\right|&=\left| \frac{\mathbb {E}_x\left\{ 1\!\!1_{t<\tau _\partial }\mathbb {P}_{X_t}(s<\tau _\partial )\left[ \mathbb {P}_{X_t}(X_s\in A\mid s<\tau _\partial )-\alpha (A)\right] \right\} }{\mathbb {P}_x(t+s<\tau _\partial )}\right| \\&\le \frac{\mathbb {E}_x\left\{ 1\!\!1_{t<\tau _\partial }\mathbb {P}_{X_t}(s<\tau _\partial )\left| \mathbb {P}_{X_t}(X_s\in A\mid s<\tau _\partial )-\alpha (A)\right| \right\} }{\mathbb {P}_x(t+s<\tau _\partial )}\\&\le \varepsilon (s). \end{aligned}$$

Taking the supremum over \(x\) and \(A\), we deduce that the function \(\varepsilon (\cdot )\) is non-increasing. By (2.2), this implies that \(\varepsilon (t)\) goes to \(0\) when \(t\rightarrow +\infty \). By Step 4 above, \(\alpha \) is a quasi-stationary distribution and there exists \(\lambda _0>0\) such that \(\mathbb {P}_{\alpha }(t<\tau _{\partial })=e^{-\lambda _0 t}\).

Step 2: Proof of (\(\hbox {A}2''\)) for \(\mu =\alpha \)

We define, for all \(s\ge 0\),

$$\begin{aligned} A(s)=\frac{\sup _{x\in E}\mathbb {P}_x(s<\tau _{\partial })}{\mathbb {P}_\alpha (s<\tau _{\partial })} =e^{\lambda _0 s}\sup _{x\in E}\mathbb {P}_x(s<\tau _{\partial }). \end{aligned}$$

Our goal is to prove that \(A\) is bounded. The Markov property implies, for \(s\le t\),

$$\begin{aligned} \mathbb {P}_x(t<\tau _{\partial })=\mathbb {P}_x(s<\tau _{\partial })\,\mathbb {E}_x\left( \mathbb {P}_{X_s}(t-s< \tau _{\partial })\mid s<\tau _{\partial }\right) \!\!. \end{aligned}$$

By (2.2), the total variation distance between \(\alpha \) and \(\mathcal{L}_x(X_s\mid s<\tau _{\partial })\) is at most \(\varepsilon (s)\), so

$$\begin{aligned} \mathbb {P}_x(t<\tau _{\partial })\le \mathbb {P}_x(s<\tau _{\partial })\,\left( \mathbb {P}_{\alpha }(t-s<\tau _{\partial }) +\varepsilon (s)\sup _{y\in E}\mathbb {P}_y(t-s<\tau _\partial )\right) \!\!. \end{aligned}$$

For \(s\le t\), we thus have

$$\begin{aligned} A(t)\le A(s)\left( 1+\varepsilon (s)A(t-s)\right) \!\!. \end{aligned}$$
(5.5)

The next lemma shows that \(A\) is bounded, which proves that

$$\begin{aligned} \mathbb {P}_{\alpha }(t<\tau _\partial )=e^{-\lambda _0 t}\ge c_2(\alpha )\sup _{x\in E}\mathbb {P}_x(t<\tau _\partial ) \end{aligned}$$
(5.6)

for the constant \(c_2(\alpha )=1/\sup _{s>0} A(s)\) and concludes Step 2.

Lemma 5.1

A function \(A:\mathbb {R}_+\rightarrow \mathbb {R}_+\) satisfying (5.5) for all \(s\le t\) is bounded.

Proof

We introduce the non-decreasing function \( \psi (t)=\sup _{0\le s\le t} A(s).\) It follows from (5.5) that, for all \(s\le u\le t\),

$$\begin{aligned} A(u)\le \psi (s)\left( 1+\varepsilon (s)\psi (t-s)\right) \!\!. \end{aligned}$$

Since this inequality holds also for \(u\le s\), we obtain for all \(s\le t\),

$$\begin{aligned} \psi (t)\le \psi (s)\left( 1+\varepsilon (s)\psi (t-s)\right) \!\!. \end{aligned}$$
(5.7)

By induction, for all \(N\ge 1\) and \(s\ge 0\),

$$\begin{aligned} \psi (Ns)&\le \psi (s)\prod _{k=1}^{N-1}\left( 1+\varepsilon (ks)\psi (s)\right) \nonumber \\&\le \psi (s)\exp \left( \psi (s)\sum _{k=1}^{N-1} \varepsilon (ks)\right) \nonumber \\&\le \psi (s)\exp \left( \psi (s)\sum _{k=1}^{\infty } \varepsilon (ks)\right) \!\!, \end{aligned}$$
(5.8)

where \(\sum _{k=1}^{\infty } \varepsilon (ks)<\infty \) by (2.2).

Since the right-hand side of (5.8) is finite for each fixed \(s>0\) and \(\psi \) is non-decreasing, \(\psi \), and hence \(A\), is bounded. \(\square \)

Remark 6

Note that, under Assumption (v), \(\varepsilon (t)\le Ce^{-\gamma t}\). Using the fact that \(\psi (s)\le e^{\lambda _0 s}\), we deduce from (5.8) that, for all \(s>0\),

$$\begin{aligned} A(Ns)\le \exp \left( \lambda _0s+\frac{Ce^{(\lambda _0-\gamma )s}}{1-e^{-\gamma s}} \right) \!\!. \end{aligned}$$

This justifies (2.5) in Remark 1.

Step 3: Proof of \((\hbox {A}2'')\)

Applying Step 3 of Sect. 5.1 with \(\mu _1=\rho \) and with \(\delta _x\) replaced by \(\alpha \), we easily obtain

$$\begin{aligned} \varepsilon (t)=\sup _{\rho \in \mathcal{M}_1(E)}\left\| \mathbb {P}_{\rho }(X_t\in \cdot \mid t<\tau _\partial )-\alpha \right\| _{TV}. \end{aligned}$$

Let \(\mu \) be a probability measure on \(E\). For any \(s\ge 0\), let us define \(\mu _s(\cdot )=\mathbb {P}_{\mu }(X_s\in \cdot \mid s<\tau _\partial )\). We have \(\Vert \mu _s-\alpha \Vert _{TV}\le \varepsilon (s)\) and so

$$\begin{aligned} \mathbb {P}_{\mu _s}(t-s<\tau _\partial )&\ge \mathbb {P}_{\alpha }(t-s<\tau _\partial )-\varepsilon (s)\sup _{x\in E}\mathbb {P}_{x}(t-s<\tau _\partial )\\&\ge e^{-\lambda _0(t-s)}-\frac{\varepsilon (s)}{c_2(\alpha )}e^{-\lambda _0(t-s)} \end{aligned}$$

by (5.6). Since \(\varepsilon (s)\) decreases to \(0\), there exists \(s_0\) such that \(\varepsilon (s_0)/c_2(\alpha )\le 1/2\). Using the Markov property, we deduce that, for any \(t\ge s_0\),

$$\begin{aligned} \mathbb {P}_{\mu }(t<\tau _\partial )&=\mathbb {P}_{\mu _{s_0}}(t-s_0<\tau _\partial )\mathbb {P}_{\mu }(s_0<\tau _\partial )\\&\ge \frac{e^{-\lambda _0 (t-s_0)}}{2} \mathbb {P}_{\mu }(s_0<\tau _\partial ). \end{aligned}$$

Therefore, by (5.6), we have proved that

$$\begin{aligned} \mathbb {P}_{\mu }(t<\tau _\partial )\ge c_2(\mu )\sup _{x\in E}\mathbb {P}_x(t<\tau _\partial ) \end{aligned}$$

for

$$\begin{aligned} c_2(\mu )=\frac{1}{2}c_2(\alpha )e^{\lambda _0 s_0}\mathbb {P}_\mu (s_0<\tau _\partial )>0. \end{aligned}$$

Step 4: Construction of the measure \(\nu \) in (A1)

We define the measure \(\nu \) as the infimum of the family of measures \((\delta _x R_{0,2t}^{2t})_{x\in E}\), as defined in the next lemma, for a fixed \(t\) such that \(c_2(\alpha )>2\varepsilon (t)\). To prove (A1) for \(t_0=2t\), we only need to check that \(\nu \) is a positive measure.

Lemma 5.2

Let \((\mu _x)_{x\in F}\) be a family of positive measures on \(E\) indexed by an arbitrary set \(F\). For all \(A\in \mathcal{E}\), we define

$$\begin{aligned} \mu (A)\!=\!\inf \left\{ \sum _{i=1}^n \mu _{x_i}(B_i) \mid n\!\ge \! 1,\,x_1,\ldots ,x_n\in F,\, B_1,\ldots ,B_n\in \mathcal{E}\text { partition of }A\right\} . \end{aligned}$$

Then \(\mu \) is the largest non-negative measure on \(E\) such that \(\mu \le \mu _x\) for all \(x\in F\) and is called the infimum measure of \((\mu _x)_{x\in F}\).

Proof of Lemma 5.2

Clearly \(\mu (A)\ge 0\) for all \(A\in \mathcal{E}\) and \(\mu (\emptyset )=0\). Let us prove the \(\sigma \)-additivity. Consider disjoint measurable sets \(A_1,A_2,\ldots \) and define \(A=\cup _{k} A_k\). Let \(B_1,\ldots ,B_n\) be a measurable partition of \(A\). Then

$$\begin{aligned} \sum _{i=1}^n \mu _{x_i}(B_i)=\sum _{k=1}^\infty \sum _{i=1}^n \mu _{x_i}(B_i\cap A_k)\ge \sum _{k=1}^\infty \mu (A_k). \end{aligned}$$

Hence \(\mu (A)\ge \sum _{k=1}^\infty \mu (A_k)\).

Fix \(K\ge 1\) and \(\epsilon >0\). For all \(k\in \{1,\ldots ,K\}\), let \((B_i^k)_{i\in \{1,\ldots ,n_k\}}\) be a partition of \(A_k\) and \((x^k_i)_{i\in \{1,\ldots ,n_k\}}\) be such that

$$\begin{aligned} \mu (A_k)\ge \sum _{i=1}^{n_k} \mu _{x_i^k}(B_i^k)-\frac{\epsilon }{2^k}. \end{aligned}$$

Then

$$\begin{aligned} \sum _{k=1}^K\mu (A_k)\ge \sum _{k=1}^K \left( \sum _{i=1}^{n_k} \mu _{x_i^k}(B_i^k)-\frac{\epsilon }{2^k}\right) \ge \mu \left( \bigcup _{k=1}^K A_k\right) -\epsilon . \end{aligned}$$

Since, for any \(x_0\in F\),

$$\begin{aligned} \mu (A)\le \mu \left( \bigcup _{k=1}^K A_k\right) +\mu _{x_0}\left( A{\setminus }\bigcup _{k=1}^K A_k\right) , \end{aligned}$$

choosing \(K\) large enough (\(\mu _{x_0}\) being \(\sigma \)-additive), we obtain \(\mu (A)\le \mu \left( \cup _{k=1}^K A_k\right) +\epsilon \). Combining this with the previous inequality, we obtain

$$\begin{aligned} \mu (A)\ge \sum _{k=1}^\infty \mu (A_k)\ge \mu (A)-2\epsilon . \end{aligned}$$

This concludes the proof that \(\mu \) is a non-negative measure.

Let us now prove that \(\mu \) is the largest non-negative measure on \(E\) such that \(\mu \le \mu _x\) for all \(x\in F\). Let \(\hat{\mu }\) be another non-negative measure such that \(\hat{\mu }\le \mu _x\) for all \(x\in F\). Then, for all \(A\in \mathcal{E}\), all measurable partitions \(B_1,\ldots ,B_n\) of \(A\) and all \(x_1,\ldots ,x_n\in F\),

$$\begin{aligned} \hat{\mu }(A)=\sum _{i=1}^n\hat{\mu }(B_i)\le \sum _{i=1}^n \mu _{x_i}(B_i). \end{aligned}$$

Taking the infimum over \((B_i)\) and \((x_i)\) implies that \(\hat{\mu }\le \mu \). \(\square \)
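Before returning to the main proof, note that on a finite state space the infimum in Lemma 5.2 is attained by the partition into singletons, so the infimum measure is simply the pointwise minimum of the atom weights. A minimal sketch with hypothetical weights:

```python
import numpy as np

# Three measures mu_{x_1}, mu_{x_2}, mu_{x_3} on a three-point space,
# given by their atom weights (arbitrary illustrative numbers).
mus = np.array([[0.5, 0.3, 0.2],
                [0.4, 0.4, 0.2],
                [0.6, 0.1, 0.3]])
inf_measure = mus.min(axis=0)         # largest measure dominated by every mu_x
print(inf_measure, inf_measure.sum()) # [0.4 0.1 0.2], total mass 0.7
```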

Let us now prove that \(\nu \) is a positive measure. By (5.4), for any \(x\in E, t\ge 0\) and \(A\subset E\) measurable,

$$\begin{aligned} \delta _x R^{2t}_{0,2t}(A)&=\mathbb {P}_{\delta _x R^t_{0,t}}\left( X_t\in A|t<\tau _{\partial }\right) \\&=\frac{\int _E \mathbb {P}_y(X_t\in A)\delta _x R^t_{0,t}(dy)}{\int _E \mathbb {P}_y(t<\tau _\partial )\delta _x R^t_{0,t}(dy)}. \end{aligned}$$

By Step 2, we have

$$\begin{aligned} \delta _x R^{2t}_{0,2t}(A)&\ge c_2(\alpha )e^{\lambda _0 t}\int _E \mathbb {P}_y(X_t\in A)\delta _x R^t_{0,t}(dy). \end{aligned}$$

We set \(\nu _{t,x}^+=(\alpha -\delta _x R_{0,t}^t)_+\). Using the inequality (between measures) \(\delta _x R_{0,t}^t\ge \alpha -\nu _{t,x}^+\),

$$\begin{aligned} \delta _x R^{2t}_{0,2t}(A)&\ge c_2(\alpha )\alpha (A)- c_2(\alpha )e^{\lambda _0 t}\int _E \mathbb {P}_y(X_t\in A\mid t<\tau _\partial )\mathbb {P}_y(t<\tau _\partial )\nu _{t,x}^+(dy). \end{aligned}$$

Since \(\nu _{t,x}^+\) is a positive measure, Step 2 implies again

$$\begin{aligned} \delta _x R^{2t}_{0,2t}(A)&\ge c_2(\alpha )\alpha (A)- \int _E \mathbb {P}_y(X_t\in A\mid t<\tau _\partial )\nu _{t,x}^+(dy)\\&= c_2(\alpha )\alpha (A)- \int _E \delta _y R^t_{0,t}(A)\nu _{t,x}^+(dy)\\&\ge \left( c_2(\alpha )-\nu _{t,x}^+(E)\right) \alpha (A)-\int _E \left( \delta _y R^t_{0,t}(A)-\alpha (A)\right) \nu _{t,x}^+(dy)\\&\ge \left( c_2(\alpha )-\varepsilon (t)\right) \alpha (A)-\int _E \left( \delta _y R^t_{0,t}(A)-\alpha (A)\right) _+\nu _{t,x}^+(dy), \end{aligned}$$

where the inequality \(\nu _{t,x}^+(E)\le \varepsilon (t)\) follows from (2.2). Moreover \(\nu _{t,x}^+=(\alpha -\delta _x R_{0,t}^t)_+\le \alpha \), therefore

$$\begin{aligned} \delta _x R^{2t}_{0,2t}(A)&\ge \left( c_2(\alpha )-\varepsilon (t)\right) \alpha (A)-\int _E \left( \delta _y R^t_{0,t}(A)-\alpha (A)\right) _+\alpha (dy). \end{aligned}$$

Hence, for all \(B_1,\ldots ,B_n\) a measurable partition of \(E\) and all \(x_1,\ldots ,x_n\in E\),

$$\begin{aligned} \sum _{i=1}^n \delta _{x_i} R_{0,2t}^{2t}(B_i)\ge \left( c_2(\alpha )-\varepsilon (t)\right) \alpha (E)-\int _E \sum _{i=1}^n\left( \delta _y R^t_{0,t}(B_i)-\alpha (B_i)\right) _+\alpha (dy). \end{aligned}$$

Now

$$\begin{aligned} \sum _{i=1}^n\left( \delta _y R^t_{0,t}(B_i)-\alpha (B_i)\right) _+ \le \left\| \delta _y R^t_{0,t}-\alpha \right\| _{TV} \le \varepsilon (t). \end{aligned}$$

Therefore \(\nu (E)\ge c_2(\alpha )-2\varepsilon (t)>0\).

This concludes the proof of Theorem 2.1.

5.3 Proof of Corollary 2.2

Assume that the hypotheses of Theorem 2.1 are satisfied and let \(\mu _1,\mu _2\) be two probability measures and \(t\ge 0\). Without loss of generality, we assume that \(\mathbb {P}_{\mu _1}(t<\tau _\partial )\ge \mathbb {P}_{\mu _2}(t<\tau _\partial )\) and prove that

$$\begin{aligned} \left\| \mathbb {P}_{\mu _1}\left( X_t\in \cdot \mid t<\tau _\partial \right) -\mathbb {P}_{\mu _2}\left( X_t\in \cdot \mid t<\tau _\partial \right) \right\| _{TV}\le \frac{(1-c_1c_2)^{\lfloor t/t_0\rfloor }}{c_2(\mu _1)}\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

Using the relation

$$\begin{aligned} \mu _1 P_t=(\mu _1-(\mu _1-\mu _2)_+)P_t+(\mu _1-\mu _2)_+P_t, \end{aligned}$$

the similar one for \(\mu _2 P_t\) and \(\mu _1-(\mu _1-\mu _2)_+=\mu _2-(\mu _2-\mu _1)_+\), we can write

$$\begin{aligned} \frac{\mu _1 P_t}{\mu _1 P_t1\!\!1_E}-\frac{\mu _2 P_t}{\mu _2 P_t1\!\!1_E}=\alpha _1 P_t-\alpha _2 P_t, \end{aligned}$$
(5.9)

where \(\alpha _1\) and \(\alpha _2\) are the positive measures defined by

$$\begin{aligned} \alpha _1=\frac{(\mu _1-\mu _2)_+}{\mu _1 P_t1\!\!1_E} \end{aligned}$$

and

$$\begin{aligned} \alpha _2=\frac{(\mu _2-\mu _1)_+}{\mu _2 P_t1\!\!1_E}+\left( \frac{1}{\mu _2 P_t1\!\!1_E}-\frac{1}{\mu _1 P_t1\!\!1_E}\right) \times \left( \mu _1-(\mu _1-\mu _2)_+\right) \!\!. \end{aligned}$$

We immediately deduce from (5.9) that \(\alpha _1P_t1\!\!1_E=\alpha _2P_t1\!\!1_E\), so that

$$\begin{aligned} \Vert \alpha _1 P_t-\alpha _2 P_t\Vert _{TV}&=\alpha _1 P_t1\!\!1_E\left\| \frac{\alpha _1}{\alpha _1P_t1\!\!1_E}P_t -\frac{\alpha _2}{\alpha _2P_t1\!\!1_E}P_t\right\| _{TV}\\&\le 2(1-c_1c_2)^{\lfloor t/t_0\rfloor }\alpha _1P_t1\!\!1_E. \end{aligned}$$

Since

$$\begin{aligned} \alpha _1 P_t1\!\!1_E&=\frac{(\mu _1-\mu _2)_+P_t1\!\!1_E}{\mu _1P_t1\!\!1_E}\\&\le (\mu _1-\mu _2)_+(E)\frac{\sup _{\rho \in \mathcal{M}_1(E)}\rho P_t1\!\!1_E}{\mu _1P_t1\!\!1_E}\le \frac{\Vert \mu _1-\mu _2\Vert _{TV}}{2c_2(\mu _1)} \end{aligned}$$

by definition of \(c_2(\mu _1)\), the proof of Corollary 2.2 is complete.

5.4 Proof of Proposition 2.3

Step 1: Existence of \(\eta \).

For all \(x\in E\) and \(t\ge 0\), we set

$$\begin{aligned} \eta _t(x)=\frac{\mathbb {P}_x(t<\tau _\partial )}{\mathbb {P}_\alpha (t<\tau _\partial )}=e^{\lambda _0 t}\mathbb {P}_x(t<\tau _\partial ). \end{aligned}$$

By the Markov property

$$\begin{aligned} \eta _{t+s}(x)&=e^{\lambda _0 (t+s)}\mathbb {E}_x\left( 1\!\!1_{t<\tau _\partial }\mathbb {P}_{X_t}(s<\tau _\partial )\right) \\&=\eta _t(x)\mathbb {E}_x\left( \eta _s(X_t)\mid t<\tau _\partial \right) \!\!. \end{aligned}$$

By (\(\hbox {A}2''\)), \(\int _E\eta _s(y)\rho (dy)\) is uniformly bounded by \(1/c_2(\alpha )\) in \(s\) and in \(\rho \in \mathcal{M}_1(E)\). Therefore, by (2.1),

$$\begin{aligned} \left| \mathbb {E}_x\left( \eta _s(X_t)\mid t<\tau _\partial \right) - \alpha (\eta _s)\right| \le \frac{C}{c_2(\alpha )}e^{-\gamma t}. \end{aligned}$$

Since \(\alpha (\eta _s)=1\), we obtain

$$\begin{aligned} \sup _{x\in E}\left| \eta _{t+s}(x)-\eta _t(x)\right| \le \frac{C}{c_2(\alpha )^2}e^{-\gamma t}. \end{aligned}$$

This implies that \((\eta _t)_{t\ge 0}\) is a Cauchy family for the uniform norm and hence converges uniformly to a bounded limit \(\eta \). By Lebesgue’s theorem, \(\alpha (\eta )=1\).

It only remains to prove that \(\eta (x)>0\) for all \(x\in E\). This is an immediate consequence of (\(\hbox {A}2''\)).
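In the toy finite-state setting used in the sketches of Sect. 5.1, \(\eta \) is computable in closed form: for a sub-Markovian matrix \(Q\) with Perron eigenvalue \(\theta =e^{-\lambda _0}\), \(\eta \) is the right Perron eigenvector normalized so that \(\alpha (\eta )=1\), and \(\theta ^{-n}\mathbb {P}_x(n<\tau _\partial )\rightarrow \eta (x)\). A sketch with the same hypothetical kernel:

```python
import numpy as np

Q = np.array([[0.6, 0.2],
              [0.3, 0.4]])
vals, right = np.linalg.eig(Q)                 # right eigenvectors of Q
i = np.argmax(vals.real)
theta = vals.real[i]                           # Perron eigenvalue exp(-lambda_0)
eta = np.abs(right[:, i].real)
lv, lvec = np.linalg.eig(Q.T)                  # left eigenvectors of Q
alpha = np.abs(lvec[:, np.argmax(lv.real)].real)
alpha /= alpha.sum()                           # the QSD
eta /= alpha @ eta                             # normalize so that alpha(eta) = 1
n = 40
approx = theta ** (-n) * np.linalg.matrix_power(Q, n).sum(axis=1)
print(approx, eta)                             # the two vectors agree
```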

Step 2: Eigenfunction of the infinitesimal generator.

We prove now that \(\eta \) belongs to the domain of the infinitesimal generator \(\mathcal L\) of the semi-group \((P_t)_{t\ge 0}\) and that

$$\begin{aligned} \mathcal{L}\eta =-\lambda _0\eta . \end{aligned}$$
(5.10)

For any \(h>0\), we have by the dominated convergence theorem and Step 1,

$$\begin{aligned} P_h\eta (x)&=\mathbb {E}_x\left( \eta (X_h)\right) =\lim _{t\rightarrow \infty }\frac{\mathbb {E}_x\left( \mathbb {P}_{X_h}(t<\tau _\partial ) \right) }{\mathbb {P}_{\alpha }(t<\tau _\partial )}. \end{aligned}$$

We have \(\mathbb {P}_{\alpha }(t<\tau _\partial )=e^{\lambda _0 h}\mathbb {P}_{\alpha }(t+h<\tau _\partial )\). Hence, by the Markov property,

$$\begin{aligned} P_h\eta (x)&= \lim _{t\rightarrow \infty }e^{-\lambda _0 h}\frac{\mathbb {P}_{x}(t+h<\tau _\partial )}{\mathbb {P}_{\alpha }(t+h<\tau _\partial )}\\&= e^{-\lambda _0 h} \eta (x). \end{aligned}$$

Since \(\eta \) is uniformly bounded, it is immediate that

$$\begin{aligned} \frac{P_h\eta -\eta }{h}\xrightarrow [h\rightarrow 0]{\Vert \cdot \Vert _{\infty }}-\lambda _0 \eta . \end{aligned}$$

By definition of the infinitesimal generator, this implies that \(\eta \) belongs to the domain of \(\mathcal L\) and that (5.10) holds.

5.5 Proof of Corollary 2.4

Since \(Lf=\lambda f\), we have \(\mathbb {E}_x(f(X_t))=P_t f(x)=e^{\lambda t} f(x)\). When \(f(\partial )\ne 0\), taking \(x=\partial \), we see that \(\lambda =0\) and, taking \(x\not = \partial \), the left hand side converges to \(f(\partial )\) and thus \(f\) is constant. So let us assume that \(f(\partial )=0\). By property (v),

$$\begin{aligned} \frac{P_t f(x)}{P_t1\!\!1_E(x)}-\alpha (f)\xrightarrow [t\rightarrow +\infty ]{} 0 \end{aligned}$$

uniformly in \(x\in E\) and exponentially fast. Assume first that \(\alpha (f)\ne 0\), then by Proposition 2.3,

$$\begin{aligned} \frac{e^{(\lambda +\lambda _0)t}f(x)}{\eta (x)}\xrightarrow [t\rightarrow +\infty ]{} \alpha (f),\ \forall x\in E. \end{aligned}$$

We deduce that \(\lambda =-\lambda _0\) and \(f(x)=\alpha (f)\eta (x)\) for all \(x\in E\cup \{\partial \}\). Assume finally that \(\alpha (f)=0\), then, using (2.4) to give a lower bound for \(1/P_t1\!\!1_E(x)\), we deduce that

$$\begin{aligned} c_2(\alpha ) e^{(\gamma +\lambda +\lambda _0)t}f(x)\le \frac{e^{\gamma t}P_tf(x)}{P_t1\!\!1_E(x)},\ \forall x\in E, \end{aligned}$$

where the right hand side is bounded by property (v) of Theorem 2.1. Thus \(\gamma +\lambda +\lambda _0\le 0\).

6 \(Q\)-process: proofs of the results of Sect. 3

We first prove Theorem 3.1 in Sect. 6.1 and then Theorem 3.2 in Sect. 6.2.

6.1 Proof of Theorem 3.1

Step 1: Existence of the \(Q\)-process \(\mathbb {Q}_x\) and expression of its transition kernel.

We introduce \(\Gamma _t=1\!\!1_{t<\tau _\partial }\) and define the probability measure

$$\begin{aligned} Q^{\Gamma ,x}_t=\frac{\Gamma _t}{\mathbb {E}_x\left( \Gamma _t\right) }\mathbb {P}_x, \end{aligned}$$

so that the \(Q\)-process exists if and only if \(Q_t^{\Gamma ,x}\) admits a proper limit when \(t\rightarrow \infty \). We have by the Markov property

$$\begin{aligned} \frac{\mathbb {E}_x\left( \Gamma _t\mid \mathcal{F}_s\right) }{\mathbb {E}_x\left( \Gamma _t\right) } =\frac{1\!\!1_{s<\tau _\partial }\mathbb {P}_{X_s}\left( t-s<\tau _\partial \right) }{\mathbb {P}_x\left( t<\tau _\partial \right) }. \end{aligned}$$

By Proposition 2.3, this is uniformly bounded and converges almost surely to

$$\begin{aligned} M_s:=1\!\!1_{s<\tau _\partial }e^{\lambda _0 s}\frac{\eta (X_s)}{\eta (x)}. \end{aligned}$$

By the dominated convergence theorem, we obtain that

$$\begin{aligned} \mathbb {E}_x\left( M_s\right) =1. \end{aligned}$$

By the penalisation theorem of Roynette, Vallois and Yor [32, Theorem 2.1], these two conditions imply that \(M\) is a martingale under \(\mathbb {P}_x\) and that \(Q_t^{\Gamma ,x}(\Lambda _s)\) converges to \(\mathbb {E}_x\left( M_s1\!\!1_{\Lambda _s}\right) \) for all \(\Lambda _s\in \mathcal{F}_s\) when \(t\rightarrow \infty \). This means that \(\mathbb {Q}_x\) is well defined and

$$\begin{aligned} \left. {\frac{d\mathbb {Q}_x}{d\mathbb {P}_x}}\right| _{\mathcal{F}_s}=M_s. \end{aligned}$$

In particular, the transition kernel of the \(Q\)-process is given by

$$\begin{aligned} \tilde{p}(x;t,dy)=e^{\lambda _0 t}\frac{\eta (y)}{\eta (x)}p(x;t,dy). \end{aligned}$$
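In the discrete setting, \(\tilde{p}\) is the classical Doob \(h\)-transform of the sub-Markovian kernel by \(\eta \): \(\tilde{Q}(x,y)=\eta (y)Q(x,y)/(\theta \eta (x))\), which is a bona fide stochastic matrix. A sketch with the hypothetical kernel used above (the normalization of \(\eta \) is irrelevant here):

```python
import numpy as np

Q = np.array([[0.6, 0.2],
              [0.3, 0.4]])
vals, right = np.linalg.eig(Q)
i = np.argmax(vals.real)
theta, eta = vals.real[i], np.abs(right[:, i].real)
# Doob h-transform: discrete analogue of the kernel p~(x; t, dy)
Q_tilde = Q * eta[None, :] / (theta * eta[:, None])
print(Q_tilde)
print(Q_tilde.sum(axis=1))   # each row sums to 1: Q~ is Markovian
```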

Let us now prove that \((\mathbb {Q}_x)_{x\in E}\) defines a Markov process, that is

$$\begin{aligned} \mathbb {E}_{\mathbb {Q}_x}\left( f(X_t)\mid \mathcal{F}_s\right)&=\mathbb {E}_{\mathbb {Q}_x}\left( f(X_t)\mid X_s\right) \!\!, \end{aligned}$$

for all \(s<t\) and all \(f\in \mathcal{B}(E)\). One easily checks from the definition of the conditional expectation that

$$\begin{aligned} M_s\mathbb {E}_{\mathbb {Q}_x}\left( f(X_t)\mid \mathcal{F}_s\right)&=\mathbb {E}_{x}\left( M_t f(X_t)\mid \mathcal{F}_s\right) \!\!. \end{aligned}$$

By definition of \(M_t\) and by the Markov property of \(\mathbb {P}_x\), we deduce that

$$\begin{aligned} M_s \mathbb {E}_{\mathbb {Q}_x}\left( f(X_t)\mid \mathcal{F}_s\right) =\mathbb {E}_x(M_t f(X_t)\mid {X}_s). \end{aligned}$$

Since \(\mathbb {E}_{x}\left( M_t f(X_t)\mid X_s\right) =M_s \mathbb {E}_{\mathbb {Q}_x}\left( f(X_t)\mid X_s\right) \), this implies the Markov property for the \(Q\)-process. The same proof gives the strong Markov property of \(X\) under \((\mathbb {Q}_x)_{x\in E}\) provided that \(X\) is strong Markov under \((\mathbb {P}_x)_{x\in E}\).

This concludes the proof of parts (i) and (ii) of Theorem 3.1.

Step 2: Exponential ergodicity of the \(Q\)-process.

Let us first check that \(\beta \) is invariant for \(X\) under \(\mathbb {Q}\). By (2.6) and (3.1), we have for all \(t\ge 0\) and \(\varphi \in \mathcal{B}(E)\)

$$\begin{aligned} \alpha (\eta \tilde{P}_t\varphi )&=e^{\lambda _0 t}\alpha (P_t(\eta \varphi ))=\alpha (\eta \varphi ). \end{aligned}$$

Since \(\eta \) is bounded, \(\beta \) is well defined and is an invariant distribution for \(X\) under \(\mathbb {Q}\).

By (5.1) and the semi-group property of \((R_{s,t}^T)_{s,t}\), we have for all \(\mu _1,\mu _2\in \mathcal{M}_1(E)\) and all \(t\le T\)

$$\begin{aligned} \left\| \mu _1 R^T_{0,t}-\mu _2 R^T_{0,t} \right\| _{TV}\le (1-c_1c_2)^{\lfloor t/t_0\rfloor }\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

By definition of \(R^T_{0,t}\) and by dominated convergence when \(T\rightarrow \infty \), we obtain

$$\begin{aligned} \left\| \mathbb {Q}_{\mu _1}(X_t\in \cdot )-\mathbb {Q}_{\mu _2}(X_t\in \cdot )\right\| _{TV}\le (1-c_1c_2)^{\lfloor t/t_0\rfloor }\Vert \mu _1-\mu _2\Vert _{TV}. \end{aligned}$$

Taking \(\mu _2=\beta \), this implies that \(X\) is exponentially ergodic under \(\mathbb {Q}\) with unique invariant distribution \(\beta \).
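Numerically, the invariance of \(\beta \) is immediate to check in the toy example: with \(\alpha (\eta )=1\), the vector \(\beta =\eta \alpha \) (componentwise product) satisfies \(\beta \tilde{Q}=\beta \). A self-contained sketch, same hypothetical kernel:

```python
import numpy as np

Q = np.array([[0.6, 0.2],
              [0.3, 0.4]])
vals, right = np.linalg.eig(Q); i = np.argmax(vals.real)
theta, eta = vals.real[i], np.abs(right[:, i].real)
lv, lvec = np.linalg.eig(Q.T)
alpha = np.abs(lvec[:, np.argmax(lv.real)].real); alpha /= alpha.sum()
eta /= alpha @ eta                               # normalize: alpha(eta) = 1
Q_tilde = Q * eta[None, :] / (theta * eta[:, None])
beta = alpha * eta                               # beta(dy) = eta(y) alpha(dy)
print(beta @ Q_tilde, beta)                      # equal rows: beta Q~ = beta
```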

6.2 Proof of Theorem 3.2

Step 1: Computation of \(\tilde{L}^w\) and a first inclusion in (3.2).

Let us first show that

$$\begin{aligned} \mathcal{D}(\tilde{L}^w)\supset \left\{ f\in \mathcal{B}(E),\;\eta f\in \mathcal{D}(L^w)\text { and }\frac{L^w(\eta f)}{\eta }\text { is bounded}\right\} \!. \end{aligned}$$

Let \(f\) belong to the set in the r.h.s. of the last equation. We have

$$\begin{aligned} \frac{\tilde{P}_h f-f}{h}&=\frac{e^{\lambda _0 h}}{\eta } \frac{P_h(\eta f)-\eta f}{h}+f\frac{e^{\lambda _0 h}-1}{h}. \end{aligned}$$
(6.1)

So, obviously, \(\frac{\tilde{P}_h f-f}{h}\) converges pointwise to \(\tilde{L}^w f\). By [19, I.1.15.C],

$$\begin{aligned} P_t (\eta f)-\eta f=\int _0^t P_s L^w (\eta f) ds. \end{aligned}$$

Since \(|L^w(\eta f)|\le C\eta \) for some constant \(C>0\) and \(P_s \eta =e^{-\lambda _0 s}\eta \), we obtain

$$\begin{aligned} \left| P_t (\eta f)-\eta f\right| \le C \frac{1-e^{-\lambda _0 t}}{\lambda _0}\eta . \end{aligned}$$

Therefore the r.h.s. of (6.1) is uniformly bounded. Finally, \(\tilde{L}^w f\) is the b.p.–limit of \(\frac{\tilde{P}_h f-f}{h}\).

It only remains to check that b.p.-\(\lim \tilde{P}_h\tilde{L}^w f=\tilde{L}^wf\). Since \(\tilde{L}^wf\) is bounded, we only have to prove the pointwise convergence. We have

$$\begin{aligned} \tilde{P}_h\tilde{L}^w f&=\lambda _0 \tilde{P}_h f+\tilde{P}_h\frac{L^w(\eta f)}{\eta }= \lambda _0 \tilde{P}_h f+\frac{e^{\lambda _0 h}}{\eta }P_h L^w(\eta f) \end{aligned}$$

where the last equality comes from (3.1). Since \(\eta f\in \mathcal{D}(L^w)\), the pointwise convergence is clear.

Step 2: The second inclusion in (3.2).

Let \(f\in \mathcal{D}(\tilde{L}^w)\). We have b.p.–convergence in (6.1) and, since \(\eta \) and \(f\) are bounded, \(\frac{P_h (\eta f)-\eta f}{h}\) b.p.–converges to some limit \(g\in \mathcal{B}(E\cup \{\partial \})\) (recall that by convention \(\eta f(\partial )=0\)) such that

$$\begin{aligned} g=\eta \tilde{L}^w f-\lambda _0 \eta f. \end{aligned}$$

Let us check that b.p.-\(\lim P_h g=g\). Since \(g\) is bounded, we only have to prove the pointwise convergence. We have

$$\begin{aligned} P_h g&=P_h(\eta \tilde{L}^w f)-\lambda _0 P_h(\eta f)=\eta e^{-\lambda _0 h} \tilde{P}_h \tilde{L}^w f-\lambda _0 P_h(\eta f), \end{aligned}$$

by (3.1). Since \(f\in \mathcal{D}(\tilde{L}^w)\), the first term converges pointwise to \(\eta \tilde{L}^w f\). The second term converges to \(-\lambda _0 \eta f\) since we have proved the b.p.–convergence of \(\frac{P_h (\eta f)-\eta f}{h}\). We deduce that

$$\begin{aligned} \text {b.p.-}\lim _{h\rightarrow 0} P_h(g)= \eta \tilde{L}^w f-\lambda _0 \eta f =g. \end{aligned}$$

Thus \(\eta f\in \mathcal{D}(L^w)\) and \(L^w(\eta f)=\eta \tilde{L}^w f-\lambda _0 \eta f\), so that \( |L^w(\eta f)|\le C\eta \) for some constant \(C>0\).
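In the finite-state case this identity has a transparent matrix form: writing \(L\) for the generator matrix, \(D_\eta =\operatorname {diag}(\eta )\) and \(\mathrm {Id}\) for the identity, the relation \(L^w(\eta f)=\eta \tilde{L}^w f-\lambda _0\eta f\) for all \(f\) reads

$$\begin{aligned} \tilde{L}=\lambda _0\,\mathrm {Id}+D_\eta ^{-1}L D_\eta , \end{aligned}$$

the familiar form of the generator of a Doob \(h\)-transform by the eigenfunction \(\eta \).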

Step 3: Characterization of \((\tilde{P}_t)\) by \(\tilde{L}^w\)

We assume that \(E\) is a topological space and that \(\mathcal E\) is the Borel \(\sigma \)-field. By [19, Theorem II.2.3], the result follows if \((\tilde{P}_t)\) is stochastically continuous, i.e., for every open set \(U\subset E\) and all \(x\in U\),

$$\begin{aligned} \tilde{P}_h1\!\!1_U(x)\xrightarrow [h\rightarrow 0]{}1. \end{aligned}$$

By (3.1), we have

$$\begin{aligned} \tilde{P}_t1\!\!1_{U^c}(x)&=\frac{e^{\lambda _0t}}{\eta (x)}P_t(\eta 1\!\!1_{U^c})(x)\le \frac{e^{\lambda _0t}\Vert \eta \Vert _{\infty }}{\eta (x)}P_t(1\!\!1_{U^c})(x). \end{aligned}$$

The result follows from assumption (3.3).