1 Introduction and Background

We shall mostly study the minimal number, \(N(t,\delta )\), of intervals of length at most \( \delta \) needed to cover the range \(\{ X_s : 0\le s \le t\}\) of a subordinator \((X_s)_{s\ge 0}\). The main result in this paper is a central limit theorem for \(N(t,\delta )\), complementing the almost sure convergence result \( \lim _{\delta \rightarrow 0}U(\delta )N(t,\delta )=t\) almost surely, where \(U(\delta )\) denotes the renewal function of the subordinator, see [24, Theorem 1.1].

Prior to the results in [24], most works on box-counting dimension focused only on finding the value of \( \lim _{\delta \rightarrow 0} \log (N(t,\delta )) / \log (1/\delta ) \), which defines the box-counting dimension. However, working with \(N(t,\delta )\) itself allows a precise understanding of its fluctuations around its mean, which is not possible at the log scale.

We will introduce an alternative “box-counting scheme” to \(N(t,\delta )\), which allows us to understand the dimension of the range in terms of the Lévy measure, complementing results formulated in terms of the renewal function.

The fractal dimensional study of sets such as the range or graph of Lévy processes, and especially subordinators, has a very rich history. There are many works which study the box-counting, Hausdorff, and packing dimensions of sets related to Lévy processes [4, 6, 9, 11,12,13, 16,17,18,19,20, 24,25,26].

A Lévy process is a stochastic process in \(\mathbb {R}^d\) which has stationary, independent increments, and starts at the origin. A subordinator \(X:=(X_t)_{t\ge 0}\) is a non-decreasing real-valued Lévy process. The Laplace exponent \(\Phi \) of a subordinator X is defined by the relation \( e^{-\Phi (\lambda )} = \mathbb {E}[ e^{- \lambda X_1} ] \) for \(\lambda \ge 0\). By the Lévy–Khintchine formula [1, p. 72], \(\Phi \) can always be expressed as

$$\begin{aligned} \Phi (\lambda ) = d \lambda + \int _0^\infty (1- e^{-\lambda x} )\, \Pi (\mathrm{{d}}x) , \end{aligned}$$
(1)

where \(d \ge 0\) is the linear drift, and \(\Pi \) is the Lévy measure, which determines the size and intensity of the jumps (discontinuities) of X, and moreover satisfies the integrability condition \(\int _0^\infty (1\wedge x)\Pi (\mathrm{{d}}x)<\infty \). The renewal function is the expected first passage time above \(\delta \), \( U(\delta ):=\mathbb {E}[T_\delta ]\), where \(T_\delta := \int _0^\infty \mathbbm {1}_{ \{ X_t \le \delta \} } \mathrm{{d}}t \); since X is non-decreasing, \(T_\delta \) is both the time spent at or below \(\delta \) and the first passage time above \(\delta \).
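To make these objects concrete, the following minimal Monte Carlo sketch (not part of the formal development) estimates \(U(\delta )=\mathbb {E}[T_\delta ]\) for the \(\alpha =1/2\) stable subordinator with \(\Phi (\lambda )=\sqrt{\lambda }\) and no drift, for which \(U(\delta )=\sqrt{\delta }/\Gamma (3/2)\) in closed form; the step size, horizon, and sample count are arbitrary illustrative choices.

```python
# Monte Carlo estimate of U(delta) = E[T_delta] for the 1/2-stable
# subordinator: over a time step h the increment X_{t+h} - X_t follows a
# Levy distribution with scale h**2/2, so that E[exp(-u X_t)] = exp(-t*sqrt(u)).
import numpy as np
from scipy.special import gamma
from scipy.stats import levy

rng = np.random.default_rng(1)

def passage_time(delta, h=1e-3, horizon=10.0):
    """Grid approximation of T_delta = inf{t : X_t > delta}."""
    path = np.cumsum(levy.rvs(scale=h**2 / 2, size=int(horizon / h), random_state=rng))
    above = np.nonzero(path > delta)[0]
    return (above[0] + 1) * h if above.size else horizon

delta = 0.1
estimate = np.mean([passage_time(delta) for _ in range(500)])
print(estimate, np.sqrt(delta) / gamma(1.5))  # Monte Carlo vs closed form
```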

If the Lévy measure is infinite, then arbitrarily small jumps occur at an infinite rate, almost surely. We will not study processes with finite Lévy measure, as they have only finitely many jumps on each compact time interval, and hence their range has no fractal structure.

The box-counting dimension of a set in \(\mathbb {R}^d\) is \( \lim _{\delta \rightarrow 0} \log (N(\delta )) / \log (1/\delta ) \), when this limit exists, where \(N(\delta )\) is the minimal number of d-dimensional boxes of side length \(\delta \) needed to cover the set. The limsup and liminf define the upper and lower box-counting dimensions, respectively. For further background reading, we refer to [1, 2] for subordinators, [7, 21, 23] for Lévy processes, and [9, 26] for fractals.

The paper is structured as follows: Sect. 2 outlines the statements of all of the main results; Sect. 3 contains the proof of the CLT result for \(N(t,\delta )\) and the lemmas required for this proof; Sect. 4 contains the proofs of all of the main results on the new process \(L(t,\delta )\); Sect. 5 extends this work to the graph of a subordinator, and considers the special case of a subordinator with regularly varying Laplace exponent.

2 Main Results

2.1 A Central Limit Theorem for \(N(t,\delta )\)

Expanding upon Bertoin’s result [2, Theorem 5.1], the following almost sure limiting behaviour of \(N(t,\delta )\) was determined by Savov [24, Theorem 1.1].

Theorem 2.1

(Savov [24, Theorem 1.1]) If a subordinator has infinite Lévy measure or a nonzero drift, then for all \(t>0\), \( \lim _{\delta \rightarrow 0+} U(\delta ) N(t,\delta ) = t \) almost surely.

We will complement and refine this work with a CLT on \(N(t,\delta )\). When the subordinator has no drift, we require a mild condition on the Lévy measure:

$$\begin{aligned} \liminf _{\delta \rightarrow 0} \frac{I(2\delta ) }{ I(\delta ) } >1, \end{aligned}$$
(2)

where \(I(u):=\int _0^u \overline{\Pi }(x)\mathrm{{d}}x\), and \(\overline{\Pi }(x):=\Pi ((x,\infty ))\).

Remark 2.2

Condition (2) has many equivalent formulations, see [1, Ex. III.7], and [3, Sect. 2.1]. We emphasise that (2) is far less restrictive than regular variation (or even \(\mathcal {O}\)-regular variation) of the Laplace exponent, and appears naturally in the context of the law of the iterated logarithm (see e.g. [1, p. 87]).
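For a concrete example: if \(\overline{\Pi }(x) = x^{-\alpha }\) for some \(\alpha \in (0,1)\), then \(I(\delta ) = \delta ^{1-\alpha }/(1-\alpha )\), so \(I(2\delta )/I(\delta ) = 2^{1-\alpha } > 1\) for every \(\delta \), and (2) holds.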

Theorem 2.3

For every driftless subordinator with Lévy measure satisfying (2), for any \(t>0\), \(N(t,\delta )\) satisfies the following central limit theorem:

$$\begin{aligned} \frac{ N(t,\delta ) - t a(\delta ) }{ t^\frac{1}{2} b(\delta ) } \overset{d}{\rightarrow }\mathcal {N}(0,1), \end{aligned}$$
(3)

as \( \delta \rightarrow 0\), where \(a(\delta ) := U(\delta )^{-1}\), and \(b(\delta ) := U(\delta )^{-\frac{3}{2}} Var (T_\delta )^\frac{1}{2} \).
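The following sketch gives a rough empirical illustration of (3), again for the \(\alpha =1/2\) stable subordinator (which satisfies (2), since \(I(2\delta )/I(\delta )=\sqrt{2}\)); the grid approximation of the range, the value of \(\delta \), and the sample count are illustrative choices, and standardising by the empirical mean and standard deviation checks normality of the fluctuations rather than the exact constants \(a(\delta )\), \(b(\delta )\).

```python
# Empirical check that the fluctuations of N(1, delta) look Gaussian for the
# 1/2-stable subordinator: simulate paths on a fine grid, count covering
# intervals greedily (optimal for a totally ordered set), and compare the
# standardised counts with the standard normal CDF at a few points.
import numpy as np
from scipy.stats import levy, norm

rng = np.random.default_rng(2)

def covering_count(delta, h=1e-3, t=1.0):
    """Greedy count of delta-intervals covering {X_s : 0 <= s <= t} (grid approx.)."""
    path = np.cumsum(levy.rvs(scale=h**2 / 2, size=int(t / h), random_state=rng))
    count, top = 1, delta          # first interval covers [0, delta]
    for x in path:
        if x > top:
            count, top = count + 1, x + delta
    return count

samples = np.array([covering_count(delta=0.005) for _ in range(2000)])
z = (samples - samples.mean()) / samples.std()
for q in (-1.0, 0.0, 1.0):         # empirical vs normal CDF
    print(q, round((z <= q).mean(), 3), round(norm.cdf(q), 3))
```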

2.2 An Alternative Box-Counting Scheme, \(L(t,\delta )\)

Definition 2.4

The process of \(\delta \)-shortened jumps, \(\tilde{X}^\delta :=(\tilde{X}_t^\delta )_{t\ge 0}\), is obtained by shortening all jumps of X of size larger than \(\delta \) to instead have size \(\delta \). That is, \(\tilde{X}^\delta \) is the subordinator with Laplace exponent \(\tilde{\Phi }^\delta (u)= d u + \int _0^\delta (1-e^{-ux})\tilde{\Pi }^\delta (\mathrm{{d}}x)\) and Lévy measure \(\tilde{\Pi }^\delta (\mathrm{{d}}x) = \Pi (\mathrm{{d}}x)\mathbbm {1}_{ \{ x<\delta \}} + \overline{\Pi }(\delta ) \Delta _{\delta }(\mathrm{{d}}x), \) where \( \Delta _\delta \) denotes a unit point mass at \(\delta \), and \(\Pi \) is the Lévy measure of X.

Definition 2.5

For \(\delta ,t>0\), \(L(t,\delta )\) is defined by \(L(t,\delta ) := \frac{1}{\delta } \tilde{X}_t^\delta \).
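To make Definitions 2.4 and 2.5 concrete, here is a direct transcription as a sketch; for an infinite Lévy measure the array of jump sizes must itself come from a finite approximation, such as the compound Poisson cutoff used in the snippet after Theorem 2.8 below.

```python
import numpy as np

def L_from_jumps(jumps, delta, drift=0.0, t=1.0):
    """L(t, delta) = X~_t^delta / delta: every jump of X larger than delta
    is shortened to delta, and the drift contributes drift * t."""
    return (drift * t + np.minimum(jumps, delta).sum()) / delta
```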

We will see in Theorem 2.7 that \(L(t,\delta )\) can replace \(N(t,\delta )\) in the definition \( \lim _{\delta \rightarrow 0} \log (N(t,\delta )) / \log (1/\delta ) \) of the box-counting dimension of the range of X. Now let us formally state the main results on \(L(t,\delta )\).

Remark 2.6

The log scale at which box-counting dimension is defined allows flexibility in which function is used in place of the optimal covering count. In particular, there is freedom between functions related by \(f \asymp g \) asymptotically, where the notation means that there exist positive constants \(A, B\) such that \(A f(x) \le g(x) \le B f(x)\) for all x. For more details, we refer to [9, p. 42].

Theorem 2.7

For all \(\delta ,t>0\), for every subordinator, \(N(t,\delta ) \asymp L(t,\delta ) \). In particular, by Remark 2.6, \(L(t,\delta )\) can be used to define the box-counting dimension of the range, i.e. \( \lim _{\delta \rightarrow 0} \log (N(t,\delta )) / \log (1/\delta ) = \lim _{\delta \rightarrow 0} \log (L(t,\delta )) / \log (1/\delta )\).

Theorem 2.8

For every subordinator with infinite Lévy measure, for all \( t >0\),

$$\begin{aligned} \lim _{\delta \rightarrow 0} \frac{ L(t,\delta )}{ \mu (\delta )} = t , \end{aligned}$$
(4)

almost surely, where \(\mu (\delta ) := \frac{1}{\delta } ( d + I(\delta ) ) \), and \(I(\delta )= \int _0^\delta \overline{\Pi }(y)\mathrm{{d}}y \).
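As a hedged numerical illustration of (4), consider again the driftless \(\alpha = 1/2\) stable subordinator, for which \(\overline{\Pi }(x) = (\pi x)^{-1/2}\) and hence \(\mu (\delta ) = 2/\sqrt{\pi \delta }\); jumps below a small cutoff \(\varepsilon \) are discarded, which biases L downwards only by a relative error of order \(\sqrt{\varepsilon /\delta }\).

```python
# One sample path: the jumps above eps of the 1/2-stable subordinator on [0, t]
# form a Poisson number of points with tail P(J > x | J > eps) = sqrt(eps/x).
import numpy as np

rng = np.random.default_rng(0)
eps, t = 1e-9, 1.0
n = rng.poisson(t / np.sqrt(np.pi * eps))   # t * Pi_bar(eps) jumps above eps
jumps = eps / rng.random(n) ** 2            # inverse-transform sampling

for delta in (1e-2, 1e-3, 1e-4):
    L = np.minimum(jumps, delta).sum() / delta
    # ratios fluctuate around t = 1, with smaller fluctuations as delta -> 0
    print(delta, L / (2 / np.sqrt(np.pi * delta)))
```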

Remark 2.9

It can be deduced from [2, Prop 1.4] that \(U(\delta )^{-1} \asymp \frac{1}{\delta } (d + I(\delta )),\) for any subordinator. Theorems 2.1, 2.7 and 2.8 allow us to understand this relationship in terms of geometric properties of subordinators.

Theorem 2.10

For every subordinator with infinite Lévy measure, for all \(t>0\),

$$\begin{aligned} \frac{L(t , \delta ) - t \mu (\delta )}{ t^{\frac{1}{2}} v(\delta ) } \overset{d}{\rightarrow }\mathcal {N}(0,1) \end{aligned}$$
(5)

as \( \delta \rightarrow 0\), where \(\mu (\delta ) = \frac{1}{\delta } ( d + I(\delta ) )\), and \(v(\delta ):= \frac{1}{\delta } \left[ \int _0^\infty (x \wedge \delta )^2 \Pi (\mathrm{{d}}x) \right] ^{\frac{1}{2}}.\)

Remark 2.11

Applying Definition 2.4, the Lévy–Khintchine formula (1), and the fact that for any integrable function f, \( \int _0^\delta f(x) \ \tilde{\Pi }^\delta (\mathrm{{d}}x) = \int _0^\infty f( x \wedge \delta ) \ \Pi (\mathrm{{d}}x)\), it follows that for all \(\delta ,t>0\), the mean and variance of \(L(t,\delta )\) are given by

$$\begin{aligned} \mathbb {E}[ L(t,\delta )] = t \mu (\delta ) , \quad \text {Var}( L(t,\delta ) ) = t v(\delta )^2 . \end{aligned}$$

Computing the moments of \(L(t,\delta )\) is remarkably simple in comparison with the moments of \(N(t,\delta )\), for which no explicit expressions are known. This is a key benefit of using \(L(t,\delta )\) to study the box-counting dimension of the range of a subordinator.
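For instance, for the driftless \(\alpha = 1/2\) stable subordinator (with \(\Phi (\lambda ) = \sqrt{\lambda }\), so that \(\Pi (\mathrm{{d}}x) = \frac{1}{2\sqrt{\pi }}x^{-3/2}\mathrm{{d}}x\) and \(\overline{\Pi }(x) = (\pi x)^{-1/2}\)), a short computation gives

$$\begin{aligned} \mathbb {E}[ L(t,\delta )] = \frac{t}{\delta } \int _0^\delta \frac{\mathrm{{d}}x}{\sqrt{\pi x}} = \frac{2t}{\sqrt{\pi \delta }} , \qquad \text {Var}( L(t,\delta ) ) = \frac{t}{\delta ^2} \int _0^\infty (x \wedge \delta )^2 \, \Pi (\mathrm{{d}}x) = \frac{4t}{3\sqrt{\pi \delta }} . \end{aligned}$$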

3 Proof of Theorem 2.3

3.1 A Sufficient Condition for Theorem 2.3

We will first work towards proving the following sufficient condition:

Lemma 3.1

For every subordinator with infinite Lévy measure, a sufficient condition for the convergence in distribution (3), with \(\sigma _\delta ^2:= Var (T_\delta )\), is

$$\begin{aligned} \lim _{\delta \rightarrow 0} \frac{ U(\delta )^\frac{7}{3} }{ \sigma _\delta ^2 } =0. \end{aligned}$$
(6)

The proof of Lemma 3.1 relies upon the Berry–Esseen theorem, stated here as Lemma 3.2, which is very useful for proving central limit theorems as it quantifies the speed of convergence. See [10, p. 542] for more details.

Lemma 3.2

(Berry–Esseen theorem) Let \(Z\sim \mathcal {N}(0,1)\). There exists a finite constant \(c>0\) such that for every collection of iid random variables \((Y_k)_{k\in \mathbb {N}}\) with the same distribution as Y, where Y has finite mean, finite absolute third moment, and finite nonzero variance, for all \(n\in \mathbb {N}\) and \(x \in \mathbb {R}\),

$$\begin{aligned} \left| \mathbb {P}\left( \frac{ Y_1 - \mathbb {E}[Y] + \cdots + Y_n - \mathbb {E}[Y] }{ Var ( Y )^\frac{1}{2} \sqrt{n} } \ge x\right) - \mathbb {P}( Z \ge x) \right| \le \frac{ c \mathbb {E}[|Y - \mathbb {E}[Y]|^3] }{Var (Y)^\frac{3}{2} \sqrt{n} }. \end{aligned}$$
(7)
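As an aside, the \(n^{-1/2}\) rate in (7) is easy to see numerically; the following sketch (with exponential summands, an arbitrary illustrative choice) estimates the Kolmogorov distance for a few values of n.

```python
# Empirical Kolmogorov distance between a standardised iid sum and N(0,1);
# multiplying by sqrt(n) should give a roughly constant value, matching (7).
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
for n in (10, 100, 1000):
    sums = rng.exponential(size=(20000, n)).sum(axis=1)
    z = (sums - n) / np.sqrt(n)          # Exp(1) has mean 1 and variance 1
    dist = kstest(z, "norm").statistic
    print(n, round(dist, 4), round(dist * np.sqrt(n), 3))
```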

For brevity, we will only provide calculations for \(t=1\). The proofs for different values of t are essentially the same. Recall the definitions \(a(\delta ):= U(\delta )^{-1}\), \(\sigma _\delta ^2:=Var (T_\delta )\), and \(b(\delta ):= U(\delta )^{-\frac{3}{2}}\sigma _\delta \). We shall aim to prove that for all \(x\in \mathbb {R}\),

$$\begin{aligned} \lim _{\delta \rightarrow 0 } \left| \mathbb {P}\left( \frac{ N(1,\delta ) - a(\delta )}{b(\delta )} \le x\right) - \mathbb {P}( Z \le x) \right| =0. \end{aligned}$$

For each \(\delta >0\), (7) provides an upper bound, and then under condition (2), we can prove that this bound converges to zero as \(\delta \rightarrow 0\).

Proof of Lemma 3.1

Let \(T_{\delta }^{(k)}\) denote the kth time at which \(N(t,\delta )\) increases, and let \(T_{\delta , k}\), \(k \in \mathbb {N}\), denote iid copies of \(T_\delta ^{(1)}\). By the strong Markov property, \( T_{\delta }^{(k)}\) and \( \sum _{i=1}^kT_{\delta , i} \) have the same distribution. Then, with \(n:= \lceil a(\delta ) + x b(\delta ) \rceil \), where \(\lceil \cdot \rceil \) denotes the ceiling function,

$$\begin{aligned} \mathbb {P}\left( \frac{ N(1,\delta ) - a(\delta )}{b(\delta )} \le x\right) = \mathbb {P}( N(1,\delta ) \le a(\delta ) + x b(\delta )), \end{aligned}$$
(8)

and since \(N(1,\delta )\) only takes integer values, using the fact that \(T_\delta ^{(n)}\) has the same distribution as the sum of n iid copies of \(T_\delta ^{(1)}\), it follows that

$$\begin{aligned} (8) = \mathbb {P}\left( N(1,\delta ) \le n \right)&= \mathbb {P}( T_\delta ^{ (n)} \ge 1) = \mathbb {P}\left( \sum _{i=1}^n T_{\delta , i} \ge 1\right) \\&= \mathbb {P}\left( \sum _{i=1}^n \left( T_{\delta , i} - U(\delta ) \right) \ge 1 - nU(\delta ) \right) \\&= \mathbb {P}\left( \frac{ \sum _{i=1}^n \left( T_{\delta , i} - U(\delta ) \right) }{ \sqrt{n \sigma _\delta ^2 } } \ge \frac{ 1 - n U(\delta ) }{ \sqrt{n \sigma _\delta ^2 } }\right) . \end{aligned}$$
(9)

It follows from Lemma 3.3 that \(\sigma _\delta ^2 \le \mathbb {E}[T_\delta ^2]\le c U(\delta )^2\), which then implies that \(b(\delta ) = o(a(\delta ))\) as \(\delta \rightarrow 0\). Then, as \(\delta \rightarrow 0\), the asymptotic behaviour of n is

$$\begin{aligned} n = \lceil a(\delta ) + x b(\delta ) \rceil \sim a(\delta )+ x b(\delta ) = a(\delta ) +o(a(\delta )) \sim a(\delta ) = U(\delta )^{-1}. \end{aligned}$$
(10)

It follows, with \(x^\prime \) depending on x and \(\delta \), that as \(\delta \rightarrow 0\),

$$\begin{aligned} -x^\prime := \frac{ 1 - n U(\delta ) }{ \sqrt{n \sigma _\delta ^2 } } = \frac{ 1 - \lceil a(\delta ) + x b(\delta ) \rceil U(\delta ) }{ \lceil a(\delta ) + x b(\delta ) \rceil ^{\frac{1}{2}} \sigma _\delta } \sim \frac{ 1 - ( a(\delta ) + x b(\delta ) ) U(\delta ) }{ ( a(\delta ) + x b(\delta ) )^{\frac{1}{2}} \sigma _\delta } \end{aligned}$$
(11)
$$\begin{aligned} = \frac{ 1 - 1 - x b(\delta ) U(\delta ) }{ ( a(\delta ) + x b(\delta ) )^{\frac{1}{2}}\sigma _\delta } \sim \frac{ - x b(\delta ) U(\delta ) }{ U(\delta )^{-\frac{1}{2}} \sigma _\delta } = \frac{ - x b(\delta ) U(\delta )^\frac{3}{2} }{ \sigma _\delta } = -x. \end{aligned}$$
(12)

Now, by the triangle inequality and symmetry of the normal distribution, combining (9) and (12), it follows that as \(\delta \rightarrow 0\), for any \(x\in \mathbb {R}\),

$$\begin{aligned} \left| \mathbb {P}\left( \frac{N(1,\delta )-a(\delta )}{b(\delta )} \le x\right) - \mathbb {P}(Z\le x)\right|&\le \left| \mathbb {P}\left( \frac{ 1}{ \sqrt{n \sigma _\delta ^2 } } \sum _{i=1}^n (T_{\delta , i} - U(\delta ) ) \ge -x^\prime \right) - \mathbb {P}( Z\ge -x^\prime ) \right| \\&\quad + \left| \mathbb {P}( Z \ge -x^\prime ) - \mathbb {P}(Z\ge -x) \right| \\&= \left| \mathbb {P}\left( \frac{ 1}{ \sqrt{n \sigma _\delta ^2 } } \sum _{i=1}^n (T_{\delta , i} - U(\delta ) ) \ge -x^\prime \right) - \mathbb {P}( Z\ge -x^\prime ) \right| + o(1). \end{aligned}$$
(13)

Recall that we wish to show that (13) converges to zero. By the Berry–Esseen theorem and (10), it follows that as \(\delta \rightarrow 0\),

$$\begin{aligned} (13) \le C \frac{ \mathbb {E}[|T_\delta -U(\delta )|^3] }{ \sigma _\delta ^3 n^\frac{1}{2} } + o(1) \sim C \frac{ U(\delta )^\frac{1}{2} \mathbb {E}[|T_\delta -U(\delta )|^3] }{ \sigma _\delta ^3 } . \end{aligned}$$

Applying the triangle inequality, then Lemma 3.3 with \(m=2\) and \(m=3\) to \(\mathbb {E}[ |T_\delta - U(\delta )|^3 ]\), it follows that

$$\begin{aligned} (13) \le 8C \frac{U(\delta )^\frac{1}{2} U(\delta )^3}{ \sigma _\delta ^3 } = 8C \left( \frac{U(\delta )^\frac{7}{3}}{ \sigma _\delta ^2 } \right) ^{\frac{3}{2}}. \end{aligned}$$

Therefore if the condition (6) as in the statement of Lemma 3.1 holds, then the desired convergence in distribution (3) follows, as required.\(\square \)

Lemma 3.3

For every subordinator with infinite Lévy measure, for all \(m\ge 1 \),

$$\begin{aligned} \limsup _{\delta \rightarrow 0+} \frac{\mathbb {E}[ T_\delta ^m ]}{U(\delta )^m} <\infty . \end{aligned}$$

Proof of Lemma 3.3

First, by the moments and tails lemma (see [15, p. 26]),

$$\begin{aligned} \frac{\mathbb {E}[T_\delta ^m]}{U(\delta )^m} = \mathbb {E}\left[ \left( \frac{T_\delta }{U(\delta )} \right) ^m\right] =\int _{0}^{\infty }m y^{m-1}\mathbb {P}\left( \frac{T_\delta }{U(\delta )}>y\right) \mathrm{{d}}y. \end{aligned}$$

By the definition of \(T_\delta \), it follows that \( X_u \ge \delta \) if and only if \(T_\delta \le u \), and then

$$\begin{aligned} \frac{\mathbb {E}[T_\delta ^m]}{U(\delta )^m} =\int _{0}^{\infty }m y^{m-1}\mathbb {P}(X_{yU(\delta )}\le \delta )\mathrm{{d}}y = \int _{0}^{\infty }my^{m-1}\mathbb {P}\left( e^{ - \frac{1}{\delta } X_{yU(\delta )}}\ge e^{-1}\right) \mathrm{{d}}y. \end{aligned}$$

Now, applying Markov’s inequality, the definition \(\mathbb {E}[e^{-\lambda X_t}]=e^{-t\Phi (\lambda )}\), and the fact that \(U(\delta ) \Phi (1/\delta ) \ge c\) for some constant c (see [2, Prop 1.4]),

$$\begin{aligned} \frac{\mathbb {E}[T_\delta ^m]}{U(\delta )^m} \le \int _{0}^{\infty } m y^{m-1}e^{1-yU(\delta )\Phi (1/\delta )}\mathrm{{d}}y \le \int _{0}^{\infty }my^{m-1}e^{1-cy}\mathrm{{d}}y, \end{aligned}$$

which equals \(e \, \Gamma (m+1) \, c^{-m}\), and in particular is finite and independent of \(\delta \). Therefore the \(\limsup \) is finite, as required.\(\square \)

3.2 Proof of Theorem 2.3

Theorem 2.3 is proven by a contradiction, using Lemma 3.4 to show that the sufficient condition in Lemma 3.6 holds.

Lemma 3.4

Recall the definition \(I(\delta ):=\int _0^\delta \overline{\Pi }(x)\mathrm{{d}}x\). The condition (2) implies that for each \(\eta \in (0,1)\), there exists a sufficiently large integer n such that

$$\begin{aligned} \liminf _{\delta \rightarrow 0} \frac{ I(\delta )}{ I(2^{-n} \delta ) } > \frac{1}{\eta }. \end{aligned}$$
(14)

Proof of Lemma 3.4

The integral condition (2) ensures that for some \(B>1\),

$$\begin{aligned} \liminf _{\delta \rightarrow 0} \frac{ I(\delta )}{I(\delta /2)} = \liminf _{\delta \rightarrow 0} \frac{ \int _0^\delta \overline{\Pi }(y)\mathrm{{d}}y }{ \int _0^{\delta /2} \overline{ \Pi }(y) \mathrm{{d}}y } =B. \end{aligned}$$
(15)

Then, by effectively replacing 1/2 with \(2^{-n}\) (so 1/2 is replaced by a smaller constant), we can replace B with \(B^n\), which can be made arbitrarily large by choice of n. This follows by splitting up the fraction,

$$\begin{aligned}&\liminf _{\delta \rightarrow 0+} \frac{ I(\delta )}{I(2^{-n}\delta )} = \liminf _{\delta \rightarrow 0+} \left( \frac{ I(\delta )}{I(2^{-1}\delta )} \frac{ I(2^{-1}\delta )}{I(2^{-2}\delta )} \cdots \frac{ I(2^{-(n-1)}\delta )}{I(2^{-n}\delta )} \right) \\ \ge&\liminf _{\delta \rightarrow 0+} \left( \frac{ I(\delta )}{I(2^{-1}\delta )}\right) \liminf _{\delta \rightarrow 0+} \left( \frac{ I(2^{-1}\delta )}{I(2^{-2}\delta )} \right) \cdots \liminf _{\delta \rightarrow 0+} \left( \frac{ I(2^{-(n-1)}\delta )}{I(2^{-n}\delta )} \right) = B^n > \frac{1}{\eta }, \end{aligned}$$

where we simply take n sufficiently large that \(B^n > 1/\eta \).\(\square \)

Using Lemma 3.4 to reach a contradiction is the step in the proof of Theorem 2.3 which requires the condition (2). In order to prove Theorem 2.3, we require the notation introduced in Definition 3.5. We refer to [14, p. 93] for more details.

Definition 3.5

Recalling from Definition 2.4 that the process \(\tilde{X}^\delta \) has Laplace exponent \(\tilde{\Phi }^{\delta }(u)= d u + \int _0^\delta (1-e^{-ux})\Pi (\mathrm{{d}}x) + (1-e^{-u\delta }) \overline{\Pi }(\delta ) \), we define:

  (i) \(g(u):= \frac{d}{\mathrm{{d}}u}\tilde{\Phi }^{\delta }(u) = d + \int _0^\delta x e^{-ux} \tilde{\Pi }^\delta (\mathrm{{d}}x) \),

  (ii) \( R(u):= \tilde{\Phi }^{\delta }(u) - ug(u) = \int _0^\delta (1 - e^{-ux}(1+ux) )\tilde{\Pi }^\delta (\mathrm{{d}}x)\),

  (iii) \(\lambda _\delta \) denotes the unique solution to \(g(\lambda _\delta ) = x_\delta \), for \(d<x_\delta < d + \int _0^\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x) \).

One can ignore the drift \(d \) in Definition 3.5, since \(d =0\) throughout Sect. 3. The proof of Theorem 2.3 now requires the following lemma:

Lemma 3.6

For \(\alpha >0\), \(t= (1+\alpha )U(\delta ) \), and \(g(\lambda _\delta ) = x_\delta = \delta /t\), if

$$\begin{aligned}\limsup _{\delta \rightarrow 0} \ \delta \lambda _\delta < \infty , \end{aligned}$$

then the desired convergence in distribution (3), as in Theorem 2.3, holds.

Proof of Theorem 2.3

Assume for a contradiction that there exists a sequence \((\delta _m)_{m\ge 1}\) converging to zero, such that \(\lim _{m\rightarrow \infty } \lambda _{\delta _m} \delta _m = \infty \). That is to say, assume that the sufficient condition in Lemma 3.6 does not hold. For brevity, we omit the dependence of \(\delta _m\) on m. Hence for all fixed \(\eta ,n>0\), \( \eta \ge e^{-\lambda _\delta 2^{-n} \delta } \) for all small enough \(\delta >0\) along this sequence. By Fubini's theorem, \(I(\delta ) = \int _0^\delta \overline{\Pi }(x)\mathrm{{d}}x = \int _0^\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x) \), so

$$\begin{aligned} \eta I(\delta ) + I( 2^{-n}\delta )&\ge e^{-\lambda _\delta 2^{-n} \delta }I(\delta ) + I( 2^{-n} \delta ) \ge e^{-\lambda _\delta 2^{-n}\delta } \int _0^{\delta } x \tilde{\Pi }^\delta (\mathrm{{d}}x) + \int _0^{ 2^{-n}\delta } x \Pi (\mathrm{{d}}x) \\&= e^{-\lambda _\delta 2^{-n}\delta }\delta \overline{\Pi }(\delta ) + e^{-\lambda _\delta 2^{-n}\delta }\int _0^{\delta } x \Pi (\mathrm{{d}}x) + \int _0^{ 2^{-n}\delta } x \Pi (\mathrm{{d}}x). \end{aligned}$$
(16)

Removing part of the first integral and noting \(1 \ge e^{-\lambda _\delta x}\) for all \(x>0\),

$$\begin{aligned} (16) \ge e^{-\lambda _\delta 2^{-n} \delta }\delta \overline{\Pi }(\delta ) + \int _{ 2^{-n}\delta }^{\delta } e^{-\lambda _\delta 2^{-n} \delta } x \Pi (\mathrm{{d}}x) + \int _0^{ 2^{-n}\delta } e^{-\lambda _\delta x} x \Pi (\mathrm{{d}}x). \end{aligned}$$

Now, \( e^{-\lambda _\delta 2^{-n} \delta } \ge e^{-\lambda _\delta x}\) for \(x\ge 2^{-n}\delta \). So for \(g(\lambda _\delta )=x_\delta = \frac{\delta }{ (1+\alpha )U(\delta ) }\), where \(\alpha >0\) is fixed and chosen sufficiently large that \(x_\delta <\int _0^\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x)\) for all \(\delta \) (this is possible by the relation \(U(\delta )^{-1}\asymp I(\delta )/\delta \), see [2, Prop 1.4]),

$$\begin{aligned} (16)&\ge e^{-\lambda _\delta 2^{-n}\delta }\delta \overline{\Pi }(\delta ) + \int _{ 2^{-n}\delta }^{\delta } e^{-\lambda _\delta x} x \Pi (\mathrm{{d}}x) + \int _0^{ 2^{-n}\delta } e^{-\lambda _\delta x} x \Pi (\mathrm{{d}}x)\\&= e^{-\lambda _\delta 2^{-n}\delta }\delta \overline{\Pi }(\delta ) + \int _0^{\delta } e^{-\lambda _\delta x} x \Pi (\mathrm{{d}}x) \ge g(\lambda _\delta ) = \frac{\delta }{(1+\alpha )U(\delta )} \ge \frac{I(\delta )}{(1+\alpha )K} , \end{aligned}$$

where the last two inequalities, respectively, follow from Definitions 2.4 and 3.5 (i) with \(d =0\), and from the relation \(U(\delta )^{-1} \asymp I(\delta )/\delta \) with comparison constant \(K>0\), see [1, p. 74]. We have therefore shown that for all sufficiently small \(\delta >0\), \(\eta I(\delta ) + I( 2^{-n}\delta ) \ge \frac{I(\delta )}{(1+\alpha )K}\). Taking \(\eta >0\) small enough that \( \frac{1}{(1+\alpha )K} \ge 2\eta \), it follows that \( I(2^{-n} \delta ) \ge \eta I(\delta )\), and hence \(I(\delta )/I(2^{-n} \delta ) \le 1/\eta \). But in Lemma 3.4 we showed that for each fixed \(\eta >0\), there is sufficiently large n such that \( \liminf _{\delta \rightarrow 0} I(\delta ) / I(2^{-n}\delta ) > 1/\eta \), which is a contradiction, so the sufficient condition of Lemma 3.6 must hold.\(\square \)

Remark 3.7

For a driftless subordinator, Theorem 2.3 holds under the same condition (2) applied to the function \(H(y):= \int _0^y x \Pi (\mathrm{{d}}x)\) rather than the integrated tail function I. The integrated tail \(I(y)=H(y) + y\overline{\Pi }(y)\) depends on the large jumps of X since \(\overline{\Pi }(x)=\Pi ((x,\infty ))\), but H does not depend on the large jumps, so these conditions are substantially different. With only minor changes, the argument as in the proof of Theorem 2.3 works with H in place of I. Under condition (2) for H in place of I, one can prove that Lemma 3.4 holds with H in place of I. Then we assume for a contradiction that there exists a sequence \((\delta _m)_{m\ge 1}\) converging to zero, such that \(\lim _{m\rightarrow \infty } \lambda _{\delta _m} \delta _m = \infty \). But then as in the proof of Theorem 2.3, one can deduce that \(\eta H(\delta ) + H(2^{-n}\delta ) \ge \frac{1}{(1+\alpha )K^\prime }H(\delta ) \), which contradicts the analogous Lemma 3.4 result with H in place of I.

Remark 3.8

Theorem 2.3 can also be proven for subordinators with a drift \(d >0\), under a stronger regularity condition. For \(Y_t:=X_t - d t\), define \(\Phi _Y\) as the Laplace exponent of Y. The convergence in distribution (3) holds whenever \(\limsup _{x\rightarrow 0} x^{-5/6} \Phi _Y(x) <\infty \). This is proven using Remark 3.10, the inequality \(\mathbb {P}(Y_t< a) \ge 1 - C t h( a)\) for all Lévy processes (see [22, p. 954] for details), and the asymptotic expansion of \(U(\delta )\) as in [8, Theorem 4].

3.3 Proofs of Lemmas 3.9, 3.12, 3.6

Lemmas 3.9, 3.12, and 3.6 give sufficient conditions for Theorem 2.3 to hold. The proofs of these lemmas are facilitated by Lemma 3.11, which was proven in 1987 by Jain and Pruitt [14, p. 94]. Recall that \(\tilde{X}^\delta \) denotes the process with \(\delta \)-shortened jumps, as defined in Definition 2.4.

Lemma 3.9

The convergence in distribution (3) as in Theorem 2.3 holds if for some \(\alpha \in (0,1]\), \( \liminf _{\delta \rightarrow 0} [ \mathbb {P}( \tilde{X}^\delta _{(1+\alpha )U(\delta ) } \le \delta ) + \mathbb {P}( \tilde{X}^\delta _{(1-\alpha )U(\delta ) } \ge \delta ) ] >0\).

Proof of Lemma 3.9

For all \(\alpha >0\), recalling that \(\mathbb {E}[T_\delta ] = U(\delta )\),

$$\begin{aligned} \sigma _\delta ^2 = Var (T_\delta ) \ge Var (T_\delta ; |T_\delta - U(\delta )| \ge \alpha U(\delta )) \ge \alpha ^2 U(\delta )^2 [ \mathbb {P}( T_\delta \ge (1+\alpha )U(\delta )) + \mathbb {P}( T_\delta \le (1-\alpha )U(\delta ) )]. \end{aligned}$$

For the desired convergence in distribution (3) to hold, it is sufficient by Lemma 3.1 to show that \( \lim _{\delta \rightarrow 0} U(\delta )^\frac{7}{3} / \sigma _\delta ^2=0\). Now,

$$\begin{aligned}\frac{U(\delta )^\frac{7}{3}}{\sigma _\delta ^2} \le \frac{U(\delta )^\frac{1}{3}}{ \alpha ^2 [ \mathbb {P}( T_\delta \ge (1+\alpha )U(\delta ) ) + \mathbb {P}( T_\delta \le (1-\alpha )U(\delta ) ) ] }. \end{aligned}$$

Note that \(T_\delta \ge t \) if and only if \(\tilde{X}^\delta _t \le \delta \): on either event, no jump of size larger than \(\delta \) occurs before time t, so that \(X_s=\tilde{X}^\delta _s\) for all \(s \le t\). It follows that (3) holds if

$$\begin{aligned} \liminf _{\delta \rightarrow 0} [ \mathbb {P}( \tilde{X}^\delta _{(1+\alpha )U(\delta ) } \le \delta ) + \mathbb {P}( \tilde{X}^\delta _{(1-\alpha )U(\delta ) } \ge \delta ) ] >0. \end{aligned}$$

\(\square \)

Remark 3.10

The condition in Lemma 3.9 is not optimal. If for \(\varepsilon \in (0,1/6)\), \( \lim _{\delta \rightarrow 0} U(\delta )^{2\varepsilon - \frac{1}{3}} [ \mathbb {P}( \tilde{X}^\delta _{U(\delta ) + U(\delta )^{1+\varepsilon } } \le \delta ) + \mathbb {P}( \tilde{X}^\delta _{U(\delta ) - U(\delta )^{1+\varepsilon } } \ge \delta ) ] =\infty \), then the convergence in distribution (3) follows too. This stronger condition does not lead to any more generality than the condition (2) for driftless subordinators.

Lemma 3.11

(Jain, Pruitt [14, Lemma 5.2]) There exists \(c>0\) such that for every \(\varepsilon >0\), \(t\ge 0\) and \(x_\delta >0\) satisfying \(d = g(\infty )< x_\delta < g(0) = d + \int _0^\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x) \),

$$\begin{aligned} \mathbb {P}( \tilde{X}^\delta _t \le t x_\delta ) \ge \left( 1 - \frac{ (1+\varepsilon )c }{\varepsilon ^2 tR(\lambda _\delta ) } \right) e^{ - (1+2\varepsilon )tR(\lambda _\delta )}. \end{aligned}$$
(17)

Lemma 3.12

For \(\alpha >0\), \(t= (1+\alpha )U(\delta ) \), and \(g(\lambda _\delta ) = x_\delta = \delta /t\), if

$$\begin{aligned} \limsup _{\delta \rightarrow 0} \ tR(\lambda _\delta ) < \infty , \end{aligned}$$

then the desired convergence in distribution (3), as in Theorem 2.3, holds.

Proof of Lemma 3.12

Applying the inequality (17) from Lemma 3.11,

$$\begin{aligned} \mathbb {P}( \tilde{X}^\delta _{(1+\alpha )U(\delta ) } \le \delta ) \ge \left( 1 - \frac{ (1+\varepsilon )c }{\varepsilon ^2 tR(\lambda _\delta ) } \right) e^{ - (1+2\varepsilon )tR(\lambda _\delta )}. \end{aligned}$$
(18)

Now, assuming \(\limsup _{\delta \rightarrow 0} tR(\lambda _\delta )<\infty \), we consider two separate cases: (i) If \(\liminf _{\delta \rightarrow 0}tR(\lambda _\delta )=\beta >0\), then choosing \(\varepsilon >0\) such that \(\frac{1+\varepsilon }{\varepsilon ^2} = \frac{\beta }{2c}\), the lower bound in (18) is larger than a positive constant as \(\delta \rightarrow 0\).

(ii) If \(\liminf _{\delta \rightarrow 0} tR(\lambda _\delta ) = 0\), then choosing \(\varepsilon = 2c/(tR(\lambda _\delta ))\), the lower bound in (18) is again larger than a positive constant as \(\delta \rightarrow 0\). The desired convergence in distribution (3) then follows in each case by Lemma 3.9.\(\square \)

Proof of Lemma 3.6

Noting that \(1 - e^{-y}(1+y) \le y\) for all \(y>0\),

$$\begin{aligned}tR(\lambda _\delta ) = (1+\alpha )U(\delta ) \int _0^\delta \left( 1 - e^{-\lambda _\delta x}(1+\lambda _\delta x) \right) \tilde{\Pi }^\delta (\mathrm{{d}}x) \end{aligned}$$
$$\begin{aligned} \le (1+\alpha )U(\delta ) \int _0^\delta \lambda _\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x) = (1+\alpha ) U(\delta ) \left( \int _0^\delta x \Pi (\mathrm{{d}}x) + \delta \overline{\Pi }(\delta ) \right) \lambda _\delta . \end{aligned}$$
(19)

Then, noting that \( \int _0^\delta x \Pi (\mathrm{{d}}x) + \delta \overline{\Pi }(\delta ) = I(\delta )\) and that \(U(\delta ) I(\delta ) \le C \delta \) for a constant C (see [2, Prop 1.4]), the right-hand side of (19) satisfies

$$\begin{aligned} (1+\alpha ) U(\delta ) I(\delta ) \lambda _\delta \le C (1+\alpha ) \delta \lambda _\delta . \end{aligned}$$

So we can conclude that if \(\limsup _{\delta \rightarrow 0} \delta \lambda _\delta < \infty \), then the desired convergence in distribution (3) follows by Lemma 3.12.\(\square \)

4 Proofs of Results on \(L(t,\delta )\)

Firstly, we prove Theorem 2.7, which confirms that \(L(t,\delta )\) can replace \(N(t,\delta )\) in the definition of the box-counting dimension of the range. This is done by showing that \(L(t,\delta )\asymp N(t,\delta )\), which is known to be sufficient by Remark 2.6.

Proof of Theorem 2.7

The processes X and \(\tilde{X}^\delta \) have exactly the same jumps, except that each jump of X of size larger than \( \delta \) is shortened to have size \(\delta \). The optimal number of intervals needed to cover the range, \(N(X,t,\delta )\), increases by 1 at each jump of size larger than \(\delta \), regardless of the jump's exact size, so it follows that \(N(X,t,\delta )=N(\tilde{X}^\delta ,t,\delta )\), with the obvious notation.

Instead of counting the number \(N(X,t,\delta )\) of boxes needed to cover the range of X, consider those needed for the range of the subordinator \(X^{(0,\delta )}\) with Lévy measure \(\Pi (\mathrm{{d}}x) \mathbbm {1}_{\{x<\delta \}}\) (so all jumps of size larger than \(\delta \) are removed), plus \(Y_t^\delta \), the number of jumps of X of size larger than \(\delta \) up to time t. Then one can easily verify that \(N(X,t,\delta ) \le N(X^{(0,\delta )},t,\delta ) + Y_t^\delta \le 2N(X,t,\delta )\).

Consider \(M(X^{(0,\delta )},t,\delta )\), the number of intervals in a lattice of side length \(\delta \) that intersect the range of \(X^{(0,\delta )}\). It is easy to show that \(N(t,\delta ) \asymp M(t,\delta )\) (see [9, p. 42]). Also, \(M(X^{(0,\delta )},t,\delta )= \lceil \frac{1}{\delta } X_t^{(0,\delta )} \rceil \), since \(X^{(0,\delta )}\) has no jumps of size larger than \(\delta \). Now, \( \frac{1}{\delta } X_t^{(0,\delta )} \asymp \lceil \frac{1}{\delta } X_t^{(0,\delta )} \rceil \) for small enough \(\delta \), and hence

$$\begin{aligned} L(X,t,\delta ) = \frac{1}{\delta } \tilde{X}_t^\delta = \frac{1 }{\delta } X_t^{(0,\delta )}+ Y_t^\delta \asymp M(X^{(0,\delta )},t,\delta ) + Y_t^\delta \end{aligned}$$
$$\begin{aligned} \asymp N(X^{(0,\delta )},t,\delta ) + Y_t^\delta \asymp N(X,t,\delta ) . \end{aligned}$$

By Remark 2.6, \(\lim _{\delta \rightarrow 0} \frac{ \log (L(t,\delta )) }{\log (1/\delta ) } = \lim _{\delta \rightarrow 0} \frac{ \log (N(t,\delta )) }{\log (1/\delta ) }\), and hence \(L(t,\delta )\) can be used to define the box-counting dimension of the range of any subordinator. \(\square \)

Next we will prove the CLT result for \(L(t,\delta )\), working with \(t=1\) for brevity. The proof is essentially the same for other values of \(t>0\). We will show convergence of the Laplace transform of \(\frac{1}{v(\delta )} (L(1,\delta ) - \mu (\delta ) ) \) to that of the standard normal distribution. Recall that \(Z\sim \mathcal {N}(0,1)\) has Laplace transform \(\mathbb {E}[ e^{- \lambda Z} ] = e^{\lambda ^2 /2}\).

Proof of Theorem 2.10

By Definition 2.4 and (1), \( \delta L(t,\delta ) = \tilde{X}_t^\delta \) is a subordinator with Laplace exponent \( \tilde{\Phi }^\delta \), and it follows that for any \(\lambda \ge 0\),

$$\begin{aligned} \lim _{\delta \rightarrow 0} \mathbb {E}\left[ \exp \left( -\lambda \frac{L(1,\delta ) - \mu (\delta ) }{ v(\delta ) } \right) \right] = e^{\frac{\lambda ^2}{ 2}} \iff \lim _{\delta \rightarrow 0} \left( \frac{\lambda \mu (\delta ) }{v(\delta )} - \tilde{\Phi }^\delta \left( \frac{\lambda }{\delta v(\delta )}\right) \right) = \frac{\lambda ^2}{2} . \end{aligned}$$

Recalling the definition \(\mu (\delta )=\frac{1}{\delta }(d +I(\delta ))\), noting by Fubini's theorem that \(I(\delta )= \int _0^\delta x \tilde{\Pi }^\delta (\mathrm{{d}}x)\), and writing \(\tilde{\Phi }^\delta \) in the Lévy–Khintchine representation as in (1), it follows that

$$\begin{aligned} \frac{\lambda \mu (\delta ) }{v(\delta )} - \tilde{\Phi }^\delta \left( \frac{\lambda }{\delta v(\delta )}\right)&= \frac{\lambda ( d + I(\delta ) ) }{ \delta v(\delta ) } - \frac{ d \lambda }{ \delta v(\delta ) } - \int _0^\delta \left( 1- e^{- \frac{\lambda x}{\delta v(\delta )}} \right) \tilde{\Pi }^\delta (\mathrm{{d}}x) \\&= \frac{\lambda I(\delta ) }{ \delta v(\delta ) } - \int _0^\delta \left( 1- e^{- \frac{\lambda x}{\delta v(\delta )}} \right) \tilde{\Pi }^\delta (\mathrm{{d}}x) = \int _0^\delta \frac{\lambda x}{ \delta v(\delta ) } \tilde{\Pi }^\delta (\mathrm{{d}}x) - \int _0^\delta \left( 1- e^{- \frac{\lambda x}{\delta v(\delta )}} \right) \tilde{\Pi }^\delta (\mathrm{{d}}x). \end{aligned}$$
(20)

Then applying the fact that \( \frac{y^2}{2} - \frac{y^3}{6} \le y - 1 + e^{-y} \le \frac{y^2}{2} \) for all \(y>0\),

$$\begin{aligned} \int _0^\delta \left( \frac{\lambda ^2 x^2}{2 \delta ^2 v(\delta )^2 } - \frac{ \lambda ^3 x^3}{ 6 \delta ^3 v(\delta )^3} \right) \tilde{\Pi }^\delta (\mathrm{{d}}x) \le (20) \le \int _0^\delta \frac{\lambda ^2 x^2}{ 2 \delta ^2 v(\delta )^2 } \tilde{\Pi }^\delta (\mathrm{{d}}x) . \end{aligned}$$

By the definition of \(v(\delta )\), it follows that \(v(\delta )^2 = \frac{1}{\delta ^2} \int _0^\delta x^2 \tilde{\Pi }^\delta (\mathrm{{d}}x)\), and so

$$\begin{aligned} \int _0^\delta \frac{ \lambda ^2 x^2}{2 \delta ^2 v(\delta )^2} \tilde{\Pi }^\delta (\mathrm{{d}}x) = \frac{\lambda ^2}{2} . \end{aligned}$$

It is then sufficient, in order to show that (20) converges to \(\frac{\lambda ^2}{2}\), to prove that

$$\begin{aligned} \lim _{\delta \rightarrow 0} \int _0^\infty \frac{ x^3}{\delta ^3 v(\delta )^3} \ \tilde{\Pi }^\delta (\mathrm{{d}}x) = 0. \end{aligned}$$
(21)

Again by the definition of \(v(\delta )\), for (21) to hold we require both

$$\begin{aligned} \lim _{\delta \rightarrow 0} \frac{ \int _0^\delta x^3 \Pi (\mathrm{{d}}x) }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{\frac{3}{2}} } = 0, \end{aligned}$$
(22)
$$\begin{aligned} \lim _{\delta \rightarrow 0} \frac{ \delta ^3 \overline{\Pi }(\delta ) }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{\frac{3}{2}} } =0. \end{aligned}$$
(23)

Squaring the expression in (22), since \(x\le \delta \) within each integral, it follows that

$$\begin{aligned} \frac{ \left( \int _0^\delta x^3 \Pi (\mathrm{{d}}x)\right) ^2 }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{3} } \le \frac{ \delta ^2 \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x)\right) ^2 }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{3} } . \end{aligned}$$

By the binomial expansion, \((a+b)^3 \ge 3a^2b\) for \(a,b>0\), so the right-hand side of the previous display satisfies, as \(\delta \rightarrow 0\),

$$\begin{aligned} \frac{ \delta ^2 \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x)\right) ^2 }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{3} } \le \frac{ \delta ^2 \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x)\right) ^2 }{ 3 \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) \right) ^2 \left( \delta ^2 \overline{\Pi }(\delta ) \right) } = \frac{1}{3 \overline{\Pi }(\delta )} \rightarrow 0 , \end{aligned}$$

since the Lévy measure is infinite. For (23), simply observe that as \(\delta \rightarrow 0\),

$$\begin{aligned} \frac{ \delta ^3 \overline{\Pi }(\delta ) }{ \left( \int _0^\delta x^2 \Pi (\mathrm{{d}}x) + \delta ^2 \overline{\Pi }(\delta ) \right) ^{\frac{3}{2}} } \le \frac{ \delta ^3 \overline{\Pi }(\delta ) }{ \left( \delta ^2 \overline{\Pi }(\delta ) \right) ^{\frac{3}{2}} } = \frac{1}{\overline{\Pi }(\delta )^{\frac{1}{2}}} \rightarrow 0. \end{aligned}$$

\(\square \)
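As a hedged sanity check on Theorem 2.10, one can revisit the driftless \(\alpha = 1/2\) stable example, where \(\mu (\delta ) = 2/\sqrt{\pi \delta }\) and \(v(\delta )^2 = 4/(3\sqrt{\pi \delta })\) (as computed in the example following Remark 2.11); the cutoff \(\varepsilon \) and the sample counts are illustrative choices.

```python
# Standardised samples of L(1, delta) should look approximately N(0,1).
import numpy as np

rng = np.random.default_rng(3)

def sample_L(delta, eps=1e-9, t=1.0):
    """One sample of L(t, delta); jumps below the cutoff eps are ignored,
    which introduces a small negative bias in the mean."""
    n = rng.poisson(t / np.sqrt(np.pi * eps))
    jumps = eps / rng.random(n) ** 2    # P(J > x | J > eps) = sqrt(eps/x)
    return np.minimum(jumps, delta).sum() / delta

delta, t = 1e-4, 1.0
mu = 2 / np.sqrt(np.pi * delta)
v = np.sqrt(4 / (3 * np.sqrt(np.pi * delta)))
z = np.array([(sample_L(delta) - t * mu) / (np.sqrt(t) * v) for _ in range(2000)])
print(z.mean(), z.std())                # should be close to 0 and 1
```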

Next we will prove the almost sure convergence result for \(L(t,\delta )\). If there is a drift and the Lévy measure is finite, then the result is trivial. So we need only consider cases with infinite Lévy measure, and begin with the zero drift case. Using a Borel–Cantelli argument (see [15, p. 32] for details), we shall prove that almost surely

$$\begin{aligned} \liminf \nolimits _{\delta \rightarrow 0}L(t,\delta )/\mu (\delta ) = \limsup \nolimits _{\delta \rightarrow 0} L(t,\delta )/\mu (\delta ) = t. \end{aligned}$$

First, we will prove the almost sure convergence to t along a subsequence \(\delta _n\) converging to zero. Then, by monotonicity of \(\mu (\delta )\) and \(L(t,\delta )\), we will deduce that for all \(\delta \) between \(\delta _{n+1}\) and \(\delta _n\), \(L(t,\delta )/\mu (\delta )\) also tends to t as \(n\rightarrow \infty \).

Proof of Theorem 2.8

For all \(\varepsilon >0\), by Chebyshev’s inequality and Remark 2.11,

$$\begin{aligned} \sum _n \mathbb {P}\left( \left| \frac{L(t,\delta _n )}{t\mu (\delta _n ) } -1 \right| > \varepsilon \right)&\le \frac{1}{\varepsilon ^2} \sum _n \frac{ \text {Var}\left( L(t,\delta _n )\right) }{t^2\mu (\delta _n )^2 } = \frac{1}{\varepsilon ^2} \sum _n \frac{ \frac{ t}{\delta _n^2} \left( \int _0^{\delta _n} x^2 \Pi (\mathrm{{d}}x) + \delta _n^2 \overline{\Pi }(\delta _n) \right) }{ \frac{ t^2 }{ \delta _n^2 } \left( \int _0^{\delta _n} x \Pi (\mathrm{{d}}x) + \delta _n \overline{\Pi }(\delta _n) \right) ^2 }\\&= \frac{1}{t \varepsilon ^2} \sum _n \frac{ \int _0^{\delta _n} x^2 \Pi (\mathrm{{d}}x) + \delta _n^2 \overline{\Pi }(\delta _n) }{ \left( \int _0^{\delta _n} x \Pi (\mathrm{{d}}x) + \delta _n \overline{\Pi }(\delta _n) \right) ^2 } \\&\le \frac{1}{t \varepsilon ^2} \sum _n \frac{ \delta _n \left( \int _0^{\delta _n} x \Pi (\mathrm{{d}}x) + \delta _n \overline{\Pi }(\delta _n) \right) }{ \left( \int _0^{\delta _n} x \Pi (\mathrm{{d}}x) + \delta _n \overline{\Pi }(\delta _n) \right) ^2 } = \frac{1}{ t \varepsilon ^2} \sum _n \frac{1}{\mu (\delta _n)} . \end{aligned}$$
(24)

Recall that \(\mu (\delta ) = \int _0^\infty \frac{1}{\delta } ( x \wedge \delta ) \ \Pi (\mathrm{{d}}x) \), so since \( \frac{1}{\delta } ( x \wedge \delta )\) is non-decreasing as \(\delta \) decreases, it follows that \(\mu (\delta )\) is non-decreasing as \(\delta \) decreases. Now, \(\mu \) is continuous, and \(\lim _{\delta \rightarrow 0}\mu (\delta ) = \infty \), so it follows that for any fixed \(r \in (0,1)\) there is a decreasing sequence \(\delta _n\) such that \(\mu (\delta _n) = r^{-n}\) for each n. Then (24) is finite, so by the Borel–Cantelli lemma, \(\lim _{n\rightarrow \infty } L(t,\delta _n)/\mu (\delta _n) =t\) almost surely.

When there is no drift, \(L(t,\delta ) \) is given by changing the original subordinator’s jump sizes from y to \( \frac{1}{\delta } (y\wedge \delta )\). By monotonicity of this map, it follows that for a fixed sample path of the original subordinator, each individual jump of the process \(L(t, \delta _{n+1} )\) is at least as big as the corresponding jump of the process \(L(t, \delta _{n} )\). So \(L(t,\delta )\) is non-decreasing as \(\delta \) decreases, and so for all \(\delta _{n+1} \le \delta \le \delta _n\),

$$\begin{aligned} \frac{ L(t,\delta _{n} ) }{ t\mu (\delta _n) } \frac{ \mu (\delta _{n}) }{ \mu (\delta _{n+1} ) } \le \frac{L(t,\delta ) }{t\mu (\delta )} \le \frac{ L(t,\delta _{n+1} ) }{t \mu (\delta _n) } = \frac{ L(t,\delta _{n+1} ) }{ t \mu (\delta _{n+1}) } \frac{ \mu (\delta _{n+1} ) }{ \mu (\delta _{n}) }. \end{aligned}$$

Then by our choice of the subsequence \(\delta _n\), it follows that for all \(\delta _{n+1} \le \delta \le \delta _n\),

$$\begin{aligned} r \frac{ L(t,\delta _{n} ) }{ t\mu (\delta _n) } \le \frac{ L(t,\delta ) }{t \mu (\delta ) } \le \frac{1}{r} \frac{ L(t,\delta _{n+1} ) }{t \mu (\delta _{n+1}) } , \end{aligned}$$
(25)

and since \(\lim _{n\rightarrow \infty } L(t,\delta _n)/\mu (\delta _n)=t\), it follows that

$$\begin{aligned}rt \le \liminf _{\delta \rightarrow 0} \frac{ L(t,\delta )}{\mu (\delta ) } \le \limsup _{\delta \rightarrow 0} \frac{L(t,\delta )}{\mu (\delta )} \le \frac{t}{r}. \end{aligned}$$

Taking limits as \(r\rightarrow 1\), it follows that \(\lim _{\delta \rightarrow 0} L(t,\delta )/\mu (\delta ) =t\) almost surely.

For a process with a positive drift \(d >0\) and infinite Lévy measure, denote the scaling term obtained by removing the drift as \(\hat{\mu }(\delta ):=\mu (\delta ) - d /\delta \). Then the above Borel–Cantelli argument for \(\hat{\mu }\) yields the almost sure limit along a subsequence \(\hat{\delta }_n\) as in (24). Then since the functions \(\mu (\delta )\) and \(L(t,\delta )\) are again monotone in \(\delta \) when there is a drift, the argument applies as in (25).\(\square \)

Remark 4.1

Theorem 2.8 is formulated in terms of the characteristics of the subordinator (i.e. the drift and Lévy measure). For \(N(t,\delta )\), the almost sure behaviour in Theorem 2.1 is formulated in terms of the renewal function, and rewriting it in terms of the characteristics leads to a more complicated expression than for \(L(t,\delta )\). For details, see [24, Corollary 1] and [8, Prop 1], the latter of which is very powerful for understanding the asymptotics of \(U(\delta )\) for subordinators with a positive drift, significantly improving upon results in [5].

5 Extensions and Special Cases

5.1 Extensions: Box-Counting Dimension of the Graph

The graph of a subordinator X up to time t is the set \(\{ (s,X_s) : 0 \le s \le t \}\). The box-counting dimensions of the range and graph are closely related. This is evident when we consider the mesh box-counting schemes \(M_G(t,\delta )\) and \(M_R(t,\delta )\) for the graph and range, respectively, where a mesh box-counting scheme counts the number of boxes in a lattice of side length \(\delta \) that intersect the set.

Remark 5.1

For every subordinator with infinite Lévy measure or a positive drift, \(M_G(t,\delta ) = \left\lfloor t / \delta \right\rfloor + M_R(t,\delta ), \) where \(\lfloor \cdot \rfloor \) denotes the floor function. Indeed, \(M_R(t,\delta )\) increases by 1 if and only if \(M_G(t,\delta )\) increases by 1 with the new box for the graph lying directly above the previous box. For each integer n, \(M_G(t,\delta )\) also increases at time \(n\delta \), with the new box lying directly to the right of the previous box.

Remark 5.2

It follows that the graph of every subordinator X has the same box-counting dimension as the range of \(X^\prime _t := t+X_t \), the original process plus a unit drift.

Proposition 5.3

For every subordinator with drift \(d >0\), the box-counting dimensions of the range and graph agree almost surely.

Proof of Proposition 5.3

Letting \(T_{(\delta ,\infty )}\) denote the first passage time of the subordinator above \(\delta \), consider an optimal covering of the graph with squares of side length \(\delta \) as follows:

Starting with \([0,\delta ] \times [0,\delta ]\), at time \(T_1 := \min (T_{(\delta ,\infty )} , \delta )\), add a new box \([T_1 , T_1 + \delta ] \times [X_{T_1} , X_{T_1} + \delta ]\), and so on. Denote the number of these boxes by \(N_G(t,\delta )\), and write \(N_R(t,\delta )\) as the optimal number of boxes needed to cover the range.

If \(d \ge 1\), then we have \(T_1 = T_{(\delta ,\infty )}\) because \(X_\delta \ge d \delta \ge \delta \). It follows that each time \(N_{G}(t,\delta )\) increases by 1, so does \(N_{R}(t,\delta )\), and vice versa, so \(N_{G}(t,\delta ) = N_{R}(t,\delta )\), and the box-counting dimensions of the range and graph are equal when \(d \ge 1\).

For \(d \in (0,1)\), a similar argument applies with a covering of \(\frac{\delta }{d } \times \delta \) rectangles rather than \(\delta \times \delta \) squares. Starting with \([0, \frac{\delta }{d } ] \times [0,\delta ]\), at time \(T_1\), add a new box \([T_1 , T_1 +\frac{\delta }{d }] \times [X_{T_1} , X_{T_1} + \delta ]\), and so on. The number of these boxes is again \(N_{R}(t,\delta )\), since \(X_{\frac{\delta }{d }} \ge \delta \). By Remark 2.6, this covering of rectangles can still be used to define the box-counting dimension of the graph, since for \(k := \left\lceil \frac{1}{d } \right\rceil \), with \(N_{G} (t,\delta )\) and \(N_{G}^\prime (t,\delta )\) as the number of squares and of rectangles, respectively,

$$\begin{aligned} N_{G}^{\prime } (t,\delta ) \le N_{G} (t,\delta ) \le k \ N_{G}^{\prime } (t, \delta / k ). \end{aligned}$$

\(\square \)

Remark 5.4

The box-counting dimension of the graph of every subordinator is 1 almost surely, since subordinators have bounded variation (BV) almost surely. The same is true for the graph of all BV functions/processes, including in particular every Lévy process without a Gaussian component, whose Lévy measure satisfies \(\int (1 \wedge |x|)\Pi (\mathrm{{d}}x)<\infty \). By Proposition 5.3, the box-counting dimension of the range of every subordinator with drift \(d >0\) is 1 almost surely.

5.2 Special Cases: Regular Variation of the Laplace Exponent

Corollary 5.5 is analogous to [24, Corollary 2], with \(L(t,\delta )\) in place of \(N(t,\delta )\). This allows very fine comparisons, not visible at the log scale, to be made between subordinators whose Laplace exponents are regularly varying with the same index.

Corollary 5.5

Consider a subordinator whose Laplace exponent is regularly varying at infinity, such that \(\Phi (\lambda ) \sim \lambda ^\alpha F(\lambda )\) for \(\alpha \in (0,1)\), where \(F(\cdot )\) is a slowly varying function. Then almost surely as \(\delta \rightarrow 0\), for all \(t>0\),

$$\begin{aligned} L(t,\delta ) \sim \frac{t \delta ^{-\alpha } F\left( \frac{1}{\delta }\right) }{\Gamma ( 2-\alpha ) }. \end{aligned}$$

Proof of Corollary 5.5

Note that \(d =0\), i.e. there is no drift, when the Laplace exponent is regularly varying of index \(\alpha \in (0,1)\). By Theorem 2.8, as \(\delta \rightarrow 0\),

$$\begin{aligned} L(t,\delta ) \sim t \mu (\delta ) = \frac{t I(\delta )}{\delta } = \frac{t}{\delta } \int _0^\delta \overline{\Pi }(x)\mathrm{{d}}x. \end{aligned}$$

Since \(\Phi \) is regularly varying at infinity, as \(x\rightarrow 0\), \(\overline{\Pi }(x) \sim \Phi (\frac{1}{x})/\Gamma (1-\alpha )\) (see [1, p. 75]). Then by Karamata's theorem (see [3, Prop. 1.5.8]), almost surely as \(\delta \rightarrow 0\),

$$\begin{aligned} L(t,\delta ) \sim \frac{ t \delta ^{-\alpha } F\left( \frac{1}{\delta } \right) }{ \Gamma \left( 2-\alpha \right) }. \end{aligned}$$

\(\square \)

Corollary 5.6 strengthens the result of Theorem 2.7 when the Laplace exponent \(\Phi \) is regularly varying. The result cannot be strengthened in general, as the relationship between \(\mu (\delta )\) and \(U(\delta )^{-1}\) is “\(\asymp \)” rather than “\(\sim \)” (see [2, Prop. 1.4]).

Corollary 5.6

For a subordinator with Laplace exponent \(\Phi \) regularly varying at infinity with index \(\alpha \in (0,1)\), for all \(t>0\), almost surely as \(\delta \rightarrow 0\),

$$\begin{aligned} N(t,\delta ) \sim \Gamma (2-\alpha ) \Gamma (1+\alpha ) L(t,\delta ). \end{aligned}$$

Corollary 5.6 follows immediately from Corollary 5.5 and [24, Corollary 2], which says that when the Laplace exponent \(\Phi \) is regularly varying at infinity, such that \(\Phi (\lambda ) \sim \lambda ^\alpha F(\lambda )\) for \(\alpha \in (0,1)\), where \(F(\cdot )\) is a slowly varying function, for all \(t>0\), almost surely as \(\delta \rightarrow 0\),

$$\begin{aligned}N(t,\delta )\sim \Gamma (1+\alpha ) t \delta ^{-\alpha } F\left( \frac{ 1}{\delta } \right) . \end{aligned}$$

Remark 5.7

For \(\alpha \in (0,1)\), \( \Gamma (2-\alpha ) \Gamma (1+\alpha ) \) takes values between \(\pi /4\) and 1. So \(L(t,\delta )\) and \(N(t,\delta )\) are closely related when the Laplace exponent is regularly varying, but as \(\delta \rightarrow 0\), \(L(t,\delta )\) grows to infinity slightly faster than \(N(t,\delta )\), by a constant factor between 1 and \(4/\pi \).
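A quick numerical check of this constant (a minimal sketch using scipy's gamma function; the grid of \(\alpha \) values is arbitrary):

```python
# Gamma(2 - alpha) * Gamma(1 + alpha) equals 1 at alpha in {0, 1} and dips
# to pi/4 ~ 0.785 at alpha = 1/2, consistent with Remark 5.7.
from scipy.special import gamma

for a in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(a, gamma(2 - a) * gamma(1 + a))
```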