Article

Open Markov Type Population Models: From Discrete to Continuous Time

by Manuel L. Esquível 1,*, Nadezhda P. Krasii 2 and Gracinda R. Guerreiro 1
1 Department of Mathematics, FCT NOVA, and CMA New University of Lisbon, Campus de Caparica, 2829-516 Caparica, Portugal
2 Department of Higher Mathematics, Don State Technical University, 344000 Rostov-on-Don, Russia
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1496; https://doi.org/10.3390/math9131496
Submission received: 31 May 2021 / Revised: 21 June 2021 / Accepted: 23 June 2021 / Published: 25 June 2021

Abstract:
We address the problem of finding a natural continuous time Markov type process—in open populations—that best captures the information provided by an open Markov chain in discrete time, which is usually the only observation obtainable from data. Given the open discrete time Markov chain, we single out two main approaches. In the first one, we consider a calibration procedure of a continuous time Markov process using a transition matrix of a discrete time Markov chain, and we show that, when the discrete time transition matrix is embeddable in a continuous time one, the calibration problem has optimal solutions. In the second approach, we consider semi-Markov processes—and open Markov schemes—and we propose a direct extension from the discrete time theory to the continuous time one, by using a known structure representation result for semi-Markov processes that decomposes the process as a sum of terms given by the products of the random variables of a discrete time Markov chain by time functions built from an adequate increasing sequence of stopping times.

1. Introduction

After the first works introducing homogeneous open Markov population models in [1], followed by those in [2] and then in [3], further expanded by several authors and presented in [4] and then in [5], the study of open populations in a finite state space, in discrete time, with a Markov chain structure became well established.
Following the pioneering work of Gani, who introduced in [6] what is now known as cyclic open Markov population models, there were further extensions in [7], for non-homogeneous Markov chains, and then, for cyclic non-homogeneous Markov systems or, equivalently, for non-homogeneous open Markov population processes, by the authors of [8,9]. Let us stress that continuous time non-homogeneous Markov systems have recently been studied in [10]. Furthermore, the recent work in [11] develops an approach to open Markov chains in discrete time—allowing a particle physics interpretation—in which there is a state space of the Markov chain—where distributions are studied by means of moment generating functions—an exit reservoir, which is tantamount to a cemetery state, and an incoming flow of particles, defined as a stochastic process in discrete time whose properties—e.g., stationarity—condition the distribution law of the particles in the state space.
Discrete time non-homogeneous semi-Markov systems or, equivalently, open semi-Markov population models were introduced and studied in [12,13]. The study of open populations in a finite state space in continuous time and governed by Markov laws has already been carried out in [14] and the references therein, and extensions to a general state space have been given in [15,16,17]. The continuous time framework has also been addressed, for instance, in [18,19,20], for the case of semi-Markov processes, and for non-homogeneous semi-Markov systems in [21]. We may also refer to a framework of open Markov chains with finite state space—see in [22] and references therein—that has already seen applications in actuarial and financial problems—as, for instance, in [23,24]—but also in population dynamics (see [25]). The weaker formalism of open Markov schemes in discrete time—developed in [26]—allows the influxes of new elements into the population to be given as general time series models.
Another approach was motivated by the study, in [27], of a continuous time non-homogeneous Markov chain model for Long Term Care, based on an estimated Markov chain transition matrix with a finite state space; there, the intensities of the continuous time Markov chain are calibrated with the discrete time transition matrix, in the context of the usual existence theorems for ordinary differential equations (ODE). This method will be considered, in Section 3.2, in the more general context of Caratheodory existence theorems for ODE.
The main contribution of the present work is to extend results on open Markov chains in discrete time to continuous time processes of Markov type, using different methods of associating a continuous time process to an observed process in discrete time. One of these methods—presented in Section 3.2 and Section 3.3—is the calibration of the transition intensities. Another method—considered for semi-Markov processes in Section 4.2 and also, briefly, for some particular cases of open Markov schemes, in Section 4.3—is to exploit a natural representation of the continuous time Markov type process, given in Formula (2) of Section 2.

2. From Discrete Time to Continuous Time via a Structural Approach

We present the main ideas on a structural representation for continuous time processes of Markov type that are crucial to our approach. The structure of continuous time processes—for instance, Markov processes, semi-Markov processes, and Markov type schemes—allows us to consider a fairly general representation formula—Formula (2)—decoupling the continuous time process into a discrete time process and a sequence of time functions depending on the sequence of the jump stopping times.
Consider a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$, a continuous time stochastic process $(Y_t)_{t \geq 0}$ defined on this probability space, and $\mathbb{F} = (\mathcal{F}_t)_{t \geq 0}$ the natural filtration associated to this process, that is, such that $\mathcal{F}_t := \sigma(Y_s : s \leq t)$ is the $\sigma$-algebra generated by the variables of the process until time $t$. Consider also a sequence of random variables $(Z_n)_{n \geq 0}$ taking values in a finite state space $\Theta = \{\theta_1, \theta_2, \dots, \theta_r\}$, the sequence being adapted to the filtration $\mathbb{F}$, and $0 \leq \tau_0 < \tau_1 < \tau_2 < \dots < \tau_n < \dots$ an increasing sequence of $\mathbb{F}$-stopping times, denoted by $\mathcal{T}$, satisfying the following hypothesis:
Hypothesis 1.
Almost surely, $\lim_{n \to +\infty} \tau_n = +\infty$ and, for any $T \in \mathbb{R}^+$ and almost all $\omega \in \Omega$:
\[
\#\{ k \geq 1 : \tau_k(\omega) \leq T \} < +\infty. \tag{1}
\]
This hypothesis means that in every compact time interval $[0,T]$, for almost all $\omega \in \Omega$, there is only a finite number of stopping time realizations $\tau_k(\omega)$ in this interval.
Hypothesis 2.
The continuous time process $(Y_t)_{t \geq 0}$ admits a representation given, for $t \geq 0$, by
\[
Y_t = \sum_{n=0}^{+\infty} Z_n \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t), \tag{2}
\]
that is, a hypothesis on the structure of the continuous time process $(Y_t)_{t \geq 0}$.
It is well known—see in [28] (pp. 367–379) and in [29] (pp. 317–320)—that if $(Z_n)_{n \geq 0}$ is a Markov chain and the time intervals $(\tau_{n+1} - \tau_n)_{n \geq 0}$ are exponentially distributed, then $(Y_t)_{t \geq 0}$ can be taken to be a continuous time homogeneous Markov chain. If $(Z_n)_{n \geq 0}$ is a Markov chain and the time intervals $(\tau_{n+1} - \tau_n)_{n \geq 0}$ have a distribution that may depend on the present state as well as on the one visited next, then $(Y_t)_{t \geq 0}$ can be taken to be a semi-Markov process (see in [30] (pp. 261–262) and in [31] (pp. 295–299), for brief references). In the case of a semi-Markov process, a nice result of Ronald Pyke (see in [32] (p. 1236)), reproduced ahead in Theorem A7, guarantees that when the state space is finite the process is regular, implying that almost all paths of such a semi-Markov process are step functions over $[0,+\infty[$, and so the paths satisfy Formula (1). In another important case (see Theorems A5 and A6 ahead, or [30] (pp. 262–266) and [31] (pp. 195–244)), adequate hypotheses on the distribution of the stopping times and on the sequence $(Z_n)_{n \geq 0}$ imply that $(Y_t)_{t \geq 0}$ is a non-homogeneous Markov chain process in continuous time, whose trajectories are step functions also satisfying Formula (1). The representation in Formula (2) thus covers the cases of homogeneous and non-homogeneous Markov processes in continuous time as well as semi-Markov processes, providing the desired connection between a continuous time process and a discrete time one that is a component of the former. We observe that there is a practical justification for Hypothesis 1, namely, the identifiability of the process; as can be read in [33] (p. 3): “…Actually, in real systems the transition from one observable state into another takes some time.” Being so, the existence of accumulation points in a compact interval would preclude estimation procedures, for instance, of the distribution of the sequence $(\tau_{n+1} - \tau_n)_{n \geq 1}$. A simulation sketch of the representation in Formula (2) follows.
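To fix ideas, here is a minimal simulation sketch of the representation in Formula (2); the transition matrix, the holding rates, and all function names are our own illustrative assumptions, not objects from the paper. With exponential holding times, as below, $(Y_t)_{t \geq 0}$ is a homogeneous continuous time Markov chain; holding times whose law depends on the present and the next state would give a semi-Markov process instead.

```python
# Sketch: a path of Y_t = sum_n Z_n 1I_[tau_n, tau_{n+1})(t) built from a
# discrete time Markov chain (Z_n) and jump times (tau_n).  All numerical
# values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.7, 0.2, 0.1],      # transition matrix of the chain (Z_n)
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])
rates = np.array([1.0, 0.5, 2.0])   # exponential holding rate in each state

def sample_path(horizon, z0=0):
    """Return jump times (tau_n) and embedded states (Z_n) covering [0, horizon]."""
    taus, states = [0.0], [z0]
    while taus[-1] < horizon:
        z = states[-1]
        taus.append(taus[-1] + rng.exponential(1.0 / rates[z]))
        states.append(rng.choice(len(P), p=P[z]))
    return np.array(taus), np.array(states)

def Y(t, taus, states):
    """Evaluate Y_t: the state Z_n for the unique n with tau_n <= t < tau_{n+1}."""
    n = np.searchsorted(taus, t, side="right") - 1
    return states[n]

taus, states = sample_path(10.0)
print([Y(t, taus, states) for t in (0.5, 2.5, 7.5)])
```

Hypothesis 1 is visible in the sketch: on any compact horizon the loop stops after finitely many jumps, almost surely.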

3. From Discrete to Continuous Time Markov Chains: A Calibration Approach

In this section, we consider a calibration approach in order to determine a set of probability densities that best approximates a sequence of discrete time transition matrices with respect to a quadratic loss function. We then show that embeddable stochastic matrices, according to Definition 1, are solutions of the calibration problem. For the reader's convenience, we recall in the first appendix the most important results on continuous time Markov chains with finite state space that are relevant for our study, with emphasis on the crucial non-accumulation property of the jump times of a continuous time Markov chain (see Theorem A6 ahead). We start by recalling the main information on embeddable chains. We then present one of the main contributions of this work, that is, a general result on the optimization problem of calibration and its relations with the embeddability properties of discrete time Markov chains.

3.1. The Embedding of a Discrete Time Markov Chain in a Continuous One

The embedding of a discrete time Markov chain in a continuous time one, following the guidelines, for instance, in [34,35,36,37,38,39,40], can be considered as a method to connect a discrete time process with a continuous time one. For notations on non-homogeneous continuous time Markov chains see Section 3.2.
Definition 1
(Embeddable stochastic matrix (see [38])). A stochastic matrix $R$ is said to be embeddable if there exist a time $t_R > 0$ and a family of stochastic matrices $P(s,t)$, continuously defined on the set of times $\{(s,t) \in \mathbb{R}^2 : 0 \leq s \leq t \leq t_R\}$, such that
\[
\begin{cases}
P(s,t) = P(s,u)\,P(u,t) & 0 \leq s \leq u \leq t \leq t_R \\
P(s,s) = I & 0 \leq s \leq t_R \\
P(0,t_R) = R.
\end{cases} \tag{3}
\]
We observe that, by Theorem A2 ahead, the conditions in Formulas (3) are tantamount to the definition of a continuous time Markov chain with transition probabilities given by $P(s,t)$.
Remark 1
(Intrinsic time for embeddable chains). Goodman in [41]—aiming at a more general result for the Kolmogorov differential equations—showed that with the change of time given by $\varphi(u) := -\log \det P(0,u)$—which amounts to a change in the matrix coefficients of $P(s,t)$—we have that
\[
t_R = -\log \det R. \tag{4}
\]
This remarkable representation of the embedding time $t_R$ will be useful for a result in Section 3.2 devoted to the calibration approach. It has also been used for estimation in [42] (p. 330).
See the work in [35] for a definition similar to Definition 1 and for a summary of many important results on this subject. The characterization of an embeddable stochastic matrix in a form useful for practical purposes was recently achieved in [43]. More useful results were obtained in [44]. The connections between this kind of embedding and the other approaches for associating a discrete time Markov chain with a continuous time process deserve further study. A rough numerical probe of embeddability is sketched below.
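As a complement, the following sketch (our own illustration, not the characterization of [43]) tests one candidate branch for embeddability in a homogeneous chain: $R = e^{Q}$ holds for some intensity matrix $Q$ exactly when $R$ is embeddable with constant intensities, so one may compute the principal matrix logarithm of $R$ and check the generator conditions on it. A negative outcome of this probe does not disprove embeddability, since only the principal branch of the logarithm is inspected.

```python
# Probe: does the principal logarithm of R give a valid intensity matrix?
# (Off-diagonal entries >= 0 and zero row sums; the matrix R is made up.)
import numpy as np
from scipy.linalg import logm, expm

def candidate_generator(R, tol=1e-10):
    Q = np.real_if_close(logm(R), tol=1000)   # principal matrix logarithm
    off = Q - np.diag(np.diag(Q))             # off-diagonal part of Q
    ok = (np.all(off >= -tol)                 # intensities must be >= 0
          and np.allclose(Q.sum(axis=1), 0.0, atol=1e-8)
          and np.allclose(expm(Q), R, atol=1e-8))
    return Q if ok else None

R = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
print("embeddable via the principal branch:", candidate_generator(R) is not None)
```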

3.2. Continuous Time Markov Chains Calibration with a Discrete Time Markov Transition Matrix

The calibration of the transition intensities of a non-homogeneous Markov chain, with a discrete time Markov chain transition matrix estimated from data, was proposed in [27]. In this section, we establish a general formulation of an existence and unicity result that subsumes that approach, and we establish a connection with the embedding approach of Section 3.1. Notation and essential results on non-homogeneous Markov processes in continuous time are recalled in Appendix A.
The procedure for calibration of intensities consists in finding the intensities of a non homogeneous continuous time Markov chain using a probability transition matrix of a discrete time Markov chain and a given loss function—having as arguments the transition probabilities of the continuous time Markov chain and some function of the transition matrix of the discrete time Markov chain—in such a way that the loss function is minimized.
Before considering the theorem on the calibration of intensities, we discuss some motivation for this approach. It may happen that a phenomenon that could—due to its characteristics—be dealt with by a continuous time Markov chain model can only be observed at regularly spaced time intervals. This is the case of the periodic assessments of the healthcare status of patients, which can change at any time but are only the object of a comprehensive evaluation on, say, a weekly basis. With the data originated by these observations we can only determine transition probabilities—for a defined period, say, a week—and, most importantly, we cannot determine the time stamps of the patient status changes. The question naturally poses itself: is it possible to associate—in some canonical way—to an estimated discrete time Markov chain transition matrix a process in continuous time that encompasses the discrete time process? First steps in this direction are provided by Theorem 1, which we now present, and the following Theorems 2 and 3.
We formulate Theorem 1 in the context of Caratheodory’s general existence theory of solutions of ordinary differential equations that we briefly recall. One reason for this choice is that according to [41] (p. 169) and we quote: “…This fact gives further evidence in support of the view that Caratheodory equations occupy a natural place in the theory of non-stationary Markov chains.” Another reason is the fact that Caratheodory existence theory is particularly suited for regime switching models and these models are the object of Theorem 3 ahead. Following the work in [45] (pp. 41–44), we consider the definition of an extended solution for a Cauchy problem of a differential equation,
\[
Y'(t) = f(t, Y(t)), \quad Y(0) = \xi,
\]
or formulated in an equivalent form,
\[
Y(t) = \xi + \int_0^t f(s, Y(s))\, ds,
\]
for $f(t,\mathbf{y}) : I \times D \to \mathbb{R}^r$ a not necessarily continuous function, with $I \subseteq [0,+\infty[$ and $D \subseteq \mathbb{R}^r$, to be an absolutely continuous function $Y(t)$ (see [46], pp. 144–150) such that $Y(t) \in D$ for $t \in I$ and Formula (5) is verified for all $t \in I$, possibly with the exception of a set of null Lebesgue measure. The well-known Caratheodory existence theorem (see in [45], p. 43) ensures the existence of an extended solution with a given initial condition—defined in a neighborhood of the initial time—under the conditions that $f(t,\mathbf{y})$ is measurable in the variable $t$, for fixed $\mathbf{y}$, and continuous in the variable $\mathbf{y}$, for fixed $t$, and, moreover, that there exists a Lebesgue integrable function $m(t)$, defined on a neighborhood of the initial time, say $I$, such that $\|f(t,\mathbf{y})\| \leq m(t)$ for $(t,\mathbf{y}) \in I \times D$. The question of unicity of the solution is usually dealt with either directly, using Theorem 18.4.13 in [47] (p. 337), or using Osgood's uniqueness theorem—as exposed, for instance, in [48] (p. 58) or in [49] (pp. 149–151)—to conclude that the extended solution—which by Caratheodory's theorem we know to exist—is unique in the sense that two solutions may only differ on a set of Lebesgue measure zero. For our purposes we need an existence and unicity theorem for ordinary differential equations with solutions depending continuously on a parameter, such as the general result of Theorem 4.2 in [45] (p. 53), whose proof is omitted there as it follows from a lengthy previous exposition of related matters. For completeness, we now establish a result that is suited to our purposes, as it deals with the particular type of Kolmogorov equations for continuous time Markov chains.
Theorem 1
(Calibration of intensities with Caratheodory's type ODE existence theorem hypothesis). Let, for $1 \leq n \leq N$, $R_{\tau_n} = \left[ r_{ij}(\tau_n) \right]_{i,j=1,\dots,r}$ be the generic element of a sequence of numerical transition matrices taken at a sequence of increasing dates $(\tau_n)_{1 \leq n \leq N}$. Consider a set of intensities $Q(t,\lambda) = \left[ q(u,i,j,\lambda) \right]_{i,j=1,\dots,r}$—with $\lambda \in \Lambda \subset \mathbb{R}^d$ being a parameter and $\Lambda$ being a compact set—satisfying the following conditions:
1. For every fixed $\lambda$, the functions $q(u,i,j,\lambda)$ are measurable as functions of $u$.
2. For every fixed $u$, the functions $q(u,i,j,\lambda)$ are continuous as functions of $\lambda$.
3. There exists a locally integrable function $M : [0,+\infty[ \to [0,+\infty[$ such that, for all $\lambda \in \Lambda$, $i \in \mathcal{I}$, $u \in [0,+\infty[$ and $0 \leq s \leq t$, the following conditions are verified:
\[
|q(u,i,i,\lambda)| \leq M(u) \quad \text{and} \quad \int_s^t M(u)\, du < +\infty.
\]
Then, we have:
1. There exists a probability transition matrix $P(s,t,\lambda) = \left[ p(s,i,t,j,\lambda) \right]_{i,j=1,\dots,r}$, with entries absolutely continuous in $s$ and $t$, such that the conditions in Definition A2, the Chapman–Kolmogorov equations in Theorem A1, and Theorem A3 are verified.
2. For each fixed $s_0$, consider the loss function
\[
O(s_0,\lambda) := \sum_{i,j=1,\dots,r} \sum_{n=1}^{N} \left( p(s_0,i,\tau_n,j,\lambda) - r_{ij}(\tau_n) \right)^2. \tag{8}
\]
Then, for the optimization problem $\inf_{\lambda \in \Lambda} O(s_0,\lambda)$ there exists $\lambda_0 \in \Lambda$ such that
\[
O(s_0,\lambda_0) = \min_{\lambda \in \Lambda} O(s_0,\lambda),
\]
the minimum value being attained at possibly several points $\lambda_0 \in \Lambda$.
Proof. 
We will prove, simultaneously, the existence of the probability transition matrix, the unicity in the extended solution sense, and the continuous dependence on the parameter $\lambda \in \Lambda$, following the lines of the proof of the result denominated Hostinsky's representation (see in [29], pp. 348–349). As we suppose that $\Lambda$ is compact, the continuity of $P(s_0,t,\lambda)$, as a function of $\lambda \in \Lambda$ for every fixed $t$, will be enough to establish the second thesis.
We want to determine an extended solution of the Kolmogorov forward equation given in Formula (A11), that is, an extended solution of
\[
\frac{\partial P}{\partial t}(s_0,t,\lambda) = P(s_0,t,\lambda)\, Q(t,\lambda), \quad P(t,t) = I,
\]
an equation which, as seen in Formula (A12), can be read in integral form as
\[
P(s_0,t,\lambda) = I + \int_{[s_0,t]} P(s_0,s,\lambda)\, Q(s,\lambda)\, ds. \tag{11}
\]
As previously said, we now follow the general idea of successive approximations, as in the proof of the Picard–Lindelöf existence and unicity theorem for ordinary differential equations, applied to the forward Kolmogorov equation. By replacing $P(s_0,s,\lambda)$ in the right-hand member of Equation (11) by this right-hand member itself, we get
\[
P(s_0,t,\lambda) = I + \int_{[s_0,t]} Q(t_1,\lambda)\, dt_1 + \int_{[s_0,t]} \int_{[s_0,t_1]} P(s_0,t_2,\lambda)\, Q(t_2,\lambda)\, Q(t_1,\lambda)\, dt_2\, dt_1
\]
and, by induction, we obtain
\[
\begin{aligned}
P(s_0,t,\lambda) = {} & I + \int_{[s_0,t]} Q(t_1,\lambda)\, dt_1 + \sum_{n=2}^{k} \int_{[s_0,t]} \int_{[t_1,t]} \cdots \int_{[t_{n-1},t]} Q(t_1,\lambda)\, Q(t_2,\lambda) \cdots Q(t_n,\lambda)\, dt_n \cdots dt_1 \\
& + \int_{[s_0,t]} \int_{[t_1,t]} \cdots \int_{[t_{k-1},t]} P(s_0,t_1,\lambda)\, Q(t_1,\lambda)\, Q(t_2,\lambda) \cdots Q(t_k,\lambda)\, dt_k \cdots dt_1.
\end{aligned}
\]
Now, considering the function $M(t)$ in the third hypothesis stated above on the intensity matrix, we have, by Lemma A1 (see also Lemma 8.4.1 in [29], p. 348) and since $M(t)$ is integrable over any compact set, considering the $(i,j)$ component of the $r \times r$ matrix, that
\[
\left| \left[ \int_{[s_0,t]} \int_{[t_1,t]} \cdots \int_{[t_{k-1},t]} P(s_0,t_1,\lambda)\, Q(t_1,\lambda)\, Q(t_2,\lambda) \cdots Q(t_k,\lambda)\, dt_k \cdots dt_1 \right]_{ij} \right| \leq r^k \int_{[s_0,t]} \int_{[t_1,t]} \cdots \int_{[t_{k-1},t]} M(t_1)\, M(t_2) \cdots M(t_k)\, dt_k \cdots dt_1 = \frac{\left( r \int_{[s_0,t]} M(s)\, ds \right)^k}{k!}.
\]
Finally, as
\[
\lim_{k \to +\infty} \frac{\left( r \int_{[s_0,t]} M(s)\, ds \right)^k}{k!} = 0,
\]
we have that the series whose sum represents $P(s_0,t,\lambda)$, that is,
\[
P(s_0,t,\lambda) = I + \sum_{n=1}^{+\infty} \int_{[s_0,t]} \int_{[t_1,t]} \cdots \int_{[t_{n-1},t]} Q(t_1,\lambda)\, Q(t_2,\lambda) \cdots Q(t_n,\lambda)\, dt_n \cdots dt_1,
\]
is a series—of absolutely continuous functions of the variable $t$ which are also continuous as functions of the parameter $\lambda \in \Lambda$—converging normally, and so its sum is an absolutely continuous function of the variable $t$ and a continuous function of the parameter $\lambda$. With a similar reasoning applied to the backward Kolmogorov equation, we also have that $P(s,t_0,\lambda)$ is absolutely continuous in the variable $s$ and, obviously, continuous as a function of the parameter $\lambda \in \Lambda$. We observe that it was stated in [41], pp. 166–167 (with a reference to a proof in [50], and proved also in [51]), that the separate absolute continuity of $P(s,t,\lambda)$ in the variables $s$ and $t$ ensures the uniqueness of the solution. □
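For intuition about the series just used, the following check (our own sketch, with an arbitrarily chosen constant intensity matrix) verifies numerically that, when $Q$ does not depend on time, the iterated integrals reduce to $(t-s_0)^n Q^n / n!$ and the partial sums of the series converge to the matrix exponential $e^{Q(t-s_0)}$.

```python
# Partial sums of the series in the proof, specialized to a constant Q,
# compared against expm(Q (t - s0)).  The values of Q, s0, t are made up.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.4, 0.4],
              [0.1, -0.1]])
s0, t = 0.0, 2.0

partial = np.eye(2)
term = np.eye(2)
for n in range(1, 20):
    term = term @ (Q * (t - s0)) / n      # n-th term: (t-s0)^n Q^n / n!
    partial = partial + term

print(np.max(np.abs(partial - expm(Q * (t - s0)))))   # ~ 1e-16
```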
Remark 2
(An alternative path for the existence result). We observe that, for every fixed value of the parameter λ, by a direct application of Caratheodory's existence theorem to the forward and backward Kolmogorov equations in Theorem A3, we obtain a probability transition matrix $P(s,t,\lambda) = \left[ p(s,i,t,j,\lambda) \right]_{i,j=1,\dots,r}$ such that the conditions in Definition A2 and the Chapman–Kolmogorov equations in Theorem A1 are verified, which in addition has entries absolutely continuous in $s$ and $t$, and such that the Kolmogorov equations are satisfied almost everywhere. With this approach, the continuous dependence of the probability transition matrix on the parameter λ requires further proof.
Remark 3
(On the parametrized intensities and transition probabilities). In a first application to Long-Term Care of a simpler version of Theorem 1, presented in [27], we chose as intensities a parametrized family—of Gompertz–Makeham type (see, for instance, in [52], p. 62)—with a three-dimensional parameter. We observe that, in its present formulation, Theorem 1 contemplates the case of a set of intensities—and of associated transition probabilities—not necessarily sharing the same functional form with varying parameters, but merely having a finite set of different functional forms indexed by the parameters.
Remark 4
(Only one transition matrix observation). In the case where we only have one estimated transition matrix $R$, we can consider the sequence of $n$-step transition matrices given by the $n$-fold products of the matrix $R$ by itself. This situation will be addressed in Theorem 2 ahead, in the case of homogeneous Markov chains, and in Theorem 3 for the non-homogeneous case.
We also observe that, in the case of a multidimensional parameter set $\Lambda$—say of dimension $r_1$—and even for a reasonably sized state space of the discrete time Markov chain—say with $r_2$ states—the optimization problem of Formula (8) may require adequate algorithms to be solved, as the number of variables is of the order of $r_1 \times r_2 \times (r_2 - 1)$. In [27] we opted for a modified grid search coupled with the numerical solution of the Kolmogorov equations in order to recover the transition probabilities of the continuous time Markov chain; a schematic version of such a loop is sketched below.
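The following sketch illustrates the calibration loop in a two-state example; the Gompertz–Makeham style intensity, the grid, and all numerical values are our own assumptions for illustration, and a plain grid search stands in for the modified one used in [27].

```python
# Calibration sketch: solve the forward Kolmogorov equation for each
# candidate parameter and keep the minimizer of the loss of Formula (8).
import numpy as np
from scipy.integrate import solve_ivp

r = 2                                   # two states: active / exited

def Q_mat(t, lam):
    a, b = lam
    q01 = a * np.exp(b * t)             # time-dependent intensity 0 -> 1
    return np.array([[-q01, q01],
                     [0.0, 0.0]])       # state 1 absorbing

def P_of(lam, s0, t):
    """Integrate dP/dt = P Q(t, lam), with P(s0, s0) = I, from s0 to t."""
    rhs = lambda u, p: (p.reshape(r, r) @ Q_mat(u, lam)).ravel()
    sol = solve_ivp(rhs, (s0, t), np.eye(r).ravel(), rtol=1e-8)
    return sol.y[:, -1].reshape(r, r)

# one observed one-period transition matrix and its powers, as in Remark 4
R_obs = np.array([[0.85, 0.15],
                  [0.00, 1.00]])
targets = [np.linalg.matrix_power(R_obs, n) for n in (1, 2, 3)]

def loss(lam):
    return sum(np.sum((P_of(lam, 0.0, n) - Rn) ** 2)
               for n, Rn in enumerate(targets, start=1))

grid = [(a, b) for a in np.linspace(0.05, 0.4, 8)
               for b in np.linspace(0.0, 0.2, 8)]
best = min(grid, key=loss)
print("calibrated parameter:", best, "with loss:", loss(best))
```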
Remark 5
(On the unicity of the solution of the calibration problem). The unicity in law of the solution of the calibration problem deserves discussion. If there are several minimizers of the calibration problem, to each of these minimizers corresponds an intensity, and to each intensity a possibly different law for the stopping times of the continuous time Markov chain, as these laws are determined by the intensities (see Remark A2). The existence of criteria allowing the identification of a distribution of inter-arrival times that stochastically dominates all other solutions is an open problem.
We can establish a connection between the approach in Section 3.1 and Theorem 1 on calibration above, showing first—in Theorem 2—that, if a matrix is embeddable in a homogeneous continuous time Markov chain—with intensities depending continuously on a parameter—for a fixed value of the parameter, then this continuous time Markov chain solves the calibration problem in an optimal way. We recall that the continuous time Markov chain is homogeneous if, for all $0 \leq s, t$, the transition probabilities satisfy
\[
P(s, s+t) = P(0, t),
\]
and that the intensity matrix is then constant as a function of time (see [41] (pp. 165–166) for definitions in this context).
Theorem 2
(Discrete chains embeddable in homogeneous continuous chains can be optimally calibrated). Suppose that the matrix $R$ is embeddable, and let $t_R$ and the transition probabilities $P(s,t,\lambda_1)$ satisfy Definition 1 in the case of a homogeneous continuous time Markov chain, for some family of intensities $Q(\lambda_1)$ where $\lambda_1 \in \Lambda$ is a given parameter. Then, with $\tau_n := n\, t_R$ for $n \geq 1$ and $R_{\tau_n} := R^{(n)}$—the $n$-fold product of the matrix $R$ by itself—we have that the optimization problem $\inf_{\lambda \in \Lambda} O(\lambda)$, with respect to the loss function given by Formula (8), has an optimal solution $P(s,t,\lambda_1)$ such that
\[
O(\lambda_1) = \min_{\lambda \in \Lambda} O(\lambda) = 0.
\]
Proof. 
It is enough to observe that, by Formulas (3) in Definition 1, we have, as $\tau_2 - \tau_1 = \tau_1$,
\[
P(0,\tau_2,\lambda_1) = P(0,\tau_1,\lambda_1)\, P(\tau_1,\tau_2,\lambda_1) = P(0,\tau_1,\lambda_1)\, P(0,\tau_2-\tau_1,\lambda_1) = P(0,\tau_1,\lambda_1)\, P(0,\tau_1,\lambda_1) = R^{(2)} = R_{\tau_2},
\]
and, by induction, that $P(0,\tau_n,\lambda_1) = R_{\tau_n}$, and so in Formula (8) we have that $O(\lambda_1) = 0$. □
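A numerical check of this statement (our own sketch, under the assumption of a homogeneous embedding built from an arbitrarily chosen generator) can be done by constructing $R = e^{Q t_R}$ and confirming that $P(0, n t_R) = e^{Q n t_R}$ coincides with the $n$-fold product $R^{(n)}$, so that the loss of Formula (8) vanishes at this intensity.

```python
# Check of Theorem 2 for a constructed embeddable matrix: the loss at the
# embedding intensity is zero up to rounding.  Q and t_R are made up.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.20, 0.15, 0.05],
              [0.10, -0.30, 0.20],
              [0.00, 0.25, -0.25]])   # an intensity matrix (zero row sums)
t_R = 1.0
R = expm(Q * t_R)                     # embeddable by construction

loss = sum(np.sum((expm(Q * n * t_R) - np.linalg.matrix_power(R, n)) ** 2)
           for n in range(1, 6))
print(f"loss at the embedding intensity: {loss:.2e}")
```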
Remark 6
(On the skeletons of a homogeneous continuous time Markov chain). Another possible way to extend results from discrete time to continuous time is the approach of skeletons of Kingman and other authors (see [53,54], for instance). As we are more interested in non-homogeneous continuous time Markov chains we do not pursue this approach in the present work.
We now address the case of non-homogeneous Markov chains. In Theorem 3, we show that, if every element of a sequence, with no gaps, of matrix powers of a discrete time Markov chain transition matrix is embeddable, then there is a regime switching process of Markov type that solves the calibration problem optimally.
Theorem 3
(Power-embeddable discrete chains can be optimally calibrated). Suppose that all the powers $R^{(n)} = \left[ r_{ij}^{(n)} \right]_{i,j=1,\dots,r}$, for $1 \leq n \leq N$, of a discrete time Markov chain transition matrix $R$ are embeddable, and let $P_n(s,t,\lambda_n)$ be the transition probabilities of the continuous time Markov chain embedding $R^{(n)}$, given in their intrinsic time—defined in Remark 1—in such a way that the respective embedding times verify $t_{R^{(n)}} = -n \log \det R$ (according to Formula (4)). We suppose that the intensities $Q_n(t,\lambda_n)$ for each of the transition probabilities $P_n(s,t,\lambda_n)$ depend on parameters $\lambda_n \in \Lambda$, possibly different but all in a common parameter set $\Lambda$. With the convention $t_{R^{(0)}} = 0$, and
\[
\lambda(t) := \lambda_n, \quad t_{R^{(n-1)}} \leq t \leq t_{R^{(n)}},
\]
let $\widetilde{P}(s,t,\lambda(t))$ be defined by
\[
\widetilde{P}(s,t,\lambda(t)) := P_n(s,t,\lambda_n), \quad 0 = t_{R^{(0)}} \leq s \leq t_{R^{(n)}}, \; t_{R^{(n-1)}} \leq t \leq t_{R^{(n)}}, \; s \leq t, \tag{12}
\]
and thus satisfying $\widetilde{P}(0,t_{R^{(n)}},\lambda(t)) = P_n(0,t_{R^{(n)}},\lambda_n) = R^{(n)}$. Then, we have that the optimization problem $\inf_{\lambda \in \Lambda} O(\lambda)$, with respect to the loss function given by
\[
O(\lambda) := \sum_{i,j=1,\dots,r} \sum_{n=1}^{N} \left( \left[ \widetilde{P}(0,t_{R^{(n)}},\lambda(t)) \right]_{ij} - r_{ij}^{(n)} \right)^2,
\]
has an optimal solution $\widetilde{P}(s,t,\lambda(t))$ such that
\[
O(\lambda(t)) = \min_{\lambda \in \Lambda} O(\lambda) = 0.
\]
Proof. 
We observe that the definition in Formula (12) is coherent—see Figure 1—and the rest is a simple verification with the definitions proposed. □
Remark 7
(An associated regime switching process). The function $\widetilde{P}(s,t,\lambda(t))$ defined in Formula (12) was obtained by superimposing different transition probabilities of different Markov chains in continuous time. A natural question is to determine if there is—based on these different transition probabilities—a regime switching Markov chain in continuous time that bears some connection with $\widetilde{P}(s,t,\lambda(t))$. From a brief analysis of Figure 1 we can guess the natural definition of a regime switching Markov chain based on the probabilities $P_n(s,t,\lambda_n)$. Let
\[
P(s,t,\lambda(t)) := P_n(s,t,\lambda_n), \quad t_{R^{(n-1)}} \leq s \leq t \leq t_{R^{(n)}}. \tag{14}
\]
Formula (14) has the following interpretation. For each $1 \leq n \leq N$, consider continuous time Markov chain processes $(X_t^n)_{t \in [t_{R^{(n-1)}}, t_{R^{(n)}}]}$, with transition probabilities $P_n(s,t,\lambda_n)$ defined on the domains $\mathcal{R}_n := \{(s,t) \in \mathbb{R}^2 : t_{R^{(n-1)}} \leq s \leq t \leq t_{R^{(n)}}\}$, with the convention $t_{R^{(0)}} = 0$. The regime switching process $(Y_t)_{t \in [0, t_{R^{(N)}}]}$ is such that (compare with Formula (2)):
\[
Y_t = X_t^n, \quad t \in [t_{R^{(n-1)}}, t_{R^{(n)}}],
\]
that is, the process $(Y_t)_{t \in [0, t_{R^{(N)}}]}$ is obtained by gluing together the paths of the processes $(X_t^n)_{t \in [t_{R^{(n-1)}}, t_{R^{(n)}}]}$, which are bona fide continuous time Markov processes in each of their—non-random—time intervals $[t_{R^{(n-1)}}, t_{R^{(n)}}]$. It is clear that $P(s,t,\lambda(t))$ can be interpreted as a transition probability only when restricted to some domain $\mathcal{R}_n$ and that, in general, it will not be a transition probability on the whole interval $[0, t_{R^{(N)}}]$.
Remark 8.
The regime switching process defined in Remark 7 deserves further study. We may, nevertheless, define transition probabilities $\widehat{P}(s,t,\lambda(t))$ for $t_{R^{(k-1)}} \leq s \leq t_{R^{(k)}} \leq t \leq t_{R^{(k+1)}}$—with properties to be thoroughly investigated—by considering
\[
\widehat{P}(s,t,\lambda(t)) := P_k(s, t_{R^{(k)}}, \lambda_k) \cdot P_{k+1}(t_{R^{(k)}}, t, \lambda_{k+1}).
\]
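For intuition, the across-boundary composition just defined can be sketched numerically; in the sketch below the two regimes are, for simplicity, homogeneous chains with arbitrarily chosen generators, which is our own assumption and only one instance of the construction.

```python
# Across-boundary transitions of Remark 8: chain P_k up to the switching
# time t_Rk, then P_{k+1} from t_Rk to t.  All values are illustrative.
import numpy as np
from scipy.linalg import expm

Q1 = np.array([[-0.5, 0.5], [0.2, -0.2]])   # regime k intensities
Q2 = np.array([[-1.0, 1.0], [0.8, -0.8]])   # regime k+1 intensities
t_Rk = 2.0                                  # switching time

def P_hat(s, t):
    """P_k(s, t_Rk) @ P_{k+1}(t_Rk, t), for s <= t_Rk <= t."""
    return expm(Q1 * (t_Rk - s)) @ expm(Q2 * (t - t_Rk))

P = P_hat(1.0, 3.0)
print(P, P.sum(axis=1))   # rows sum to 1: a bona fide stochastic matrix
```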

3.3. Conclusions on the Relations between Embeddable Matrices, Calibration, and Open Markov Chain Models

From Theorems 1–3, the following conclusions can be drawn. Given a discrete time Markov transition matrix:
  • if the matrix is embeddable—according to Definition 1 of Section 3.1—there is a unique-in-law homogeneous Markov chain in continuous time that solves the calibration problem optimally; the unicity is a consequence of Remark A2, which shows that the laws of the stopping times $(\tau_n)_{n \geq 0}$ in the representation of Formula (A13) only depend on the intensities, and these are uniquely determined whenever the discrete time Markov chain is embeddable.
  • if the matrix is power-embeddable—that is, if all the matrices of a finite sequence, with no gaps, of powers of the matrix are embeddable—then there is a unique regime switching continuous time non-homogeneous Markov chain—in the sense of Remark 7—that solves the calibration problem optimally. In this case, the unicity has a justification similar to the previous one, that is, the laws of the stopping times only depend on the intensities, and these are determined by the fact that the matrix is power-embeddable.
As a consequence, for our purposes it appears of fundamental importance to determine whether a discrete time Markov chain transition matrix is embeddable and to determine—if possible, explicitly—the embedding continuous time Markov chain. Regarding this problem, the results in [43,55] deserve further consideration.
Remark 9
(Applying Theorems 1–3). Suppose that the discrete time Markov chain transition matrix of a Markov chain process $(Z_n)_{n \geq 1}$ is embeddable in a continuous time Markov chain $(X_t)_{t \geq 0}$. We have, for this continuous time process and for a determined sequence of stopping times $(\tau_n)_{n \geq 1}$, the representation given in Formula (A13) of Theorem A5, that is,
\[
X_t = \sum_{n=0}^{+\infty} X_{\tau_n} \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t).
\]
Now, as the theorems referred to allow us to consider that the process $(Z_n)_{n \geq 1}$ is suitably approximated by $(X_t)_{t \geq 0}$, we can also consider that the continuous time process defined by
\[
\widetilde{X}_t := \sum_{n=0}^{+\infty} Z_n \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t),
\]
is an approximation of $(Z_n)_{n \geq 1}$ in continuous time. For processes with a structural representation similar to that of the process $(\widetilde{X}_t)_{t \geq 0}$, we propose in Section 4.3 a method to extend the open populations methodology from discrete to continuous time.

4. More on Open Continuous Time Processes from Discrete Ones

In this section, we discuss an extension of the formalism of open Markov chains to the case of semi-Markov processes (sMp) and other continuous time processes, namely the open Markov chain schemes introduced in [26]. For the reader's convenience, we present in Appendix B a short summary on sMp and, in the next Section 4.1, a review of the main results on the open Markov chain formalism for discrete time. Finally, we propose the second main contribution of this work, that is, an extension of the open Markov chain formalism in discrete time to continuous time in the case of sMp. We also briefly refer to the case of open Markov schemes which, in some particular instances, can be dealt with as in the sMp case.

4.1. Open Markov Chain Modeling in Discrete Time: A Short Review

We now detail and comment on the results on discrete time open Markov chains that will be used in this paper. The study of open Markov chain models that we present next relies on results and notations introduced in [56] and further developed in [22], which we reproduce for the reader's convenience. We will suppose that, in general, the transition matrix of the Markov chain model may be written in the following form:
\[
P = \begin{bmatrix} K & U_1 \\ 0 & V \end{bmatrix}
\]
where $K$ is a $k \times k$ transition matrix between transient states, $U_1$ a $k \times (r-k)$ matrix of transitions between the transient and the recurrent states, and $V$ an $(r-k) \times (r-k)$ matrix of transitions between the recurrent states. A straightforward computation then shows that
\[
P^{(n)} = \begin{bmatrix} K^{(n)} & U_n \\ 0 & V^{(n)} \end{bmatrix}, \quad n \in \mathbb{N},
\]
with $U_n = U_{n-1} V + K^{(n-1)} U_1 = \sum_{i=0}^{n-1} K^{(i)} U_1 V^{(n-1-i)}$. We write the vector of the initial classification, for a time period $i$, as
\[
\mathbf{c}_i = \begin{bmatrix} \mathbf{t}_i & \mathbf{r}_i \end{bmatrix}, \quad i \in \mathbb{N},
\]
with $\mathbf{t}_i$ the vector of the initial allocation probabilities for the transient states and $\mathbf{r}_i$ the vector of the initial allocation probabilities for the recurrent states. We suppose that at each epoch $i \geq 0$ there is an influx of new elements into the classes of the population—the population whose evolution is governed by the Markov chain transition matrix—that is Poisson distributed with parameter $\lambda_i$. It is a consequence of the randomized sampling principle (see [57], pp. 216–217) that, if the incoming populations are distributed among the classes according to the multinomial distribution, then the sub-populations in the transient classes have independent Poisson distributions, with parameters given by the product of the Poisson parameter and the probability of an incoming new member being allocated to the given class. With Formulas (16) and (17), we now notice that the vector of the Poisson parameters for the population sizes in each state at an integer time $N$ may be written as
\[
\lambda_N^{++} = \left[ \; \sum_{i=1}^{N} \lambda_i\, \mathbf{t}_i\, K^{(N-i)} \qquad \sum_{i=1}^{N} \lambda_i \left( \mathbf{t}_i\, U_{N-i} + \mathbf{r}_i\, V^{(N-i)} \right) \; \right].
\]
We observe that the first block corresponds to the transient states and the second block, the one on the right-hand side, corresponds to the recurrent states. From now on, as a first restricting hypothesis, we will also suppose that the transition matrix of the transient states, $K$, is diagonalizable, so that
\[
K = \sum_{j=1}^{k} \eta_j\, \alpha_j\, \beta_j,
\]
with $(\eta_j)_{j \in \{1,\dots,k\}}$ the eigenvalues, $(\alpha_j)_{j \in \{1,\dots,k\}}$ the left eigenvectors, and $(\beta_j)_{j \in \{1,\dots,k\}}$ the right eigenvectors of the matrix $K$. We observe that $j \in \{1,\dots,k\}$ corresponds to a transient state if and only if $|\eta_j| < 1$. We may write the powers of $K$ as
\[
K^{(n)} = \sum_{j=1}^{k} \eta_j^n\, \alpha_j\, \beta_j,
\]
and so, as a consequence of (18), for the vector of the Poisson parameters corresponding only to the transient states, $\lambda_N^+$, we have
\[
\lambda_N^+ = \sum_{i=1}^{N} \lambda_i\, \mathbf{t}_i\, K^{(N-i)} = \sum_{j=1}^{k} \sum_{i=1}^{N} \lambda_i\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j. \tag{20}
\]
The main result describing the asymptotic behavior, established in [22], is the following.
Theorem 4
(Asymptotic behavior of Poisson parameters of an open Markov chain with Poisson distributed influxes). Let a Markov chain driven system have a diagonalizable transition matrix between the transient states, $K = \sum_{j=1}^{k} \eta_j\, \alpha_j\, \beta_j$, written in its spectral decomposition form. Suppose the system to be fed by Poisson inputs with intensities $(\lambda_i)_{i \in \mathbb{N}}$ and such that the vector of initial classification of the inputs in the transient states converges to a fixed value, that is, $\lim_{i \to +\infty} \mathbf{t}_i = \mathbf{t} \neq 0$. Then, with $\lambda_n^+$ the vector of Poisson parameters of the transient sub-populations at date $n \in \mathbb{N}$, we have the following:
1. If $\lim_{n \to +\infty} \lambda_n = \lambda \in \mathbb{R}^+$, then
\[
\lambda^+ = \lim_{n \to +\infty} \lambda_n^+ = \sum_{j=1}^{k} \frac{\lambda}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j. \tag{21}
\]
2. If $\lim_{n \to +\infty} \lambda_n = +\infty$ and there exists a constant $C > 0$ such that
\[
\max_{1 \leq i \leq n} \frac{|\lambda_i - \lambda_{i+1}|}{\lambda_n} \leq C,
\]
then
\[
\lim_{n \to +\infty} \frac{\lambda_n^+}{\lambda_n} = \sum_{j=1}^{k} \frac{1}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j.
\]
Remark 10.
We observe that the proportions in the Markov chain transient classes, in both statements of Theorem 4, depend only on the eigenvalues $\eta_j$, $j = 1,\dots,k$. In fact, whenever Formula (21) is used to compute proportions, these proportions do not depend on the value of $\lambda$, as we have
\[
\sum_{j=1}^{k} \frac{\lambda}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j = \lambda \; \mathbf{t} \cdot \sum_{j=1}^{k} \frac{1}{1 - \eta_j}\, \alpha_j\, \beta_j,
\]
and the term on the right-hand side multiplying $\lambda$ is a vector of dimension equal to the number of transient classes $k$, which is equal to the dimension of the square matrix $K$. Thus, when computing proportions, by normalizing this vector by the sum of its components, the factor $\lambda > 0$ disappears. A numerical illustration of Formula (20) and of Theorem 4 is sketched below.
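The sketch below (with an arbitrarily chosen transient block $K$, allocation vector, and constant influx intensity, all our own assumptions) evaluates $\lambda_N^+$ of Formula (20) both directly and through the spectral decomposition, and compares the result with the limit of statement 1 of Theorem 4.

```python
# lambda_N^+ via matrix powers and via the spectral decomposition of K,
# compared with the asymptotic value of Theorem 4(1).  Values are made up.
import numpy as np

K = np.array([[0.6, 0.2],
              [0.1, 0.5]])             # substochastic transient block
eta, A = np.linalg.eig(K)              # columns of A: right eigenvectors
B = np.linalg.inv(A)                   # rows of B: matching left eigenvectors

N, lam = 50, 10.0                      # horizon and constant Poisson intensity
t_vec = np.array([0.7, 0.3])           # allocation of new entries

direct = sum(lam * t_vec @ np.linalg.matrix_power(K, N - i)
             for i in range(1, N + 1))
spectral = sum(lam * eta[j] ** (N - i) * (t_vec @ A[:, j]) * B[j, :]
               for j in range(2) for i in range(1, N + 1))
print(np.allclose(direct, np.real(spectral)))   # True

asympt = sum(lam / (1 - eta[j]) * (t_vec @ A[:, j]) * B[j, :] for j in range(2))
print(direct, np.real(asympt))                  # already close for N = 50
```

Here K equals the sum of the outer products eta[j] * A[:, j] * B[j, :], with A[:, j] and B[j, :] playing the roles of the eigenvector pairs used in the text.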

4.2. Open sMp from Discrete Time Open Markov Chains

Let us suppose that the successive Poisson distributions of the influx of new members into the population are independent of the random times at which the influx of new members in the population occurs. For the notations used, see Appendix B. Consider a sMp given by the representation in Formula (A17), that is,
\[
Y_t = \sum_{n=0}^{+\infty} Z_n \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t),
\]
in which $(Z_n)_{n \geq 0}$ is the embedded Markov chain and $(\tau_n)_{n \geq 0}$ are the jump times of the process. We now propose a method to extend the known way of studying open Markov chains in discrete time to sMps.
(1) 
In applications we usually consider that the influx of new members into the population is modeled by Poisson random variables that at each time $t$ have a parameter $\lambda(t)$. Being so, Formula (20) may be rewritten as
\[
\lambda_N^+ = \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \mathbf{t}_i\, K^{(N-i)} = \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j, \tag{23}
\]
where usually we can take $t_i = i$ since, as in a discrete time Markov chain, the actual time stamp is irrelevant: we only consider the sequence of epochs $i \geq 0$.
(2) 
In a sMp, the only difference with respect to a discrete time Markov chain is that the dates $\tau_i$ corresponding to each epoch $i$ are random; altogether, the structure of the changes in the sub-populations in the transient states is still governed by the transition matrix of the embedded Markov chain. In a sMp, the only possible observable changes are those that occur at the random times at which it jumps; as such, we will suppose that the influxes of new members of the population only occur at these random times. As a consequence, the vector of the Poisson parameters in the transient classes is random, since it depends on the random times at which we consider influxes, and so Formula (23) becomes
\[
\lambda_N^+(\omega) = \sum_{\substack{i \geq 1 \\ \tau_i(\omega) \leq N}} \lambda(\tau_i(\omega))\, \mathbf{t}_i\, K^{(N-i)} = \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ \tau_i(\omega) \leq N}} \lambda(\tau_i(\omega))\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j. \tag{24}
\]
(3) 
The parameters of interest will be the expected values of the random variables $\lambda_N^+(\omega)$—with the corresponding asymptotic behavior of these expected values as $N$ grows indefinitely—and these expected values can be computed whenever the joint laws of $(\tau_0, \tau_1, \dots, \tau_i)$ are known, for $i \geq 0$. In fact, we observe that by Formula (24) we have
\[
\mathbb{E}\left[ \lambda_N^+ \mid \tau_1, \dots, \tau_i \right] = \mathbb{E}\left[ \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ \tau_i \leq N}} \lambda(\tau_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \;\middle|\; \tau_1, \dots, \tau_i \right] = \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ \tau_i \leq N}} \lambda(\tau_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j.
\]
This formula has two consequences. The first one is that, given an arbitrary strictly increasing sequence of dates $0 = t_0 < t_1 < \dots < t_i < \dots$, we have
\[
\mathbb{E}\left[ \lambda_N^+ \mid \tau_1 = t_1, \dots, \tau_i = t_i \right] = \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j,
\]
thus justifying the assumption that, given the strictly increasing non-accumulating stopping time dates $(\tau_1 = t_1, \dots, \tau_i = t_i)$, we can proceed as with the usual open Markov chain model in discrete time. The second consequence deserving mention is that, in order to compute the expected value of the vector of parameters of the transient classes sub-populations, while preserving the Poisson distribution of the influx of new members, we compute
\[
\mathbb{E}\left[ \lambda_N^+ \right] = \mathbb{E}\left[ \mathbb{E}\left[ \lambda_N^+ \mid \tau_1, \dots, \tau_i \right] \right] = \mathbb{E}\left[ \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ \tau_i \leq N}} \lambda(\tau_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \right],
\]
using the joint laws of $(\tau_1, \dots, \tau_i)$ for $i \geq 0$, laws we will suppose to be given; a Monte Carlo sketch of this computation follows.
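The sketch below estimates $\mathbb{E}[\lambda_N^+]$ of Formula (24) by averaging over simulated jump times; the gamma-distributed holding times, the influx intensity $\lambda(t)$, and all numerical values are our own assumptions for illustration.

```python
# Monte Carlo estimate of E[lambda_N^+] for an open sMp: Formula (24)
# averaged over simulated jump times (tau_i).  All values are made up.
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[0.6, 0.2], [0.1, 0.5]])       # transient block of the chain
t_vec = np.array([0.7, 0.3])                 # constant allocation vector
lam = lambda t: 10.0 * (1.0 - np.exp(-t))    # influx intensity, lam(t) -> 10

def lambda_N_plus(N):
    holding = rng.gamma(2.0, 1.0, size=2 * N)    # mean holding time 2
    taus = np.cumsum(holding)
    taus = taus[taus <= N][:N]                   # jumps up to time N
    out = np.zeros(2)
    for i, tau in enumerate(taus, start=1):
        out += lam(tau) * t_vec @ np.linalg.matrix_power(K, N - i)
    return out

N = 40
est = np.mean([lambda_N_plus(N) for _ in range(2000)], axis=0)
print("Monte Carlo E[lambda_N^+]:", est)

# limit suggested by Theorem 6: sum_j lam/(1-eta_j) (t alpha_j) beta_j
print("Theorem 6 limit:", 10.0 * t_vec @ np.linalg.inv(np.eye(2) - K))
```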
Theorem 6, in the following, is one possible extension of the open Markov chain formalism to the sMp case, taking as a starting point a discrete time Markov chain. To prove this result we will need Theorem 5—a generalization of the Lebesgue dominated convergence theorem to varying measures—which we quote from Theorem 3.5 in [58] (p. 390).
Theorem 5
(Lebesgue dominated convergence theorem with varying measures). Consider $(X, \mathcal{B}(X))$ a locally compact, separable topological space endowed with its Borel $\sigma$-algebra. Suppose that the sequence of probability measures $(\mu_n)_{n \geq 1}$—each one of them defined on $(X, \mathcal{B}(X))$—converges weakly to $\mu$ on $(X, \mathcal{B}(X))$ and that the sequence of measurable functions $(f_n)_{n \geq 1}$ converges continuously to $f$. Suppose additionally that, for some sequence of measurable functions $(g_n)_{n \geq 1}$ defined on $X$:
1. For all $t \in X$ and $n \geq 1$, we have that $|f_n(t)| \leq g_n(t)$.
2. With the function $g$ defined on $X$ by
\[
g(t) := \inf_{(t_n)_{n \geq 1},\; \lim_{n \to +\infty} t_n = t} \; \liminf_{n \to +\infty} g_n(t_n),
\]
we have that
\[
\limsup_{n \to +\infty} \int g_n(t)\, d\mu_n(t) \leq \int g(t)\, d\mu(t) < +\infty.
\]
Then, we have
\[
\lim_{n \to +\infty} \int f_n(t)\, d\mu_n(t) = \int f(t)\, d\mu(t) < +\infty.
\]
As said, we will suppose that we only observe the influx of the new members of the population into the sMp classes at the random times at which the process jumps—accounting, of course, for the state before the jump and the state after the jump—a hypothesis that makes sense under the perspective that we usually observe trajectories of the process. We then have the following extension of Theorem 4 to the case of sMp.
Theorem 6
(On the stability of open sMp transient states). Let a sMp be given by the representation in Formula (A17), that is,
\[
Y_t = \sum_{n=0}^{+\infty} Z_n \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t),
\]
in which $(Z_n)_{n \geq 0}$ is the embedded Markov chain and $(\tau_i)_{i \geq 0}$ are the jump times of the process. For the embedded Markov chain $(Z_n)_{n \geq 0}$, consider the notations of Section 4.2 and of Theorem 4 in this subsection. Suppose that the influx of new members into the population is modeled by Poisson random variables that at each time $t \in [0,+\infty[$ have a parameter $\lambda(t)$, with $\lambda$ a continuous function. Suppose, furthermore, that the following hypotheses are verified.
1. The stopping times $(\tau_i)_{i \geq 0}$ are integrable, that is, $\mathbb{E}[\tau_i] < +\infty$ for all $i \geq 1$.
2. There exists $\lambda > 0$ such that, for every sequence of positive real numbers $(t_i)_{i \geq 1}$ such that $\lim_{i \to +\infty} t_i = +\infty$, we have
\[
\lim_{i \to +\infty} \lambda(t_i) = \lambda.
\]
Then, we have that the asymptotic behavior of the expected value of the vector of parameters of the Poisson distributed sub-populations in the transient classes of an open sMp, submitted to a Poisson influx of new members at the jump times of the sMp, is given by
\[
\lim_{N \to +\infty} \mathbb{E}\left[ \lambda_N^+ \right] = \lim_{N \to +\infty} \mathbb{E}\left[ \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ \tau_i \leq N}} \lambda(\tau_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \right] = \sum_{j=1}^{k} \frac{\lambda}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j. \tag{25}
\]
Proof. 
For each $n \geq 1$, let $F_{(\tau_1,\dots,\tau_n)}$ be the joint distribution function of $(\tau_1,\dots,\tau_n)$. We want to compute the following limit of expectations:
\[
\lim_{N \to +\infty} \mathbb{E}\left[ \lambda_N^+ \right] = \lim_{N \to +\infty} \mathbb{E}\left[ \lambda_N^+,\; \tau_1 < \dots < \tau_i \leq N \right] = \lim_{N \to +\infty} \int_{0 < t_1 < \dots < t_i \leq N} \lambda_N^+ \; dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n) = \lim_{N \to +\infty} \int_{0 < t_1 < \dots < t_i \leq N} \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \; dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n), \tag{27}
\]
and we observe that, by Theorem 4 and by the first hypothesis, for every sequence of positive real numbers $(t_i)_{i \geq 1}$ such that $\lim_{i \to +\infty} t_i = +\infty$ and $t_1 < t_2 < \dots < t_i < \dots$, we have that
\[
\lim_{N \to +\infty} \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j = \sum_{j=1}^{k} \frac{\lambda}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j. \tag{28}
\]
The limit in the last term of Formula (27) requires a result of Lebesgue dominated convergence theorem type, but with varying measures. For the purpose of applying Theorem 5, we introduce the adequate context and notations, and then we apply the referred theorem. Consider the space $X = [0,+\infty[^{\aleph_0}$, defined to be the space of infinite sequences of numbers in $[0,+\infty[$, that is,
\[
X = \left\{ \underline{t} = (t_1, \dots, t_i, \dots) : \forall i \geq 1, \; t_i \in [0,+\infty[ \right\}.
\]
Recall that, with the metric $d$ given by
\[
\forall \underline{t} = (t_1, \dots, t_i, \dots),\; \underline{t}' = (t_1', \dots, t_i', \dots) \in X, \quad d(\underline{t}, \underline{t}') := \sum_{i=1}^{+\infty} \frac{\min\left( 1, |t_i - t_i'| \right)}{2^i},
\]
$X$ is a metric space, locally compact, separable and complete (see, for instance, in [59], pp. 9–10). We will consider $X = [0,+\infty[^{\aleph_0}$ endowed with the Borel $\sigma$-algebra $\mathcal{B}(X)$ generated by the family $\mathcal{P}_f$ given by
\[
\mathcal{P}_f = \left\{ A_{i_1} \times A_{i_2} \times \dots \times A_{i_p} : p \geq 1, \; A_{i_1}, \dots, A_{i_p} \in \mathcal{B}([0,+\infty[) \right\},
\]
with $\mathcal{B}([0,+\infty[)$ the Borel $\sigma$-algebra of $[0,+\infty[$. We now take $(\tau_i)_{i \geq 0}$, the sequence of the jump times of the process represented in Formula (A17). First, we define the sequence of measures $(\mu_n)_{n \geq 1}$ where, for each $n \geq 1$, $\mu_n$ is defined on the measurable space $([0,+\infty[^n, \mathcal{B}([0,+\infty[^n))$ by considering, for $A_1 \times A_2 \times \dots \times A_n$ with $A_i \in \mathcal{B}([0,+\infty[)$, that
\[
\mu_n(A_1 \times A_2 \times \dots \times A_n) = \mathbb{P}\left[ \tau_1 \in A_1, \dots, \tau_n \in A_n \right] = \int_{t_1 \in A_1, \dots, t_n \in A_n} dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n). \tag{29}
\]
Being so, $\mu_n$ is the joint probability law of $(\tau_1,\dots,\tau_n)$, and the last integral in the last term of Formula (27) is exactly an integration with respect to the measure $\mu_n$. As a consequence of Formula (29), the sequence $(\mu_n)_{n \geq 1}$ verifies the compatibility conditions of the Kolmogorov extension theorem (see [60], p. 46), and so there is a probability measure $\mu$, defined on $(X, \mathcal{B}(X))$, having as finite dimensional distributions the measures of the sequence $(\mu_n)_{n \geq 1}$.
Now, for each $n \geq 1$, we can consider $\widetilde{\mu}_n$, the extension of $\mu_n$ to the measurable space $(X, \mathcal{B}(X))$, in the following way:
\[
\forall A \in \mathcal{B}(X), \quad \widetilde{\mu}_n(A) = \int_{\{ \underline{t} = (t_1, \dots, t_i, \dots) \in A \,:\, t_1, \dots, t_n \in [0,+\infty[ \}} dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n).
\]
In fact, with this definition, the restriction of $\widetilde{\mu}_n$ to $\mathcal{B}([0,+\infty[^n)$ is exactly $\mu_n$. An important observation is the following. Consider $A := A_{i_1} \times A_{i_2} \times \dots \times A_{i_p} \in \mathcal{P}_f$. Then, for $m \geq i_p$ we have that
\[
\widetilde{\mu}_m(A) = \int_{\{ \underline{t} \in A \,:\, t_1, \dots, t_m \in [0,+\infty[ \}} dF_{(\tau_1,\dots,\tau_m)}(t_1,\dots,t_m) = \int_{\{ \underline{t} \in A \,:\, t_1, \dots, t_{i_p} \in [0,+\infty[ \}} dF_{(\tau_1,\dots,\tau_{i_p})}(t_1,\dots,t_{i_p}) = \widetilde{\mu}_{i_p}(A) = \mu_{i_p}(A) = \mu(A),
\]
thus showing that, for every $A \in \mathcal{P}_f$, the sequence $(\widetilde{\mu}_m(A))_{m \geq 1}$ converges to $\mu(A)$. Now, by Theorem 2.2 in [59] (p. 17), as $\mathcal{P}_f$ is a $\pi$-system and every open set in the metric space $(X, d)$ is a countable union of elements of $\mathcal{P}_f$, we have that the sequence $(\widetilde{\mu}_m)_{m \geq 1}$ converges weakly to $\mu$. In order to apply Theorem 5 to compute the limit, we may consider two approaches to deal with the fact that $\lambda_N^+$ is a vector of finite dimension $k$: either we proceed component-wise, or we consider norms. Let us follow the second path. Define, for integer $N$ and some constant $M$,
\[
f_N(\underline{t}) = f_N(t_1, \dots, t_i, \dots) := \left\| \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i \leq N}} \lambda(t_i)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \right\|,
\]
and also
\[
g_N(\underline{t}) \equiv g := \left\| \sum_{j=1}^{k} \frac{\lambda}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j \right\| + M,
\]
in such a way that $f_N(\underline{t}) \leq g$; such a choice of $M$ is possible as a consequence of Formula (28). We can verify that the sequence $(f_N)_{N \geq 1}$ converges continuously to a function $f$ by using Theorem 4.1.1 in [22] (p. 373). In fact, let us consider a sequence $(\underline{t}^N)_{N \geq 1}$ converging to some $\underline{t} = (t_1, \dots, t_i, \dots)$ in the metric space $(X, d)$. With $\underline{t}^N = (t_1^N, \dots, t_i^N, \dots)$, we surely have that $\lim_{N \to +\infty} t_i^N = t_i$ for all $i \geq 1$. As a consequence of the continuity of $\lambda$ and of Theorem 4.1.1 in [22] (p. 373), we have that
\[
\lim_{N \to +\infty} f_N(\underline{t}^N) = \lim_{N \to +\infty} \left\| \sum_{j=1}^{k} \sum_{\substack{i \geq 1 \\ t_i^N \leq N}} \lambda(t_i^N)\, \eta_j^{N-i} \left( \mathbf{t}_i\, \alpha_j \right) \beta_j \right\| = \left\| \sum_{j=1}^{k} \frac{\lambda\left( \lim_{i \to +\infty} t_i \right)}{1 - \eta_j} \left( \mathbf{t}\, \alpha_j \right) \beta_j \right\| =: f(\underline{t}).
\]
It is now clear that the sequences $(f_N)_{N \geq 1}$, $(g_N)_{N \geq 1}$ and $(\widetilde{\mu}_n)_{n \geq 1}$ satisfy, together with $\mu$, the hypotheses of Theorem 5, and so the announced result in Formula (25) follows. □
Remark 11
(Alternative proof of the weak convergence of the sequence $(\widetilde{\mu}_n)_{n \geq 1}$). There is another proof of the weak convergence of the sequence $(\widetilde{\mu}_m)_{m \geq 1}$ to $\mu$, which we now present. We proceed by showing that the sequence $(\widetilde{\mu}_n)_{n \geq 1}$ is relatively compact—as a consequence of Prokhorov's theorem (see [59], pp. 59–63)—because, as we will show next, this sequence is tight. Let an arbitrary $0 < \epsilon < 1$ be given and consider a sequence of positive numbers $(\xi_i)_{i \geq 1}$ such that, by Tchebychev's inequality and using the fact that the stopping times $\tau_i$ have finite integrals,
\[
\mathbb{P}\left[ \tau_i > \xi_i \right] \leq \frac{\mathbb{E}\left[ \tau_i \right]}{\xi_i},
\]
in such a way that
\[
\sum_{i=1}^{+\infty} \frac{\mathbb{E}\left[ \tau_i \right]}{\xi_i} < \epsilon.
\]
Now consider the Borel set $K_\epsilon = \prod_{i=1}^{+\infty} [0, \xi_i] \subseteq X$, which is compact by Tychonov's theorem. We now have that
\[
\widetilde{\mu}_n(K_\epsilon) = \int_{\{ \underline{t} = (t_1,\dots,t_i,\dots) \in K_\epsilon \,:\, t_1,\dots,t_n \in [0,+\infty[ \}} dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n) = \int_{\prod_{i=1}^{n} [0,\xi_i]} dF_{(\tau_1,\dots,\tau_n)}(t_1,\dots,t_n) = \mathbb{P}\left[ (\tau_1,\dots,\tau_n) \in \prod_{i=1}^{n} [0,\xi_i] \right] = \mathbb{P}\left[ \bigcap_{i=1}^{n} \{ \tau_i \leq \xi_i \} \right] = 1 - \mathbb{P}\left[ \bigcup_{i=1}^{n} \{ \tau_i > \xi_i \} \right] \geq 1 - \sum_{i=1}^{n} \frac{\mathbb{E}\left[ \tau_i \right]}{\xi_i} \geq 1 - \sum_{i=1}^{+\infty} \frac{\mathbb{E}\left[ \tau_i \right]}{\xi_i} \geq 1 - \epsilon,
\]
thus showing that the sequence of probability measures $(\widetilde{\mu}_n)_{n \geq 1}$ is tight in the measurable space $(X, \mathcal{B}(X))$. As said, by Prokhorov's theorem, this implies that the sequence $(\widetilde{\mu}_n)_{n \geq 1}$ is relatively compact, that is, for every subsequence of $(\widetilde{\mu}_n)_{n \geq 1}$ there exist a further subsequence and a probability measure such that this further subsequence converges weakly to the said probability measure. Now, as, by construction, the probability measure $\mu$ has as finite dimensional distributions the probability measures $(\mu_n)_{n \geq 1}$, we can say that, for $n \geq 1$, the finite dimensional distributions of $\widetilde{\mu}_n$ converge weakly to the finite dimensional distributions of $\mu$. As a consequence, following the observation in [59] (p. 58), the sequence $(\widetilde{\mu}_n)_{n \geq 1}$ converges weakly to $\mu$.
Remark 12
(Applying Theorem 6). If we manage to estimate a discrete time Markov chain transition matrix, and if we manage to fit some function $f$—such that $\lim_{t \to +\infty} f(t) = \lambda$—to the number of new incoming members of the population at a set of non-accumulating, non-evenly spaced dates (as done with a statistical procedure in [22] or with a simple fitting in [25]), then Theorem 6 allows us to get the asymptotic expected number of elements in the transient classes of a sMp having as embedded Markov chain the estimated one.

4.3. Open Continuous Time Processes from Open Markov Schemes

We may follow the approach of open Markov schemes in [26] and define a process in continuous time after getting a process at random discrete times describing, at least on average, the evolution of the elements in each transient class. Let us briefly recall the main idea. A population model is driven by a Markov chain defined by a sequence of initial distributions given, for $n \geq 1$, by $(\mathbf{q}^n)' = (q_1^n, q_2^n, \dots, q_r^n)$ and a transition matrix $\mathbf{P} = [p_{ij}]$, $1 \leq i,j \leq r$. After the first transition, the new values of the proportions in all states can be recovered from $\mathbf{P}' \mathbf{q} = (\mathbf{q}' \mathbf{P})'$ and, after $n$ transitions, from $(\mathbf{P}^{(n)})' \mathbf{q} = (\mathbf{q}' \mathbf{P}^{(n)})'$. We want to account for the evolution of the expected number of elements in each class, supposing that, at each random date $\tau_k$, a random number $X_{\tau_k}$ of new elements enters the population. Just after the second cohort enters the population, a first transition occurs in the first cohort, driven by the Markov chain law, and so on and so forth. Table 1 summarizes this accounting process in which, at each step $k$, we distribute the new random arrivals $X_{\tau_k}$ multinomially according to the probability vector $\mathbf{q}^k$, and the elements in each class are redistributed according to the Markov chain transition matrix $\mathbf{P}$.
At date $\tau_n$, if we suppose that each new set of individuals in the population, a cohort, evolves independently from any one of the already existing sets of individuals but according to the same Markov chain model, we may recover the total expected number of elements in each class by computing the sum:
\[
\overline{K}_n = \sum_{k=1}^{n} \mathbb{E}\left[ X_{\tau_k} \right] \left( \mathbf{q}^k \right)' \mathbf{P}^{(n-k)}. \tag{32}
\]
Each vector component corresponds precisely to the expected number of elements in each class. In order to further study the properties of $(\overline{K}_n)_{n \geq 1}$, given the properties of a stochastic process $X = (X_{\tau_k})_{k \geq 1}$, we will randomize Formula (32) by considering, instead, for $n \geq 1$:
\[
K_n = \sum_{k=1}^{n} X_{\tau_k} \left( \mathbf{q}^k \right)' \mathbf{P}^{(n-k)},
\]
and we observe that in any case $\mathbb{E}[K_n] = \overline{K}_n$. It is known that, if the vector of classification probabilities is constant, $\mathbf{c}^k \equiv \mathbf{c}$, and if $X$ is an ARMA, ARIMA, or SARIMA process, then the populations in each of the transient classes can be described by the sum of a deterministic trend, plus an ARMA process, plus an evanescent process, that is, a centered process $(Y_k)_{k \geq 1}$ such that $\lim_{k \to +\infty} \mathbb{E}\left[ Y_k^2 \right] = 0$ (see Theorems 3.1 and 3.2 in [26]); a direct transcription of Formula (32) is sketched below.
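The sketch below evaluates Formula (32) in a small example; the transition matrix, the constant classification vector, and the expected cohort sizes are our own illustrative assumptions.

```python
# Expected class counts K_bar_n of Formula (32) for an open Markov scheme
# fed by cohorts with expected sizes E[X_{tau_k}].  Values are made up.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])       # last state absorbing
q = np.array([0.5, 0.5, 0.0])         # constant classification of arrivals
EX = [30.0, 25.0, 40.0, 35.0]         # expected cohort sizes E[X_{tau_k}]

def K_bar(n):
    return sum(EX[k - 1] * q @ np.linalg.matrix_power(P, n - k)
               for k in range(1, n + 1))

print(K_bar(4))   # expected number of elements in each class after 4 steps
```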
The step process in continuous time naturally associated with the discrete time one is then defined, for $t \geq 0$, by
\[
K_t := \sum_{n=0}^{+\infty} K_n \, 1\!\!1_{[\tau_n, \tau_{n+1}[}(t) = \sum_{n=0}^{+\infty} \left( \sum_{k=1}^{n} X_{\tau_k} \left( \mathbf{q}^k \right)' \mathbf{P}^{(n-k)} \right) 1\!\!1_{[\tau_n, \tau_{n+1}[}(t).
\]
In order to study this process, we will have to take advantage of the properties of $X$ and of the family of stopping times $(\tau_k)_{k \geq 0}$. It should be noticed that, if the process $X = (X_t)_{t \geq 0}$ is Poisson distributed and the laws of the sequence $(\tau_k)_{k \geq 0}$ are known, then it is possible to determine the expected value of $K_t$ for $t \geq 0$ with a result similar to Theorem 6.

5. Conclusions

In this work, we studied several ways to associate, to an open Markov chain process in discrete time—which is often the sole accessible fruit of observation—a continuous time Markov or semi-Markov process that bears some natural relation to the discrete time process. Furthermore, we expect this association to allow the extension of the study of open populations from the discrete to the continuous time model. For that purpose, we considered three approaches: the first, for continuous time Markov chains; the second, for the semi-Markov case; and the third, for the open Markov schemes (see in [26]). For the semi-Markov case, under the hypothesis that we only observe the influx of new individuals into the population at the times of the random jumps, in the main result we determine the expected value of the vector of parameters of the conditional Poisson distributions in the transient classes when the influx of new members is Poisson distributed. The third approach, dealing with open Markov schemes, is similar to the second one whenever we consider similar context hypotheses, that is, incoming new members of the population with known distributions and observation of this influx of new individuals at the times of the random jumps. In the case of the first approach, that is, for Markov chains in continuous time, we propose a calibration procedure for which the embeddable Markov chains provide optimal solutions. In this case also, the study of open population models relies on the main result proved for the semi-Markov approach. Future work encompasses applications to real data and the determination of criteria to assess the quality of the association of the continuous model to the observed discrete time model.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

For the second author, this work was done under partial financial support of RFBR (Grant n. 19-01-00451). For the first and third author this work was partially supported through the project of the Centro de Matemática e Aplicações, UID/MAT/00297/2020 financed by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology). The APC was funded by the insurance company Fidelidade.

Acknowledgments

This work was published with financial support from the insurance company Fidelidade. The authors would like to thank Fidelidade for this generous support and also for their interest in the development of models for insurance problems in Portugal. The authors express gratitude to Professor Panagiotis C.G. Vassiliou for his enlightening comments on a previous version of this work, and to the comments, corrections and questions of the referees, in particular to the one question that motivated the inclusion of Remark 5.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Some Essential Results on Continuous Time Markov Chains

In this exposition of the most relevant results pertinent to our purposes, we follow mainly the references [29,30,31]. As this exposition is a mere reminder of needed notions and results, the proofs are omitted unless the result is essential for our purposes.
Definition A1
(Continuous time Markov chain). Let $I$ be some finite set, for instance, $\Theta = \{\theta_1, \theta_2, \ldots, \theta_r\}$ of Section 2. A stochastic process $(X_t)_{t \geq 0}$ is a continuous time Markov chain with state space $I$ if and only if the following Markov property is verified, namely, for all $i_0, i_1, \ldots, i_n \in I$ and $0 = t_0 < t_1 < \cdots < t_n < +\infty$ we have that
$$P\left[X_{t_n} = i_n \mid X_{t_{n-1}} = i_{n-1}, \ldots, X_{t_1} = i_1, X_{t_0} = i_0\right] = P\left[X_{t_n} = i_n \mid X_{t_{n-1}} = i_{n-1}\right].$$
We observe that, by force of the Markov property in Definition A1, the law of a continuous time Markov chain depends only on the following transition probabilities. Let $\mathbb{I}$ be the identity matrix of dimension $\#I$ and let the Kronecker delta be given by
$$\delta_{ij} = \begin{cases} 0 & i \neq j \\ 1 & i = j. \end{cases}$$
Definition A2
(Transition probabilities). Let $I$ be the state space of $(X_t)_{t \geq 0}$, a continuous time Markov chain. The transition probabilities are defined by
$$\forall i, j \in I, \ \forall s < t, \quad p(s, i, t, j) = P\left[X_t = j \mid X_s = i\right] \quad \text{and} \quad p(t, i, t, j) = \delta_{ij}.$$
Let $\mathcal{L}(\mathbb{R}^{\#I})$ be the space of square matrices with coefficients in $\mathbb{R}$. The transition probability matrix function $P : \mathbb{R}^+ \times \mathbb{R}^+ \mapsto \mathcal{L}(\mathbb{R}^{\#I})$ is defined by
$$\forall i, j \in I, \ \forall s < t, \quad P(s, t) = \left[p(s, i, t, j)\right]_{i, j \in I} \quad \text{and} \quad P(t, t) = \mathbb{I}.$$
Transition probabilities of Markov processes in general satisfy a very important functional equation that results from the Markov property.
Theorem A1
(Chapman–Kolmogorov equations). Consider a NH-CT-MC as given in Definition A1 and let $P$ be its transition probability matrix function as given in Definition A2. We then have
$$\forall s, u, t, \ 0 \leq s < u < t, \quad P(s, t) = P(s, u)\, P(u, t). \tag{A2}$$
As an application of the celebrated existence theorem of Kolmogorov (in the form exposed in [61], pp. 8–10), we have that, under a set of natural hypotheses, there exists a NH-CT-MC such as the one in Definition A1.
Theorem A2
(On the existence of NH-CT-MC). Let $p_0$ be an initial probability over $I$. Consider a matrix valued function $P : \mathbb{R}^+ \times \mathbb{R}^+ \mapsto \mathcal{L}(\mathbb{R}^{\#I})$, denoted by $P(s, t) = \left[p(s, i, t, j)\right]_{i, j \in I}$ and satisfying Formulas (A3) and (A4) below, that is,
1.
For all $s < t$ and for all $i \in I$,
$$\sum_{j \in I} p(s, i, t, j) = 1. \tag{A3}$$
2.
Formula (A2) in Theorem A1, namely,
$$\forall s, u, t, \ s < u < t, \quad P(s, t) = P(s, u)\, P(u, t). \tag{A4}$$
Define, for all $i_0, i_1, \ldots, i_n \in I$ and $0 = t_0 < t_1 < \cdots < t_n < +\infty$, the function
$$\nu_{t_0, t_1, \ldots, t_n}(i_0, i_1, \ldots, i_n) = p_0(i_0)\, p(t_0, i_0, t_1, i_1)\, p(t_1, i_1, t_2, i_2) \cdots p(t_{n-1}, i_{n-1}, t_n, i_n),$$
and extend this definition to all possible $t_0, t_1, \ldots, t_n$ by considering, with the adequate ordering permutation $\sigma$ of $\{0, 1, 2, \ldots, n\}$ such that $t_{\sigma(0)} < t_{\sigma(1)} < \cdots < t_{\sigma(n)}$,
$$\nu_{t_{\sigma(0)}, t_{\sigma(1)}, \ldots, t_{\sigma(n)}}(i_0, i_1, \ldots, i_n) = \nu_{t_0, t_1, \ldots, t_n}\left(i_{\sigma^{-1}(0)}, i_{\sigma^{-1}(1)}, \ldots, i_{\sigma^{-1}(n)}\right).$$
Then, $\left(\nu_{t_0, t_1, \ldots, t_n}\right)_{t_0, t_1, \ldots, t_n,\, n \geq 1}$ is a family of probability measures satisfying the compatibility conditions of the Kolmogorov existence theorem, and so there exists a probability measure $P$ over the canonical probability space $(\Omega, \mathcal{A})$, with $\Omega = I^{\mathbb{R}^+}$ and $\mathcal{A} = \mathcal{P}(I)^{\otimes \mathbb{R}^+}$, such that if the stochastic process $(X_t)_{t \geq 0}$ is denoted by
$$\forall \omega = (i_t)_{t \geq 0} \in \Omega, \quad X_t(\omega) = i_t,$$
then,
$$\forall i, j \in I, \ \forall s < t, \quad p(s, i, t, j) = P\left[X_t = j \mid X_s = i\right] \quad \text{and} \quad p(t, i, t, j) = \delta_{ij},$$
that is, $(X_t)_{t \geq 0}$ has $P(s, t) = \left[p(s, i, t, j)\right]_{i, j \in I}$, together with $P(t, t) = \mathbb{I}$, as its transition probabilities.
A natural and useful way of defining transition probabilities is by means of the transition intensities that act like differential coefficients of transition probability functions.
Definition A3
(Transition intensities). Let $\mathcal{L}(\mathbb{R}^{\#I})$ be the space of square matrices with coefficients in $\mathbb{R}$. A function $Q : \mathbb{R}^+ \mapsto \mathcal{L}(\mathbb{R}^{\#I})$, denoted by
$$Q(t) = \left[q(t, i, j)\right]_{i, j \in I},$$
is a transition intensity if and only if, for almost all $t \geq 0$, it verifies:
(i) 
$\forall i \in I, \ t \geq 0, \quad q(t, i, i) \leq 0$;
(ii) 
$\forall i, j \in I, \ i \neq j, \ t \geq 0, \quad 0 \leq q(t, i, j) \leq -q(t, i, i)$;
(iii) 
$\forall i \in I, \quad \sum_{j \in I} q(t, i, j) = 0$.
There is a way to write differential equations, the Kolmogorov backward and forward equations, useful for recovering the transition probability matrix from the intensity matrix and for studying important properties of these transition probabilities.
Theorem A3
(Backward and forward Kolmogorov equations). Suppose that $P(s, t)$ is continuous at $s$, that is,
$$\lim_{t \to 0} P(0, t) = \mathbb{I} \quad \text{and} \quad \lim_{t \downarrow s} P(s, t) = \lim_{t \uparrow s} P(t, s) = \mathbb{I}.$$
If there exists Q such that
$$Q(t) = \lim_{\substack{k + h \to 0^+ \\ k \geq 0,\ h \geq 0}} \frac{P(t - k, t + h) - \mathbb{I}}{k + h} = \lim_{h \to 0,\ h > 0} \frac{P(t, t + h) - \mathbb{I}}{h} = \lim_{k \to 0,\ k > 0} \frac{P(t - k, t) - \mathbb{I}}{k}, \tag{A9}$$
then we have the backward Kolmogorov (matrix) equation:
$$\frac{\partial}{\partial s} P(s, t) = -Q(s)\, P(s, t), \quad P(s, s) = \mathbb{I}, \tag{A10}$$
and the forward Kolmogorov (matrix) equation:
$$\frac{\partial}{\partial t} P(s, t) = P(s, t)\, Q(t), \quad P(t, t) = \mathbb{I}. \tag{A11}$$
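The forward equation lends itself to a direct numerical check. The following minimal Python sketch integrates it for a made-up two-state intensity matrix $Q(t)$ (any matrix function satisfying Definition A3 would do) and verifies that the rows of the resulting $P(s, t)$ sum to one; the particular `Q`, the interval, and the tolerances are illustrative assumptions, not prescriptions of the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Q(t):
    """Illustrative (made-up) intensity matrix: off-diagonal entries
    nonnegative, diagonal nonpositive, rows summing to zero."""
    a, b = 1.0 + 0.5 * np.sin(t), 0.5
    return np.array([[-a,  a],
                     [ b, -b]])

def forward_rhs(t, p_flat):
    # Forward Kolmogorov equation: d/dt P(s,t) = P(s,t) Q(t).
    return (p_flat.reshape(2, 2) @ Q(t)).ravel()

s, t_end = 0.0, 2.0
sol = solve_ivp(forward_rhs, (s, t_end), np.eye(2).ravel(),
                rtol=1e-8, atol=1e-10)   # initial condition P(s,s) = I
P_st = sol.y[:, -1].reshape(2, 2)
print(P_st)
print(P_st.sum(axis=1))                  # each row should sum to 1
```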
Remark A1.
The general theory of Markov processes shows that the condition that $P(s, t)$ is continuous in both $s$ and $t$ is sufficient to ensure the existence of the matrix of intensities $Q$ given in Formula (A9) (see [31], p. 232). By means of a change of time, Goodman (see [41]) proved that the existence of solutions of the Kolmogorov equations is amenable to an application of Carathéodory's existence theorem for differential equations.
Given transition intensities satisfying an integrability condition there are transition probabilities uniquely associated with these transition intensities.
Theorem A4
(Transition probabilities from intensities). Let Q be a transition intensity as in Definition A3 such that Theorem A3 holds. Then, we have that
$$P(s, t) = \mathbb{I} + \int_s^t Q(u)\, P(u, t)\, du \quad \text{and} \quad P(s, t) = \mathbb{I} + \int_s^t P(s, u)\, Q(u)\, du. \tag{A12}$$
The existence of a NH-CT-MC can also be guaranteed by a constructive procedure that we now present and that is most useful for simulation.
Remark A2
(Constructive definition). Given a transition intensity $Q$, define
$$p^*(t, i, j) = \begin{cases} (1 - \delta_{ij})\, \dfrac{q(t, i, j)}{-q(t, i, i)} & \text{if } q(t, i, i) \neq 0, \\[4pt] \delta_{ij} & \text{if } q(t, i, i) = 0. \end{cases}$$
1.
Let $X_0 = i$, according to some initial distribution on $I$; the sequence $(\tau_n)_{n \geq 0}$ is defined by induction as follows, with $\tau_0 \equiv 0$.
2.
$\tau_1$, the time of the first jump, has distribution function:
$$F_{\tau_1}(t) = P\left[\tau_1 \leq t\right] = 1 - \exp\left(\int_0^t q(u, i, i)\, du\right),$$
and
$$P\left[X_{s_1} = j \mid \tau_1 = s_1, X_0 = i\right] = p^*(s_1, i, j),$$
and so $X_t = i$ for $0 \equiv \tau_0 \leq t < \tau_1$. We note that this distribution of the stopping time is mandatory, as a consequence of a general result on the distribution of the sojourn times of a continuous time Markov chain (see Theorem 2.3.15 in [31], p. 221).
3.
Given that $\tau_1 = s_1$ and $X_{s_1} = j$, $\tau_2$, the time of the second jump, has conditional distribution function
$$F_{\tau_2 \mid \tau_1 = s_1}(t) = P\left[\tau_2 \leq t \mid \tau_1 = s_1\right] = 1 - \exp\left(\int_0^t q(u + s_1, j, j)\, du\right)$$
and
$$P\left[X_{s_2} = k \mid \tau_1 = s_1, X_0 = i, \tau_2 = s_2, X_{s_1} = j\right] = p^*(s_1 + s_2, j, k),$$
and so $X_t = j$ for $\tau_1 \leq t < \tau_2$; a simulation sketch based on this construction follows.
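The construction above translates directly into a simulation routine. The minimal Python sketch below uses a made-up two-state intensity of the same kind as in the previous snippet; since the sojourn distribution involves the integral of $q(u, i, i)$, the jump times are sampled by thinning (rejection from a dominating constant rate `Q_MAX`), a standard device that is our choice here, not a prescription of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def Q(t):
    """Illustrative (made-up) nonhomogeneous intensity matrix."""
    a, b = 1.0 + 0.5 * np.sin(t), 0.5
    return np.array([[-a,  a],
                     [ b, -b]])

Q_MAX = 1.5   # upper bound on the exit rates -q(t,i,i), needed for thinning

def next_jump(t, i):
    """Sample the next jump time after t from state i by thinning: the
    sojourn time has distribution 1 - exp(integral of q(u,i,i))."""
    while True:
        t += rng.exponential(1.0 / Q_MAX)
        if rng.random() < -Q(t)[i, i] / Q_MAX:  # accept w.p. rate/Q_MAX
            return t

def simulate(t_max, i0=0):
    """One trajectory, as a list of (jump time, new state), following the
    constructive definition: waiting times, then the jump chain p*(t,i,.)."""
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        t = next_jump(t, i)
        if t > t_max:
            return path
        q_row = Q(t)[i].copy()
        q_row[i] = 0.0                           # p*(t,i,i) = 0 here
        i = rng.choice(len(q_row), p=q_row / q_row.sum())
        path.append((t, i))

print(simulate(5.0))
```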
The following result ensures that the preceding construction yields the desired result.
Theorem A5
(The continuous time Markov chain). Let the intensities satisfy the condition given by Formula (A12) in Theorem A4. Then, given the times $(\tau_n)_{n \geq 1}$, we have that, with the sequence $(Y_n)_{n \geq 1}$ defined by $Y_n = X_{\tau_n}$, the process defined by:
$$X_t = \sum_{n=0}^{+\infty} Y_n \, \mathbb{1}_{[\tau_n, \tau_{n+1}[}(t) = \sum_{n=0}^{+\infty} X_{\tau_n} \, \mathbb{1}_{[\tau_n, \tau_{n+1}[}(t)$$
is a continuous time Markov chain with transition probabilities P given by Definition A2 and transition intensities Q given by Definition A3 and Theorem A3.
Proof. 
This theorem is stated and proved, in the general case of continuous time Markov processes, in [31] (p. 229). □
Lemma A1.
Let $q : \mathbb{R}^+ \mapsto \mathbb{R}$ be a measurable function integrable over every bounded interval of $\mathbb{R}^+$. Then, we have that
$$\int_s^t \int_{s_1}^t \cdots \int_{s_{n-1}}^t q(s_1)\, q(s_2) \cdots q(s_n)\, ds_n \cdots ds_2\, ds_1 = \frac{\left(\int_s^t q(u)\, du\right)^n}{n!},$$
for all $0 \leq s \leq t$ and $n \geq 1$.
Proof. 
Let us observe that, for $n = 2$, we have that
$$\left(\int_s^t q(u)\, du\right)^2 = \int_s^t \int_s^t q(v)\, q(u)\, du\, dv = \int_s^t \int_s^t \mathbb{1}_{\{u \leq v\}}\, q(v)\, q(u)\, du\, dv + \int_s^t \int_s^t \mathbb{1}_{\{v \leq u\}}\, q(v)\, q(u)\, du\, dv.$$
By induction we have, for all $n \geq 1$, summing over all permutations $\sigma \in S_n$,
$$\left(\int_s^t q(u)\, du\right)^n = \sum_{\sigma \in S_n} \int_s^t \cdots \int_s^t \mathbb{1}_{\{u_{\sigma(1)} \leq u_{\sigma(2)} \leq \cdots \leq u_{\sigma(n)}\}}\, q(u_1) \cdots q(u_n)\, du_n \cdots du_1 = n!\, \int_s^t \cdots \int_s^t \mathbb{1}_{\{u_1 \leq u_2 \leq \cdots \leq u_n\}}\, q(u_1) \cdots q(u_n)\, du_n \cdots du_1 = n!\, \int_s^t \int_{u_1}^t \cdots \int_{u_{n-1}}^t q(u_1)\, q(u_2) \cdots q(u_n)\, du_n \cdots du_2\, du_1,$$
as all the integrals in the sum are equal, by the symmetry of the integrand function, and then by Fubini's theorem. □
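The identity of Lemma A1 is easy to check numerically. The following minimal Python sketch compares, for $n = 3$ and a made-up integrable $q$ (our illustrative choice), the ordered iterated integral with $\left(\int_s^t q(u)\, du\right)^3 / 3!$.

```python
import math
import numpy as np
from scipy.integrate import quad, tplquad

q = lambda u: np.exp(-u)   # made-up rate function, integrable on [s, t]
s, t = 0.0, 2.0

# Left-hand side: the ordered iterated integral over s <= s1 <= s2 <= s3 <= t.
lhs, _ = tplquad(lambda s3, s2, s1: q(s1) * q(s2) * q(s3),
                 s, t,                               # s1 in [s, t]
                 lambda s1: s1, lambda s1: t,        # s2 in [s1, t]
                 lambda s1, s2: s2, lambda s1, s2: t)  # s3 in [s2, t]

# Right-hand side: (integral of q over [s, t])^n / n! with n = 3.
total, _ = quad(q, s, t)
rhs = total ** 3 / math.factorial(3)
print(lhs, rhs)   # the two values agree up to quadrature error
```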
Remark A3
(On a fundamental condition). The condition on $q$ stated in Lemma A1 and reformulated in Formula (7) is the key to the proof of important results. In fact, this condition is sufficient to ensure that the associated Markov process has no discontinuities of the second type (see [31], p. 227) and, most importantly for the goals of this work, that the trajectories of the associated Markov process are step functions, that is, any trajectory has only a finite number of jumps in any compact subinterval of $[0, +\infty[$; we detail this last part of the remark in Theorem A6.
From the perspective of our main motivation, the following result is crucial.
Theorem A6
(The non-accumulation property of the jump times of a Markov chain). Let the intensities satisfy the condition given in the statement of Lemma A1. Then, for the times $(\tau_n)_{n \geq 1}$, we have that:
$$P\left[\lim_{n \to +\infty} \tau_n = +\infty\right] = 1, \tag{A14}$$
and so the trajectories of the process are step functions.
Proof. 
The property in Formula (A14) does not have an immediate proof. We present a proof based on a result in [62] (p. 160), stating that the condition given by:
$$\lim_{h \to 0} \sup_{t,\, i} \sum_{j \neq i} p(t, i, t + h, j) = 0,$$
guarantees that the process has a stochastic equivalent that is a step process, meaning that for any trajectory $\omega$ the set of jumps of this trajectory has no limit points in the interval $[0, \zeta(\omega)[$, with $\zeta(\omega)$ being the end date of the trajectory. This result is based on a thorough analysis (see [62], pp. 149–159) of the conditions for a Markov process not to have discontinuities of the second type, meaning that the right-hand and left-hand limits exist at every date point and for every trajectory. Now, with
$$q(t) := \max_{1 \leq i \leq \#I} \left| q(t, i, i) \right|,$$
by virtue of the condition on $q$ in Lemma A1 (reformulated more precisely in Formula (7) of the statement of Theorem 1), we have that:
$$p(t, i, t + h, j) \leq \sum_{k=1}^{+\infty} \frac{\left(\#I \int_t^{t+h} q(u)\, du\right)^k}{k!}.$$
Therefore, for almost all $t \in [0, T]$,
$$\lim_{h \to 0} \sup_{t,\, i} \sum_{j \neq i} p(t, i, t + h, j) \leq (\#I - 1) \lim_{h \to 0} \sup_{t} \sum_{k=1}^{+\infty} \frac{\left(\#I \cdot \int_t^{t+h} q(u)\, du\right)^k}{k!} = (\#I - 1) \lim_{h \to 0} \sup_{t} \sum_{k=1}^{+\infty} \frac{\left(h \cdot \#I \cdot \frac{1}{h} \int_t^{t+h} q(u)\, du\right)^k}{k!} = 0,$$
as the series is uniformly convergent and, for almost all $t \in [0, T]$,
$$\lim_{h \to 0} \frac{1}{h} \int_t^{t+h} q(u)\, du = q(t),$$
by Lebesgue’s differentiation theorem. □
Remark A4
(Negative properties). The following negative properties suggest the alternative calibration approach that we propose in Section 3.2. Given ( X τ n ) n 0 , the successive states occupied by the process, we observe that
  • the times ( τ n ) n 1 are not independent;
  • the sequence ( Y n ) n 1 defined by Y n = X τ n is not a Markov chain.

Appendix B. Semi-Markov Processes: A Short Review

For the reader's convenience, we present a short summary of the most important results on semi-Markov processes (sMp) needed in this work, following [63] (pp. 189–200). The main foundational references for the theory of sMp are [32,64,65]. Important developments can be read in [33,66,67]. Among the many works with relevance for applications we refer, for instance, to [68,69,70,71,72,73]. Let us consider a complete probability space $(\Omega, \mathcal{F}, P)$. The approach to Markov and semi-Markov processes via kernels is fruitful, and so we are led to the following definitions and results, for which we now follow mainly the works in [67] (pp. 7–15) and in [33]. Consider a general measurable state space $(\Theta, \mathcal{A}(\Theta))$. The $\sigma$-algebra $\mathcal{A}(\Theta)$ may be seen as the family of observable sets of the state space $\Theta$ of the process.
Definition A4
(Semi-Markov transition kernel). A map $Q : \Theta \times \mathcal{A}(\Theta) \times [0, +\infty[ \mapsto [0, 1]$, $(x, B, t) \mapsto Q(x, B, t)$, is a semi-Markov transition kernel if it satisfies the following properties.
(i)  
$Q(x, \cdot, t)$ is measurable with respect to $\mathcal{A}(\Theta) \times \mathcal{B}([0, +\infty[)$, with $\mathcal{B}([0, +\infty[)$ the Borel $\sigma$-algebra of $[0, +\infty[$.
(ii)   
For fixed $t > 0$, $Q(\cdot, \cdot, t) : \Theta \times \mathcal{A}(\Theta) \mapsto [0, 1]$ is a semistochastic kernel, that is,
(ii.1)   
For fixed $\theta \in \Theta$ and $t > 0$, the map $Q(\theta, \cdot, t) : \mathcal{A}(\Theta) \mapsto [0, 1]$ is a measure and we have $Q(\theta, \Theta, t) \leq 1$; if $Q(\theta, \Theta, t) = 1$ we have that $Q(\cdot, \cdot, t)$ is a stochastic kernel.
(ii.2)   
For a fixed $T \in \mathcal{A}(\Theta)$, we have that $Q(\cdot, T, t) : \Theta \mapsto [0, 1]$ is measurable with respect to $\mathcal{A}(\Theta)$.
(iii)   
For fixed $(\theta, T) \in \Theta \times \mathcal{A}(\Theta)$, we have that the function $Q(\theta, T, \cdot) : [0, +\infty[ \mapsto [0, 1]$ is nondecreasing, continuous from the right, and such that $Q(\theta, T, 0) = 0$.
(iv)   
$P(\cdot, \cdot) : \Theta \times \mathcal{A}(\Theta) \mapsto [0, 1]$, defined by $P(\cdot, \cdot) = Q(\cdot, \cdot, +\infty) = \lim_{t \to +\infty} Q(\cdot, \cdot, t)$, is a stochastic kernel.
(v)   
For any $\theta \in \Theta$, we have that the function defined for $t \in [0, +\infty[$ by $F_\theta(t) := Q(\theta, \Theta, t)$ is a probability distribution function.
Now, consider $Q$ a semi-Markov transition kernel, a continuous time stochastic process $(Y_t)_{t \geq 0}$ defined on this probability space, and $\mathbb{F} = (\mathcal{F}_t)_{t \geq 0}$ the natural filtration associated with this process, that is, $\mathcal{F}_t := \sigma(Y_s : s \leq t)$ is the $\sigma$-algebra generated by the variables of the process up to time $t$. We now consider a sequence of random variables $(Z_n)_{n \geq 0}$ taking values in a state space $\Theta$ (which for our purposes will, in general, be a finite state space $\Theta = \{\theta_1, \theta_2, \ldots, \theta_r\}$ and sometimes an infinite one $\Theta = \{\theta_1, \theta_2, \ldots, \theta_r, \ldots\}$), the sequence being adapted to the filtration $\mathbb{F}$. We also consider an increasing sequence $0 \leq \tau_0 < \tau_1 < \tau_2 < \cdots < \tau_n < \cdots$ of $\mathbb{F}$-stopping times, denoted by $T$, and set $\Delta_n := \tau_n - \tau_{n-1}$ for $n \geq 1$.
Definition A5
(Markov renewal process). A two-dimensional discrete time process $(Z_n, \Delta_n)_{n \geq 0}$ with state space $\Theta \times [0, +\infty[$ verifying
$$P\left[Z_{n+1} = \theta_j, \Delta_n \leq t \mid Z_0, \ldots, Z_n, \Delta_1, \Delta_2, \ldots, \Delta_n\right] = P\left[Z_{n+1} = \theta_j, \Delta_n \leq t \mid Z_n\right],$$
for all $\theta_j \in \Theta$, $t \geq 0$, and almost surely (that is, a homogeneous two-dimensional Markov chain), is a Markov renewal process if its transition probabilities are given by:
$$Q(\theta, T, t) = P\left[Z_{n+1} \in T, \Delta_n \leq t \mid Z_n = \theta\right].$$
Remark A5
(Markov chains and Markov renewal processes). The transition probabilities of a Markov renewal process do not depend on the second component; as such, a Markov renewal process is of a different type than a general two-dimensional Markov chain. The first component of a Markov renewal process is a Markov chain, called the embedded Markov chain, with transition probabilities given by:
$$P(\theta, T) = Q(\theta, T, +\infty) = \lim_{t \to +\infty} Q(\theta, T, t) = P\left[Z_{n+1} \in T \mid Z_n = \theta\right].$$
Definition A6
(Markov renewal times). The Markov renewal times of the Markov renewal process ( τ n ) n 0 are defined by
$$\tau_n = \sum_{k=1}^{n} \Delta_k,$$
and the probability distribution functions $F_\theta$ of the times between renewals depend on the states of the embedded Markov chain since, by definition, we have
$$F_\theta(t) := Q(\theta, \Theta, t) = P\left[\Delta_n \leq t \mid Z_n = \theta\right].$$
Proposition A1.
Consider a general measurable state space $(\Theta, \mathcal{A}(\Theta))$. Let $Q$ be a semi-Markov transition kernel and $P$ the associated stochastic kernel according to Definition A4. Then, there exists a function $F_\theta(\gamma, t)$ such that:
$$Q(\theta, T, t) = \int_T F_\theta(\gamma, t)\, P(\theta, d\gamma). \tag{A16}$$
Proof. 
As we have, for $\theta \in \Theta$ and $T \in \mathcal{A}(\Theta)$, that $P(\theta, T) = Q(\theta, T, +\infty)$, we may conclude that $Q(\theta, T, t) \leq P(\theta, T)$, and so, for each $t$, the measure $Q(\theta, \cdot, t)$ is absolutely continuous with respect to the probability measure $P(\theta, \cdot)$ on $(\Theta, \mathcal{A}(\Theta))$; by the Radon–Nikodym theorem, there then exists a density $F_\theta(\cdot, t)$ verifying Formula (A16). □
Remark A6
(Semi-Markov kernel for a discrete state space). In the case of a discrete state space, say $\Theta = \{\theta_1, \theta_2, \ldots, \theta_r, \ldots\}$, we may consider $\mathcal{A}(\Theta) = \mathcal{P}(\Theta)$, the maximal $\sigma$-algebra of all the subsets of $\Theta$, and, with this convention, a semi-Markov kernel $Q$ is defined by a matrix function $Q = \left[q(i, j, t)\right]_{i, j \geq 1,\, t \geq 0}$ such that
(i)   
For $i, j \geq 1$ fixed, the function $q(i, j, \cdot) : [0, +\infty[ \mapsto [0, 1]$ is nondecreasing.
(ii)   
For $i \geq 1$ fixed, the function $F_i(t) := \sum_{j \geq 1} q(i, j, t)$ is a probability distribution function.
(iii)   
The matrix $P = \left[p(i, j)\right]_{i, j \geq 1}$ with $p(i, j) := q(i, j, +\infty) = \lim_{t \to +\infty} q(i, j, t)$ is a stochastic matrix.
Definition A7
(Semi-Markov process). The process ( Y t ) t 0 is a semi-Markov process if:
(i)   
The process admits a representation given, for t 0 , by
$$Y_t = \sum_{n=0}^{+\infty} Z_n \, \mathbb{1}_{[\tau_n, \tau_{n+1}[}(t).$$
(ii)   
For n 0 we have that Z n = Y τ n .
(iii)   
The process ( Z n , τ n ) n 0 is a Markov renewal process (Mrp), that is, it verifies
$$P\left[Z_{n+1} = \theta_j, \tau_{n+1} - \tau_n \leq t \mid Z_0, \ldots, Z_n, \tau_1, \tau_2, \ldots, \tau_n\right] = P\left[Z_{n+1} = \theta_j, \tau_{n+1} - \tau_n \leq t \mid Z_n\right],$$
for all θ j Θ , t 0 and almost surely—as it is a conditional expectation.
Proposition A2
(The sMp as a Markov chain). The process ( Z n , τ n ) n 0 is a Markov chain with state space Θ × [ 0 , + [ and with semi-Markov transition kernel given by:
$$q(i, j, t) := P\left[Z_{n+1} = \theta_j, \tau_{n+1} - \tau_n \leq t \mid Z_n = \theta_i\right].$$
Proposition A3
(The embedded Markov chain of the Mrp). The process ( Z n ) n 0 is a Markov chain with state space Θ with transition probabilities given by:
$$p(i, j) := q(i, j, +\infty) = P\left[Z_{n+1} = \theta_j \mid Z_n = \theta_i\right],$$
and is denoted as the embedded Markov chain of the Mrp.
Proposition A4
(The conditional distribution function of the time between two successive jumps). Let $Q = \left[q(i, j, t)\right]_{i, j \in \{1, 2, \ldots, r\},\, t \geq 0}$ be the semi-Markov kernel as in Proposition A2, and let the times between successive jumps $\Delta_n := \tau_n - \tau_{n-1}$ have conditional distribution functions given by
$$F_{ij}(t) := P\left[\Delta_n \leq t \mid Z_n = \theta_i, Z_{n+1} = \theta_j\right]. \tag{A21}$$
Then, the semi-Markov kernel verifies,
$$q(i, j, t) := P\left[Z_{n+1} = \theta_j, \Delta_n \leq t \mid Z_n = \theta_i\right] = p(i, j)\, F_{ij}(t), \tag{A22}$$
with p ( i , j ) as defined in Proposition A3.
Proof. 
It is a consequence of Proposition A1. □
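Proposition A4 suggests the standard recipe for simulating a sMp with finite state space: draw the next state from the embedded chain $p(i, j)$ and then the holding time from $F_{ij}$. The minimal Python sketch below uses made-up ingredients (a two-state embedded chain and Gamma holding times whose shape depends on both endpoints of the jump) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up ingredients: embedded chain P = [p(i,j)] and, for each pair (i,j),
# a holding-time law F_ij -- here Gamma with pair-dependent shape parameter.
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
shape = np.array([[1.0, 2.0],
                  [0.5, 3.0]])

def simulate_smp(t_max, z0=0):
    """One trajectory of the sMp Y_t = sum_n Z_n 1_{[tau_n, tau_{n+1}[}(t),
    returned as a list of (jump time, state)."""
    t, z, path = 0.0, z0, [(0.0, z0)]
    while True:
        z_next = rng.choice(2, p=P[z])         # embedded chain p(i,j)
        t += rng.gamma(shape[z, z_next], 1.0)  # holding time drawn from F_ij
        if t > t_max:
            return path
        z = z_next
        path.append((t, z))

print(simulate_smp(10.0))
```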
Remark A7
(Homogeneous Markov chains as semi-Markov processes). Let $(X_t)_{t \geq 0}$ be a homogeneous Markov chain in continuous time with state space $\Theta = \{\theta_1, \theta_2, \ldots, \theta_r, \ldots\}$ and with time-independent transition intensities given by $Q(t) \equiv \left[q(i, j)\right]_{i, j \geq 1}$ (see Definition A3). Then, by the well-known results on homogeneous Markov chains (see [29], pp. 317, 318) and by the representation given by Formula (A22), we have that
$$q(t, i, j) = \begin{cases} \dfrac{q(i, j)}{-q(i, i)} \left(1 - e^{\,q(i, i)\, t}\right) & i \neq j, \\[4pt] 0 & i = j \ \text{or} \ q(i, i) = 0, \end{cases} \tag{A23}$$
is the semi-Markov kernel of a sMp. Comparing Formula (A23) with Formulas (A21) and (A22), we can see that the main difference between a sMp and a continuous time Markov process is that, in the sMp case, the conditional distribution function of the time between two successive jumps depends not only on the initial state of the jump but also on the final state, while in the homogeneous Markov chain case the dependence is only on the initial state of the jump.
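As a small illustration of Formula (A23), the sketch below computes the semi-Markov kernel of a homogeneous chain from a made-up generator and checks that, as $t \to +\infty$, the kernel tends to the embedded jump-chain probabilities; the generator entries are illustrative assumptions.

```python
import numpy as np

# Made-up generator of a homogeneous chain; the semi-Markov kernel is
# q(t,i,j) = (q(i,j)/(-q(i,i))) * (1 - exp(q(i,i) t)) for i != j.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -1.0,  0.0],
              [ 0.3,  0.7, -1.0]])

def kernel(t):
    K = np.zeros_like(Q)
    for i in range(3):
        for j in range(3):
            if i != j and Q[i, i] != 0.0:
                K[i, j] = Q[i, j] / (-Q[i, i]) * (1 - np.exp(Q[i, i] * t))
    return K

# As t grows, each row of the kernel (off-diagonal entries) sums to 1:
# these are the transition probabilities of the embedded jump chain.
print(kernel(50.0))
```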
Definition A8
(The sojourn time distribution in a state). The sojourn time distribution in the state $\theta_i \in \Theta = \{\theta_1, \theta_2, \ldots, \theta_r, \ldots\}$ is defined by:
$$H_i(t) := \sum_{j=1}^{+\infty} q(i, j, t) = \sum_{j=1}^{+\infty} p(i, j)\, F_{ij}(t).$$
Its mean value represents the mean sojourn time in state $\theta_i$ of the sMp $(Y_t)_{t \geq 0}$.
Definition A9
(Regular sMp). A sMp $(Y_t)_{t \geq 0}$ is regular if, with $N(t)$ the number of jumps of the process in the time interval $]0, t]$, given for $t > 0$ by:
$$N(t) := \sup\{n \geq 0 : \tau_n \leq t\},$$
it verifies, for all $\theta_i \in \Theta$,
$$P_i\left[N(t) < +\infty\right] := P\left[N(t) < +\infty \mid Z_0 = \theta_i\right] = 1.$$
Proposition A5
(Jump times of a regular sMp do not have accumulation points). Let the sMp $(Y_t)_{t \geq 0}$ be regular. Then, almost surely, $\lim_{n \to +\infty} \tau_n = +\infty$ and, for any $T \in \mathbb{R}^+$ and almost all $\omega \in \Omega$:
$$\#\left\{k \geq 1 : \tau_k(\omega) \leq T\right\} < +\infty.$$
This means that, in every compact time interval $[0, T]$ and for almost all $\omega \in \Omega$, there is only a finite number of jump times $\tau_k(\omega)$ in this interval.
The following fundamental theorem ensures that, for a sMp with finite state space, the sequence of stopping times does not accumulate in a compact interval.
Theorem A7
(A sufficient condition for regularity of a sMp). Let $\alpha > 0$ and $\beta > 0$ be constants such that, for every state $\theta_i$, the sojourn time distribution $H_i(t)$ of Definition A8 verifies:
$$H_i(\alpha) < 1 - \beta.$$
Then, the sMp is regular. In particular, any sMp with a finite state space is regular.
Proof. 
See in [74] (p. 88). □
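For a concrete check of this sufficient condition, the sketch below evaluates the sojourn distributions $H_i$ of Definition A8 for the made-up sMp ingredients used in the earlier simulation sketch; both values $H_i(\alpha)$ are bounded away from 1 at $\alpha = 0.1$, so a pair $(\alpha, \beta)$ as in Theorem A7 exists (as it must, the state space being finite).

```python
import numpy as np
from scipy.stats import gamma

# Same made-up ingredients as in the sMp simulation sketch: embedded chain
# p(i,j) and Gamma holding-time laws F_ij with pair-dependent shapes.
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
shape = np.array([[1.0, 2.0],
                  [0.5, 3.0]])

def H(i, t):
    """Sojourn time distribution H_i(t) = sum_j p(i,j) F_ij(t)."""
    return sum(P[i, j] * gamma.cdf(t, shape[i, j]) for j in range(2))

alpha = 0.1
print([H(i, alpha) for i in range(2)])  # both values are well below 1
```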
Remark A8
(On the estimation of sMp). The estimation of sMp is dealt with, for instance, in [75,76].

References

1. Vajda, S. The stratified semi-stationary population. Biometrika 1947, 34, 243–254.
2. Young, A.; Almond, G. Predicting Distributions of Staff. Comput. J. 1961, 3, 246–250.
3. Bartholomew, D.J. A multi-stage renewal process. J. R. Statist. Soc. Ser. B 1963, 25, 150–168.
4. Bartholomew, D.J. Stochastic Models for Social Processes, 2nd ed.; Wiley Series in Probability and Mathematical Statistics; John Wiley & Sons: London, UK; New York, NY, USA; Sydney, Australia, 1973.
5. Bartholomew, D.J. Stochastic Models for Social Processes, 3rd ed.; Wiley Series in Probability and Mathematical Statistics; John Wiley & Sons, Ltd.: Chichester, UK, 1982.
6. Gani, J. Formulae for Projecting Enrolments and Degrees Awarded in Universities. J. R. Stat. Soc. Ser. A 1963, 126, 400–409.
7. Bowerman, B.; David, H.T.; Isaacson, D. The convergence of Cesaro averages for certain nonstationary Markov chains. Stoch. Process. Appl. 1977, 5, 221–230.
8. Vassiliou, P.C.G. Cyclic behaviour and asymptotic stability of nonhomogeneous Markov systems. J. Appl. Probab. 1984, 21, 315–325.
9. Vassiliou, P.C.G. Asymptotic variability of nonhomogeneous Markov systems under cyclic behaviour. Eur. J. Oper. Res. 1986, 27, 215–228.
10. Dimitriou, V.A.; Georgiou, A.C. Introduction, analysis and asymptotic behavior of a multi-level manpower planning model in a continuous time setting under potential department contraction. Commun. Statist. Theory Methods 2021, 50, 1173–1199.
11. Salgado-García, R. Open Markov Chains: Cumulant Dynamics, Fluctuations and Correlations. Entropy 2021, 23, 256.
12. Vassiliou, P.C.G.; Papadopoulou, A.A. Nonhomogeneous semi-Markov systems and maintainability of the state sizes. J. Appl. Probab. 1992, 29, 519–534.
13. Papadopoulou, A.A.; Vassiliou, P.C.G. Asymptotic behavior of nonhomogeneous semi-Markov systems. Linear Algebra Appl. 1994, 210, 153–198.
14. Vassiliou, P.C.G. Asymptotic Behavior of Markov Systems. J. Appl. Probab. 1982, 19, 851–857.
15. Vassiliou, P.C.G. Markov Systems in a General State Space. Commun. Stat. Theory Methods 2014, 43, 1322–1339.
16. Vassiliou, P.-C.G. Rate of Convergence and Periodicity of the Expected Population Structure of Markov Systems that Live in a General State Space. Mathematics 2020, 8, 1021.
17. Vassiliou, P.-C.G. Non-Homogeneous Markov Set Systems. Mathematics 2021, 9, 471.
18. McClean, S.I. A continuous-time population model with Poisson recruitment. J. Appl. Probab. 1976, 13, 348–354.
19. McClean, S.I. Continuous-time stochastic models of a multigrade population. J. Appl. Probab. 1978, 15, 26–37.
20. McClean, S.I. A Semi-Markov Model for a Multigrade Population with Poisson Recruitment. J. Appl. Probab. 1980, 17, 846–852.
21. Papadopoulou, A.A.; Vassiliou, P.C.G. Continuous time nonhomogeneous semi-Markov systems. In Semi-Markov Models and Applications (Compiègne, 1998); Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; pp. 241–251.
22. Esquível, M.L.; Fernandes, J.M.; Guerreiro, G.R. On the evolution and asymptotic analysis of open Markov populations: Application to consumption credit. Stoch. Models 2014, 30, 365–389.
23. Guerreiro, G.R.; Mexia, J.A.T.; de Fátima Miguens, M. Statistical approach for open bonus malus. Astin Bull. 2014, 44, 63–83.
24. Afonso, L.B.; Cardoso, R.M.R.; Egídio dos Reis, A.D.; Guerreiro, G.R. Ruin Probabilities And Capital Requirement for Open Automobile Portfolios With a Bonus-Malus System Based on Claim Counts. J. Risk Insur. 2020, 87, 501–522.
25. Esquível, M.L.; Patrício, P.; Guerreiro, G.R. From ODE to Open Markov Chains, via SDE: An application to models for infections in individuals and populations. Comput. Math. Biophys. 2020, 8, 180–197.
26. Esquível, M.; Guerreiro, G.; Fernandes, J. Open Markov chain scheme models. REVSTAT 2017, 15, 277–297.
27. Esquível, M.L.; Guerreiro, G.R.; Oliveira, M.C.; Corte Real, P. Calibration of Transition Intensities for a Multistate Model: Application to Long-Term Care. Risks 2021, 9, 37.
28. Resnick, S.I. Adventures in Stochastic Processes; Birkhäuser: Boston, MA, USA, 1992.
29. Rolski, T.; Schmidli, H.; Schmidt, V.; Teugels, J. Stochastic Processes for Insurance and Finance; Wiley Series in Probability and Statistics; John Wiley & Sons Ltd.: Chichester, UK, 1999.
30. Iosifescu, M. Finite Markov Processes and Their Applications; Wiley Series in Probability and Mathematical Statistics; John Wiley & Sons, Ltd.: Chichester, UK; Editura Tehnică: Bucharest, Romania, 1980; p. 295.
31. Iosifescu, M.; Tăutu, P. Stochastic Processes and Applications in Biology and Medicine. I: Theory; Biomathematics; Editura Academiei RSR: Bucharest, Romania; Springer: Berlin, Germany; New York, NY, USA, 1973; Volume 3, p. 331.
32. Pyke, R. Markov renewal processes: Definitions and preliminary properties. Ann. Math. Statist. 1961, 32, 1231–1242.
33. Korolyuk, V.S.; Korolyuk, V.V. Stochastic Models of Systems. In Mathematics and its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; Volume 469.
34. Kingman, J.F.C. The imbedding problem for finite Markov chains. Probab. Theory Relat. Fields 1962, 1, 14–24.
35. Johansen, S. The Imbedding Problem for Finite Markov Chains. In Geometric Methods in System Theory, 1st ed.; Mayne, D.Q.B.R.W., Ed.; D. Reidel Publishing Company: Dordrecht, The Netherlands; Boston, MA, USA, 1973; Volume 1, Chapter 13; pp. 227–237.
36. Johansen, S. A central limit theorem for finite semigroups and its application to the imbedding problem for finite state Markov chains. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 1973, 26, 171–190.
37. Johansen, S. Some Results on the Imbedding Problem for Finite Markov Chains. J. Lond. Math. Soc. 1974, 2, 345–351.
38. Fuglede, B. On the imbedding problem for stochastic and doubly stochastic matrices. Probab. Theory Relat. Fields 1988, 80, 241–260.
39. Guerry, M.A. On the Embedding Problem for Discrete-Time Markov Chains. J. Appl. Probab. 2013, 50, 918–930.
40. Jia, C. A solution to the reversible embedding problem for finite Markov chains. Stat. Probab. Lett. 2016, 116, 122–130.
41. Goodman, G.S. An intrinsic time for non-stationary finite Markov chains. Probab. Theory Relat. Fields 1970, 16, 165–180.
42. Singer, B. Estimation of Nonstationary Markov Chains from Panel Data. Sociol. Methodol. 1981, 12, 319–337.
43. Lencastre, P.; Raischel, F.; Rogers, T.; Lind, P.G. From empirical data to time-inhomogeneous continuous Markov processes. Phys. Rev. E 2016, 93, 032135.
44. Ekhosuehi, V.U. On the use of Cauchy integral formula for the embedding problem of discrete-time Markov chains. Commun. Stat. Theory Methods 2021, 1–15.
45. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; McGraw-Hill Book Company, Inc.: New York, NY, USA; Toronto, ON, Canada; London, UK, 1955.
46. Rudin, W. Real and Complex Analysis, 3rd ed.; McGraw-Hill Book Co.: New York, NY, USA, 1987.
47. Kurzweil, J. Ordinary differential equations. In Studies in Applied Mechanics; Introduction to the theory of ordinary differential equations in the real domain, Translated from the Czech by Michal Basch; Elsevier Scientific Publishing Co.: Amsterdam, The Netherlands, 1986; Volume 13, p. 440.
48. Teschl, G. Ordinary differential equations and dynamical systems. In Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 2012; Volume 140.
49. Nevanlinna, F.; Nevanlinna, R. Absolute Analysis; Translated from the German by Phillip Emig, Die Grundlehren der mathematischen Wissenschaften, Band 102; Springer: New York, NY, USA; Heidelberg, Germany, 1973.
50. Severi, F.; Scorza Dragoni, G. Lezioni di analisi. Vol. 3. Equazioni Differenziali Ordinarie e Loro Sistemi, Problemi al Contorno Relativi, Serie Trigonometriche, Applicazioni Geometriche; Cesare Zuffi: Bologna, Italy, 1951.
51. Dobrušin, R.L. Generalization of Kolmogorov's equations for Markov processes with a finite number of possible states. Matematicheskii Sbornik 1953, 33, 567–596.
52. Pritchard, D.J. Modeling Disability in Long-Term Care Insurance. N. Am. Actuar. J. 2006, 10, 48–75.
53. Kingman, J.F.C. Ergodic properties of continuous-time Markov processes and their discrete skeletons. Proc. Lond. Math. Soc. 1963, 13, 593–604.
54. Conner, H. A note on limit theorems for Markov branching processes. Proc. Am. Math. Soc. 1967, 18, 76–86.
55. Israel, R.B.; Rosenthal, J.S.; Wei, J.Z. Finding generators for Markov chains via empirical transition matrices, with applications to credit ratings. Math. Financ. 2001, 11, 245–265.
56. Guerreiro, G.R.; Mexia, J.A.T. Stochastic vortices in periodically reclassified populations. Discuss. Math. Probab. Stat. 2008, 28, 209–227.
57. Feller, W. An Introduction to Probability Theory and Its Applications. Vol. I, 3rd ed.; John Wiley & Sons, Inc.: New York, NY, USA; London, UK; Sydney, Australia, 1968.
58. Serfozo, R. Convergence of Lebesgue integrals with varying measures. Sankhyā Ser. A 1982, 44, 380–402.
59. Billingsley, P. Convergence of Probability Measures, 2nd ed.; Wiley Series in Probability and Statistics: Probability and Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1999.
60. Durrett, R. Probability—Theory and Examples. In Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 2019; Volume 49.
61. Skorokhod, A.V. Lectures on the Theory of Stochastic Processes; VSP: Utrecht, The Netherlands; TBiMC Scientific Publishers: Kiev, Ukraine, 1996.
62. Dynkin, E.B. Theory of Markov Processes; Translated from the Russian by D. E. Brown and edited by T. Köváry, Reprint of the 1961 English translation; Dover Publications, Inc.: Mineola, NY, USA, 2006.
63. Iosifescu, M.; Limnios, N.; Oprişan, G. Introduction to Stochastic Models; Applied Stochastic Methods Series; Translated from the 2007 French original by Vlad Barbu; ISTE: London, UK; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2010.
64. Pyke, R. Markov renewal processes with finitely many states. Ann. Math. Statist. 1961, 32, 1243–1259.
65. Feller, W. On semi-Markov processes. Proc. Nat. Acad. Sci. USA 1964, 51, 653–659.
66. Kurtz, T.G. Comparison of semi-Markov and Markov processes. Ann. Math. Statist. 1971, 42, 991–1002.
67. Korolyuk, V.; Swishchuk, A. Semi-Markov random evolutions. In Mathematics and its Applications; Translated from the 1992 Russian original by V. Zayats and revised by the authors; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995; Volume 308.
68. Janssen, J.; de Dominicis, R. Finite non-homogeneous semi-Markov processes: Theoretical and computational aspects. Insur. Math. Econ. 1984, 3, 157–165.
69. Janssen, J.; Limnios, N. (Eds.) Semi-Markov Models and Applications; Selected papers from the 2nd International Symposium on Semi-Markov Models: Theory and Applications held in Compiègne, December 1998; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999.
70. Janssen, J.; Manca, R. Applied Semi-Markov Processes; Springer: New York, NY, USA, 2006.
71. Janssen, J.; Manca, R. Semi-Markov Risk Models for Finance, Insurance and Reliability; Springer: New York, NY, USA, 2007.
72. Barbu, V.S.; Limnios, N. Semi-Markov chains and hidden semi-Markov models toward applications. In Lecture Notes in Statistics; Springer: New York, NY, USA, 2008; Volume 191.
73. Grabski, F. Semi-Markov Processes: Applications in System Reliability and Maintenance; Elsevier: Amsterdam, The Netherlands, 2015.
74. Ross, S.M. Applied Probability Models with Optimization Applications; Reprint of the 1970 original; Dover Publications, Inc.: New York, NY, USA, 1992.
75. Moore, E.H.; Pyke, R. Estimation of the transition distributions of a Markov renewal process. Ann. Inst. Stat. Math. 1968, 20, 411.
76. Ouhbi, B.; Limnios, N. Nonparametric Estimation for Semi-Markov Processes Based on its Hazard Rate Functions. Stat. Inference Stoch. Process. 1999, 2, 151–173.
Figure 1. A representation of $\tilde{P}(s, t, \lambda(t))$ in Formula (12) for the first three initial times.
Table 1. Accounting of $n$ Markov cohorts, each with an initial distribution.

| Date | $\tau_1$ | $\tau_2$ | $\cdots$ | $\tau_{n-1}$ | $\tau_n$ |
| --- | --- | --- | --- | --- | --- |
| $\tau_1$ | $E[X_{\tau_1}](q_1)$ | $E[X_{\tau_1}](q_1)P$ | $\cdots$ | $E[X_{\tau_1}](q_1)P^{(n-2)}$ | $E[X_{\tau_1}](q_1)P^{(n-1)}$ |
| $\tau_2$ | | $E[X_{\tau_2}](q_2)$ | $\cdots$ | $E[X_{\tau_2}](q_2)P^{(n-3)}$ | $E[X_{\tau_2}](q_2)P^{(n-2)}$ |
| $\vdots$ | | | | | |
| $\tau_n$ | | | | | $E[X_{\tau_n}](q_n)$ |