1 Introduction

In many applications the variables that appear in a mathematical description take only positive or nonnegative values. Examples of such systems can be found in [4, 12, 15, 17], where a theory of linear positive systems was also developed. Usually the systems that are studied fall into two separate classes: continuous-time systems and discrete-time systems. In [17], all the problems are studied twice, in these two separate settings. The characterizations of properties of positive systems for these two classes are sometimes similar, or even identical, and sometimes essentially distinct.

Stefan Hilger in his Ph.D. thesis [16] started the most successful attempt to unify the theories of continuous-time systems and discrete-time systems into one theory. It is based on the concept of time scale and the calculus on time scales. A time scale is a model of time. Time may be discrete or continuous, or partly continuous and partly discrete. The concepts of the standard derivative, used in the case of continuous time, and of the forward difference, used in discrete time, are unified into one concept of delta derivative. This allows one to consider delta differential equations on arbitrary time scales. They generalize standard differential equations and difference equations. The theory of dynamical systems on time scales was developed in [5]. Special attention was paid to linear delta differential equations. Another theory unifying discrete and continuous dynamical systems was developed in [19].

The interest in control systems on time scales dates back to 2004. The first results concerned controllability, observability and realizations of linear constant-coefficient and varying-coefficient control systems with outputs [2, 3, 13]. Since then the literature on control systems on time scales has been growing rapidly and now includes nonlinear systems as well.

Controllability of continuous-time and discrete-time linear positive systems has been a subject of research since the late 1980s [10, 11, 20, 21]. Discrete-time systems appeared to be easier to deal with, and it seems that positive controllability of such systems is now fully understood (see e.g., [6–8, 18]). On the other hand, it was discovered only recently that positive reachability of continuous-time systems requires very restrictive conditions to be satisfied [9, 22]. Thus the criteria of positive reachability for discrete-time systems and continuous-time systems are essentially different.

In this paper, we study linear positive constant-coefficient systems on arbitrary time scales. The results presented here unify and extend corresponding results obtained for linear positive continuous-time and discrete-time systems. We prove necessary and sufficient conditions for a linear system \(x^\Delta =Ax+Bu\) on a time scale \(\mathbb T \) to be positive. They involve the matrices \(A\) and \(B\) and the graininess function of the time scale, which describes the distribution of the instants of time. We also study two controllability properties of positive linear systems: positive accessibility and positive reachability. Accessibility appears to be a property whose characterization does not depend on the time scale. It is equivalent to standard controllability and expressed with the aid of the Kalman controllability matrix. As the criteria for positive reachability are completely different for continuous-time and discrete-time systems, we have tried to develop methods that would result in the same statements for different time scales. We introduce a modified Gram matrix for a system on a time scale, for which we select columns of \(B\) and choose different sets of integration for different columns. We prove that the system is positively reachable on an interval if and only if such a Gram matrix is monomial, i.e., each of its columns and each of its rows contains exactly one positive element. Then we show that from this characterization we can deduce known criteria for positive reachability of continuous-time and discrete-time systems. We also show that on nonhomogeneous time scales many properties known to hold for homogeneous time scales are no longer true. We state reachability criteria for a discrete nonhomogeneous time scale and for the time scale that is a union of disjoint closed intervals. The latter differs significantly from the standard continuous-time case.

In Sect. 2 we recall basic material on positive systems, time scales and linear systems on time scales. Section 3 is devoted to positive control systems, and Sect. 4 to positive accessibility and positive reachability.

2 Preliminaries

We introduce here the main concepts, recall definitions and facts, and set notation. For more information on positive continuous-time and discrete-time systems, the reader is referred to, e.g., [12], and for information on time scales calculus to, e.g., [5].

2.1 Positive matrices and cones

By \(\mathbb R \) we shall denote the set of all real numbers, by \(\mathbb Z \) the set of integers, and by \(\mathbb N \) the set of natural numbers (without \(0\)). We shall also need the set of nonnegative real numbers, denoted by \(\mathbb R _+\) and the set of nonnegative integers \(\mathbb Z _+,\) i.e., \(\mathbb N \cup \{0\}.\) Similarly, \(\mathbb R ^k_+\) will mean the set of all column vectors in \(\mathbb R ^k\) with nonnegative components and \(\mathbb R ^{k\times p}_+\) will consist of \(k\times p\) real matrices with nonnegative elements. If \(A\in \mathbb R ^{k\times p}_+\) we write \(A\ge 0\) and say that \(A\) is nonnegative. A nonnegative matrix \(A\) will be called positive if at least one of its elements is greater than \(0.\) Then we shall write \(A>0.\)

A positive column or row vector is called monomial if one of its components is positive and all the others are zero. A monomial column in \(\mathbb R ^n_+\) has the form \(\alpha e_k\) for some \(\alpha >0\) and \(1\le k\le n,\) where \(e_k\) denotes the column with 1 at the \(k\)th position and the other elements equal to 0. Then we say that the column is \(k\)-monomial. An \(n\times n\) matrix \(A\) is called monomial if all columns and rows of \(A\) are monomial. Then \(A\) is invertible and its inverse is also positive. Moreover, we have the following important fact.

Proposition 1

A positive matrix \(A\) has a positive inverse if and only if \(A\) is monomial.
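
For computations it is convenient to have a direct test of the monomial property. The following sketch (in Python with NumPy; the helper name is_monomial is ours, not from the paper) checks the row and column condition and illustrates Proposition 1 on two small matrices.

```python
import numpy as np

def is_monomial(A, tol=1e-12):
    """Check that every row and every column of a nonnegative square matrix
    has exactly one entry larger than tol."""
    A = np.asarray(A, dtype=float)
    if A.shape[0] != A.shape[1] or np.any(A < -tol):
        return False
    row_ok = np.all(np.sum(A > tol, axis=1) == 1)
    col_ok = np.all(np.sum(A > tol, axis=0) == 1)
    return bool(row_ok and col_ok)

# Proposition 1: a positive matrix has a positive inverse iff it is monomial
W = np.array([[0.0, 2.0], [3.0, 0.0]])       # monomial
print(is_monomial(W), np.linalg.inv(W))      # True, inverse is nonnegative
V = np.array([[1.0, 1.0], [0.0, 1.0]])       # positive but not monomial
print(is_monomial(V), np.linalg.inv(V))      # False, inverse has a negative entry
```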

It will be convenient to extend the set of all real numbers by adding one element. It will be denoted by \(\infty \) and will mean positive infinity. We set \(\bar{\mathbb{R }}:=\mathbb R \cup \{\infty \}\) and \(\bar{\mathbb{R }}_+:=\mathbb R _+\cup \{\infty \}.\) If \(a\in \mathbb R ,\) then we define \(a+\infty =\infty .\) Moreover, for \(a\in \mathbb R \) and \(a>0\) we set \(a/0=\infty \) and \(a/\infty =0.\) Of course \(\infty >0.\) If a matrix \(A\) has elements from \(\bar{\mathbb{R }},\) then the notions of nonnegativity and positivity have the same meanings as before and are denoted in the same way. Addition of such matrices is defined in the standard way, but we shall not need to multiply or invert such matrices.

A subset \(C\) of \(\mathbb R ^n\) is called a (positive) cone if for any \(\alpha \in \mathbb R _+\) and any \(x\in C,\,\alpha x\in C.\) It is clear that \(\mathbb R ^n_+\) is a cone.

2.2 Calculus on time scales

Calculus on time scales is a generalization of the standard differential calculus and the calculus of finite differences.

A time scale \(\mathbb T \) is an arbitrary nonempty closed subset of the set \(\mathbb R \) of real numbers. In particular \(\mathbb T =\mathbb R ,\,\mathbb T =h\mathbb Z \) for \(h>0\) and \(\mathbb T =q^\mathbb{N }:=\{ q^k, k\in \mathbb N \}\) for \(q>1\) are time scales. We assume that \(\mathbb T \) is a topological space with the relative topology induced from \(\mathbb R .\) If \(t_0,t_1\in \mathbb T ,\) then \([t_0,t_1]_\mathbb T \) denotes the intersection of the ordinary closed interval with \(\mathbb T .\) Similar notation is used for open, half-open or infinite intervals.

For \(t \in \mathbb T \) we define the forward jump operator \(\sigma :\mathbb T \rightarrow \mathbb T \) by \(\sigma (t):=\inf \{s \in \mathbb T :s>t\}\) if \(t\ne \sup \mathbb T \) and \(\sigma (\sup \mathbb T )=\sup \mathbb T \) when \(\sup \mathbb T \) is finite; the backward jump operator \(\rho :\mathbb T \rightarrow \mathbb T \) by \(\rho (t):=\sup \{s \in \mathbb T :s<t\}\) if \(t\ne \inf \mathbb T \) and \(\rho (\inf \mathbb T )=\inf \mathbb T \) when \(\inf \mathbb T \) is finite; the forward graininess function \(\mu :\mathbb T \rightarrow [0,\infty )\) by \(\mu (t):=\sigma (t)-t\); the backward graininess function \(\nu :\mathbb T \rightarrow [0,\infty )\) by \(\nu (t):=t-\rho (t).\)
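
For a time scale with finitely many points, all of these operators reduce to index shifts on the sorted list of points. A minimal numerical sketch (NumPy; the representation of \(\mathbb T \) as a sorted array and the helper name jump_data are our own conventions, and dense parts of a time scale would require a different representation):

```python
import numpy as np

def jump_data(T):
    """For a finite time scale T (sorted 1-D array) return sigma, rho, mu, nu
    at each point, with sigma(max T) = max T and rho(min T) = min T."""
    T = np.asarray(T, dtype=float)
    sigma = np.append(T[1:], T[-1])     # forward jump operator
    rho   = np.insert(T[:-1], 0, T[0])  # backward jump operator
    return sigma, rho, sigma - T, T - rho

T = np.array([0.0, 1.0, 3.0, 4.0])      # a nonhomogeneous discrete time scale
sigma, rho, mu, nu = jump_data(T)
print(mu)    # [1. 2. 1. 0.] -- forward graininess; the last point is a finite maximum
```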

If \(\sigma (t)>t,\) then \(t\) is called right-scattered, while if \(\rho (t)<t,\) it is called left-scattered. If \(t<\sup \mathbb T \) and \(\sigma (t)=t\) then \(t\) is called right-dense. If \(t>\inf \mathbb T \) and \(\rho (t)=t,\) then \(t\) is left-dense.

The time scale \(\mathbb T \) is homogeneous if \(\mu \) and \(\nu \) are constant functions. When \(\mu \equiv 0\) and \(\nu \equiv 0,\) then \(\mathbb T =\mathbb R \) or \(\mathbb T \) is a closed interval (in particular a half-line). When \(\mu \) is constant and greater than \(0,\) then \(\mathbb T =\mu \mathbb Z .\)

If \(M:=\sup \mathbb T \) is finite and \(\rho (M)<M,\) then we set \(\mathbb T ^k:=\mathbb T \setminus \{M\}.\) Otherwise \(\mathbb T ^k:=\mathbb T .\) Thus \(\mathbb T ^k\) is obtained from \(\mathbb T \) by removing its maximal point if this point exists and is left-scattered.

Let \(f:\mathbb T \rightarrow \mathbb R \) and \(t \in \mathbb T ^k.\) The delta derivative of \(f\) at \(t\), denoted by \(f^{\Delta }(t),\) is the real number with the property that given any \(\varepsilon >0\) there is a neighborhood \(U=(t-\delta ,t+\delta ) \cap \mathbb T \) such that

$$\begin{aligned} |(f(\sigma (t))-f(s))-f^{\Delta }(t)(\sigma (t)-s)| \le \varepsilon |\sigma (t)-s| \end{aligned}$$

for all \(s \in U.\) If \(f^{\Delta }(t)\) exists, then we say that \(f\) is delta differentiable at \(t\). Moreover, we say that \(f\) is delta differentiable on \(\mathbb T ^k\) provided \(f^{\Delta }(t)\) exists for all \(t\in \mathbb T ^k.\)

Example 1

If \(\mathbb T =\mathbb R ,\) then \(f^{\Delta }(t)=f^{\prime }(t).\) If \(\mathbb T =h\mathbb Z ,\) then \(f^{\Delta }(t)=\frac{f(t+h)-f(t)}{h}.\) If \(\mathbb T =q^\mathbb{N },\) then \(f^{\Delta }(t)=\frac{f(qt)-f(t)}{(q-1)t}.\)
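
The discrete formulas of Example 1 can be checked numerically. A small sketch (the test function \(f(t)=t^2\) and the helper name are ours):

```python
def delta_derivative_discrete(f, t, mu):
    """Delta derivative at a right-scattered point t with graininess mu > 0:
    f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t)."""
    return (f(t + mu) - f(t)) / mu

f = lambda t: t**2
h, q, t0 = 0.5, 2.0, 3.0
# T = h*Z: (f(t+h) - f(t)) / h = 2t + h
print(delta_derivative_discrete(f, t0, h), 2*t0 + h)
# T = q^N: graininess at t is (q-1)*t, so f^Delta(t) = (f(qt)-f(t))/((q-1)t) = (q+1)t
print(delta_derivative_discrete(f, t0, (q - 1)*t0), (q + 1)*t0)
```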

A function \(f:\mathbb T \rightarrow \mathbb R \) is called rd-continuous provided it is continuous at right-dense points in \(\mathbb T \) and its left-sided limits exist (finite) at left-dense points in \(\mathbb T .\) If \(f\) is continuous, then it is rd-continuous.

A function \(F:\mathbb T \rightarrow \mathbb R \) is called an antiderivative of \(f: \mathbb T \rightarrow \mathbb R \) provided \(F^{\Delta }(t)=f(t)\) holds for all \(t \in \mathbb T ^k.\) Let \(a,b\in \mathbb T .\) Then the delta integral of \(f\) on the interval \([a,b)_\mathbb T \) is defined by

$$\begin{aligned} \int _a^b f(\tau ) \Delta \tau :=\int _{[a,b)_\mathbb T } f(\tau ) \Delta \tau := F(b)-F(a). \end{aligned}$$

It is more convenient to use the half-open interval \([a,b)_\mathbb T \) rather than the closed interval \([a,b]_\mathbb T \) in the definition of the integral. If \(b\) is a left-dense point, then the value of \(f\) at \(b\) does not affect the integral, and if \(b\) is left-scattered, the value of \(f\) at \(b\) does not enter the integral at all (see Example 2). This is caused by the fact that we use the delta integral, which corresponds to the forward jump operator.

Riemann and Lebesgue delta integrals on time scales have also been defined (see e.g., [14]). It can be shown that every rd-continuous function has an antiderivative, and that its Riemann and Lebesgue integrals agree with the delta integral defined above.

We have a natural property:

$$\begin{aligned} \int _a^b f(\tau )\Delta \tau =\int _a^c f(\tau )\Delta \tau + \int _c^b f(\tau )\Delta \tau \end{aligned}$$

for any \(c\in (a,b)_\mathbb T .\) Moreover, if \(f\) is rd-continuous, \(f(t) \ge 0\) for all \(a \le t < b\) and \(\int _a^b f(\tau ) \Delta \tau =0 ,\) then \(f\equiv 0\) on \([a,b)_\mathbb T .\)

Example 2

If \(\mathbb T =\mathbb R ,\) then \(\int _a^b f(\tau ) \Delta \tau =\int _a^b f(\tau )d\tau ,\) where the integral on the right is the usual Riemann integral. If \(\mathbb T =h\mathbb Z ,\,h>0,\) then \(\int _a^b f(\tau )\Delta \tau =\sum _{t=\frac{a}{h}}^{\frac{b}{h}-1}f(th)h\) for \(a<b.\)
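
A short numerical illustration of the discrete case of Example 2 (a sketch; the helper name delta_integral_hZ is ours):

```python
import numpy as np

def delta_integral_hZ(f, a, b, h):
    """Delta integral of f over [a, b) on the time scale hZ:
    the sum of f(t)*h over the right-scattered points t = a, a+h, ..., b-h."""
    grid = np.arange(a, b, h)
    return sum(f(t) * h for t in grid)

f = lambda t: t
a, b, h = 0.0, 2.0, 0.5
print(delta_integral_hZ(f, a, b, h))   # (0.0 + 0.5 + 1.0 + 1.5) * 0.5 = 1.5
# For comparison, on T = R the same integral is b**2/2 - a**2/2 = 2.0
```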

2.3 Linear systems on time scales

Let us consider the system of delta differential equations on a time scale \(\mathbb T \):

$$\begin{aligned} x^{\Delta }(t)=Ax(t), \end{aligned}$$
(1)

where \(x(t)\in \mathbb R ^n\) and \(A\) is a constant \(n\times n\) matrix.

Remark 1

If \(\mathbb T =\mathbb R ,\) then (1) is a system of ordinary differential equations \(x^{\prime }=Ax.\) But for \(\mathbb T =\mathbb Z ,\) (1) takes the difference form \(x(t+1)-x(t)=Ax(t),\) which can be transformed to the shift form \(x(t+1)=(I+A)x(t).\) Thus to compare the definitions and the results stated for delta differential systems in the case \(\mathbb T =\mathbb Z \) with those that were obtained for discrete-time systems in the shift form, one has to take this into account. One can easily transform the difference form to the shift form and vice versa.

Proposition 2

Equation (1) with initial condition \(x(t_0)=x_0\) has a unique forward solution defined for all \(t \in [t_0,+\infty )_\mathbb T .\)

The matrix exponential function (at \(t_0\)) for \(A\) is defined as the unique forward solution of the matrix differential equation \(X^{\Delta }=AX,\) with the initial condition \(X(t_0)=I.\) Its value at \(t\) is denoted by \(e_A(t,t_0).\)

Example 3

If \(\mathbb T =\mathbb R ,\) then \(e_A(t,t_0)=e^{A(t-t_0)}.\) If \(\mathbb T =h\mathbb Z ,\) then \(e_A(t,t_0)=(I+hA)^{(t-t_0)/h}.\) If \(\mathbb T =q^\mathbb{N },\,q>1,\) then \(e_A(q^kt_0,{t_0})=\prod _{i=0}^{k-1}(I+(q-1)q^it_0A)\) for \(k\ge 1\) and \(t_0\in \mathbb T .\)
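
On a purely discrete time scale the exponential matrix is the time-ordered product of the factors \(I+\mu (\tau )A\) over \(\tau \in [t_0,t)_\mathbb T .\) The sketch below (NumPy; the function name and the array representation of \(\mathbb T \) are ours) computes this product and confirms the formula \(e_A(t,t_0)=(I+hA)^{(t-t_0)/h}\) on \(\mathbb T =h\mathbb Z .\)

```python
import numpy as np

def exp_matrix_discrete(A, T, t0, t1):
    """e_A(t1, t0) on a purely discrete time scale T (sorted array, t0, t1 in T):
    the time-ordered product of (I + mu(tau) A) over tau in [t0, t1)."""
    A = np.asarray(A, dtype=float)
    E = np.eye(A.shape[0])
    pts = [t for t in T if t0 <= t < t1]
    for i, t in enumerate(pts):
        nxt = pts[i + 1] if i + 1 < len(pts) else t1
        E = (np.eye(A.shape[0]) + (nxt - t) * A) @ E   # later factors multiply on the left
    return E

A = np.array([[0.0, 1.0], [-0.5, 0.0]])
h = 0.25
T = np.arange(0.0, 3.0 + h, h)                   # T = hZ restricted to [0, 3]
E = exp_matrix_discrete(A, T, 0.0, 2.0)
print(np.allclose(E, np.linalg.matrix_power(np.eye(2) + h * A, 8)))   # True: (I+hA)^((t1-t0)/h)
```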

Proposition 3

The following properties hold for every \(t,s,r\in \mathbb T \) such that \(r \le s \le t\):

i) \(e_A(t,t) = I\);

ii) \(e_A(t,s)e_A(s,r)=e_A(t,r)\);

Let us consider now a nonhomogeneous system

$$\begin{aligned} x^{\Delta }(t)=Ax(t)+f(t) \end{aligned}$$
(2)

where \(f\) is rd-continuous.

Theorem 1

Let \(t_0\in \mathbb T .\) System (2) for the initial condition \(x(t_0)=x_0\) has a unique forward solution of the form

$$\begin{aligned} x(t)=e_A(t,t_0)x_0 + \int _{t_0}^t e_A(t,\sigma (\tau ))f(\tau )\Delta \tau . \end{aligned}$$
(3)
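
On a purely discrete time scale formula (3) becomes a finite sum, and it can be compared with the step-by-step forward recursion \(x(\sigma (t))=x(t)+\mu (t)(Ax(t)+f(t)).\) A sketch of such a check (NumPy; all function names and the chosen data are ours):

```python
import numpy as np

def solve_forward(A, f, x0, T):
    """Step-by-step forward solution of x^Delta = A x + f(t) on a purely
    discrete time scale T: x(sigma(t)) = x(t) + mu(t) (A x(t) + f(t))."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(x0, dtype=float)
    for t, nxt in zip(T[:-1], T[1:]):
        x = x + (nxt - t) * (A @ x + f(t))
    return x

def formula_3(A, f, x0, T):
    """Right-hand side of (3): e_A(t1,t0) x0 + sum of e_A(t1,sigma(tau)) f(tau) mu(tau)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    def eA(i, j):                      # e_A(T[j], T[i]) as a time-ordered product of (I + mu A)
        E = np.eye(n)
        for k in range(i, j):
            E = (np.eye(n) + (T[k + 1] - T[k]) * A) @ E
        return E
    x = eA(0, len(T) - 1) @ np.asarray(x0, dtype=float)
    for k in range(len(T) - 1):
        x = x + eA(k + 1, len(T) - 1) @ f(T[k]) * (T[k + 1] - T[k])
    return x

A = np.array([[-0.5, 1.0], [1.0, -0.5]])
f = lambda t: np.array([1.0, t])
T = [0.0, 1.0, 3.0, 4.0]               # a nonhomogeneous discrete time scale
x0 = np.array([1.0, 0.0])
print(np.allclose(solve_forward(A, f, x0, T), formula_3(A, f, x0, T)))   # True
```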

3 Positive control systems

Let \(n\in \mathbb N \) be fixed. From now on we shall assume that the time scale \(\mathbb T \) consists of at least \(n+1\) elements.

Let us consider a linear control system, denoted by \(\Sigma ,\) and defined on the time scale \(\mathbb T \):

$$\begin{aligned} x^{\Delta }(t)=Ax(t)+Bu(t) \end{aligned}$$
(4)

where \(t\in \mathbb T ,\,x(t)\in \mathbb R ^n\) and \(u(t)\in \mathbb R ^m.\)

We assume that the control \(u\) is a piecewise continuous function defined on some interval \([t_0,t_1)_\mathbb T \) (which may depend on \(u\)), where \(t_0\in \mathbb T \) and \(t_1\in \mathbb T \) or \(t_1=\infty .\) We shall assume that at each point \(t\in [t_0,t_1)_\mathbb T \) at which \(u\) is not continuous, \(u\) is right-continuous and has a finite left-sided limit if \(t\) is left-dense. This allows us to solve (4) step by step. Moreover, for a finite \(t_1\) we can always evaluate \(x(t_1).\) If \(t_1\) is left-scattered, we do not need the value of \(u\) at \(t_1,\) and if \(t_1\) is left-dense, we just take the limit of \(x(t)\) at \(t_1.\)

Definition 1

We say that system \(\Sigma \) is positive if for any \(t_0\in \mathbb T ,\) any initial condition \(x_0\in \mathbb R ^n_+,\) any control \(u : [t_0,t_1)_\mathbb T \rightarrow \mathbb R ^m_+\) and any \(t\in [t_0,t_1]_\mathbb T ,\) the solution \(x\) of (4) satisfies \(x(t)\in \mathbb R ^n_+.\)

By the separation principle we have the following characterization.

Proposition 4

The system \(\Sigma \) is positive if and only if \(e_A(t,t_0)\in \mathbb R ^{n\times n}_+\) for every \(t,t_0\in \mathbb T \) such that \(t\ge t_0,\) and \(B\in \mathbb R ^{n\times m}_+.\)

The proof is very similar to the proof in the continuous-time case.

To state criteria of nonnegativity of the exponential matrix, let \(\bar{\mu }=\sup \{ \mu (t) : t\in \mathbb T \}\) and \(A_\mathbb T := A+I/\bar{\mu },\) where \(I/\infty \) means the zero \(n\times n\) matrix and \(I/0\) is a diagonal matrix with \(\infty \) on the diagonal. Thus for \(\mathbb T =\mathbb R ,\,A_\mathbb T \) is obtained from \(A\) by replacing the elements on the diagonal by \(\infty ,\) for \(\mathbb T =\mathbb Z ,\,A_\mathbb T =A+I,\) and for \(\mathbb T =q^\mathbb N ,\,A_\mathbb T =A.\)

The following theorem unifies different criteria of nonnegativity of the exponential matrix for discrete- and continuous-time systems into one statement, in which, besides the matrix \(A,\) the graininess of the time scale is involved.

Theorem 2

The exponential matrix \(e_A(t,t_0)\) is nonnegative for every \(t,t_0\in \mathbb T \) such that \(t\ge t_0\) if and only if \(A_\mathbb T \in \bar{\mathbb{R }}^{n\times n}_+.\)

Proof

“\(\Leftarrow \)” Assume that \(A_\mathbb T \ge 0.\) If \(\mu (t_0)>0,\) then \(A+I/\mu (t_0)\ge 0.\) This means that \(e_A(\sigma (t_0),t_0)=\mu (t_0)A+I \ge 0.\) If \(\mu (t_0)=0,\) then for \(t>t_0\) close to \(t_0\) we have \(I+A(t-t_0)>0,\) and this matrix approximates \(e_A(t,t_0).\) Since the exponential matrix is continuous with respect to \(t,\) also \(e_A(t,t_0)>0\) for \(t>t_0\) close to \(t_0.\) To obtain nonnegativity of \(e_A(t,t_0)\) for all \(t\in \mathbb T ,\,t>t_0,\) we use the semigroup property of the exponential matrix: \(e_A(t,s)e_A(s,\tau )=e_A(t,\tau )\) for \(\tau <s<t\) and \(\tau ,s,t\in \mathbb T .\)

“\(\Rightarrow \)” Assume that \(e_A(t,t_0)\) is nonnegative for all \(t,t_0\in \mathbb T \) such that \(t\ge t_0.\) Suppose first that \(\bar{\mu }>0\) and choose \(t_0\in \mathbb T \) with \(\mu (t_0)>0.\) Then \(e_A(\sigma (t_0),t_0)=I+\mu (t_0)A\ge 0.\) This means that also \(A+I/\mu (t_0)\ge 0.\) As this holds for all \(t_0\in \mathbb T \) with \(\mu (t_0)>0,\) the matrix \(A_\mathbb T =A+I/\bar{\mu }\) is nonnegative. If \(\bar{\mu }=0,\) then \(\mathbb T \) is an ordinary interval (possibly unbounded). The exponential matrix is then the standard \(e^{A(t-t_0)}.\) For \(t\) close to \(t_0,\) it may be approximated by \(I+A(t-t_0).\) Nonnegativity of the exponential matrix implies that \(I+A(t-t_0)>0\) for \(t>t_0\) close to \(t_0.\) This holds only if all elements of \(A\) outside the diagonal are nonnegative. Thus again \(A_\mathbb T \) is nonnegative.

Corollary 1

The system \(\Sigma \) is positive if and only if \(A_\mathbb T \in \bar{\mathbb{R }}^{n\times n}_+\) and \(B\in \mathbb R ^{n\times m}_+.\)

Remark 2

An \(n\times n\) matrix with nonnegative elements outside the diagonal is called a Metzler matrix. Thus in the continuous-time case, the exponential matrix \(e_A(t,t_0)\) is nonnegative for every \(t>t_0\) if and only if \(A\) is a Metzler matrix. In that case the elements on the diagonal may be arbitrary. On the other hand, if the time scale \(\mathbb T \) is the set \(\mathbb Z \) of integers, then \(\mu \equiv 1\) and nonnegativity of the exponential matrix is equivalent to \(A+I\ge 0.\) In that case the delta differential equation \(x^\Delta (k)=Ax(k)\) may be rewritten in the shift form as \(x(k+1)=(A+I)x(k).\) Thus the condition \(A+I\ge 0\) agrees with the necessary and sufficient condition of nonnegativity for discrete-time systems of the form \(x(k+1)=Fx(k),\) where \(k\in \mathbb Z \) (see [12, 17]).
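
Corollary 1 can be turned into a simple numerical test once \(\bar{\mu }\) of the time scale is known. The sketch below (NumPy; the helper name is ours, and the three calls correspond to \(\bar{\mu }=0,\) a finite positive \(\bar{\mu },\) and \(\bar{\mu }=\infty \)):

```python
import numpy as np

def is_positive_system(A, B, mu_bar):
    """Corollary 1 as a numerical test (a sketch): x^Delta = Ax + Bu is positive
    iff B >= 0 and A_T = A + I/mu_bar >= 0, where the diagonal of A is
    unconstrained when mu_bar = 0 (continuous time) and I/mu_bar = 0 when mu_bar = inf."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    off_diag_ok = np.all(A - np.diag(np.diag(A)) >= 0)
    if mu_bar == 0:                       # T = R or an interval: Metzler condition
        diag_ok = True
    elif np.isinf(mu_bar):                # unbounded graininess, e.g. T = q^N
        diag_ok = np.all(np.diag(A) >= 0)
    else:                                 # e.g. T = mu_bar * Z
        diag_ok = np.all(np.diag(A) >= -1.0 / mu_bar)
    return bool(off_diag_ok and diag_ok and np.all(B >= 0))

A = np.array([[-0.5, 1.0], [1.0, -0.5]])
B = np.array([[1.0], [0.0]])
print(is_positive_system(A, B, 0.0))      # True: A is Metzler
print(is_positive_system(A, B, 1.0))      # True: A + I >= 0
print(is_positive_system(A, B, np.inf))   # False: A has negative diagonal entries
```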

4 Controllability

If \(\Sigma \) is a positive system, then for a nonnegative initial condition \(x_0\) and a nonnegative control \(u,\) the trajectory \(x\) stays in \(\mathbb R ^n_+.\) One may be interested in properties of the reachable sets of the system. For simplicity we assume that the initial condition is \(x_0=0.\) Let \(x(t_1,t_0,0,u)\) denote the value at time \(t_1\) of the trajectory of the system corresponding to the initial condition \(x(t_0)=0\) and the control \(u.\) We shall define various controllability properties.

Definition 2

Let \(t_0,t_1\in \mathbb T ,\,t_0<t_1.\) The positive reachable set (from \(0\)) of the positive system \(\Sigma \) on the interval \([t_0,t_1]_\mathbb T \) is the set \(\mathcal R _+^{[t_0,t_1]}\) consisting of all \(x(t_1,t_0,0,u),\) where \(u\) is a nonnegative control on \([t_0,t_1)_\mathbb T .\)

The positive reachable set (from \(0\)) for the initial time \(t_0\) of \(\Sigma \) is

$$\begin{aligned} \mathcal R _+^{t_0}=\bigcup _{t_1\in \mathbb T , t_1>t_0} \mathcal R _+^{[t_0,t_1]} \end{aligned}$$

and the positive reachable set (from \(0\)) of \(\Sigma \) is

$$\begin{aligned} \mathcal R _+=\bigcup _{t_0\in \mathbb T } \mathcal R _+^{t_0}. \end{aligned}$$

The positive system \(\Sigma \) is positively accessible on \([t_0,t_1]_\mathbb T \) if \(\mathcal R _+^{[t_0,t_1]}\) has a nonempty interior, \(\Sigma \) is positively accessible for the initial time \(t_0\) if \(\mathcal R _+^{t_0}\) has a nonempty interior and \(\Sigma \) is positively accessible if \(\mathcal R _+\) has a nonempty interior.

The positive system \(\Sigma \) is positively reachable on \([t_0,t_1]_\mathbb T \) if \(\mathcal R _+^{[t_0,t_1]}=\mathbb R ^n_+,\,\Sigma \) is positively reachable for the initial time \(t_0\) if \(\mathcal R _+^{t_0}=\mathbb R ^n_+\) and \(\Sigma \) is positively reachable if \(\mathcal R _+=\mathbb R ^n_+.\)

Remark 3

Accessibility was first introduced for nonlinear systems, for which it is a good substitute for reachability, as the latter is often too restrictive a property. The same happens for positive systems. Positive accessibility means precisely accessibility, but with nonnegative controls.

The following implications follow directly from the definitions:

Proposition 5

Let \(\Sigma \) be a positive system.

\(\Sigma \) is positively accessible on \([t_0,t_1]_\mathbb T \,\Rightarrow \,\Sigma \) is positively accessible for the initial time \(t_0\,\Rightarrow \,\Sigma \) is positively accessible.

\(\Sigma \) is positively reachable on \([t_0,t_1]_\mathbb T \,\Rightarrow \,\Sigma \) is positively reachable for the initial time \(t_0\,\Rightarrow \,\Sigma \) is positively reachable.

Positive reachability (on \([t_0,t_1]_\mathbb T \)) implies positive accessibility (on \([t_0,t_1]_\mathbb T \)).

We have also a useful inclusion:

Proposition 6

If \(\tau _0<t_0<t_1,\) then \(\mathcal R _+^{[t_0,t_1]}\subseteq \mathcal R _+^{[\tau _0,t_1]}\) and \(\mathcal R _+^{t_0} \subseteq \mathcal R _+^{\tau _0}.\)

Proof

Since we start at \(x_0=0,\) to reach \(x_1\in \mathcal R _+^{[t_0,t_1]}\) for the initial time \(\tau _0\) put \(u(\tau )=0\) for \(\tau \in [\tau _0,t_0)_\mathbb T .\) Then switch to the control that was used to reach \(x_1\) for the initial time \(t_0.\)

Remark 4

Since \(\mathbb T \) may not be homogeneous, in general \(\mathcal R _+^{t_0}\) depends on \(t_0.\) Let, for example, \(\mathbb T =\mathbb Z _+\!\setminus \!\{2\},\)

$$\begin{aligned} A=\left( \begin{array}{rr} -\frac{1}{2}&1 \\ 1&-\frac{1}{2} \end{array}\right)\quad \text{ and} \quad B=\left( \begin{array}{l} 1 \\ 0 \end{array}\right). \end{aligned}$$

It is easy to construct nonnegative controls that allow us to reach \(e_1\) and \(e_2\) on the interval \([0,3]_\mathbb T .\) Take \(u(0)=0\) and \(u(1)=\frac{1}{2}\) for \(e_1,\) and \(u(0)=\frac{1}{2}\) and \(u(1)=0\) for \(e_2.\) Then the positive reachable set on \([0,3]_\mathbb T \) is the entire \(\mathbb R ^2_+,\) so the system is positively reachable for the initial time \(0.\) But when we start at \(t_0=1,\) we cannot reach \(e_2\) in finite time using nonnegative controls. Thus the reachable set becomes smaller. Exactly the same happens when we consider the reachable set on \([0,k]_\mathbb T \) for \(k\ge 4\): \(e_2\) cannot be reached on that interval. The situation is counterintuitive: the reachable set may shrink when we extend the final time \(t_1.\) This of course cannot happen on homogeneous time scales.
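
The controls used in this remark can be verified by a direct forward simulation of the delta recursion (a sketch; the helper name reach and the truncation of the time scale at \(t_1=3\) are ours):

```python
import numpy as np

def reach(A, B, T, u_values):
    """Forward simulation of x^Delta = Ax + Bu from x(T[0]) = 0 on a purely
    discrete time scale T; u_values[i] is the control applied at T[i]."""
    x = np.zeros(A.shape[0])
    for i in range(len(T) - 1):
        mu = T[i + 1] - T[i]
        x = x + mu * (A @ x + B @ np.atleast_1d(u_values[i]))
    return x

A = np.array([[-0.5, 1.0], [1.0, -0.5]])
B = np.array([[1.0], [0.0]])
T = [0.0, 1.0, 3.0]                           # Z_+ \ {2}, truncated at t1 = 3
print(reach(A, B, T, [0.0, 0.5]))             # [1. 0.] -> e_1 is reached
print(reach(A, B, T, [0.5, 0.0]))             # [0. 1.] -> e_2 is reached
```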

Proposition 7

The positive reachable sets \(\mathcal R _+^{[t_0,t_1]},\,\mathcal R _+^{t_0}\) and \(\mathcal R _+\) of a positive system \(\Sigma \) are positive cones contained in \(\mathbb R ^n_+.\)

Proof

We will show this for \(\mathcal R _+.\) For the other reachable sets the proof is similar. Let \(x\in \mathcal R _+.\) This means that there are \(t_0,t_1\in \mathbb T \) with \(t_1>t_0\) and a nonnegative control \(u: [t_0,t_1)_\mathbb T \rightarrow \mathbb R ^m_+\) such that

$$\begin{aligned} x=\int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))Bu(\tau )\Delta \tau . \end{aligned}$$

Let \(\alpha >0.\) Then \(\alpha x\) also belongs to \(\mathcal R _+.\) It is enough to use the control \(v(t)=\alpha u(t),\) defined on the same interval \([t_0,t_1)_\mathbb T .\)

The following characterization has been known for discrete- and continuous-time systems (see e.g., [17]). It is extended now to arbitrary time scales.

Theorem 3

Let \(\Sigma \) be a positive system and \(t_0,t_1\) be elements of \(\mathbb T \) such that \([t_0,t_1]_\mathbb T \) consists of at least \(n+1\) elements. The following conditions are equivalent:

a) \(\Sigma \) is positively accessible on \([t_0,t_1]_\mathbb T ,\)

b) \(\Sigma \) is positively accessible for the initial time \(t_0,\)

c) \(\Sigma \) is positively accessible,

d) \(\mathrm{rank}(B,AB,\ldots ,A^{n-1}B)=n.\)

Proof

a) \(\Rightarrow \) b). This follows from the fact that \(\mathcal R _+^{[t_0,t_1]}\) is contained in \(\mathcal R _+^{t_0}.\)

b) \(\Rightarrow \) c). This follows from the fact that \(\mathcal R _+^{t_0}\) is contained in \(\mathcal R _+.\)

c) \(\Rightarrow \) d). Assume that \(\Sigma \) is positively accessible. The reachable set \(\mathcal R \) of \(\Sigma \) is the set of states that can be reached with controls that are not necessarily nonnegative. It is clear that \(\mathcal R _+\) is contained in \(\mathcal R .\) Moreover, \(\mathcal R \) is a linear subspace of \(\mathbb R ^n\) (see [3]). Positive accessibility implies that \(\mathcal R \) contains an open subset. Therefore \(\mathcal R \) must be equal to \(\mathbb R ^n,\) which means that \(\Sigma \) is reachable (controllable) from \(0.\) This is characterized by the condition \(\mathrm{rank}(B,AB,\ldots ,A^{n-1}B)=n\) (see [3]).

d) \(\Rightarrow \) a). The condition \(\mathrm{rank}(B,AB,\ldots ,A^{n-1}B)=n\) implies reachability of \(\Sigma \) from \(0\) on an arbitrary interval consisting of at least \(n+1\) points (see [3]). Actually the states can be obtained with the aid of piecewise constant controls with at most \(n-1\) switchings at fixed instants. Thus the reachable set \(\mathcal R ^{[t_0,t_1]}\) on \([t_0,t_1]_\mathbb T \) can be described as the image of a linear map defined on a finite-dimensional space. This map restricted to nonnegative controls gives a set with nonempty interior, which means positive accessibility.

Example 4

Let \(\mathbb T =\mathbb Z ,\,A= \left(\begin{array}{ll} 0&0 \\ 1&0 \end{array}\right)\) and \(B= \left(\begin{array}{l} 1 \\ 0 \end{array}\right).\) The system \(x^\Delta =Ax+Bu\) is positive and \(\mathrm{rank}[B,AB]=2,\) so the system is positively accessible. Assume that we start at \(t_0=0\) from \(x_0=0.\) Observe that the positive reachable set on the interval \([0,1]_\mathbb T \) is the cone generated by \(B,\) and the positive reachable set on the interval \([0,k]_\mathbb T \) is the convex cone generated by \(B\) and \(\left(\begin{array}{l} 1 \\ k-1 \end{array}\right).\) Thus the reachable set grows as \(k\) grows, but it does not contain the point \(\left(\begin{array}{l} 0 \\ 1 \end{array}\right).\) This means that the system is not positively reachable.

To study positive reachability let us introduce a modified Gram matrix related to the control system.

Definition 3

Let \(M\subseteq \{1,\ldots ,m\}\) and \(t_0,t_1\in \mathbb T ,\,t_0<t_1.\) For each \(k\in M\) let \(S_k\) be a subset of \([t_0,t_1)_\mathbb T \) that is a union of finitely many disjoint intervals of \(\mathbb T \) of the form \([\tau _0,\tau _1)_\mathbb T ,\) and let \(\mathcal S _M=\{S_k : k\in M\}.\) Denote by \(b_k\) the \(k\)th column of \(B.\) By the Gram matrix of system (4) corresponding to \(t_0,\,t_1,\,M\) and \(\mathcal S _M\) we mean the matrix

$$\begin{aligned} W:=W_{t_0}^{t_1}(M,\mathcal S _M):=\sum _{k\in M} \int _{S_k} e_A(t_1,\sigma (\tau ))b_kb_k^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau . \end{aligned}$$
(5)
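
On a purely discrete time scale the integral in (5) is a finite sum, so the modified Gram matrix can be assembled directly. The following sketch (NumPy; the function name, the 0-based column indexing and the representation of \(\mathcal S _M\) as sets of points are ours) computes \(W\) for the system of Remark 4 on \(\{0,1,3\}\) and obtains a monomial matrix, consistent with the positive reachability observed there.

```python
import numpy as np

def gram_matrix(A, B, T, M, S):
    """Modified Gram matrix (5) on a purely discrete time scale T (sorted list),
    with t0 = T[0], t1 = T[-1]; M is a list of column indices (0-based) and
    S[k] is the set of points of [t0, t1) assigned to column k."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    n = A.shape[0]
    def eA(t_from, t_to):                       # e_A(t_to, t_from), time-ordered product
        E = np.eye(n)
        for t, nxt in zip(T, T[1:]):
            if t_from <= t < t_to:
                E = (np.eye(n) + (nxt - t) * A) @ E
        return E
    W = np.zeros((n, n))
    for k in M:
        for i, t in enumerate(T[:-1]):
            if t in S[k]:
                mu = T[i + 1] - t
                c = eA(T[i + 1], T[-1]) @ B[:, k]   # e_A(t1, sigma(t)) b_k
                W += mu * np.outer(c, c)
    return W

# The system of Remark 4 on T = {0, 1, 3}: one input column, S_1 = {0, 1}
A = np.array([[-0.5, 1.0], [1.0, -0.5]])
B = np.array([[1.0], [0.0]])
W = gram_matrix(A, B, [0.0, 1.0, 3.0], M=[0], S={0: {0.0, 1.0}})
print(W)            # diagonal with positive entries, hence monomial
```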

The following theorem is the main result of the paper. It shows that a modified Gram matrix is a key tool for characterization of positive reachability. Moreover, this characterization holds for all time scales.

Theorem 4

Let \(t_0,t_1\in \mathbb T ,\,t_0<t_1.\) Positive system (4) is positively reachable on \([t_0,t_1]_\mathbb T \) if and only if there are \(M\subseteq \{1,\ldots ,m\}\) and a family \(\mathcal S _M=\{S_k : k\in M\}\) of subsets of \([t_0,t_1)_\mathbb T \) such that the matrix \(W=W_{t_0}^{t_1}(M,\mathcal S _M)\) is monomial.

Proof

“\(\Leftarrow \)” Let \(\bar{x}\in \mathbb R ^n_+.\) By \(\tilde{e}_1,\ldots ,\tilde{e}_m\) we denote the vectors of the standard basis in \(\mathbb R ^m.\) By Proposition 1 the matrix \(W\) is invertible and its inverse \(W^{-1}\) is positive. Define the control \(u:[t_0,t_1)_\mathbb T \rightarrow \mathbb R ^m_+\) by \(u(\tau )=\sum _{k\in M} u_k(\tau )\tilde{e}_k,\) where \(u_k(\tau )=b_k^Te_A(t_1,\sigma (\tau ))^TW^{-1}\bar{x}\) for \(\tau \in S_k\) and \(u_k(\tau )=0\) for \(\tau \notin S_k.\) The control \(u\) is nonnegative and

$$\begin{aligned} x(t_1)&= \int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))Bu(\tau )\Delta \tau = \sum _{k\in M} \int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))b_ku_k(\tau )\Delta \tau \\&= \sum _{k\in M} \int _{S_k} e_A(t_1,\sigma (\tau ))b_kb_k^Te_A(t_1,\sigma (\tau ))^TW^{-1}\bar{x}\Delta \tau = \bar{x}. \end{aligned}$$

Thus (4) is positively reachable on \([t_0,t_1]_\mathbb T .\)

“\(\Rightarrow \)” Positive reachability implies that all the vectors \(e_1,\ldots ,e_n\) can be reached using nonnegative controls. Let us fix some \(e_i.\) Then there is a piecewise continuous nonnegative control \(u=(u_1,\ldots ,u_m)\) on \([t_0,t_1)_\mathbb T \) such that

$$\begin{aligned} e_i=\sum _{j=1}^m \int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))b_ju_j(\tau )\Delta \tau . \end{aligned}$$

Since all the integrals in the sum are nonnegative, for some \(k_i\) the integral

$$\begin{aligned} \int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))b_{k_i}u_{k_i}(\tau )\Delta \tau \end{aligned}$$

is an \(i\)-monomial vector. Then for every \(\tau \in [t_0,t_1)_\mathbb T \) the vector \(e_A(t_1,\sigma (\tau ))b_{k_i}u_{k_i}(\tau )\) is either \(i\)-monomial or \(0.\) Let \(T_i\) be the set of all \(\tau \) for which \(e_A(t_1,\sigma (\tau ))b_{k_i}u_{k_i}(\tau )\) is \(i\)-monomial. Then for \(\tau \in T_i\) the matrix

$$\begin{aligned} e_A(t_1,\sigma (\tau ))b_{k_i}b_{k_i}^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \end{aligned}$$

is diagonal with the only nonzero element at the \(i\)th place. The same is true for the matrix \(\int _{T_i}e_A(t_1,\sigma (\tau ))b_{k_i}b_{k_i}^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau .\) This implies that the matrix

$$\begin{aligned} C:=\sum _{i=1}^n \int _{T_i}e_A(t_1,\sigma (\tau ))b_{k_i}b_{k_i}^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau \end{aligned}$$

is monomial (and diagonal). Let \(M\) consist of all \(k_i\) for \(i=1,\ldots ,n.\) Observe that if \(k_i=k_j\) for \(i\ne j,\) then \(T_i\cap T_j=\emptyset .\) Define \(S_k=\bigcup _{k_i=k} T_i\) and let \(\mathcal S _M=\{S_k : k\in M\}.\) Then

$$\begin{aligned} C=\sum _{k\in M} \int _{S_k}e_A(t_1,\sigma (\tau ))b_{k}b_{k}^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau =W_{t_0}^{t_1}(M,\mathcal S _M), \end{aligned}$$

so \(W_{t_0}^{t_1}(M,\mathcal S _M)\) is monomial.

Corollary 2

If the ordinary Gram matrix

$$\begin{aligned} W_{t_0}^{t_1}=\int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))BB^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau \end{aligned}$$

is monomial, then positive system (4) is positively reachable on \([t_0,t_1]_\mathbb T .\)

Proof

Observe that \(W_{t_0}^{t_1}=W_{t_0}^{t_1}(M,\mathcal S _M)\) for \(M=\{1,\ldots ,m\}\) and \(S_k=[t_0,t_1)_\mathbb T \) for all \(k\in M.\) Thus positive reachability follows from Theorem 4.

Remark 5

The condition that \(W_{t_0}^{t_1}\) is monomial is not necessary for positive reachability on \([t_0,t_1]_\mathbb T .\) Consider the system

$$\begin{aligned} x^\Delta = \left( \begin{array}{ll} -1&1 \\ 1&0 \end{array}\right) x + \left(\begin{array}{ll} 1&1 \\ 0&1 \end{array}\right)u \end{aligned}$$
(6)

on \(\mathbb T =\mathbb Z .\) Choose \(t_0=0\) and \(t_1=2.\) System (6) is positively reachable on \([t_0,t_1]_\mathbb T .\) Indeed, let \(M=\{1\}\) and \(S_1=[0,2)_\mathbb T .\) Then

$$\begin{aligned} W&= b_1b_1^\mathrm{T} + (I+A)b_1b_1^\mathrm{T}(I+A)^\mathrm{T}\\&= \left(\begin{array}{l} 1 \\ 0 \end{array}\right) (1 , 0)+ \left(\begin{array}{ll} 0&1 \\ 1&1 \end{array}\right) \left(\begin{array}{l} 1 \\ 0 \end{array}\right) (1 , 0) \left(\begin{array}{ll} 0&1 \\ 1&1 \end{array}\right)= \left(\begin{array}{ll} 1&0 \\ 0&1 \end{array}\right) \end{aligned}$$

is a monomial matrix. However,

$$\begin{aligned} W_{t_0}^{t_1}=BB^\mathrm{T} + (I+A)BB^\mathrm{T}(I+A)^\mathrm{T}= \left(\begin{array}{ll} 3&3 \\ 3&6 \end{array}\right) \end{aligned}$$

is not monomial.
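
The two matrices of this remark are easy to reproduce numerically (a sketch; NumPy is assumed):

```python
import numpy as np

# Remark 5 revisited numerically: on T = Z with t0 = 0, t1 = 2 the integrals in (5)
# become sums over tau in {0, 1}, with e_A(2, sigma(0)) = I + A and e_A(2, sigma(1)) = I.
A = np.array([[-1.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
IA = np.eye(2) + A

b1 = B[:, 0]
W_selected = np.outer(b1, b1) + np.outer(IA @ b1, IA @ b1)        # M = {1}, S_1 = [0,2)
W_full     = B @ B.T + IA @ B @ B.T @ IA.T                        # ordinary Gram matrix

print(W_selected)   # [[1. 0.] [0. 1.]] -- monomial, so (6) is positively reachable
print(W_full)       # [[3. 3.] [3. 6.]] -- not monomial
```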

Corollary 3

If there exists \(M\subseteq \{1,\ldots ,m\}\) such that the matrix \({W}_{t_0}^{t_1}(M)= \int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))\tilde{B}\tilde{B}^\mathrm{T}e_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau \) is monomial, where \(\tilde{B}\) is the submatrix of \(B\) consisting of the columns \(b_k,\,k\in M,\) then positive system (4) is positively reachable on \([t_0,t_1]_\mathbb T .\)

Proof

Observe that \({W}_{t_0}^{t_1}(M)=W_{t_0}^{t_1}(M,\mathcal S _M)\) where \(S_k=[t_0,t_1)_\mathbb T \) for all \(k\in M.\) Thus positive reachability follows from Theorem 4.

Remark 6

The condition that \({W}_{t_0}^{t_1}(M)\) is monomial is not necessary for positive reachability on \([t_0,t_1]_\mathbb T .\) Let the time scale be \(\mathbb T =\{0\}\cup [1,2]\cup \{3\}.\) Consider the system

$$\begin{aligned} x^\Delta = \left(\begin{array}{ll} -1&0 \\ 1&-1 \end{array}\right)x + \left(\begin{array}{l} 1 \\ 0 \end{array}\right)u. \end{aligned}$$
(7)

The system is positively reachable on \([0,3]_\mathbb T .\) Indeed, let \(M=\{1\}\) and let \(S_1=[0,1)_\mathbb T \cup [2,3)_\mathbb T .\) Then

$$\begin{aligned} W&= \int _{[0,1)_\mathbb T } e_A(3,\sigma (\tau ))BB^Te_A(3,\sigma (\tau ))^\mathrm{T} \Delta \tau \\&+ \int _{[2,3)_\mathbb T } e_A(3,\sigma (\tau ))BB^Te_A(3,\sigma (\tau ))^\mathrm{T} \Delta \tau \\&= \left(\begin{array}{ll} 0&0 \\ 0&e^{-2} \end{array}\right) + \left(\begin{array}{ll} 1&0 \\ 0&0 \end{array}\right)= \left(\begin{array}{ll} 1&0 \\ 0&e^{-2} \end{array}\right) \end{aligned}$$

is monomial. Observe that we remove here the points \(t\) with \(\mu (t)=0.\) This is essential in order to get a monomial matrix. To calculate the full Gram matrix we have to add to \(W\) the following matrix

$$\begin{aligned} \int _{[1,2)} e_A(3,\sigma (\tau ))BB^Te_A(3,\sigma (\tau ))^\mathrm{T} d\tau . \end{aligned}$$

Its off-diagonal elements are equal to \(\int _1^2 (3-\tau )e^{-2(3-\tau )}d\tau .\) Since they are positive, \({W}_{t_0}^{t_1}(M)\) is not monomial.

From the general characterization of positive reachability presented in Theorem 4 we can deduce more concrete results for particular time scales. For \(\mathbb T =\mathbb R \) we get very restrictive conditions for positive reachability. The following result was first obtained in [9]. We give a different proof of this fact, based on the Gram matrix criterion.

Proposition 8

Let \(\mathbb T =\mathbb R \) and \(t_0,t_1\in \mathbb R ,\,t_0<t_1.\) Positive system (4) is positively reachable on \([t_0,t_1]\) if and only if \(A\) is diagonal and \(B\) contains an \(n\times n\) monomial submatrix (so \(m\ge n\)).

Proof

“\(\Leftarrow \)” Let \(\tilde{B}\) denote the monomial submatrix of \(B\) and let the indices of the columns of \(\tilde{B}\) form the set \(M.\) Then \(\tilde{B}\tilde{B}^\mathrm{T}\) is a diagonal matrix with all diagonal elements positive, and so is

$$\begin{aligned} W_{t_0}^{t_1}(M)=\int _{t_0}^{t_1} e_A(t_1,\sigma (\tau ))\tilde{B}\tilde{B}^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau . \end{aligned}$$

Thus \(W_{t_0}^{t_1}(M)\) is monomial, so system (4) is positively reachable by Corollary 3. Observe that the proof of this implication works for all time scales.

“\(\Rightarrow \)” Assume that the system is positively reachable on \([t_0,t_1].\) From Theorem 4 it follows that for some set \(M\) and some family \(\mathcal S _M\) the Gram matrix \(W=W_{t_0}^{t_1}(M,\mathcal S _M)\) is monomial. Let the \(j\)th column of \(W\) be \(i\)-monomial. Then for some \(k\in M\) and for \(\tau \) from some subinterval of \([t_0,t_1)\) the \(j\)th column of the matrix \(e^{A(t_1-\tau )}b_kb_k^\mathrm{T}(e^{A(t_1-\tau )})^\mathrm{T}\) is \(i\)-monomial. Let \(c(\tau )=e^{A(t_1-\tau )}b_k.\) Since the \(j\)th column of the matrix \(c(\tau )c(\tau )^\mathrm{T}\) is \(i\)-monomial, \(c(\tau )\) must be \(i\)-monomial and in fact \(i=j.\) This means that at least one column of \(e^{A(t_1-\tau )}\) must be \(i\)-monomial. As the exponential matrix is invertible, such a column must be unique. This implies that \(b_k\) is monomial. Moreover, the \(i\)-monomial column of \(e^{A(t_1-\tau )}\) must be its \(i\)th column. Otherwise we would get \(0\) on the diagonal of the exponential matrix for all \(\tau \) from some interval, which is impossible. Thus \(e^{A(t_1-\tau )}\) is diagonal on some interval, which means that \(A\) is diagonal. Now, to get all \(n\) monomial columns in \(W\) we need \(n\) different monomial columns \(b_k.\) Thus \(B\) contains an \(n\times n\) monomial submatrix.

Remark 7

The statement of Proposition 8 holds also for \(\mathbb T =[a,b]\) and for \(\mathbb T \) being a closed half-line. However, it does not hold for a disjoint union of closed intervals. The example given in Remark 6 may be considered on a bigger time scale: \(\mathbb T =[-1,0]\cup [1,2]\cup [3,4].\) Neither \(A\) nor \(B\) satisfies the requirement given in Proposition 8. However, the system is positively reachable on \([0,3]_\mathbb T .\)

Corollary 4

For \(\mathbb T =\mathbb R ,\) positive system (4) is positively reachable on some \([t_0,t_1]\) if and only if (4) is positively reachable on any interval \([\tau _0,\tau _1].\)

Corollary 5

For \(\mathbb T =\mathbb R ,\) positive system (4) is positively reachable on some \([t_0,t_1]\) if and only if (4) is positively reachable.

For discrete homogeneous time scales the conditions for positive reachability are much less restrictive.

Proposition 9

Let \(\mathbb T =\mu \mathbb Z \) for a constant \(\mu >0.\) Let \(t_0\in \mathbb T \) and \(t_1=t_0+k\mu \) for some \(k\in \mathbb N .\) System (4) is positively reachable on \([t_0,t_1]_\mathbb T \) if and only if the matrix \([B,(I+\mu A)B,\ldots ,(I+\mu A)^{k-1}B]\) contains a monomial submatrix.

Proof

“\(\Leftarrow \)” Observe that \(x(t_1)=\sum _{i=0}^{k-1}\sum _{j=1}^m \mu (I+\mu A)^ib_ju_j(t_0+(k-1-i)\mu ).\) If \((I+\mu A)^ib_j=\gamma e_s\) for some \(\gamma >0,\) then setting \(u_j(t_0+(k-1-i)\mu )=1/(\gamma \mu )\) and putting all other components and values at other times to \(0,\) we get \(x(t_1)=e_s.\) This means positive reachability on \([t_0,t_1]_\mathbb T .\)

“\(\Rightarrow \)” By Theorem 4 positive reachability implies the existence of a set \(M\) and subsets \(S_k\) of \([t_0,t_1)_\mathbb T \) for \(k\in M\) such that the matrix

$$\begin{aligned} W=\sum _{k\in M} \int _{S_k} e_A(t_1,\sigma (\tau ))b_kb_k^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau \end{aligned}$$

is monomial. Moreover

$$\begin{aligned}&\int _{S_k} e_A(t_1,\sigma (\tau ))b_kb_k^Te_A(t_1,\sigma (\tau ))^\mathrm{T} \Delta \tau \\&\quad =\sum _{t\in {S}_k} (I+\mu A)^{(t_1-t-\mu )/\mu }b_kb_k^\mathrm{T}((I+\mu A)^{(t_1-t-\mu )/\mu })^\mathrm{T}\mu . \end{aligned}$$

This implies that for every \(i=1,\ldots ,n\) there are \(k\in M,\,t\in S_k\) and \(1\le j\le n\) such that the \(j\)th column of \((I+\mu A)^{(t_1-t-\mu )/\mu }b_kb_k^\mathrm{T}((I+\mu A)^{(t_1-t-\mu )/\mu })^\mathrm{T}\) is \(i\)-monomial. This means that the column \((I+\mu A)^{(t_1-t-\mu )/\mu }b_k\) is \(i\)-monomial. But this column is one of the columns of the matrix \([B,(I+\mu A)B,\ldots ,(I+\mu A)^{k-1}B].\)

If \(k>n\) then it is enough to consider powers of \(I+\mu A\) only up to \(n-1.\)

Proposition 10

Let \(\mathbb T =\mu \mathbb Z ,\,k\ge n,\,t_0\in \mathbb T \) and \(t_1=t_0+k\mu .\) Positive system (4) is positively reachable on \([t_0,t_1]_\mathbb T \) if and only if the matrix \([B,(I+\mu A)B,\ldots ,(I+\mu A)^{n-1}B]\) contains a monomial submatrix.

For \(\mu =1\) this was shown in [11]. Then (4) may be rewritten in a more familiar form \(x(t+1)=(I+A)x(t)+Bu(t).\) The proof for \(\mu \ne 1\) is very similar.
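
For \(\mathbb T =\mu \mathbb Z \) the criterion of Proposition 10 is a finite matrix test. A sketch (NumPy; the helper names are ours) applied to the example following Theorem 3, which was shown above to be positively accessible but not positively reachable:

```python
import numpy as np

def has_monomial_submatrix(R, tol=1e-12):
    """Check whether the nonnegative matrix R contains an n x n monomial submatrix,
    i.e. for every i = 1..n some column of R is i-monomial."""
    R = np.asarray(R, dtype=float)
    covered = set()
    for col in R.T:
        pos = np.flatnonzero(col > tol)
        if len(pos) == 1:
            covered.add(int(pos[0]))
    return len(covered) == R.shape[0]

def positive_reach_matrix(A, B, mu, n):
    """[B, (I + mu*A)B, ..., (I + mu*A)^(n-1) B] from Proposition 10."""
    IA = np.eye(n) + mu * np.asarray(A, float)
    blocks, P = [], np.asarray(B, float)
    for _ in range(n):
        blocks.append(P)
        P = IA @ P
    return np.hstack(blocks)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
R = positive_reach_matrix(A, B, mu=1.0, n=2)
print(R)                          # [[1. 1.] [0. 1.]]
print(has_monomial_submatrix(R))  # False -- consistent with the example above
```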

Proposition 9 may be extended to nonhomogeneous discrete-time scales.

Proposition 11

Assume that \(\mu (t)>0\) for all \(t\in \mathbb T ,\,t_0\in \mathbb T \) and \(t_1=\sigma ^k(t_0).\) Positive system (4) is positively reachable on \([t_0,t_1]_\mathbb T \) if and only if the matrix

$$\begin{aligned}{}[B,(I+\mu (\sigma (t_0))A)B,&(I+\mu (\sigma ^2(t_0))A)(I+\mu (\sigma (t_0))A)B, \ldots ,\\&(I+\mu (\sigma ^{k-1}(t_0))A)\ldots (I+\mu (\sigma (t_0))A)B] \end{aligned}$$

contains a monomial submatrix.

The proof is similar to the proof of Proposition 9, but we have to take into account that the exponential matrix is no longer a power of \(I+\mu A\) for a constant \(\mu \) but rather a product of such terms with possibly different values of \(\mu .\) This criterion may be used for systems on \(\mathbb T =q^\mathbb{N }.\)

Remark 8

Proposition 10 cannot be extended to discrete nonhomogeneous time scales. Let us consider the following example: \(\mathbb T =\{ 0,1,2,4\},\, n=2,\)

$$\begin{aligned} A= \left(\begin{array}{ll} -\frac{1}{2}&0 \\ 1&-\frac{1}{2} \end{array}\right),\ B=\left(\begin{array}{l} 1 \\ 0 \end{array}\right)\!. \end{aligned}$$

Let \(t_0=0\) and \(t_1=4,\) so \(k=3.\) The matrix

$$\begin{aligned}{}[B,(I+\mu (1)A)B,(I+\mu (2)A)(I+\mu (1)A)B]=\left(\begin{array}{lll} 1&\frac{1}{2}&0 \\ 0&1&3 \end{array}\right) \end{aligned}$$

evidently contains a monomial \(2\times 2\) submatrix. But this is not true for the matrix \([B,(I+\mu (1)A)B].\) Thus to reach monomial vectors we may need more than \(n\) jumps.

Let now \(c>0\) and \(\mathbb T =\bigcup _{k\in \mathbb Z } [2kc,(2k+1)c].\) Then we have the following criterion of positive reachability:

Proposition 12

Let \(i,k\in \mathbb Z \) and \(i\le k.\) Positive system (4) is positively reachable on \([(2i-1)c,2kc]_\mathbb T \) if and only if the matrix

$$\begin{aligned} \left[B,(I+cA)e^{cA}B,\left[(I+cA)e^{cA}\right]^2B,\ldots ,\left[(I+cA)e^{cA}\right]^{k-i}B\right] \end{aligned}$$
(8)

contains a monomial submatrix.

Proof

“\(\Leftarrow \)” Let \([(I+cA)e^{cA}]^{k-j}b_s\) be \(r\)-monomial for some \(i\le j\le k.\) Observe that

$$\begin{aligned} \int _{(2j-1)c}^{2jc} e_A(2kc,\sigma (\tau ))b_s\Delta \tau = c[(I+cA)e^{cA}]^{k-j}b_s. \end{aligned}$$
(9)

Thus

$$\begin{aligned} \int _{(2j-1)c}^{2jc} e_A(2kc,\sigma (\tau ))b_sb_s^\mathrm{T}(e_A(2kc,\sigma (\tau )))^\mathrm{T}\Delta \tau \end{aligned}$$
(10)

is a diagonal matrix with the only nonzero entry at the \(r\)th place of the diagonal. This means that we can construct a monomial Gram matrix with the aid of integrals of the form (10).

“\(\Rightarrow \)” Assume now that system (4) is positively reachable on \([(2i-1)c,2kc]_\mathbb T .\) Then for some \(M\subset \{1,\ldots ,m\}\) and some \(\mathcal S _M=\{ S_k : k\in M \}\) the Gram matrix is monomial and diagonal (from Theorem 4 and its proof). Thus for every \(i=1,\ldots ,n\) there is \(k_i\in M\) and a subset \(T_i\) of \(S_{k_i}\) that is a union of intervals of the form \([a,b)_\mathbb T \) such that

$$\begin{aligned} \int _{T_i} e_A(2kc,\sigma (\tau ))b_{k_i}\Delta \tau =\gamma e_i \end{aligned}$$
(11)

for some \(\gamma >0.\) If some \(T_i\) contains an ordinary interval, then, by Proposition 8 and its proof, \(A\) is diagonal. This implies that \(b_{k_i}=\alpha _i e_i\) for every \(i=1,\ldots ,n\) and some \(\alpha _i>0.\) Thus \(B\) contains a monomial submatrix and so does matrix (8). If no \(T_i\) contains an ordinary interval, then each \(T_i\) is finite and the integral in (11) is a finite sum of integrals of the form (9). From this we conclude that the integral

$$\begin{aligned} \int _{(2j-1)c}^{2jc} e_A(2kc,\sigma (\tau ))b_s\Delta \tau \end{aligned}$$

is \(i\)-monomial for some \(j\) and some \(s.\) From (9) we get that matrix (8) contains an \(i\)-monomial column. Since the Gram matrix is monomial, the last statement is true for every \(i.\)

Remark 9

The example from Remark 6 shows that the condition stated in Proposition 12 is weaker than the condition of positive reachability for continuous-time systems given in Proposition 8. Proposition 12 may be extended to the time scale that is a union of arbitrary closed bounded disjoint intervals. But then matrix (8) becomes more complicated, involving products of \(I+cA\) and \(e^{dA}\) for different values of \(c\) and \(d.\)
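
For the system of Remarks 6 and 7, whose time scale is (up to a shift) of the form considered in Proposition 12 with \(c=1,\) the matrix of the form (8) can be computed directly. A sketch (NumPy and SciPy's expm for the ordinary matrix exponential; both are assumptions of this illustration, not tools used in the paper):

```python
import numpy as np
from scipy.linalg import expm

# Matrix of the form (8) for the system of Remarks 6 and 7 with c = 1.
A = np.array([[-1.0, 0.0], [1.0, -1.0]])
B = np.array([[1.0], [0.0]])
c = 1.0
F = (np.eye(2) + c * A) @ expm(c * A)         # the factor (I + cA) e^{cA}

R = np.hstack([B, F @ B])                     # [B, (I+cA)e^{cA}B]
print(np.round(R, 4))
# [[1.     0.    ]
#  [0.     0.3679]]  -- contains a 2x2 monomial submatrix, so the system is
# positively reachable, even though A is not diagonal (compare Proposition 8)
```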

5 Conclusion

Positive accessibility and positive reachability of linear positive control systems have been characterized. The characterizations hold on arbitrary time scales, but they have different features. Positive accessibility depends only on the matrices of the system. Necessary and sufficient conditions for positive reachability involve a modified Gram matrix in which the delta integral is used. For different time scales the delta integral has different properties related to positivity. This feature is responsible for the different criteria of positive reachability on specific time scales: very restrictive for continuous-time systems and more relaxed for discrete-time ones. Gram matrices have usually been used to study systems with time-varying coefficients. The concept of the modified Gram matrix developed in this paper allows for a natural passage to time-varying systems on time scales. Properties of observability and positive observability of positive systems on time scales have been studied in [1]. Similar characterizations, dual to those presented here, were obtained.