1 Introduction

We live in a hyperconnected world, where the size of networks and the number of connections between their components are rapidly growing. Emerging technologies such as the Internet of Things, Cloud Computing, 5G communication and so on make this trend even more pronounced. Such complex networked systems include smart grids, connected vehicles, swarm robotics and smart cities, in which the participating agents may be plugged into and out of the network at any time.

The unknown and possibly time-varying size of such networks poses new challenges for stability analysis and control design. One of the promising approaches to this problem is to over-approximate the network by an infinite network, and perform the stability analysis and control design for this infinite over-approximation [10, 11, 26]. This approach has received significant attention during the last two decades. In particular, a large body of literature is devoted to spatially invariant systems and/or linear systems consisting of an infinite number of components, interconnected with each other by means of the same pattern, see, e.g., [4, 5, 7, 10].

State of the art: ISS theory ISS theory was initiated in [44], and has quickly become one of the pillars of nonlinear control theory, including robust stabilization, nonlinear observer design and the analysis of large-scale networks, see [2, 36, 45]. Tremendous progress in the development of infinite-dimensional ISS theory has brought a number of powerful techniques for the robust stability analysis of distributed parameter systems, including Lyapunov methods [13, 48, 50], characterizations of ISS [39] for nonlinear systems, functional-analytic tools [21, 22] and spectral-based methods [30, 32] for linear systems; see [38] for a recent survey on this topic. The ISS small-gain approach is especially fruitful for the analysis of coupled systems. In this method, the influence of any subsystem on the other subsystems of a network is characterized by so-called gain functions. The gain operator constructed from these functions characterizes the interconnection structure of the network. The small-gain theorems for interconnections of finitely many input-to-state stable systems governed by ordinary differential equations (ODEs) [16, 18, 23, 24] state that if the gains are small enough, i.e., the gain operator satisfies a small-gain condition, then the network is stable.

Within the infinite-dimensional ISS theory, generalizations of these results to couplings of finitely many infinite-dimensional systems have been proposed in [6, 13, 37, 49]. We refer to [37] for more details and references on small-gain results for finite couplings. For trajectory-based ISS small-gain theorems for finite networks, the main difficulties in going from finite to infinite dimensions stem from the fact that the characterizations of ISS developed for ODE systems in [47] are no longer valid for infinite-dimensional systems. As argued in [37], the more general characterizations shown in [39] have to be used, which requires major changes in the proof of the small-gain result.

Small-gain theorems for finite networks have been applied to the stability analysis of coupled parabolic-hyperbolic partial differential equations (PDEs) in [31]. Small-gain-based boundary feedback design for global exponential stabilization of 1-D semilinear parabolic PDEs has been proposed in [33].

State of the art: infinite networks On the other hand, a number of works have recently appeared that are devoted to stability and control of nonlinear infinite networks of ordinary differential equations which are not necessarily spatially invariant, see, e.g., [14, 15, 34]. Small-gain analysis of infinite networks is especially challenging since the gain operator, collecting the information about the internal gains, acts on an infinite-dimensional space, in contrast to couplings of finitely many systems of arbitrary nature. This calls for a careful choice of the infinite-dimensional state space of the overall network, and motivates the use of the theory of positive operators on ordered Banach spaces for the small-gain analysis.

In [15], it is shown that a countably infinite network of continuous-time input-to-state stable systems is ISS, provided that the gain functions capturing the influence of the subsystems on each other are all less than the identity, which is a very conservative condition. In [14], it was shown that the classic max-form strong small-gain conditions developed for finite networks in [16] do not ensure stability of infinite networks, even in the linear case. To address this issue, more restrictive robust strong small-gain conditions have been developed in [14]; still, the main results in [14] were shown under the rather strong restriction that there is a linear path of strict decay for the gain operator, which makes the result not fully nonlinear.

In contrast, for networks consisting of exponentially ISS systems, possessing exponential ISS Lyapunov functions with linear gains, it was shown in [34] that if the spectral radius of the gain operator is less than one, then the whole network is exponentially ISS and there is a coercive exponential ISS Lyapunov function for the whole network. This result is tight and provides a complete generalization of [12, Prop. 3.3] from finite to infinite networks.

Contribution In this work, we provide nonlinear ISS small-gain theorems for continuous-time infinite networks whose components may be infinite-dimensional systems of different types. We do not impose any linearity and/or contractivity assumption on the gains, which makes the results truly nonlinear. Moreover, we do not restrict ourselves to couplings of ODE systems, but instead develop a framework that allows for couplings of heterogeneous infinite-dimensional systems, which is important in the context of ODE-PDE, delay-PDE and PDE-PDE cascades. We derive our small-gain theorems for the uniform global stability (UGS) and ISS properties in the trajectory formulation, in contrast to the papers [14, 15, 34], where the Lyapunov formulation was used.

We start by introducing a general class of infinite-dimensional control systems, which includes many classes of evolution PDEs, time-delay systems, ODEs, infinite switched systems, etc. Next, we introduce the concept of infinite interconnections for systems of this class, extending the framework developed in [27, 37].

Theorems 6.1 and 6.4 are our small-gain results for uniform global stability of infinite networks. They use the monotone bounded invertibility (MBI) property of the gain operator, which is equivalent for finite networks (see Proposition 7.12) to the strong small-gain condition employed in the small-gain analysis of finite networks in [18, Thm. 8] and [37]. The proof of this result is based on the proof of the corresponding result for finite networks, see [18, Thm. 8].

Theorems 6.2 and 6.5 are our ISS small-gain results for infinite networks in semi-maximum and summation formulation, which state that an infinite network consisting of ISS systems is ISS provided that the discrete-time system induced by the gain operator has the so-called monotone limit (MLIM) property. This property concerns the input-to-state behavior of the discrete-time control system \(x(k+1) \le \Gamma (x(k)) + u(k)\) induced by the gain operator \(\Gamma \); it is implied by ISS of this system along monotonically decreasing solutions and, in turn, implies the monotone bounded invertibility property.

In Sect. 7, we analyze the MBI and MLIM properties, which are employed in the small-gain analysis of infinite networks of ISS systems. In Sect. 7.1, we characterize the MBI property in terms of the uniform small-gain condition, which is a uniform version of the classical small-gain condition \(\Gamma (x) \not \ge x\) for all \(x \ge 0\).

In Sect. 7.2, we relate the uniform small-gain condition to the strong and robust strong small-gain conditions, which have already been exploited in the small-gain analysis of finite [16, 18] and infinite [14] networks. In Sect. 7.3, we show in Proposition 7.12 that the uniform and strong small-gain conditions as well as the MBI and MLIM properties are equivalent for finite-dimensional nonlinear systems, if the gain operator is of summation or max-type. As a consequence of Proposition 7.12, we see that our results extend those of [37], and thus also the classical small-gain theorems for finitely many finite-dimensional systems from [18], even with minimal regularity assumptions on the interconnection (we require well-posedness and the BIC property only).

In “Appendix A”, we derive a characterization of exponential ISS (eISS) for discrete-time systems over an ordered Banach space with a generating and normal cone, induced by operators that are homogeneous of degree one and subadditive (Proposition A.1). We apply this, together with recent results in [19], to show in Proposition 7.16 that for linear infinite-dimensional systems with a generating and normal cone the MBI, MLIM and the uniform small-gain condition are all equivalent to the spectral small-gain condition (saying that the spectral radius of the gain operator is less than one).

Finally, in “Appendix B”, we study relations between various uniform and non-uniform small-gain conditions for max-form gain operators with nonlinear gains, which are of particular importance in small-gain theory. Following [14], we study also the properties of the strong transitive closure of the gain operator. We use these properties to show (in Proposition 7.17) the equivalence of the MBI property, the MLIM property, and the existence of a path of strict decay for the max-form gain operator with linear gains. The results of that section are important for the development of linear and nonlinear Lyapunov-based small-gain theorems for infinite networks.

Propositions 7.16, 7.17 and A.1 are useful, in particular, to obtain efficient small-gain theorems for infinite networks with linear gains, see Corollaries 6.3 and 6.6.

2 Preliminaries

Notation We write \({\mathbb {R}}\) for the real numbers, \({\mathbb {Z}}\) for the integers, and \({\mathbb {N}}= \{1,2,3,\ldots \}\) for the natural numbers. \({\mathbb {R}}_+\) and \({\mathbb {Z}}_+\) denote the sets of nonnegative reals and integers, respectively.

We use the following classes of comparison functions:

$$\begin{aligned} {\mathcal {K}}&:= \left\{ \gamma :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+: \ \gamma \hbox { is continuous and strictly increasing, }\gamma (0)=0\right\} \\ {\mathcal {K}}_{\infty }&:= \left\{ \gamma \in {\mathcal {K}}:\ \gamma \hbox { is unbounded}\right\} \\ {\mathcal {L}}&:= \big \{\gamma :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+:\ \gamma \hbox { is continuous and strictly decreasing with} \lim \limits _{t\rightarrow \infty }\gamma (t)=0 \big \} \\ {{\mathcal {K}}}{{\mathcal {L}}}&:= \{\beta :{\mathbb {R}}_+^2 \rightarrow {\mathbb {R}}_+: \ \beta \hbox { is continuous and } \\&\qquad \qquad \qquad \qquad \qquad \beta (\cdot ,t)\in {{\mathcal {K}}},\ \forall t \ge 0,\ \beta (r,\cdot )\in {{\mathcal {L}}},\ \forall r > 0\}. \end{aligned}$$
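To fix intuition, here is a minimal numerical sketch (in Python; the particular functions below are illustrative choices, not taken from the paper) of representatives of these classes:

```python
import numpy as np

# Illustrative representatives of the comparison-function classes (hypothetical choices):
# gamma_K lies in K but not in K_infty (it is bounded by 1), gamma_Kinf lies in K_infty,
# ell lies in L, and beta lies in KL.
gamma_K    = lambda r: r / (1.0 + r)              # continuous, strictly increasing, gamma(0) = 0
gamma_Kinf = lambda r: np.sqrt(r)                 # additionally unbounded
ell        = lambda t: np.exp(-t)                 # continuous, strictly decreasing, tends to 0
beta       = lambda r, t: gamma_Kinf(r) * ell(t)  # class K in r for fixed t, class L in t for fixed r > 0

r = np.linspace(0.0, 10.0, 5)
print(gamma_K(r), gamma_Kinf(r), beta(r, 2.0))
```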

For a normed linear space \((W,\Vert \cdot \Vert _W)\) and any \(r>0\), we write \(B_{r,W} :=\{w \in W: \Vert w\Vert _W < r\}\) (the open ball of radius r around 0 in W). By \({\overline{B}}_{r,W}\) we denote the corresponding closed ball. If the space W is clear from the context, we simply write \(B_r\) and \({\overline{B}}_r\), respectively. For any nonempty set \(S \subset W\) and any \(x \in W\), we denote the distance from x to S by \(\mathrm {dist}(x,S):=\inf _{y \in S}\Vert x-y\Vert _W\).

For a set U, we let \(U^{{\mathbb {R}}_+}\) denote the space of all maps from \({\mathbb {R}}_+\) to U. For a nonempty set \(J\subset {\mathbb {R}}_+\), we denote by \(\Vert w\Vert _{J}\) the sup-norm of a bounded function \(w:J \rightarrow W\), i.e., \(\Vert w\Vert _{J} = \sup _{s\in J}\Vert w(s)\Vert _W\). Given a nonempty index set I, we write \(\ell _{\infty }(I)\) for the Banach space of all functions \(x:I \rightarrow {\mathbb {R}}\) with \(\Vert x\Vert _{\ell _{\infty }(I)} := \sup _{i\in I}|x(i)| < \infty \). Moreover, \(\ell _{\infty }(I)^+ := \{ x \in \ell _{\infty }(I) : x(i) \ge 0,\ \forall i \in I \}\). We write \(\mathbf{1}\) for the vector in \(\ell _{\infty }(I)^+\) whose components are all equal to 1. If \(I = {\mathbb {N}}\), we simply write \(\ell _{\infty }\) and \(\ell _{\infty }^+\), respectively. By \(e_i\), \(i\in I\), we denote the i-th unit vector in \(\ell _{\infty }(I)\).

Throughout the paper, all considered vector spaces are vector spaces over \({\mathbb {R}}\).

Ordered vector spaces and positive operators In the following, X always denotes a real vector space. For two sets \(A,B \subset X\), we write \(A + B = \{a + b : a \in A, b \in B\}\), \(-A = \{-a : a \in A\}\), and \({\mathbb {R}}_+ \cdot A = \{ r \cdot a : a \in A, r \in {\mathbb {R}}_+ \}\).

Recall that a partial order on a set X is a relation on X which is reflexive, transitive and antisymmetric. A subset \(X^+ \subset X\) is called a (positive) cone in X if (i) \(X^+ \cap (-X^+) = \{0\}\), (ii) \({\mathbb {R}}_+ \cdot X^+ \subset X^+\), and (iii) \(X^+ + X^+ \subset X^+\). A cone \(X^+\) introduces a partial order “\(\le \)” on X via

$$\begin{aligned} x \le y \quad \text {whenever} \quad y - x \in X^+. \end{aligned}$$

The pair \((X,X^+)\) is also called an ordered vector space. If X is a Banach space and the cone \(X^+\) is closed, we call \((X,X^+)\) an ordered Banach space. In this case, the cone \(X^+\) is called generating if \(X^+ + (-X^+) = X\). Clearly, a cone \(X^+\) is generating if and only if \(X^+\) spans X. If the cone \(X^+\) is generating, then by [1, Thm. 2.37] there exists a constant \(M > 0\) such that every \(x \in X\) can be decomposed as

$$\begin{aligned} x = y-z \qquad \text {where} \quad y,z \ge 0 \quad \text {and} \quad \Vert y\Vert _X,\Vert z\Vert _X \le M \Vert x\Vert _X. \end{aligned}$$
(2.1)

The norm in X is called monotone if for any \(x_1,x_2 \in X\) with \(0 \le x_1 \le x_2\) it holds that \(\Vert x_1\Vert _X \le \Vert x_2\Vert _X\). The cone \(X^+\) is called normal if there exists \(\delta >0\) so that for any \(x_1,x_2 \in X\) with \(0 \le x_1 \le x_2\) it holds that \(\Vert x_1\Vert _X \le \delta \Vert x_2\Vert _X\). In this case, one can always find an equivalent norm which is monotone [1, Thm. 2.38].
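For example, in \(X = \ell _{\infty }\) with the cone \(\ell _{\infty }^+\) of componentwise nonnegative sequences, the decomposition (2.1) holds with \(M = 1\) via positive and negative parts, and the sup-norm is monotone. A minimal numerical sketch of this (in Python; the truncation to finitely many coordinates is for illustration only):

```python
import numpy as np

# Sketch for X = l_infty with the cone of componentwise nonnegative sequences,
# truncated to finitely many coordinates for illustration only.
x = np.array([1.5, -0.3, 0.0, -2.0, 0.7])

y = np.maximum(x, 0.0)    # positive part, y >= 0
z = np.maximum(-x, 0.0)   # negative part, z >= 0
assert np.allclose(x, y - z)                   # decomposition x = y - z as in (2.1)

sup = lambda v: np.max(np.abs(v))              # sup-norm on the truncation
assert sup(y) <= sup(x) and sup(z) <= sup(x)   # hence (2.1) holds with M = 1

# Monotonicity of the norm: 0 <= x1 <= x2 implies ||x1|| <= ||x2||.
x1, x2 = np.array([0.2, 0.1, 0.0]), np.array([0.5, 0.1, 1.0])
assert np.all(0 <= x1) and np.all(x1 <= x2) and sup(x1) <= sup(x2)
```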

Let \((X,X^+)\) and \((Y,Y^+)\) be ordered vector spaces. We say that a map \(f:X^+ \rightarrow Y^+\) is a (nonlinear) monotone operator if \(x_1 \le x_2\) implies \(f(x_1) \le f(x_2)\) for all \(x_1,x_2 \in X^+\).

3 Control systems and their stability

In this paper, we work with the following definition of a control system (which provides all the features that are necessary for a global stability analysis).

Definition 3.1

Consider a triple \(\Sigma = (X,{\mathcal {U}},\phi )\) consisting of the following:

  1. (i)

    A normed vector space \((X,\Vert \cdot \Vert _X)\), called the state space.

  2. (ii)

    A vector space U of input values and a normed vector space of inputs \(({\mathcal {U}},\Vert \cdot \Vert _{{\mathcal {U}}})\), where \({\mathcal {U}}\) is a linear subspace of \(U^{{\mathbb {R}}_+}\). We assume that the following axioms hold:

    • The axiom of shift invariance: for all \(u \in {\mathcal {U}}\) and all \(\tau \ge 0\), the time-shifted function \(u(\cdot + \tau )\) belongs to \({\mathcal {U}}\) with \(\Vert u\Vert _{\mathcal {U}}\ge \Vert u(\cdot + \tau )\Vert _{\mathcal {U}}\).

    • The axiom of concatenation: for all \(u_1,u_2 \in {\mathcal {U}}\) and for all \(t>0\) the concatenation of \(u_1\) and \(u_2\) at time t, defined by

      $$\begin{aligned} {u_1\, \underset{t}{\lozenge }\,{u_2}}(\tau ) := {\left\{ \begin{array}{ll} u_1(\tau ) &{} {\text { for all } \tau \in [0,t],} \\ u_2(\tau -t) &{} {\text { for all } \tau \in (t, \infty )} \end{array}\right. } \end{aligned}$$

      belongs to \({\mathcal {U}}\).

  3. (iii)

    A map \(\phi :D_{\phi } \rightarrow X\), \(D_{\phi }\subseteq {\mathbb {R}}_+ \times X \times {\mathcal {U}}\), called transition map, so that for all \((x,u)\in X \times {\mathcal {U}}\) it holds that \(D_{\phi } \cap ({\mathbb {R}}_+ \times \{(x,u)\}) = [0,t_m)\times \{(x,u)\}\), for a certain \(t_m = t_m(x,u)\in (0,+\infty ]\). The corresponding interval \([0,t_m)\) is called the maximal domain of definition of the mapping \(t\mapsto \phi (t,x,u)\), which we call a trajectory of the system.

The triple \(\Sigma \) is called a (control) system if it satisfies the following axioms:

(\(\Sigma {1}\)):

The identity property: for all \((x,u) \in X \times {\mathcal {U}}\), it holds that \(\phi (0,x,u) = x\).

(\(\Sigma {2}\)):

Causality: for all \((t,x,u) \in D_\phi \) and \({\tilde{u}} \in {\mathcal {U}}\) such that \(u(s) = {\tilde{u}}(s)\) for all \(s \in [0,t]\), it holds that \([0,t]\times \{(x,{\tilde{u}})\} \subset D_\phi \) and \(\phi (t,x,u) = \phi (t,x,{\tilde{u}})\).

(\(\Sigma {3}\)):

Continuity: for each \((x,u) \in X \times {\mathcal {U}}\), the trajectory \(t \mapsto \phi (t,x,u)\) is continuous on its maximal domain of definition.

(\(\Sigma {4}\)):

The cocycle property: for all \(x \in X\), \(u \in {\mathcal {U}}\) and \(t,h \ge 0\) so that \([0,t+h]\times \{(x,u)\} \subset D_{\phi }\), we have \(\phi (h,\phi (t,x,u),u(t+\cdot )) = \phi (t+h,x,u)\). \(\lhd \)

This class of systems encompasses control systems generated by ordinary differential equations, switched systems, time-delay systems, many classes of partial differential equations, important classes of boundary control systems and many other systems.

Definition 3.2

We say that a control system \(\Sigma = (X,{\mathcal {U}},\phi )\) is forward complete if \(D_\phi = {\mathbb {R}}_+ \times X \times {\mathcal {U}}\), i.e., \(\phi (t,x,u)\) is defined for all \((t,x,u) \in {\mathbb {R}}_+ \times X \times {\mathcal {U}}\). \(\lhd \)

An important property of ordinary differential equations with Lipschitz continuous right-hand sides is the possibility of extending a solution, which is bounded on a time interval [0, t), to a larger time interval \([0,t+\varepsilon )\). Evolution equations in Banach spaces with bounded control operators and Lipschitz continuous right-hand sides have similar properties [9, Thm. 4.3.4]; the same holds for many other classes of systems [29, Ch. 1]. The next property, adopted from [29, Def. 1.4], formalizes this behavior for general control systems.

Definition 3.3

We say that a system \(\Sigma \) satisfies the boundedness-implies-continuation (BIC) property if for each \((x,u)\in X \times {\mathcal {U}}\) such that the maximal existence time \(t_m = t_m(x,u)\) is finite, for any given \(M>0\) there exists \(t \in [0,t_m)\) with \(\Vert \phi (t,x,u)\Vert _X > M\). \(\lhd \)

Next, we introduce the input-to-state stability property, which unifies the classical asymptotic stability concept with the input-output stability notion, and is one of the cornerstones of nonlinear control theory [35, 45].

Definition 3.4

A system \(\Sigma = (X,{\mathcal {U}},\phi )\) is called (uniformly) input-to-state stable (ISS) if there exist \(\beta \in {{\mathcal {K}}}{{\mathcal {L}}}\) and \(\gamma \in {\mathcal {K}}\cup \{0\}\) such that

$$\begin{aligned} \Vert \phi (t,x,u)\Vert _X \le \beta (\Vert x\Vert _X,t) + \gamma (\Vert u\Vert _{{\mathcal {U}}}),\quad (t,x,u) \in D_{\phi }. \end{aligned}$$

Two properties, implied by ISS, will be important in the sequel:

Definition 3.5

A system \(\Sigma = (X,{\mathcal {U}},\phi )\) is called uniformly globally stable (UGS) if there exist \(\sigma \in \mathcal {K_\infty }\) and \(\gamma \in {\mathcal {K}}\cup \{0\}\) such that

$$\begin{aligned} \Vert \phi (t,x,u)\Vert _X \le \sigma (\Vert x\Vert _X) + \gamma (\Vert u\Vert _{{\mathcal {U}}}),\quad (t,x,u) \in D_{\phi }. \end{aligned}$$

Definition 3.6

A forward complete system \(\Sigma = (X,{\mathcal {U}},\phi )\) has the bounded input uniform asymptotic gain (bUAG) property if there exists a \(\gamma \in {\mathcal {K}}\cup \{0\}\) such that for all \( \varepsilon ,r>0\) there is a time \(\tau = \tau (\varepsilon ,r) \ge 0\) for which

$$\begin{aligned} \Vert x\Vert _X\le r \ \wedge \ \Vert u\Vert _{{\mathcal {U}}} \le r \ \wedge \ t \ge \tau \ \quad \Rightarrow \quad \Vert \phi (t,x,u)\Vert _X \le \varepsilon + \gamma (\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$

The UGS and bUAG properties are extensions of global Lyapunov stability and uniform global attractivity to systems with inputs.

The following lemma provides a useful criterion for the input-to-state stability in terms of uniform global stability and the bUAG property (see [37, Lem. 3.7]). It is a special case of stronger ISS characterizations shown in [39] and [37, Sec. 6].

Lemma 3.7

Let \(\Sigma = (X,{\mathcal {U}},\phi )\) be a control system with the BIC property. If \(\Sigma \) is UGS and has the bUAG property, then \(\Sigma \) is ISS.

4 Infinite interconnections

In this section, we introduce (feedback) interconnections of an arbitrary number of control systems, indexed by some nonempty set I. For each \(i \in I\), let \((X_i,\Vert \cdot \Vert _{X_i})\) be a normed vector space which will serve as the state space of a control system \(\Sigma _i\). Before we can specify the space of inputs for \(\Sigma _i\), we first have to construct the overall state space. In the following, we use the sequence notation \((x_i)_{i \in I}\) for functions with domain I. The overall state space is then defined as

$$\begin{aligned} X := \Bigl \{ (x_i)_{i \in I} {\in \prod _{i \in I} X_i : } \, \sup _{i \in I} \Vert x_i\Vert _{X_i} < \infty \Bigr \}. \end{aligned}$$

It is a vector space with respect to pointwise addition and scalar multiplication, and we can turn it into a normed space in the following way:

Proposition 4.1

The state space X is a normed space with respect to the norm

$$\begin{aligned} \Vert x\Vert _X := \sup _{i\in I}\Vert x_i\Vert _{X_i}. \end{aligned}$$

If all of the spaces \((X_i,\Vert \cdot \Vert _{X_i})\) are Banach spaces, then so is \((X,\Vert \cdot \Vert _X)\).

The proof of the proposition is straightforward; hence, we omit it.

We also define for each \(i \in I\) the normed vector space \(X_{\ne i}\) by the same construction as above, but for the restricted index set \(I {\setminus } \{i\}\). Then, \(X_{\ne i}\) can be identified with the closed linear subspace \(\{ (x_j)_{j \in I} \in X : x_i = 0 \}\) of X.

Now consider for each \(i \in I\) a control system of the form

$$\begin{aligned} \Sigma _i = (X_i,\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}},{\bar{\phi }}_i), \end{aligned}$$

where \(\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\) is the space of all globally bounded piecewise continuous functions \(w:{\mathbb {R}}_+ \rightarrow X_{\ne i}\), with the norm \(\Vert w\Vert _{\infty } = \sup _{t \ge 0}\Vert w(t)\Vert _{X_{\ne i}}\). The norm on \(\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}}\) is defined by

$$\begin{aligned} \Vert (w,u)\Vert _{\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}}} := \max \left\{ \Vert w\Vert _{\infty }, \Vert u\Vert _{{\mathcal {U}}} \right\} . \end{aligned}$$
(4.1)

Here, we assume that \({\mathcal {U}}\subset U^{{\mathbb {R}}_+}\) for some vector space U, and \({\mathcal {U}}\) satisfies the axioms of shift invariance and concatenation. Then, by the definition of \(\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\) and the norm (4.1), these axioms are also satisfied for the product space \(\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}}\).

Definition 4.2

Given the control systems \(\Sigma _i\) (\(i \in I\)) as above, assume that there is a map \(\phi :D_\phi \rightarrow X\), defined on \(D_\phi \subset {\mathbb {R}}_+ \times X \times {\mathcal {U}}\), such that:

  1. (i)

    For each \(x \in X\) and each \(u \in {\mathcal {U}}\) there is \(\varepsilon >0\) such that \([0,\varepsilon ] \times \{(x,u)\} \subset D_\phi \).

  2. (ii)

    Furthermore, the components \(\phi _i\) of the transition map \(\phi :D_{\phi } \rightarrow X\) satisfy

    $$\begin{aligned} \phi _i(t,x,u) = {\bar{\phi }}_i(t,x_i,(\phi _{\ne i},u)) \quad \hbox { for all}\ (t,x,u) \in D_{\phi }, \end{aligned}$$

    where \(\phi _{\ne i}(\cdot ) = (\phi _j(\cdot ,x,u))_{j \in I {\setminus } \{i\}}\) for all \(i \in I\).

    We also assume that \(\phi \) is maximal in the sense that no other map \({{\tilde{\phi }}}:{\tilde{D}}_\phi \rightarrow X\) with \({\tilde{D}}_\phi \supset D_\phi \) exists, which satisfies all of the above properties, and coincides with \(\phi \) on \(D_\phi \).

If the map \(\phi \) is unique with the above properties, and if \(\Sigma = (X,{\mathcal {U}},\phi )\) is a control system satisfying the BIC property, then \(\Sigma \) is called the (feedback) interconnection of the systems \(\Sigma _i\).

We then call \(X_{\ne i}\) the space of internal input values, \(\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\) the space of internal inputs, and \({\mathcal {U}}\) the space of external inputs of the system \(\Sigma _i\). Moreover, we call \(\Sigma _i\) the i-th subsystem of \(\Sigma \). \(\lhd \)

The stability properties introduced above are defined in terms of the norm of the whole input, and this is not suitable for the consideration of coupled systems, as we are interested not only in the collective influence of all inputs on a subsystem, but in the influence of particular subsystems on a given subsystem. The next definition provides the needed flexibility.

Definition 4.3

Given the spaces \((X_j,\Vert \cdot \Vert _{X_j})\), \(j\in I\), and the system \(\Sigma _i\) for a fixed \(i \in I\), we say that \(\Sigma _i\) is input-to-state stable (ISS) (in semi-maximum formulation) if \(\Sigma _i\) is forward complete and there are \(\gamma _{ij},\gamma _j \in {\mathcal {K}}\cup \{0\}\) for all \(j \in I\), and \(\beta _i \in {{\mathcal {K}}}{{\mathcal {L}}}\) such that for all initial states \(x_i \in X_i\), all internal inputs \(w_{\ne i} = (w_j)_{j\in I {\setminus } \{i\}} \in \mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\), all external inputs \(u \in {\mathcal {U}}\) and \(t \ge 0\):

$$\begin{aligned} \Vert {\bar{\phi }}_i(t,x_i,(w_{\ne i},u))\Vert _{X_i} \le \beta _i(\Vert x_i\Vert _{X_i},t) + \sup _{j \in I}\gamma _{ij}(\Vert w_j\Vert _{[0,t]}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$

Here, we assume that the functions \(\gamma _{ij}\) satisfy \(\sup _{j \in I}\gamma _{ij}(r) < \infty \) for every \(r \ge 0\) (implying that the supremum on the right-hand side is finite) and \(\gamma _{ii} = 0\). \(\lhd \)

The functions \(\gamma _{ij}\) and \(\gamma _i\) in this definition are called (nonlinear) gains.

Assuming that all systems \(\Sigma _i\), \(i\in I\), are ISS in semi-maximum formulation, we can define a nonlinear monotone operator \(\Gamma _{\otimes }:\ell _{\infty }(I)^+ \rightarrow \ell _{\infty }(I)^+\) from the gains \(\gamma _{ij}\) by

$$\begin{aligned} \Gamma _{\otimes }(s) := \left( \sup _{j\in I}\gamma _{ij}(s_j)\right) _{i\in I},\quad s = (s_i)_{i\in I} \in \ell _{\infty }(I)^+. \end{aligned}$$
(4.2)

In general, \(\Gamma _{\otimes }\) need not be well-defined, i.e., it may fail to map \(\ell _{\infty }(I)^+\) into itself. It is easy to see that the following assumption is equivalent to \(\Gamma _{\otimes }\) being well-defined.

Assumption 4.4

For every \(r>0\), we have \(\sup _{i,j \in I}\gamma _{ij}(r) < \infty \).
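To make this concrete, the following minimal sketch (in Python; the gains \(\gamma _{ij}(r) = a_{ij}\, r/(1+r)\) with \(a_{ij} = 1/(1+|i-j|)\) are a hypothetical choice, and the index set is truncated for illustration) evaluates \(\Gamma _{\otimes }\) on an element of \(\ell _{\infty }(I)^+\) and inspects the uniform bound required by Assumption 4.4:

```python
import numpy as np

N = 50  # truncation of the index set I = {0, ..., N-1}, for illustration only

# Hypothetical gains gamma_ij(r) = a_ij * r / (1 + r) with a_ij = 1/(1 + |i-j|), gamma_ii = 0.
a = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(N), np.arange(N))))
np.fill_diagonal(a, 0.0)
gamma = lambda i, j, r: a[i, j] * r / (1.0 + r)

def Gamma_otimes(s):
    """Semi-maximum gain operator (4.2): (Gamma(s))_i = sup_j gamma_ij(s_j)."""
    return np.array([max(gamma(i, j, s[j]) for j in range(N)) for i in range(N)])

# Assumption 4.4: sup_{i,j} gamma_ij(r) < infinity for every r > 0.
r = 3.0
print("sup_{i,j} gamma_ij(r) =", max(gamma(i, j, r) for i in range(N) for j in range(N)))

s = np.abs(np.sin(np.arange(N)))          # an element of l_infty(I)^+ (truncated)
print("||Gamma(s)||_infty =", np.max(Gamma_otimes(s)))
```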

Lemma 4.5

Assumption 4.4 is equivalent to the existence of \(\zeta \in \mathcal {K_\infty }\) and \(a\ge 0\) such that \(\sup _{i,j\in I}\gamma _{ij}(r) \le a + \zeta (r)\) for all \(r\ge 0\).

Proof

Obviously, the implication “\(\Leftarrow \)” holds. Conversely, define \(\xi : {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) by

$$\begin{aligned} \xi (r) := \sup _{i,j\in I}\gamma _{ij}(r). \end{aligned}$$

As a supremum of continuous increasing functions, \(\xi \) is lower semicontinuous and nondecreasing on its domain of definition. As \(\xi (r)\) is finite for every \(r\ge 0\) by assumption, define

$$\begin{aligned} {\tilde{\xi }}(r):= {\left\{ \begin{array}{ll} 0 &{} \hbox {if } r=0,\\ \xi (r)-a &{} \hbox {if } r>0, \end{array}\right. } \end{aligned}$$

where \(a:=\lim _{r\rightarrow +0}\xi (r)\ge 0\) (the limit exists as \(\xi \) is nondecreasing). By construction, \({\tilde{\xi }}\) is nondecreasing, continuous at 0 and satisfies \({\tilde{\xi }}(0)=0\). Hence, \({\tilde{\xi }}\) can be upper bounded by a certain \(\zeta \in \mathcal {K_\infty }\) (this follows from a more general result in [40, Prop. 9]). Overall, \(\sup _{i,j\in I} \gamma _{ij}(r) \le a + \zeta (r)\) for all \(r\ge 0\). \(\square \)

Also observe that \(\Gamma _{\otimes }\), if well-defined, is a monotone operator:

$$\begin{aligned} s^1 \le s^2 \quad \Rightarrow \quad \Gamma _{\otimes }(s^1) \le \Gamma _{\otimes }(s^2) \quad \hbox { for all}\ s^1,s^2 \in \ell _{\infty }(I)^+. \end{aligned}$$

Remark 4.6

If all gains \(\gamma _{ij}\) are linear, then \(\Gamma _{\otimes }\) satisfies the following two properties:

  • \(\Gamma _{\otimes }\) is a homogeneous operator of degree one, i.e., \(\Gamma _{\otimes }(as) = a\Gamma _{\otimes }(s)\) for all \(a \ge 0\) and \(s \in \ell _{\infty }(I)^+\).

  • \(\Gamma _{\otimes }\) is subadditive, i.e., \(\Gamma _{\otimes }(s^1 + s^2) \le \Gamma _{\otimes }(s^1) + \Gamma _{\otimes }(s^2)\) for all \(s^1,s^2 \in \ell _{\infty }(I)^+\).

Finally, we provide a criterion for continuity of \(\Gamma _{\otimes }\) (a slightly different statement of this criterion can already be found in [14, Lem. 2.1], though without proof).

Proposition 4.7

Assume that the family \(\{\gamma _{ij}\}_{(i,j) \in I^2}\) is pointwise equicontinuous, i.e., for every \(r \in {\mathbb {R}}_+\) and \(\varepsilon >0\) there exists \(\delta >0\) such that \(|\gamma _{ij}(r) - \gamma _{ij}(s)| \le \varepsilon \) whenever \((i,j) \in I^2\) and \(|r - s| \le \delta \). Then \(\Gamma _{\otimes }\) is well-defined and continuous.

Proof

First, we show that \(\Gamma _{\otimes }\) is well-defined. Fix some \(r>0\). By a compactness argument, the family \(\{\gamma _{ij}\}_{(i,j)\in I^2}\) is uniformly equicontinuous on the compact interval [0, r]. Hence, we can find \(\delta >0\) so that \(|s_1 - s_2| \le \delta \) with \(s_1,s_2 \in [0,r]\) implies \(|\gamma _{ij}(s_1) - \gamma _{ij}(s_2)| \le 1\) for all \((i,j) \in I^2\). We can assume that \(\delta \) is of the form r/n for an integer n. Then,

$$\begin{aligned} \gamma _{ij}(r) = \sum _{k=0}^{n-1} \left[ \gamma _{ij}\left( \frac{k+1}{n}r\right) - \gamma _{ij}\left( \frac{k}{n}r\right) \right] \le n < \infty \end{aligned}$$

for all \((i,j) \in I^2\). Hence, \(\Gamma _{\otimes }\) is well-defined.

Now we prove continuity. Choose any \(\varepsilon >0\), fix some \(s^0 \in \ell _{\infty }(I)^+\) and let \(s \in \ell _{\infty }(I)^+\) be such that \(\Vert s - s^0\Vert _{\ell _{\infty }(I)} \le \delta \) for some \(\delta >0\) to be determined. By the assumed equicontinuity, we can choose \(\delta \) small enough so that \(|\gamma _{ij}(s^0_j) - \gamma _{ij}(s_j)| \le \varepsilon \) for all \((i,j) \in I^2\), since \(|s^0_j - s_j| \le \Vert s^0 - s\Vert _{\ell _{\infty }(I)} \le \delta \). This also implies

$$\begin{aligned} \Vert \Gamma _{\otimes }(s^0) - \Gamma _{\otimes }(s)\Vert _{\ell _{\infty }(I)} = \sup _{i \in I}\Bigl |\sup _{j \in I} \gamma _{ij}(s^0_j) - \sup _{j \in I} \gamma _{ij}(s_j)\Bigr | \le \varepsilon . \end{aligned}$$

In the last inequality, we use the estimate

$$\begin{aligned} \sup _{j \in I} \gamma _{ij}(s^0_j) - \sup _{j \in I} \gamma _{ij}(s_j) \le \sup _{j \in I} (\gamma _{ij}(s_j) + \varepsilon ) - \sup _{j \in I} \gamma _{ij}(s_j) = \varepsilon , \end{aligned}$$

and the analogous estimate in the other direction. \(\square \)

Another formulation of ISS for the systems \(\Sigma _i\) is as follows. In this formulation, we need to assume that I is countable.

Definition 4.8

Assume that I is a nonempty countable set. Given the spaces \((X_j,\Vert \cdot \Vert _{X_j})\), \(j\in I\), and the system \(\Sigma _i\) for a fixed \(i \in I\), we say that \(\Sigma _i\) is input-to-state stable (ISS) (in summation formulation) if \(\Sigma _i\) is forward complete and there are \(\gamma _{ij},\gamma _j \in {\mathcal {K}}\cup \{0\}\) for all \(j \in I\), and \(\beta _i \in {{\mathcal {K}}}{{\mathcal {L}}}\) such that for all initial states \(x_i \in X_i\), all internal inputs \(w_{\ne i} = (w_j)_{j\in I {\setminus } \{i\}} \in \mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\), all external inputs \(u \in {\mathcal {U}}\) and \(t \ge 0\):

$$\begin{aligned} \Vert {\bar{\phi }}_i(t,x_i,(w_{\ne i},u))\Vert _{X_i} \le \beta _i(\Vert x_i\Vert _{X_i},t) + \sum _{j \in I}\gamma _{ij}(\Vert w_j\Vert _{[0,t]}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$

Here, we assume that the functions \(\gamma _{ij}\) are such that \(\sum _{j \in I}\gamma _{ij}(r) < \infty \) for every \(r \ge 0\) (implying that the sum on the right-hand side is finite) and \(\gamma _{ii} = 0\). \(\lhd \)

Remark 4.9

If a network has finitely many components, ISS in summation formulation and ISS in semi-maximum formulation are equivalent concepts. Nevertheless, even for finite networks the gains in semi-maximum formulation and the gains in summation formulation are distinct, and for some systems one formulation is better than the other in the sense that it produces tighter (and thus smaller) gains. This motivates the interest in analyzing both formulations. We illustrate this by examples in Sects. 6.3, 6.4. In fact, even more general formulations of input-to-state stability for networks are studied in the literature [16], using the formalism of monotone aggregation functions.

Assuming that all systems \(\Sigma _i\), \(i\in I\), are ISS in summation formulation, we can define a nonlinear monotone operator \(\Gamma _{\boxplus }:\ell _{\infty }(I)^+ \rightarrow \ell _{\infty }(I)^+\) from the gains \(\gamma _{ij}\) as follows:

$$\begin{aligned} \Gamma _{\boxplus }(s) := \left( \sum _{j\in I}\gamma _{ij}(s_j)\right) _{i \in I},\quad s = (s_i)_{i\in I} \in \ell _{\infty }(I)^+. \end{aligned}$$

Again, \(\Gamma _{\boxplus }\) might not be well-defined; hence, we need to make an appropriate assumption.

Assumption 4.10

For every \(r > 0\), we have

$$\begin{aligned} \sup _{i \in I}\sum _{j \in I} \gamma _{ij}(r) < \infty . \end{aligned}$$

Remark 4.11

Assume that all the gains \(\gamma _{ij}\), \((i,j) \in I^2\), are linear functions. Then, the gain operator \(\Gamma _{\boxplus }\) can be regarded as a linear operator on \(\ell _{\infty }(I)\) and Assumption 4.10 is equivalent to \(\Gamma _{\boxplus }\) being a bounded linear operator on \(\ell _{\infty }(I)\).
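As a minimal numerical sketch of this remark (in Python; the linear gains \(\gamma _{ij} = c/2^{|i-j|}\) with \(c=0.3\), and the truncation of the countable index set, are hypothetical choices for illustration): with linear gains, \(\Gamma _{\boxplus }\) acts as the infinite matrix \((\gamma _{ij})\), Assumption 4.10 amounts to a finite supremum of the row sums (which equals the induced operator norm on \(\ell _{\infty }(I)\)), and one may also inspect the spectral radius that appears in the spectral small-gain condition mentioned in the Introduction.

```python
import numpy as np

N = 200  # truncation of a countable index set, for illustration only
c = 0.3  # hypothetical gain level

# Linear gains gamma_ij = c / 2^{|i-j|}, gamma_ii = 0, viewed as a nonnegative matrix.
G = c / 2.0 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
np.fill_diagonal(G, 0.0)

row_sums = G.sum(axis=1)
print("Assumption 4.10 bound (sup of row sums = operator norm on l_infty):", row_sums.max())
print("spectral radius of the truncated gain matrix:", np.abs(np.linalg.eigvals(G)).max())
```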

Proposition 4.12

Assume that the operator \(\Gamma _{\boxplus }\) is well-defined. A sufficient criterion for continuity of \(\Gamma _{\boxplus }\) is that each \(\gamma _{ij}\) is a \(C^1\)-function and

$$\begin{aligned} \sup _{i \in I} \sum _{j \in I} \sup _{0< s \le r} \gamma _{ij}'(s) < \infty \quad \hbox { for all}\ r > 0. \end{aligned}$$

Proof

Fix \(s^0 = (s^0_j)_{j\in I} \in \ell _{\infty }(I)^+\) and \(\varepsilon >0\). Let \(s \in \ell _{\infty }(I)^+\) with \(\Vert s - s^0\Vert _{\ell _{\infty }(I)} = \sup _{i \in I}|s_i - s^0_i| \le \delta \) for some \(\delta > 0\), to be determined later. Then

$$\begin{aligned} \Vert \Gamma _{\boxplus }(s^0) - \Gamma _{\boxplus }(s)\Vert _{\ell _{\infty }(I)} = \sup _{i \in I}\Bigl | \sum _{j \in I} (\gamma _{ij}(s^0_j) - \gamma _{ij}(s_j)) \Bigr |. \end{aligned}$$

Using the assumption that each \(\gamma _{ij}\) is a \(C^1\)-function and writing \(s^0_{\max } := \Vert s^0\Vert _{\ell _{\infty }(I)}\), we can estimate this by

$$\begin{aligned}&\Vert \Gamma _{\boxplus }(s) - \Gamma _{\boxplus }(s^0)\Vert _{\ell _{\infty }(I)} \le \sup _{i \in I} \sum _{j \in I} |\gamma _{ij}(s_j) - \gamma _{ij}(s^0_j)| \\&\le \sup _{i \in I} \sum _{j \in I} \sup _{r \in [s^0_j - \delta ,s^0_j + \delta ]}|\gamma _{ij}'(r)| |s_j - s^0_j| \le \delta \sup _{i \in I} \sum _{j \in I} \sup _{r \le s^0_{\max }+\delta } \gamma _{ij}'(r). \end{aligned}$$

By assumption, the last supremum is finite, which implies that \(\delta \) can be chosen small enough so that the whole expression is smaller than \(\varepsilon \). \(\square \)

We also need versions of UGS for the systems \(\Sigma _i\).

Definition 4.13

Given the spaces \((X_j,\Vert \cdot \Vert _{X_j})\), \(j\in I\), and the system \(\Sigma _i\) for a fixed \(i \in I\), we say that \(\Sigma _i\) is uniformly globally stable (UGS) (in semi-maximum formulation) if \(\Sigma _i\) is forward complete and there are \(\gamma _{ij},\gamma _j \in {\mathcal {K}}\cup \{0\}\) for all \(j \in I\), and \(\sigma _i \in \mathcal {K_\infty }\) such that for all initial states \(x_i \in X_i\), all internal inputs \(w_{\ne i} = (w_j)_{j\in I {\setminus } \{i\}} \in \mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\), all external inputs \(u \in {\mathcal {U}}\) and \(t \ge 0\):

$$\begin{aligned} \Vert {\bar{\phi }}_i(t,x_i,(w_{\ne i},u))\Vert _{X_i} \le \sigma _i(\Vert x_i\Vert _{X_i}) + \sup _{j \in I}\gamma _{ij}(\Vert w_j\Vert _{[0,t]}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$

Here, we assume that the functions \(\gamma _{ij}\) are such that \(\sup _{j \in I}\gamma _{ij}(r) < \infty \) for every \(r \ge 0\) (implying that the supremum on the right-hand side is finite) and \(\gamma _{ii} = 0\). \(\lhd \)

Definition 4.14

Let I be a countable index set. Given the spaces \((X_j,\Vert \cdot \Vert _{X_j})\), \(j\in I\), and the system \(\Sigma _i\) for a fixed \(i \in I\), we say that \(\Sigma _i\) is uniformly globally stable (UGS) (in summation formulation) if \(\Sigma _i\) is forward complete and there are \(\gamma _{ij},\gamma _j \in {\mathcal {K}}\cup \{0\}\) for all \(j \in I\), and \(\sigma _i \in \mathcal {K_\infty }\) such that for all initial states \(x_i \in X_i\), all internal inputs \(w_{\ne i} = (w_j)_{j\in I {\setminus } \{i\}} \in \mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i})\), all external inputs \(u \in {\mathcal {U}}\) and \(t \ge 0\):

$$\begin{aligned} \Vert {\bar{\phi }}_i(t,x_i,(w_{\ne i},u))\Vert _{X_i} \le \sigma _i(\Vert x_i\Vert _{X_i}) + \sum _{j \in I}\gamma _{ij}(\Vert w_j\Vert _{[0,t]}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$

Here, we assume that the functions \(\gamma _{ij}\) are such that \(\sum _{j \in I}\gamma _{ij}(r) < \infty \) for every \(r \ge 0\) (implying that the sum on the right-hand side is finite) and \(\gamma _{ii} = 0\). \(\lhd \)

5 Stability of discrete-time systems

In this section, we study stability properties of the system

$$\begin{aligned} x(k+1) \le A(x(k)) + u(k),\quad k \in {\mathbb {Z}}_+. \end{aligned}$$
(5.1)

Here, \((X,X^+)\) is an ordered Banach space, \(A: X^+ \rightarrow X^+\) is a nonlinear operator on the cone \(X^+\), and the input u is an element of \(\ell _{\infty }({\mathbb {Z}}_+,X^+)\), where the latter space is defined as

$$\begin{aligned} \ell _{\infty }({\mathbb {Z}}_+,X^+) := \{u = (u(k))_{k\in {\mathbb {Z}}_+} : u(k) \in X^+,\ \Vert u\Vert _{\infty } := \sup _{k\in {\mathbb {Z}}_+}\Vert u(k)\Vert _X < \infty \}. \end{aligned}$$

A solution of equation (5.1) is a mapping \(x: {\mathbb {Z}}_+ \rightarrow X^+\) that satisfies (5.1). We call a mapping \(x: {\mathbb {Z}}_+ \rightarrow X^+\) decreasing if \(x(k+1) \le x(k)\) for all \(k \in {\mathbb {Z}}_+\).

As we will see, for the small-gain analysis of infinite interconnections, the properties of the gain operator and the discrete-time system (5.1) induced by the gain operator are essential. So we now relate the stability of the system (5.1) to the properties of the operator A.

Definition 5.1

The system (5.1) has the monotone limit property (MLIM) if there is \(\xi \in \mathcal {K_\infty }\) such that for every \(\varepsilon >0\), every constant input \(u(\cdot ) :\equiv w \in X^+\) and every decreasing solution \(x: {\mathbb {Z}}_+ \rightarrow X^+\) of (5.1), there exists \(N = N(\varepsilon ,u,x(\cdot )) \in {\mathbb {Z}}_+\) with

$$\begin{aligned} \Vert x(N)\Vert _X \le \varepsilon + \xi (\Vert w\Vert _X). \end{aligned}$$

Definition 5.2

Let \((X,X^+)\) be an ordered Banach space and let \(A:X^+ \rightarrow X^+\) be a nonlinear operator. We say that \(\mathrm {id}- A\) has the monotone bounded invertibility (MBI) property if there exists \(\xi \in \mathcal {K_\infty }\) such that for all \(v,w \in X^+\) the following implication holds:

$$\begin{aligned} (\mathrm {id}- A)(v) \le w \quad \Rightarrow \quad \Vert v\Vert _X \le \xi (\Vert w\Vert _X). \end{aligned}$$
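To visualize these two properties, the following minimal sketch (in Python; the linear operator \(A(x) = Gx\) given by the matrix G below is a hypothetical stand-in on a truncated positive cone) iterates \(x(k+1) = A(x(k)) + w\) from a large initial state, checks that this particular solution of (5.1) is decreasing, and verifies bounds of the MLIM and MBI type with a linear \(\xi \):

```python
import numpy as np

N, rho = 30, 0.5
G = np.full((N, N), rho / (N - 1))   # hypothetical monotone linear operator A(x) = G x
np.fill_diagonal(G, 0.0)             # off-diagonal entries only; every row sums to rho < 1

w = np.abs(np.cos(np.arange(N)))     # constant input u(.) = w in the cone
x = 100.0 * np.ones(N)               # large initial state, so that the solution below is decreasing

xi = lambda r: r / (1.0 - rho)       # candidate K_infty function for MLIM / MBI (linear here)
eps = 1e-3

for k in range(1000):
    x_next = G @ x + w                          # one particular solution of x(k+1) <= A(x(k)) + u(k)
    assert np.all(x_next <= x + 1e-12)          # the solution is decreasing
    x = x_next
    if np.max(x) <= eps + xi(np.max(w)):        # an MLIM-type bound is reached at some finite step
        print("MLIM bound reached at step", k, ":", np.max(x), "<=", eps + xi(np.max(w)))
        break

# MBI: (id - A)(v) <= w forces ||v|| <= xi(||w||); here v = (I - G)^{-1} w is the extreme case.
v = np.linalg.solve(np.eye(N) - G, w)
print("||v||_infty =", np.max(v), "<= xi(||w||_infty) =", xi(np.max(w)))
```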

Proposition 5.3

Let \((X,X^+)\) be an ordered Banach space and let \(A:X^+ \rightarrow X^+\) be a nonlinear operator. If system (5.1) has the MLIM property, then the operator \(\mathrm {id}- A\) has the MBI property.

Proof

Assume that \((\mathrm {id}- A)(v) \le w\) for some \(v,w \in X^+\). We write this as \(v \le A(v) + w\). Hence, \(x(\cdot ) :\equiv v\) is a constant solution of (5.1) corresponding to the constant input sequence \(u(\cdot ) :\equiv w\). By the MLIM property, there exists \(\xi \in \mathcal {K_\infty }\) (independent of v and w) so that for every \(\varepsilon >0\) there is N with

$$\begin{aligned} \Vert v\Vert _X = \Vert x(N)\Vert _X \le \varepsilon + \xi (\Vert u\Vert _{\infty }) = \varepsilon + \xi (\Vert w\Vert _X). \end{aligned}$$

Since this holds for every \(\varepsilon >0\), we obtain \(\Vert v\Vert _X \le \xi (\Vert w\Vert _X)\), which completes the proof. \(\square \)

Whether the MBI property is strictly weaker than the MLIM property, or whether they are equivalent, is an open problem. In the next proposition, though, we show that they are equivalent under certain assumptions on the operator A or the cone \(X^+\). Later, in Propositions 7.16 and 7.17, we show their equivalence for linear operators and for the gain operator \(\Gamma _\otimes \) with linear gains, defined on \(\ell _{\infty }(I)\).

The cone \(X^+\) is said to have the Levi property if every decreasing sequence in \(X^+\) is norm-convergent [1, Def. 2.44(2)]. Typical examples are the standard cones in \(L^p\)-spaces for \(p \in [1,\infty )\), and the standard cone in the space \(c_0\) of real-valued sequences that converge to 0. We note in passing that if the cone \(X^+\) has the Levi property, then it is normal [1, Thm. 2.45].

Proposition 5.4

Let \((X,X^+)\) be an ordered Banach space with normal cone and let \(A:X^+ \rightarrow X^+\) be a nonlinear, continuous and monotone operator. If the cone \(X^+\) has the Levi property or if the operator A is compact (i.e., it maps bounded sets onto precompact sets), then the following statements are equivalent:

  1. (i)

    System (5.1) satisfies the MLIM property.

  2. (ii)

    The operator \(\mathrm {id}- A\) satisfies the MBI property.

Proof

In view of Proposition 5.3, it suffices to prove the implication “(ii) \(\Rightarrow \) (i)”. Hence, consider a constant input \(u(\cdot ) :\equiv w \in X^+\) and a decreasing sequence \(x(\cdot )\) in \(X^+\) such that \(x(k+1) \le A(x(k)) + w\) for all \(k \in {\mathbb {Z}}_+\). As A is monotone, the operator \({\tilde{A}}(x) := A(x) + w\), \({\tilde{A}}:X^+ \rightarrow X^+\), is monotone as well; if A is compact, then so is \({{\tilde{A}}}\). Moreover,

$$\begin{aligned} x(k+1) \le {\tilde{A}}(x(k)) \quad \hbox { for all}\ k \in {\mathbb {Z}}_+. \end{aligned}$$
(5.2)

Now consider the sequence \(y(k) := {\tilde{A}}(x(k))\), \(k \in {\mathbb {Z}}_+\). As x is decreasing, so is y. Next, we note that y converges in norm. Indeed, if the cone has the Levi property, this is clear. If the cone does not have the Levi property, then A, and thus \({\tilde{A}}\), is compact by assumption. So y has a convergent subsequence; since y is decreasing and the cone is normal, it follows that y itself converges.

Let \(y_* \in X^+\) denote the limit of the sequence y. Applying \({\tilde{A}}\) on both sides of (5.2) yields

$$\begin{aligned} y(k+1) \le {\tilde{A}}(y(k)) \quad \hbox { for all}\ k \in {\mathbb {Z}}_+. \end{aligned}$$

Taking the limit for \(k \rightarrow \infty \) and using continuity of A results in \(y_* \le {\tilde{A}}(y_*) = A(y_*) + w\). Since this can be written as \((\mathrm {id}- A)(y_*) \le w\), the MBI property of \(\mathrm {id}- A\) gives \(\Vert y_*\Vert _X \le \xi (\Vert w\Vert _X)\). As \(X^+\) is a normal cone, there is \(\delta >0\) such that for every \(\varepsilon >0\) there is k large enough for which

$$\begin{aligned} \Vert x(k+1)\Vert _X \le \delta \Vert {\tilde{A}}(x(k))\Vert _X \le \varepsilon + \delta \xi (\Vert w\Vert _X). \end{aligned}$$

This completes the proof. \(\square \)

6 Small-gain theorems

6.1 Small-gain theorems in semi-maximum formulation

In this subsection, we prove small-gain theorems for UGS and ISS, both in semi-maximum formulation. We start with UGS.

Theorem 6.1

(UGS small-gain theorem in semi-maximum formulation) Let I be an arbitrary nonempty index set, \((X_i,\Vert \cdot \Vert _{X_i})\), \(i\in I\), normed spaces and \(\Sigma _i = (X_i,\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}},{\bar{\phi }}_i)\) forward complete control systems. Assume that the interconnection \(\Sigma = (X,{\mathcal {U}},\phi )\) of the systems \(\Sigma _i\) is well-defined. Furthermore, let the following assumptions be satisfied:

  1. (i)

    Each system \(\Sigma _i\) is UGS in the sense of Definition 4.13 with \(\sigma _i \in {\mathcal {K}}\) and nonlinear gains \(\gamma _{ij},\gamma _i \in {\mathcal {K}}\cup \{0\}\).

  2. (ii)

    There exist \(\sigma _{\max } \in \mathcal {K_\infty }\) and \(\gamma _{\max } \in \mathcal {K_\infty }\) so that \(\sigma _i \le \sigma _{\max }\) and \(\gamma _i \le \gamma _{\max }\), pointwise for all \(i \in I\).

  3. (iii)

    Assumption 4.4 is satisfied for the operator \(\Gamma _{\otimes }\) defined via the gains \(\gamma _{ij}\) from (i) and \(\mathrm {id}- \Gamma _{\otimes }\) has the MBI property.

Then \(\Sigma \) is forward complete and UGS.

Proof

Fix \((t,x,u) \in D_{\phi }\) and observe that

$$\begin{aligned} \Vert \phi (t,x,u)\Vert _X = \sup _{i \in I} \Vert \phi _i(t,x,u)\Vert _{X_i} = \sup _{i \in I} \Vert {\bar{\phi }}_i(t,x_i,(\phi _{\ne i},u))\Vert _{X_i}. \end{aligned}$$

Abbreviating \({\bar{\phi }}_j(\cdot ) = {\bar{\phi }}_j(\cdot ,x_j,(\phi _{\ne j},u))\) and using assumption (i), we can estimate

$$\begin{aligned} \sup _{s \in [0,t]}\Vert {\bar{\phi }}_i(s,x_i,(\phi _{\ne i},u))\Vert _{X_i} \le \sigma _i(\Vert x_i\Vert _{X_i}) + \sup _{j \in I}\gamma _{ij}(\Vert {\bar{\phi }}_j\Vert _{[0,t]}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned}$$
(6.1)

From the inequalities (using continuity of \(s \mapsto \phi (s,x,u)\))

$$\begin{aligned} 0 \le \sup _{s\in [0,t]} \Vert {\bar{\phi }}_i(s,x_i,(\phi _{\ne i},u))\Vert _{X_i} \le \sup _{s\in [0,t]} \Vert \phi (s,x,u)\Vert _X < \infty \quad \hbox { for all}\ i \in I, \end{aligned}$$

it follows that

$$\begin{aligned} \vec {\phi }_{\max }(t) := \left( \sup _{s \in [0,t]}\Vert {\bar{\phi }}_i(s,x_i,(\phi _{\ne i},u))\Vert _{X_i} \right) _{i \in I} \in \ell _{\infty }(I)^+. \end{aligned}$$

From Assumption (ii), it follows that also the vectors \(\vec {\sigma }(x) := (\sigma _i(\Vert x_i\Vert _{X_i}))_{i\in I}\) and \(\vec {\gamma }(u) := ( \gamma _i(\Vert u\Vert _{{\mathcal {U}}}) )_{i \in I}\) are contained in \(\ell _{\infty }(I)^+\). Hence, we can write the inequalities (6.1) in vectorized form as

$$\begin{aligned} (\mathrm {id}- \Gamma _{\otimes })(\vec {\phi }_{\max }(t)) \le \vec {\sigma }(x) + \vec {\gamma }(u). \end{aligned}$$

By Assumption (iii), this yields for some \(\xi \in \mathcal {K_\infty }\), independent of xu:

$$\begin{aligned} \Vert \vec {\phi }_{\max }(t)\Vert _{\ell _{\infty }(I)} \le \xi ( \Vert \vec {\sigma }(x) + \vec {\gamma }(u) \Vert _{\ell _{\infty }(I)} ) \le \xi ( \Vert \vec {\sigma }(x)\Vert _{\ell _{\infty }(I)} + \Vert \vec {\gamma }(u)\Vert _{\ell _{\infty }(I)} ). \end{aligned}$$

Since \(\xi (a + b) \le \max \{\xi (2a),\xi (2b)\} \le \xi (2a) + \xi (2b)\) for all \(a,b\ge 0\), this implies

$$\begin{aligned} \Vert \vec {\phi }_{\max }(t)\Vert _{\ell _{\infty }(I)}\le & {} \xi (2\Vert \vec {\sigma }(x)\Vert _{\ell _{\infty }(I)}) + \xi (2\Vert \vec {\gamma }(u)\Vert _{\ell _{\infty }(I)})\\\le & {} \xi (2\sigma _{\max }(\Vert x\Vert _X)) + \xi (2\gamma _{\max }(\Vert u\Vert _{{\mathcal {U}}})), \end{aligned}$$

and we conclude that

$$\begin{aligned} \Vert \phi (t,x,u)\Vert _X \le \Vert \vec {\phi }_{\max }(t)\Vert _{\ell _{\infty }(I)} \le \xi (2\sigma _{\max }(\Vert x\Vert _X)) + \xi (2\gamma _{\max }(\Vert u\Vert _{{\mathcal {U}}})), \end{aligned}$$

which is a UGS estimate for \(\Sigma \), valid for all \((t,x,u)\in D_\phi \), with \(\sigma (r) := \xi (2\sigma _{\max }(r))\) and \(\gamma (r) := \xi (2\gamma _{\max }(r))\). Since \(\Sigma \) has the BIC property by assumption, it follows that \(\Sigma \) is forward complete and UGS. \(\square \)

Now we are in a position to state the ISS small-gain theorem.

Theorem 6.2

(Nonlinear ISS small-gain theorem in semi-maximum formulation) Let I be an arbitrary nonempty index set, \((X_i,\Vert \cdot \Vert _{X_i})\), \(i\in I\), normed spaces and \(\Sigma _i = (X_i,\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}},{\bar{\phi }}_i)\) forward complete control systems. Assume that the interconnection \(\Sigma = (X,{\mathcal {U}},\phi )\) of the systems \(\Sigma _i\) is well-defined. Furthermore, let the following assumptions be satisfied:

  1. (i)

    Each system \(\Sigma _i\) is ISS in the sense of Definition 4.3 with \(\beta _i \in {{\mathcal {K}}}{{\mathcal {L}}}\) and nonlinear gains \(\gamma _{ij},\gamma _i \in {\mathcal {K}}\cup \{0\}\).

  2. (ii)

    There are \(\beta _{\max } \in {{\mathcal {K}}}{{\mathcal {L}}}\) and \(\gamma _{\max } \in {\mathcal {K}}\) so that \(\beta _i \le \beta _{\max }\) and \(\gamma _i \le \gamma _{\max }\) pointwise for all \(i \in I\).

  3. (iii)

    Assumption 4.4 holds and the discrete-time system

    $$\begin{aligned} w(k+1) \le \Gamma _{\otimes }(w(k)) + v(k), \end{aligned}$$
    (6.2)

    with \(w(\cdot ),v(\cdot )\) taking values in \(\ell _{\infty }(I)^+\), has the MLIM property.

Then \(\Sigma \) is ISS.

Proof

We show that \(\Sigma \) is UGS and satisfies the bUAG property, which implies ISS by Lemma 3.7.

UGS. This follows from Theorem 6.1. Indeed, the assumptions (i) and (ii) of Theorem 6.1 are satisfied with \(\sigma _i(r) := \beta _i(r,0) \in {\mathcal {K}}\) and the gains \(\gamma _{ij},\gamma _i\) from the ISS estimates for \(\Sigma _i\), \(i\in I\). From Proposition 5.3 and Assumption (iii) of this theorem, it follows that Assumption (iii) of Theorem 6.1 is satisfied. Hence, \(\Sigma \) is forward complete and UGS.

bUAG. As \(\Sigma \) is the interconnection of the systems \(\Sigma _i\) and since \(\Sigma \) is forward complete, we have \(\phi _i(t,x,u) = {\bar{\phi }}_i(t,x_i,(\phi _{\ne i},u))\) for all \((t,x,u) \in {\mathbb {R}}_+ \times X \times {\mathcal {U}}\) and \(i \in I\), with the notation from Definition 4.2.

Pick any \(r > 0\), any \(u \in {\overline{B}}_{r,{\mathcal {U}}}\) and \(x \in {\overline{B}}_{r,X}\). As \(\Sigma \) is UGS, there are \(\sigma ^{\mathrm {UGS}},\gamma ^{\mathrm {UGS}} \in \mathcal {K_\infty }\) so that

$$\begin{aligned} \Vert \phi (t,x,u)\Vert _X \le \sigma ^{\mathrm {UGS}}(r) + \gamma ^{\mathrm {UGS}}(r) =: \mu (r) \quad \hbox { for all}\ t \ge 0. \end{aligned}$$

In view of the cocycle property, for all \(i \in I\) and \(t,\tau \ge 0\) we have

$$\begin{aligned} \phi _i(t + \tau ,x,u)&= {\bar{\phi }}_i(t+\tau ,x_i,(\phi _{\ne i},u)) \\&= {\bar{\phi }}_i(\tau ,{\bar{\phi }}_i(t,x_i,(\phi _{\ne i},u)),(\phi _{\ne i}(\cdot +t),u(\cdot +t))). \end{aligned}$$

Given \(\varepsilon >0\), choose \(\tau ^* = \tau ^*(\varepsilon ,r) \ge 0\) such that \(\beta _{\max }(\mu (r),\tau ^*) \le \varepsilon \). Then

$$\begin{aligned} \begin{aligned} x \in {\overline{B}}_{r,X}&\wedge u \in {\overline{B}}_{r,{\mathcal {U}}} \wedge \tau \ge \tau ^* \wedge t \ge 0 \\ \Rightarrow&\Vert \phi _i(t+\tau ,x,u)\Vert _{X_i} \le \beta _i(\Vert {\bar{\phi }}_i(t,x_i,(\phi _{\ne i},u))\Vert _{X_i},\tau ) \\&\qquad + \sup _{j \in I} \gamma _{ij}( \Vert \phi _j\Vert _{[t,t+\tau ]} ) + \gamma _i(\Vert u(\cdot +t)\Vert _{{\mathcal {U}}}) \\&\le \beta _{\max }(\Vert \phi (t,x,u)\Vert _X,\tau ^*) + \sup _{j \in I}\gamma _{ij}(\Vert \phi _j\Vert _{[t,\infty )}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}) \\&\le \varepsilon + \sup _{j \in I}\gamma _{ij}(\Vert \phi _j\Vert _{[t,\infty )}) + \gamma _i(\Vert u\Vert _{{\mathcal {U}}}). \end{aligned} \end{aligned}$$
(6.3)

Now pick any \(k \in {\mathbb {N}}\) and write

$$\begin{aligned} B(r,k) := {\overline{B}}_{r,X} \times \{ u \in {\mathcal {U}}: \Vert u\Vert _{{\mathcal {U}}} \in [2^{-k}r,2^{-k+1}r]\}. \end{aligned}$$

Then, taking the supremum in the above inequality over all \((x,u) \in B(r,k)\), we obtain for all \(i \in I\) and all \(t \ge 0\) that

$$\begin{aligned} \sup _{(x,u) \in B(r,k)}\Vert \phi _i(t+\tau ^*,x,u)\Vert _{X_i} \le \varepsilon + \sup _{j \in I} \gamma _{ij}\left( \sup _{(x,u) \in B(r,k)} \Vert \phi _j\Vert _{[t,\infty )} \right) + \gamma _i(2^{-k+1}r). \end{aligned}$$

This implies for all \(t \ge 0\) that

$$\begin{aligned}&\sup _{s \ge t + \tau ^*}\sup _{(x,u) \in B(r,k)} \Vert \phi _i(s,x,u)\Vert _{X_i} \\&\quad \le \varepsilon + \sup _{j \in I} \gamma _{ij}\left( \sup _{s \ge t} \sup _{(x,u) \in B(r,k)} \Vert \phi _j(s,x,u)\Vert _{X_j}\right) + \gamma _i(2^{-k+1}r). \end{aligned}$$

Now we define

$$\begin{aligned} w_i(t,r,k) := \sup _{s \ge t} \sup _{(x,u) \in B(r,k)} \Vert \phi _i(s,x,u)\Vert _{X_i} \end{aligned}$$

and note that \(w_i(t,r,k) \in [0,\mu (r)]\) for all \(i \in I\) and \(t \ge 0\). With this notation, we can rewrite the preceding inequality as

$$\begin{aligned} w_i(t + \tau ^*,r,k) \le \varepsilon + \sup _{j \in I} \gamma _{ij}(w_j(t,r,k)) + \gamma _i(2^{-k+1}r). \end{aligned}$$

Using vector notation \(\vec {w}(t,r,k) := (w_i(t,r,k))_{i\in I}\) and \(\vec {\gamma }(r) := (\gamma _i(r))_{i \in I}\), this can be written as

$$\begin{aligned} \vec {w}(t+\tau ^*,r,k) \le \Gamma _{\otimes }(\vec {w}(t,r,k)) + \varepsilon \mathbf{1} + \vec {\gamma }(2^{-k+1}r). \end{aligned}$$

Observe that \(\vec {w}(t,r,k) \in \ell _{\infty }(I)^+\), as the entries of the vector are uniformly bounded by \(\mu (r)\), and \(\vec {w}(t_2,r,k) \le \vec {w}(t_1,r,k)\) for \(t_2 \ge t_1\). Hence, \(\vec {w}(l) := \vec {w}(l\tau ^*,r,k)\), \(l \in {\mathbb {Z}}_+\), is a decreasing solution of (6.2) for the constant input \(v(\cdot ) \equiv \varepsilon \mathbf{1} + \vec {\gamma }(2^{-k+1}r)\). By assumption (iii) of the theorem, this implies the existence of a time \({\tilde{\tau }} = {\tilde{\tau }}(\varepsilon ,r,k)\) and a \(\mathcal {K_\infty }\)-function \(\xi \) such that

$$\begin{aligned} \Vert \vec {w}({\tilde{\tau }},r,k)\Vert _{\ell _{\infty }(I)}&\le \varepsilon + \xi (\Vert \varepsilon \mathbf{1} + \vec {\gamma }(2^{-k+1}r)\Vert _{\ell _{\infty }(I)}) \\&\le \varepsilon + \xi (\Vert \varepsilon \mathbf{1}\Vert _{\ell _{\infty }(I)} + \Vert \vec {\gamma }(2^{-k+1}r)\Vert _{\ell _{\infty }(I)}) \\&\le \varepsilon + \xi (\varepsilon + \gamma _{\max }(2^{-k+1}r)) \\&\le \varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(2^{-k+1}r)). \end{aligned}$$

By definition, this implies

$$\begin{aligned}&i \in I \wedge (x,u) \in B(r,k) \wedge t \ge {\tilde{\tau }}(\varepsilon ,r,k) \\&\quad \Rightarrow \Vert \phi _i(t,x,u)\Vert _{X_i} \le \varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(2^{-k+1}r)). \end{aligned}$$

Now define \(k_0 = k_0(\varepsilon ,r)\) as the minimal \(k \ge 1\) so that \(\xi (2\gamma _{\max }(2^{1-k}r)) \le \varepsilon \) and let

$$\begin{aligned} {\hat{\tau }}(\varepsilon ,r) := \max \{ {\tilde{\tau }}(\varepsilon ,r,k) : 1 \le k \le k_0(\varepsilon ,r) \}. \end{aligned}$$

Pick any \(0 \ne u \in {\overline{B}}_{r,{\mathcal {U}}}\). Then, there is \(k \in {\mathbb {N}}\) with \(\Vert u\Vert _{{\mathcal {U}}} \in (2^{-k}r,2^{-k+1}r]\). If \(k \le k_0\) (large input), then for \(t \ge {\hat{\tau }}(\varepsilon ,r)\) we have

$$\begin{aligned} \begin{aligned} \Vert \phi (t,x,u)\Vert _X&\le \varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(2^{-k+1}r)) \\&\le \varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(2\Vert u\Vert _{{\mathcal {U}}})). \end{aligned} \end{aligned}$$
(6.4)

It remains to consider the case when \(k > k_0\) (small input). For any \(q \in [0,r]\), one can take the supremum in (6.3) over \(x \in {\overline{B}}_{r,X}\) and \(u \in {\overline{B}}_{q,{\mathcal {U}}}\) to obtain

$$\begin{aligned}&\sup _{(x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{q,{\mathcal {U}}}}\Vert \phi _i(t+\tau ,x,u)\Vert _{X_i} \\&\qquad \le \varepsilon + \sup _{j \in I}\gamma _{ij}\left( \sup _{(x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{q,{\mathcal {U}}}}\Vert \phi _j\Vert _{[t,\infty )}\right) + \gamma _i(q). \end{aligned}$$

With \(z_i(t,r,q) := \sup _{s \ge t}\sup _{(x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{q,{\mathcal {U}}}}\Vert \phi _i(s,x,u)\Vert _{X_i}\), analogous steps as above lead to the following: for every \(\varepsilon >0\), \(r>0\) and \(q \in [0,r]\) there is a time \({\bar{\tau }} = {\bar{\tau }}(\varepsilon ,r,q)\) such that

$$\begin{aligned} (x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{q,{\mathcal {U}}} \wedge t \ge {\bar{\tau }} \quad \Rightarrow \quad \Vert \phi (t,x,u)\Vert _X \le \varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(q)). \end{aligned}$$

In particular, for \(q_0 := 2^{-k_0(\varepsilon ,r)+1}r\), we have

$$\begin{aligned} (x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{q_0,{\mathcal {U}}} \wedge t \ge {\bar{\tau }} \quad \Rightarrow \quad \Vert \phi (t,x,u)\Vert _X \le 2\varepsilon + \xi (2\varepsilon ), \end{aligned}$$
(6.5)

since \(\xi (2\gamma _{\max }(q_0)) = \xi (2\gamma _{\max }(2^{-k_0(\varepsilon ,r)+1}r)) \le \varepsilon \) by definition of \(k_0\). Define \(\tau (\varepsilon ,r) := \max \{{\hat{\tau }}(\varepsilon ,r),{\bar{\tau }}(\varepsilon ,r,q_0)\}\). Combining (6.4) and (6.5), we obtain

$$\begin{aligned}&(x,u) \in {\overline{B}}_{r,X} \times {\overline{B}}_{r,{\mathcal {U}}} \wedge t \ge \tau (\varepsilon ,r) \\&\qquad \Rightarrow \quad \Vert \phi (t,x,u)\Vert _X \le 2\varepsilon + \xi (2\varepsilon ) + \xi (2\gamma _{\max }(2\Vert u\Vert _{{\mathcal {U}}})). \end{aligned}$$

As \(r \mapsto \xi (2\gamma _{\max }(2r))\) is a \(\mathcal {K_\infty }\)-function, we have proved that \(\Sigma \) has the bUAG property, which completes the proof. \(\square \)

For finite networks, Theorem 6.2 was shown in [37]. However, the proof in the infinite-dimensional case contains essential novelties, which are due to the fact that the trajectories of an infinite number of subsystems do not necessarily have a uniform speed of convergence. This also led to a strengthening of the employed small-gain condition.

In the special case when all interconnection gains \(\gamma _{ij}\) are linear, the small-gain condition in our theorem can be formulated more directly in terms of the gains, as the following corollary shows.

Corollary 6.3

(Linear ISS small-gain theorem in semi-maximum formulation) Given an interconnection \((\Sigma ,{\mathcal {U}},\phi )\) of systems \(\Sigma _i\) as in Theorem 6.2, in addition to assumptions (i) and (ii) of that theorem, assume that all gains \(\gamma _{ij}\) are linear functions (and hence can be identified with nonnegative real numbers), that \(\Gamma _{\otimes }\) is well-defined and that the following condition holds:

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( \sup _{j_1,\ldots ,j_{n+1}\in I} \gamma _{j_1j_2} \cdots \gamma _{j_{n}j_{n+1}}\right) ^{1/n} < 1. \end{aligned}$$
(6.6)

Then \(\Sigma \) is ISS.

Proof

We only need to show that Assumption (iii) of Theorem 6.2 is implied by (6.6). The linearity of the gains \(\gamma _{ij}\) implies that the operator \(\Gamma _{\otimes }\) is homogeneous of degree one and subadditive, see Remark 4.6. Then Proposition A.1 and Remark B.2 together show that (6.6) implies that the system

$$\begin{aligned} w(k+1) \le \Gamma _{\otimes }(w(k)) + v(k) \end{aligned}$$

is eISS (according to Definition 7.15), which easily implies the MLIM property for this system. \(\square \)
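For a finite truncation of the index set, the quantity under the limit in (6.6) coincides with \(\Vert \Gamma _{\otimes }^n(\mathbf{1})\Vert _{\ell _{\infty }}\), so the condition can be checked numerically by iterating the gain operator on the vector \(\mathbf{1}\) (cf. Proposition 7.17(iii) below). The following minimal Python sketch does this for an illustrative nearest-neighbour gain pattern on a periodic truncation; the gain values and the sizes are assumptions made only for the illustration.

```python
import numpy as np

def gamma_otimes(G, s):
    """Max-type gain operator on a finite truncation: (Γ_⊗ s)_i = max_j G[i, j] * s[j]."""
    return (G * s[None, :]).max(axis=1)

def gain_growth_rate(G, n_max=200):
    """Approximate lim_n (sup over length-n gain products)^(1/n) = lim_n ||Γ_⊗^n(1)||^(1/n)."""
    s = np.ones(G.shape[0])
    for _ in range(n_max):
        s = gamma_otimes(G, s)
    return s.max() ** (1.0 / n_max)

# Illustrative truncation: nearest-neighbour linear gains a, b on a ring of N nodes.
N, a, b = 50, 0.4, 0.5
G = np.zeros((N, N))
for i in range(N):
    G[i, (i - 1) % N] = a
    G[i, (i + 1) % N] = b

print(gain_growth_rate(G))  # ≈ max{a, b} < 1, so condition (6.6) holds here
```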

6.2 Small-gain theorems in summation formulation

Now we formulate the small-gain theorems for UGS and ISS in summation formulation.

Theorem 6.4

(UGS small-gain theorem in summation formulation) Let I be a countable index set, \((X_i,\Vert \cdot \Vert _{X_i})\), \(i\in I\), be normed spaces and \(\Sigma _i = (X_i,\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}},{\bar{\phi }}_i)\), \(i\in I\) be forward complete control systems. Assume that the interconnection \(\Sigma = (X,{\mathcal {U}},\phi )\) of the systems \(\Sigma _i\) is well-defined. Furthermore, let the following assumptions be satisfied:

  1. (i)

    Each system \(\Sigma _i\) is UGS in the sense of Definition 4.14 (summation formulation) with \(\sigma _i \in {\mathcal {K}}\) and nonlinear gains \(\gamma _{ij},\gamma _i \in {\mathcal {K}}\cup \{0\}\).

  2. (ii)

    There exist \(\sigma _{\max } \in \mathcal {K_\infty }\) and \(\gamma _{\max } \in \mathcal {K_\infty }\) so that \(\sigma _i \le \sigma _{\max }\) and \(\gamma _i \le \gamma _{\max }\), pointwise for all \(i \in I\).

  3. (iii)

    Assumption 4.10 is satisfied for the operator \(\Gamma _{\boxplus }\) defined via the gains \(\gamma _{ij}\) from (i) and \(\mathrm {id}- \Gamma _{\boxplus }\) has the MBI property.

Then \(\Sigma \) is forward complete and UGS.

Proof

The proof is exactly the same as for Theorem 6.1, with the operator \(\Gamma _{\boxplus }\) in place of \(\Gamma _{\otimes }\). \(\square \)

Theorem 6.5

(Nonlinear ISS small-gain theorem in summation formulation) Let I be a countable index set, \((X_i,\Vert \cdot \Vert _{X_i})\), \(i\in I\) be normed spaces and \(\Sigma _i = (X_i,\mathrm {PC}_b({\mathbb {R}}_+,X_{\ne i}) \times {\mathcal {U}},{\bar{\phi }}_i)\), \(i\in I\) be forward complete control systems. Assume that the interconnection \(\Sigma = (X,{\mathcal {U}},\phi )\) of the systems \(\Sigma _i\) is well-defined. Furthermore, let the following assumptions be satisfied:

  1. (i)

    Each system \(\Sigma _i\) is ISS in the sense of Definition 4.8 with \(\beta _i \in {{\mathcal {K}}}{{\mathcal {L}}}\) and nonlinear gains \(\gamma _{ij},\gamma _i \in {\mathcal {K}}\cup \{0\}\).

  2. (ii)

    There are \(\beta _{\max } \in {{\mathcal {K}}}{{\mathcal {L}}}\) and \(\gamma _{\max } \in {\mathcal {K}}\) so that \(\beta _i \le \beta _{\max }\) and \(\gamma _i \le \gamma _{\max }\), pointwise for all \(i \in I\).

  3. (iii)

Assumption 4.10 holds and the discrete-time system

    $$\begin{aligned} w(k+1) \le \Gamma _{\boxplus }(w(k)) + v(k), \end{aligned}$$
    (6.7)

    with \(w(\cdot ),v(\cdot )\) taking values in \(\ell _{\infty }(I)^+\), has the MLIM property.

Then \(\Sigma \) is ISS.

Proof

The proof is almost completely the same as for Theorem 6.2. The only difference is that instead of interchanging the order of the two suprema \(\sup _{s \ge t}\) and \(\sup _{j \in I}\), we now have to use the estimate \(\sup _{s \ge t} \sum _{j \in I} \ldots \le \sum _{j \in I} \sup _{s \ge t} \ldots \), which is trivially satisfied. \(\square \)

Again, we formulate a corollary for the case when all gains \(\gamma _{ij}\) are linear.

Corollary 6.6

(Linear ISS small-gain theorem in summation formulation) Given an interconnection \((\Sigma ,{\mathcal {U}},\phi )\) of systems \(\Sigma _i\) as in Theorem 6.5, in addition to assumptions (i) and (ii) of that theorem, assume that all gains \(\gamma _{ij}\) are linear functions (and hence can be identified with nonnegative real numbers) and that the linear operator \(\Gamma _{\boxplus }\) is well-defined (thus bounded) and satisfies the spectral radius condition \(r(\Gamma _{\boxplus }) < 1\). Then, \(\Sigma \) is ISS.

Proof

By Proposition 7.16, \(r(\Gamma _{\boxplus }) < 1\) is equivalent to the MLIM property of the system (6.7), hence to Assumption (iii) of Theorem 6.5. \(\square \)
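When the index set is finite, the operator \(\Gamma _{\boxplus }\) induced by linear gains is simply a nonnegative matrix, and the condition \(r(\Gamma _{\boxplus }) < 1\) of Corollary 6.6 can be verified with a standard eigenvalue routine. A minimal Python sketch, in which the sparse random gain matrix is a purely illustrative assumption:

```python
import numpy as np

# Finite truncation: with linear gains, Γ_⊞ acts as the nonnegative matrix G,
# (Γ_⊞ s)_i = Σ_j G[i, j] s[j], and Corollary 6.6 reduces to r(G) < 1.
rng = np.random.default_rng(0)
N = 40
G = rng.uniform(0.0, 0.2, size=(N, N)) * (rng.random((N, N)) < 0.1)  # sparse linear gains

r = max(abs(np.linalg.eigvals(G)))
print(r, r < 1)  # spectral radius and whether the small-gain condition holds
```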

6.3 Example: a linear spatially invariant system

Let us analyze the stability of a spatially invariant infinite network

$$\begin{aligned} {\dot{x}}_i = ax_{i-1} - x_i + b x_{i+1} + u,\quad i\in {\mathbb {Z}}, \end{aligned}$$
(6.8)

where \(a,b>0\) and each \(\Sigma _i\) is a scalar system with the state \(x_i \in {\mathbb {R}}\), internal inputs \(x_{i-1}\), \(x_{i+1}\) and an external input u, belonging to the input space \({\mathcal {U}}:=L_\infty ({\mathbb {R}}_+,{\mathbb {R}})\).

Following the general approach in Sect. 4, we define the state space for the interconnection of \((\Sigma _i)_{i\in {\mathbb {Z}}}\) as \(X:=\ell _\infty ({\mathbb {Z}})\). Similarly as for finite-dimensional ODEs, it is possible to introduce the concept of a mild (Carathéodory) solution for equation (6.8), for which we refer, e.g., to [34]. As (6.8) is linear, it is easy to see that for each initial condition \(x_0 \in X\) and for each input \(u \in {\mathcal {U}}\) the corresponding mild solution is unique and exists on \({\mathbb {R}}_+\). We denote it by \(\phi (\cdot ,x_0,u)\). One can easily check that the triple \(\Sigma :=(X,{\mathcal {U}},\phi )\) defines a well-posed and forward complete interconnection in the sense of this paper.

Having a well-posed control system \(\Sigma \), we proceed to its stability analysis.

Proposition 6.7

The coupled system (6.8) is ISS if and only if \(a+b<1\).

Proof

“\(\Rightarrow \)”: For any \(a,b>0\) and any \(x^* \ne 0\), the function \(y: t \mapsto (\mathrm {e}^{(a+b-1)t}x^*)_{i\in {\mathbb {Z}}}\) is a solution of (6.8) subject to the initial condition \((x^*)_{i\in {\mathbb {Z}}}\) and input \(u\equiv 0\). This shows that \(a+b \ge 1\) implies that the system (6.8) is not ISS.

“\(\Leftarrow \)”: By variation of constants, we see that for any \(i\in {\mathbb {Z}}\), treating \(x_{i-1}, x_{i+1}\) as external inputs from \(L_\infty ({\mathbb {R}}_+,{\mathbb {R}})\), we have the following ISS estimate for the \(x_i\)-subsystem:

$$\begin{aligned} |x_i(t)|&= \Big |\mathrm {e}^{-t}x_i(0) + \int _0^t \mathrm {e}^{s-t}[a x_{i-1}(s) + b x_{i+1}(s) + u(s)] \mathrm {d}s\Big |\\&\le \mathrm {e}^{-t}|x_i(0)| + a \Vert x_{i-1}\Vert _\infty + b \Vert x_{i+1}\Vert _\infty + \Vert u\Vert _\infty , \end{aligned}$$

for any \(t\ge 0\), \(x_i(0)\in {\mathbb {R}}\) and all \(x_{i-1}, x_{i+1},u \in L_\infty ({\mathbb {R}}_+,{\mathbb {R}})\).

This shows that the \(x_i\)-subsystem is ISS in summation formulation and the corresponding gain operator is a linear operator \(\Gamma :\ell _\infty ^+({\mathbb {Z}}) \rightarrow \ell _\infty ^+({\mathbb {Z}})\), acting on \(s=(s_i)_{i\in {\mathbb {Z}}}\) as \(\Gamma (s) = (as_{i-1} + b s_{i+1})_{i\in {\mathbb {Z}}}\). It is easy to see that

$$\begin{aligned} \Vert \Gamma \Vert&:= \sup _{\Vert s\Vert _{\ell _\infty ({\mathbb {Z}})}=1}\Vert \Gamma s\Vert _{\ell _\infty ({\mathbb {Z}})} = \Vert \Gamma \mathbf{1}\Vert _{\ell _\infty ({\mathbb {Z}})} = a+b <1, \end{aligned}$$

and thus \(r(\Gamma )<1\), and the network is ISS by Corollary 6.6. \(\square \)
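The role of the threshold \(a+b=1\) can also be observed numerically. The following minimal Python sketch integrates a finite periodic truncation of (6.8) by the forward Euler method; the truncation size, the step size and the parameter values are illustrative assumptions.

```python
import numpy as np

# Forward-Euler simulation of a periodic truncation of the linear network (6.8).
N, a, b, dt, T = 200, 0.4, 0.5, 1e-3, 30.0    # illustrative values with a + b < 1
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=N)            # bounded initial state
u = 0.2                                       # constant external input

for _ in range(int(T / dt)):
    x_left, x_right = np.roll(x, 1), np.roll(x, -1)   # x_{i-1} and x_{i+1}
    x = x + dt * (a * x_left - x + b * x_right + u)

# The small-gain analysis suggests the asymptotic bound |u| / (1 - a - b).
print(np.abs(x).max(), u / (1 - a - b))
```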

6.4 Example: a nonlinear spatially invariant system

Consider an infinite interconnection (in the sense of the previous sections)

$$\begin{aligned} {\dot{x}}_i = - x_i^3 + \max \{ax_{i-1}^3,b x_{i+1}^3,u \},\quad i\in {\mathbb {Z}}, \end{aligned}$$
(6.9)

where \(a,b>0\). As in Sect. 6.3, each \(\Sigma _i\) is a scalar system with the state \(x_i \in {\mathbb {R}}\), internal inputs \(x_{i-1}\), \(x_{i+1}\) and an external input u, belonging to the input space \({\mathcal {U}}:=L_\infty ({\mathbb {R}}_+,{\mathbb {R}})\). Let the state space for the interconnection \(\Sigma \) be \(X:=\ell _\infty ({\mathbb {Z}})\).

First, we analyze the well-posedness of the interconnection (6.9). Define for \(x = (x_i)_{i\in {\mathbb {Z}}} \in X\) and \(v \in {\mathbb {R}}\)

$$\begin{aligned} f_i(x,v):= - x_i^3 + \max \{ax_{i-1}^3,b x_{i+1}^3,v \},\quad i \in {\mathbb {Z}}, \end{aligned}$$

as well as

$$\begin{aligned} f(x,v):=(f_i(x,v))_{i\in {\mathbb {Z}}} \in {\mathbb {R}}^{{\mathbb {Z}}}. \end{aligned}$$

It holds that

$$\begin{aligned} |f_i(x,v)| \le \Vert x\Vert _X^3 + {\max \{a,b\}}\max \{\Vert x\Vert ^3_X,|v|\}, \end{aligned}$$

and thus \(f(x,v) \in X\) with \(\Vert f(x,v)\Vert _X \le \Vert x\Vert _X^3 + {\max \{a,b\}} \max \{\Vert x\Vert ^3_X,|v|\}\).

Furthermore, f is clearly continuous in the second argument. Let us show Lipschitz continuity of f on bounded balls with respect to the first argument. For any \(x = (x_i)_{i\in {\mathbb {Z}}} \in X\), \(y = (y_i)_{i\in {\mathbb {Z}}} \in X\) and any \(v \in {\mathbb {R}}\) we have

$$\begin{aligned} \Vert f(x,v)-&f(y,v)\Vert _X = \sup _{i\in {\mathbb {Z}}}|f_i(x,v)-f_i(y,v)|\\&= \sup _{i\in {\mathbb {Z}}}\big |- x_i^3 + \max \{ax_{i-1}^3,b x_{i+1}^3,v \} + y_i^3 - \max \{ay_{i-1}^3,b y_{i+1}^3,v \}\big |\\&\le \sup _{i\in {\mathbb {Z}}}\big | x_i^3 - y_i^3\big | + \sup _{i\in {\mathbb {Z}}}\big |\max \{ax_{i-1}^3,b x_{i+1}^3,v \} - \max \{ay_{i-1}^3,b y_{i+1}^3,v \}\big |. \end{aligned}$$

By Birkhoff’s inequality \(|\max \{a_1,a_2,a_3\} - \max \{b_1,b_2,b_3\}|\le \sum _{i=1}^3|a_i-b_i|\), which holds for all real \(a_i,b_i\), we obtain

$$\begin{aligned}&\Vert f(x,v) -f(y,v)\Vert _X \le \sup _{i\in {\mathbb {Z}}}\big | x_i^3 - y_i^3\big | + a\sup _{i\in {\mathbb {Z}}}\big |x_{i-1}^3 -y_{i-1}^3\big | + b\sup _{i\in {\mathbb {Z}}}\big |x_{i+1}^3 -y_{i+1}^3\big |\\&\quad = (1+a+b)\sup _{i\in {\mathbb {Z}}}\big | x_i^3 - y_i^3\big | \le (1+a+b) \sup _{i\in {\mathbb {Z}}}\big | x_i - y_i\big | \sup _{i\in {\mathbb {Z}}}\big | x_i^2 +x_iy_i + y_i^2\big | \\&\quad \le (1+a+b)\Vert x-y\Vert _X \left( \Vert x\Vert ^2_X + \Vert x\Vert _X\Vert y\Vert _X + \Vert y\Vert ^2_X\right) , \end{aligned}$$

which shows Lipschitz continuity of f with respect to the first argument on the bounded balls in X, uniformly with respect to the second argument.

According to [3, Thm. 2.4], this ensures that the Carathéodory solutions of (6.9) exist locally and are unique for any fixed initial condition \(x_0\in X\) and external input \(u\in {\mathcal {U}}\). We denote the corresponding maximal solution by \(\phi (\cdot ,x_0,u)\). One can easily check that the triple \(\Sigma :=(X,{\mathcal {U}},\phi )\) defines a well-posed interconnection in the sense of this paper, and furthermore, \(\Sigma \) has the BIC property (cf. [9, Thm. 4.3.4]).

We proceed to the stability analysis:

Proposition 6.8

The coupled system (6.9) is ISS if and only if \(\max \{a,b\}<1\).

Proof

“\(\Rightarrow \)”: For any \(a,b>0\) consider the scalar equation

$$\begin{aligned} {\dot{z}} = - (1-\max \{a,b\})z^3, \end{aligned}$$

subject to an initial condition \(z(0)=x^* > 0\). The function \(t \mapsto (z(t))_{i\in {\mathbb {Z}}}\) is a solution of (6.9) subject to the initial condition \((x^*)_{i\in {\mathbb {Z}}}\) and input \(u\equiv 0\), since \(\max \{az^3,bz^3,0\} = \max \{a,b\}z^3\) for \(z \ge 0\). This shows that for \(\max \{a,b\} \ge 1\) the system (6.9) is not ISS.

“\(\Leftarrow \)”: Consider \(x_{i-1}\), \(x_{i+1}\) and u as inputs to the \(x_i\)-subsystem of (6.9) and define \(q:=\max \{ax_{i-1}^3,b x_{i+1}^3,u \}\). The derivative of \(|x_i(\cdot )|\) along the trajectory satisfies for almost all t the following inequality:

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}|x_i(t)|\le -|x_i(t)|^3 + q(t) \le -|x_i(t)|^3 + \Vert q\Vert _\infty . \end{aligned}$$

For any \(\varepsilon >0\), if \(\Vert q\Vert _\infty \le \frac{1}{1+\varepsilon }|x_i(t)|^3\), we obtain

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}|x_i(t)|\le -\frac{\varepsilon }{1+\varepsilon }|x_i(t)|^3. \end{aligned}$$

Arguing as in the proof of direct Lyapunov theorems (\(x_i \mapsto |x_i|\) is an ISS Lyapunov function for the \(x_i\)-subsystem), see, e.g., [46, Lem. 2.14], we obtain that there is a certain \(\beta \in {{\mathcal {K}}}{{\mathcal {L}}}\) such that for all \(t\ge 0\) it holds that

$$\begin{aligned} |x_i(t)|&\le \beta (|x_i(0)|,t) + \left( (1+\varepsilon )\Vert q\Vert _\infty \right) ^{1/3}\\&= \beta (|x_i(0)|,t) + \max \{a_1 \Vert x_{i-1}\Vert _\infty ,b_1 \Vert x_{i+1}\Vert _\infty ,(1+\varepsilon )^{1/3}\Vert u\Vert _\infty ^{1/3} \}\\&\le \beta (|x_i(0)|,t) + \max \{a_1 \Vert x_{i-1}\Vert _\infty ,b_1 \Vert x_{i+1}\Vert _\infty \} +(1+\varepsilon )^{1/3}\Vert u\Vert _\infty ^{1/3}, \end{aligned}$$

where \(a_1=(1+\varepsilon )^{1/3}a^{1/3}\), \(b_1=(1+\varepsilon )^{1/3}b^{1/3}\).

This shows that the \(x_i\)-subsystem is ISS in semi-maximum formulation, with the corresponding gain operator \(\Gamma :\ell _\infty ^+({\mathbb {Z}}) \rightarrow \ell _\infty ^+({\mathbb {Z}})\), which is homogeneous of degree one and given for all \(s=(s_i)_{i\in {\mathbb {Z}}}\) by \(\Gamma (s) = (\max \{a_1 s_{i-1}, b_1 s_{i+1}\})_{i\in {\mathbb {Z}}}\).

The previous computations are valid for all \(\varepsilon >0\). Now pick \(\varepsilon >0\) such that \(a_1<1\) and \(b_1<1\), which is possible as \(a \in (0,1)\) and \(b\in (0,1)\). The ISS of the network follows by Corollary 6.3. \(\square \)
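As in Sect. 6.3, the behaviour of (6.9) can be illustrated on a finite truncation. The following minimal Python sketch uses the forward Euler method with illustrative parameters satisfying \(\max \{a,b\}<1\); for a constant input, the cube-root dependence on u established in the proof is visible in the steady state.

```python
import numpy as np

# Forward-Euler simulation of a periodic truncation of the nonlinear network (6.9).
N, a, b, dt, T = 200, 0.4, 0.5, 1e-3, 60.0    # illustrative values with max{a, b} < 1
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=N)
u = 0.2                                       # constant external input

for _ in range(int(T / dt)):
    xl3, xr3 = np.roll(x, 1) ** 3, np.roll(x, -1) ** 3
    x = x + dt * (-x ** 3 + np.maximum.reduce([a * xl3, b * xr3, np.full(N, u)]))

# For this constant input, the spatially constant equilibrium is u**(1/3),
# reflecting the cube-root gains appearing in the ISS estimate above.
print(np.abs(x).max(), u ** (1.0 / 3.0))
```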

7 Small-gain conditions

Key assumptions in the ISS and UGS small-gain theorems are the monotone limit property and monotone bounded invertibility property, respectively. In this section, we thoroughly investigate these properties. More precisely, in Sect. 7.1, we characterize the MBI property in terms of the uniform small-gain condition; in Sect. 7.2, we relate the uniform small-gain condition to several types of non-uniform small-gain conditions which have already been exploited in the small-gain analysis of finite and infinite networks. In Sect. 7.3, we derive new relationships between small-gain conditions in the finite-dimensional case. Finally, in Sect. 7.4, we provide efficient criteria for the MLIM and the MBI property in case of linear operators and operators of the form \(\Gamma _\otimes \) induced by linear gains.

7.1 A uniform small-gain condition and the MBI property

As we have seen in Sect. 6, the monotone bounded invertibility property is crucial for the small-gain analysis of finite and infinite networks. The next proposition yields small-gain type criteria for the MBI property. Although in the context of the trajectory-based small-gain theorems derived in this paper we are primarily interested in the case \((X,X^+) = (\ell _\infty (I),\ell ^+_\infty (I))\), we prove the results in a more general setting, which besides its mathematical appeal also has important applications to Lyapunov-based small-gain theorems for infinite networks, where other choices of X are useful, see, e.g., [34], where \(X=\ell _p\) for finite \(p\ge 1\).

Proposition 7.1

Let \((X,X^+)\) be an ordered Banach space with a generating cone \(X^+\). For every nonlinear operator \(A: X^+ \rightarrow X^+\), the following conditions are equivalent:

  1. (i)

    \(\mathrm {id}- A\) satisfies the MBI property.

  2. (ii)

    The uniform small-gain condition holds: There exists \(\eta \in \mathcal {K_\infty }\) such that

    $$\begin{aligned} \mathrm {dist}(A(x) - x,X^+) \ge \eta (\Vert x\Vert _X) \quad \hbox { for all}\ x \in X^+. \end{aligned}$$
    (7.1)

Proof

(i) \(\Rightarrow \) (ii). Fix \(x \in X^+\) and write \(a := (A - \mathrm {id})(x)\). Let \(\varepsilon > 0\). We choose \(z \in X^+\) such that \(\Vert a-z\Vert _X \le \mathrm {dist}(a,X^+) + \varepsilon \) and we set \(y := a-z\). If the constant \(M > 0\) is chosen as in (2.1), we can decompose y as \(y = u-v\), where \(u,v \in X^+\) and \(\Vert u\Vert _X,\Vert v\Vert _X \le M \Vert y\Vert _X \le M\mathrm {dist}(a,X^+) + M\varepsilon \). Then, we have

$$\begin{aligned} (\mathrm {id}- A)(x) = -a = -y-z = v - (u+z) \le v, \end{aligned}$$

so it follows from the MBI property of \(\mathrm {id}- A\) that

$$\begin{aligned} \Vert x\Vert _X \le \xi (\Vert v\Vert _X) \le \xi \left( M \mathrm {dist}(a,X^+)+M\varepsilon \right) . \end{aligned}$$

Consequently,

$$\begin{aligned} \mathrm {dist}(a,X^+) \ge \frac{1}{M} \xi ^{-1}(\Vert x\Vert _X) - \varepsilon . \end{aligned}$$

Since \(\varepsilon \) was arbitrary, this implies (ii) with \(\eta := \frac{1}{M}\xi ^{-1}\).

(ii) \(\Rightarrow \) (i). Let \(v,w \in X^+\) and \((\mathrm {id}-A)(v) \le w\). The vector \(z := w + (A-\mathrm {id})(v)\) is positive, so from (ii) it follows that

$$\begin{aligned} \eta (\Vert v\Vert _X) \le \mathrm {dist}\left( (A-\mathrm {id})(v), X^+ \right) \le \Vert (A-\mathrm {id})(v) - z\Vert _X = \Vert -w\Vert _X = \Vert w\Vert _X. \end{aligned}$$

Hence, \(\Vert v\Vert _X \le \eta ^{-1}(\Vert w\Vert _X)\). \(\square \)

Remark 7.2

The uniform small-gain condition in Proposition 7.1(ii) is a uniform version of the well-known small-gain condition, sometimes also called no-joint-increase condition:

$$\begin{aligned} A(x) \not \ge x \quad \hbox { for all}\ x \in X^+ {{\setminus }} \{0\}. \end{aligned}$$

Indeed, \(A(x) \not \ge x\) is equivalent to \(A(x) - x \not \ge 0\), which in turn is equivalent to \(\mathrm {dist}(A(x) - x, X^+) > 0\).

Remark 7.3

It is important to point out that the distance to the positive cone which occurs in the uniform small-gain condition in Proposition 7.1 can be explicitly computed on many concrete spaces. Indeed, many important real-valued sequence or function spaces such as \(X = \ell _p\) or \(X = L_p(\Omega ,\mu )\) (for \(p \in [1,\infty ]\) and a measure space \((\Omega ,\mu )\)) are not only ordered Banach spaces but so-called Banach lattices.

An ordered Banach space \((X,X^+)\) is called a Banach lattice if, for all \(x \in X\), the set \(\{-x,x\}\) has a smallest upper bound in X, which is usually called the modulus of x and denoted by |x|, and if \(\Vert x\Vert _X \le \Vert y\Vert _X\) whenever \(|x| \le |y|\). In concrete sequence and function spaces, the modulus of a function is just the pointwise (respectively, almost everywhere) modulus.

Now, assume that \((X,X^+)\) is a Banach lattice and let \(x \in X\). Then the vectors \(x^+ := \frac{|x|+x}{2} \ge 0\) and \(x^- := \frac{|x|-x}{2} \ge 0\) are called the positive and negative part of x, respectively; clearly, they satisfy \(x^+ - x^- = x\) and \(x^+ + x^- = |x|\). If X is a concrete sequence or function space, then \(x^-\) is simply 0 at all points where x is positive, and equal to \(-x\) at all points where x is negative.

In a Banach lattice \((X,X^+)\), we have the formula

$$\begin{aligned} \mathrm {dist}(x,X^+) = \Vert x^-\Vert _X \end{aligned}$$

for each \(x \in X\), as can easily be verified.
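In particular, on (truncations of) \(\ell _\infty (I)\) the distance appearing in the uniform small-gain condition is simply the largest negative excursion of the vector. A minimal Python sketch with an illustrative vector:

```python
import numpy as np

# dist(x, X^+) = ||x^-|| in a Banach lattice: for an ℓ_∞-type vector, this is
# the sup-norm of the negative part.
x = np.array([0.3, -0.7, 1.2, -0.1])          # illustrative vector
x_neg = np.maximum(-x, 0.0)                   # negative part x^-
dist_to_cone = x_neg.max()                    # = ||x^-||_∞

# Cross-check: clipping the negative entries to zero gives one nearest point
# of the positive cone in the sup-norm.
print(dist_to_cone, np.abs(x - np.maximum(x, 0.0)).max())   # both equal 0.7
```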

If the cone of the ordered Banach space \((X,X^+)\) has nonempty interior, the uniform small-gain condition from Proposition 7.1 can also be expressed by a condition that involves a fixed interior point of \(X^+\).

Proposition 7.4

Let \((X,X^+)\) be an ordered Banach space, assume that the cone \(X^+\) has nonempty interior and let z be an interior point of \(X^+\). For every nonlinear operator \(A: X^+ \rightarrow X^+\), the following conditions are equivalent:

  1. (i)

    There is \(\eta \in \mathcal {K_\infty }\) such that

    $$\begin{aligned} A(x) \not \ge x - \eta (\Vert x\Vert _X)z \quad \hbox { for all}\ x \in X^+ {{\setminus }} \{0\}. \end{aligned}$$
    (7.2)
  2. (ii)

    The uniform small-gain condition from Proposition 7.1(ii) holds.

Proof

(i) \(\Rightarrow \) (ii). Let (i) hold with some \(\eta \in \mathcal {K_\infty }\). By [20, Prop. 2.11], we can find a number \(c>0\) such that for every \(y \in X\) we have

$$\begin{aligned} \Vert y\Vert _X \le c \quad \Rightarrow \quad y \ge -z. \end{aligned}$$
(7.3)

Assume toward a contradiction that (ii) does not hold. Then, (7.1) fails, in particular, for the function \(c\eta \). Thus, we can infer that there is \(x \in X^+ {\setminus } \{0\}\) so that

$$\begin{aligned} \mathrm {dist}\left( (A-\mathrm {id})(x), X^+ \right) < c\eta (\Vert x\Vert _X). \end{aligned}$$

Hence, there exists \(y \in X^+\) such that

$$\begin{aligned} \left\| (A-\mathrm {id})(x) - y \right\| _X \le c\eta (\Vert x\Vert _X). \end{aligned}$$

Consequently, the vector \(\frac{(A-\mathrm {id})(x) - y}{\eta (\Vert x\Vert _X)}\) has norm at most c, so it follows from (7.3) that \((A-\mathrm {id})(x) - y \ge - \eta (\Vert x\Vert _X) z\). Thus,

$$\begin{aligned} (A-\mathrm {id})(x) \ge -\eta (\Vert x\Vert _X)z + y \ge -\eta (\Vert x\Vert _X)z, \end{aligned}$$

which shows that (7.2) fails for the function \(\eta \), a contradiction.

(ii) \(\Rightarrow \) (i). Let (ii) hold with a certain \(\eta \in \mathcal {K_\infty }\). We show that (7.2) holds for the function \(\frac{\eta }{2\Vert z\Vert _X}\) substituted for \(\eta \). Assume toward a contradiction that (7.2) fails for the function \(\frac{\eta }{2\Vert z\Vert _X}\). Then, there is \(x \in X^+ {\setminus } \{0\}\) such that

$$\begin{aligned} (A - \mathrm {id})(x) + \frac{\eta (\Vert x\Vert _X)}{2\Vert z\Vert _X} z \ge 0. \end{aligned}$$

Hence, it follows that

$$\begin{aligned} \mathrm {dist}\left( (A - \mathrm {id})(x),X^+\right)&\le \left\| (A-\mathrm {id})(x) - \left( (A - \mathrm {id})(x) {+} \frac{\eta (\Vert x\Vert _X)}{2\Vert z\Vert _X} z\right) \right\| _X {=} \frac{\eta (\Vert x\Vert _X)}{2}, \end{aligned}$$

which shows that (7.1) fails for the function \(\eta \). \(\square \)

A typical example of an ordered Banach space whose cone has nonempty interior is \((X,X^+) = (\ell _{\infty }(I),\ell _{\infty }(I)^+)\) for some index set I. For instance, the vector \(\mathbf{1}\) is an interior point of the positive cone in this space.

7.2 Non-uniform small-gain conditions

In Propositions 7.1 and 7.4, we characterized the MBI property in terms of the uniform small-gain condition. In this subsection, we recall several further small-gain conditions, which have been used in the literature for the small-gain analysis of finite and infinite networks [14, 16, 18], and relate them to the uniform small-gain condition.

In this subsection, we always suppose that \((X,X^+) = (\ell _{\infty }(I),\ell _{\infty }^+(I))\) for some nonempty index set I (which is precisely the space in which gain operators act).

Definition 7.5

We say that a nonlinear operator \(A:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\) satisfies

  1. (i)

    the small-gain condition if

    $$\begin{aligned} A(x)\not \ge x \quad \hbox { for all}\ x \in \ell _{\infty }^+(I){\setminus }\{0\}. \end{aligned}$$
    (7.4)
  2. (ii)

the strong small-gain condition if there exist \(\rho \in \mathcal {K_\infty }\) and a corresponding operator \({D_{\rho }}:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\), defined for any \(x\in \ell _{\infty }^+(I)\) by

    $$\begin{aligned} {D_{\rho }}(x) := \left( (\mathrm {id}+ \rho )(x_i)\right) _{i\in I}, \end{aligned}$$

    such that

    $$\begin{aligned} {D_{\rho }}\circ A(x) \not \ge x \quad \hbox { for all}\ x\in \ell _{\infty }^+(I){\setminus }\{0\}. \end{aligned}$$
    (7.5)
  3. (iii)

    the robust small-gain condition if there is \(\omega \in \mathcal {K_\infty }\) with \(\omega <\mathrm {id}\) such that for all \(i,j \in I\) the operator \(A_{i,j}\) given by

    $$\begin{aligned} A_{i,j}(x) := A(x) + \omega (x_j) e_i \quad \hbox { for all}\ x \in \ell _{\infty }^+(I) \end{aligned}$$
    (7.6)

    satisfies the small-gain condition (7.4); here, \(e_i \in \ell _{\infty }(I)\) denotes the i-th canonical unit vector.

  4. (iv)

the robust strong small-gain condition if there are \(\omega ,\rho \in \mathcal {K_\infty }\) with \(\omega <\mathrm {id}\) such that for all \(i,j \in I\) the operator \(A_{i,j}\) defined by (7.6) satisfies the strong small-gain condition (7.5) with the same \(\rho \) for all \(i,j\). \(\lhd \)

The strong small-gain condition was introduced in [18], where it was shown that if the gain operator satisfies the strong small-gain condition, then a finite network consisting of ISS systems (defined in a summation formulation) is ISS. The robust strong small-gain condition has been introduced in [14] in the context of the Lyapunov-based small-gain analysis of infinite networks.

Remark 7.6

For finite networks, also so-called cyclic small-gain conditions play an important role, as they help to effectively check the small-gain condition (7.4) in the case when \(A = \Gamma _\otimes \), which is important for the small-gain theorems in the maximum formulation, see [37] for more discussions on this topic. For infinite networks, the cyclic condition for \(\Gamma _\otimes \) is implied by (7.4), see [14, Lem. 4.1], but is far too weak for the small-gain analysis. For max-linear systems, Remark B.2 and Corollary 6.3 are reminiscent of the cyclic small-gain conditions.

We say that a continuous function \(\alpha :{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) is of class \({\mathcal {P}}\) if \(\alpha (0)=0\) and \(\alpha (r)>0\) for \(r>0\).

The following lemma is an extension of the considerations in [28, p. 130].

Lemma 7.7

The following statements hold:

  1. (i)

    For any \(\alpha \in {\mathcal {P}}\) and \(L>0\), the function defined by

    $$\begin{aligned} \rho (r) := \inf _{y\ge 0} \big \{\alpha (y) + L|y-r|\big \} \end{aligned}$$
    (7.7)

    is in \({\mathcal {P}}\), satisfies \(\rho (s) \le \alpha (s)\) for all \(s \in {\mathbb {R}}_+\), and is globally Lipschitz with Lipschitz constant L.

  2. (ii)

    If in (i) \(\alpha \in {\mathcal {K}}\), then \(\rho \) given by (7.7) is a \({\mathcal {K}}\)-function.

  3. (iii)

    If in (i) \(\alpha \in \mathcal {K_\infty }\), then \(\rho \) given by (7.7) is a \(\mathcal {K_\infty }\)-function.

Proof

  1. (i).

    Consider \(\rho \) given by (7.7). Note that for any \(r>0\) it holds that \(\alpha (y) + L|y-r|\rightarrow \infty \) as \(y \rightarrow \infty \). Thus, there is \(r^*>0\) such that \(\rho (r) = \inf _{y\in [0,r^*]} \big \{\alpha (y) + L|y-r|\big \}\), and as \(\alpha \) is continuous, there is \(y^*=y^*(r)\) such that \(\rho (r) = \alpha (y^*) + L|y^*-r|\).

    Clearly, \(0\le \rho (r)\le \alpha (r)\) for all \(r\ge 0\). Assume that \(\rho (r)= 0\) for some \(r\ge 0\). Then \(\alpha (y^*) + L|y^*-r| = 0\), which forces \(y^* = r\) and \(\alpha (r)=0\); as \(\alpha (r)=0\) if and only if \(r=0\), it follows that \(\rho (0)=0\) and \(\rho (r)>0\) for \(r>0\).

    Next, for any \(r_1,r_2 \ge 0\) we have by the triangle inequality

    $$\begin{aligned} \rho (r_1)-\rho (r_2)= & {} \inf _{y\ge 0} \big \{\alpha (y) + L|y-r_1|\big \} - \inf _{y\ge 0} \big \{\alpha (y) + L|y-r_2|\big \}\\\le & {} \inf _{y\ge 0} \big \{\alpha (y) + L|y-r_2| + L|r_2-r_1|\big \} - \inf _{y\ge 0} \big \{\alpha (y) + L|y-r_2|\big \}\\= & {} L|r_2-r_1|. \end{aligned}$$

    Similarly, using the triangle inequality for the second term, we obtain

    $$\begin{aligned} \rho (r_1)-\rho (r_2) \ge -L|r_2-r_1|, \end{aligned}$$

    and thus \(\rho \) is globally Lipschitz with Lipschitz constant L, and is of class \({\mathcal {P}}\).

  2. (ii).

    Let \(\alpha \in {\mathcal {K}}\). Pick any \(r_1,r_2 \ge 0\) with \(r_1 >r_2\) and let \(y_1 \ge 0\) be so that \(\rho (r_1) = \alpha (y_1) + L|y_1 - r_1|\). Consider the expression

    $$\begin{aligned} \rho (r_1) - \rho (r_2) = \alpha (y_1) + L|y_1-r_1| - \inf _{y\ge 0} \{\alpha (y) + L|y-r_2|\}. \end{aligned}$$
    (7.8)

    If \(y_1\ge r_2\), then \(\rho (r_1) - \rho (r_2) \ge \alpha (y_1) + L|y_1-r_1| -\alpha (r_2) >0\), as \(\alpha \) is increasing.

    If \(y_1<r_2\), then

    $$\begin{aligned} \rho (r_1) - \rho (r_2)\ge & {} \alpha (y_1) + L|y_1-r_1| - \left( \alpha (y_1) + L|y_1 - r_2|\right) = L(r_1-r_2)>0. \end{aligned}$$
  3. (iii).

    Let \(\alpha \in \mathcal {K_\infty }\). Assume to the contrary that \(\rho \) is bounded: \(\rho (r) \le M\) for all r. Then, for every r there is \(r'\) with \(\alpha (r') + L|r - r'| \le 2M\). Looking at the second term, we see that \(r' \rightarrow \infty \) as \(r \rightarrow \infty \). But then \(\alpha (r') \rightarrow \infty \), a contradiction. \(\square \)
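The envelope (7.7) can be evaluated numerically by discretizing the infimum. A minimal Python sketch, assuming the illustrative choices \(\alpha (r)=r^2\) and \(L=1\); the grids are arbitrary.

```python
import numpy as np

def lipschitz_envelope(alpha, L, r_grid):
    """Evaluate rho(r) = inf_{y >= 0} { alpha(y) + L|y - r| } from (7.7) on a grid,
    approximating the infimum over a fine discretization of y."""
    y_grid = np.linspace(0.0, r_grid.max() + 10.0, 5001)
    vals = alpha(y_grid)[None, :] + L * np.abs(y_grid[None, :] - r_grid[:, None])
    return vals.min(axis=1)

alpha = lambda r: r ** 2                        # an illustrative K_infty function
r = np.linspace(0.0, 5.0, 201)
rho = lipschitz_envelope(alpha, L=1.0, r_grid=r)

# rho stays below alpha and its increments are bounded by L times the step size,
# in accordance with Lemma 7.7(i).
print(np.all(rho <= alpha(r) + 1e-9),
      np.all(np.abs(np.diff(rho)) <= 1.0 * (r[1] - r[0]) + 1e-9))
```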

Items (ii) and (iii) of the following elementary lemma are variations of [41, Lem. 1.1.5] and [41, Lem. 1.1.3, item 1], respectively.

Lemma 7.8

  1. (i)

    For any \(\alpha \in \mathcal {K_\infty }\) there is \(\eta \in \mathcal {K_\infty }\) such that \(\eta (r)\le \alpha (r)\) for all \(r\ge 0\), and \(\mathrm {id}-\eta \in \mathcal {K_\infty }\).

  2. (ii)

    For any \(\eta \in \mathcal {K_\infty }\) with \(\mathrm {id}- \eta \in \mathcal {K_\infty }\), there is \(\rho \in \mathcal {K_\infty }\) such that \((\mathrm {id}-\eta )^{-1} = \mathrm {id}+ \rho \).

  3. (iii)

    For any \(\eta \in \mathcal {K_\infty }\) such that \(\mathrm {id}- \eta \in \mathcal {K_\infty }\) there are \(\eta _1,\eta _2\in \mathcal {K_\infty }\) such that \(\mathrm {id}-\eta _1,\mathrm {id}-\eta _2\in \mathcal {K_\infty }\) and \(\mathrm {id}-\eta = (\mathrm {id}-\eta _1) \circ (\mathrm {id}-\eta _2)\).

Proof

  1. (i)

    Take any \(L\in (0,1)\) and construct \(\rho \in \mathcal {K_\infty }\), globally Lipschitz with Lipschitz constant L, as in Lemma 7.7. Clearly, \((\mathrm {id}- \rho )(0) = 0\), and \(\mathrm {id}- \rho \) is continuous. For \(r,s\ge 0\) with \(r>s\), we have

    $$\begin{aligned} r-\rho (r) - (s-\rho (s))&= r-s - (\rho (r)-\rho (s))\ge r-s -L(r-s) \\&= (1-L)(r-s)>0, \end{aligned}$$

    and thus \(\mathrm {id}-\rho \) is increasing. Furthermore, \(r-\rho (r)\ge (1-L)r\rightarrow \infty \) as \(r\rightarrow \infty \), and thus \(\mathrm {id}-\rho \in \mathcal {K_\infty }\). Since \(\rho \le \alpha \) by Lemma 7.7, we can take \(\eta := \rho \).

  2. (ii)

    Define \(\rho :=\eta \circ (\mathrm {id}-\eta )^{-1}\). As \(\rho \) is a composition of \(\mathcal {K_\infty }\)-functions, \(\rho \in \mathcal {K_\infty }\). It holds that \((\mathrm {id}+ \rho )\circ (\mathrm {id}- \eta )= \mathrm {id}- \eta + \eta \circ (\mathrm {id}-\eta )^{-1}\circ (\mathrm {id}- \eta ) = \mathrm {id}- \eta + \eta = \mathrm {id}\), and thus \(\mathrm {id}+ \rho = (\mathrm {id}-\eta )^{-1}\).

  3. (iii)

    Choose \(\eta _2:=\frac{1}{2}\eta \) and \(\eta _1:=\frac{1}{2}\eta \circ (\mathrm {id}-\eta _2)^{-1}\). A direct calculation shows the claim. \(\square \)
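The factorization in Lemma 7.8(iii) is fully constructive and can be checked numerically. A minimal Python sketch, assuming the illustrative choice \(\eta (r)=\frac{r^2}{2(1+r)}\) (for which both \(\eta \) and \(\mathrm {id}-\eta \) belong to \(\mathcal {K_\infty }\)) and a simple bisection for the inverse of \(\mathrm {id}-\eta _2\):

```python
import numpy as np

def inverse(f, y, lo=0.0, hi=1e6, iters=100):
    """Invert a continuous increasing function f with f(lo) <= y <= f(hi) by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

eta = lambda r: r ** 2 / (2.0 * (1.0 + r))   # illustrative: eta and id - eta in K_infty
eta2 = lambda r: 0.5 * eta(r)                # eta_2 as in the proof of Lemma 7.8(iii)
eta1 = lambda r: 0.5 * eta(inverse(lambda s: s - eta2(s), r))   # eta_1 = (eta/2) o (id - eta_2)^{-1}

id_minus_eta1 = lambda s: s - eta1(s)
id_minus_eta2 = lambda s: s - eta2(s)

grid = np.linspace(0.0, 20.0, 41)
lhs = np.array([id_minus_eta1(id_minus_eta2(r)) for r in grid])
rhs = grid - eta(grid)
print(np.allclose(lhs, rhs))                 # (id - eta1) composed with (id - eta2) equals id - eta
```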

Now we give a criterion for the robust strong small-gain condition.

Proposition 7.9

A nonlinear operator \(A:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\) satisfies the robust strong small-gain condition if and only if there are \(\omega ,\eta \in \mathcal {K_\infty }\) and an operator \(\vec {\eta }:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\), defined by

$$\begin{aligned} \vec {\eta }(x) := (\eta (x_i))_{i\in I} \quad \hbox { for all}\ x\in \ell _{\infty }^+(I), \end{aligned}$$
(7.9)

such that for all \(k\in I\) it holds that

$$\begin{aligned} A(x)\not \ge x - \vec {\eta }(x) - \omega (\Vert x\Vert _{\ell _{\infty }(I)}) e_k \quad \hbox { for all}\ x\in \ell _{\infty }^+(I){\setminus }\{0\}. \end{aligned}$$
(7.10)

Proof

“\(\Rightarrow \)”: Let the robust strong small-gain condition hold with corresponding \(\rho ,\omega \) and \(D_{\rho }\). Then for any \(x = (x_i)_{i\in I} \in \ell _{\infty }^+(I){\setminus }\{0\}\) and any \(j,k\in I\), it holds that

$$\begin{aligned} \exists i\in I:\quad \big [{D_{\rho }}\left( A(x) + \omega (x_j) e_k\right) \big ]_i = (\mathrm {id}+ \rho )\left( [A(x) + \omega (x_j) e_k]_i\right) < x_i. \end{aligned}$$
(7.11)

As \(\rho \in \mathcal {K_\infty }\), there is \(\eta \in \mathcal {K_\infty }\) such that \(\mathrm {id}-\eta = (\mathrm {id}+ \rho )^{-1}\in \mathcal {K_\infty }\), which can be shown as in Lemma 7.8(ii). Thus, (7.11) is equivalent to

$$\begin{aligned} \exists i\in I:\quad A(x)_i < x_i - \eta (x_i) - \big [\omega ( x_j) e_k\big ]_i. \end{aligned}$$
(7.12)

As for each \(x \in \ell _{\infty }^+(I)\) there is \(j\in I\) such that \(x_j \ge \frac{1}{2}\Vert x\Vert _{\ell _{\infty }(I)}\), the condition (7.12) with this particular j implies that

$$\begin{aligned}&\exists i\in I:\quad A(x)_i < x_i - \eta (x_i) - \Big [\omega \left( \frac{1}{2} \Vert x\Vert _{\ell _{\infty }(I)}\right) e_k\Big ]_i \\&= \Big [x - \vec {\eta }(x) - \omega \left( \frac{1}{2} \Vert x\Vert _{\ell _{\infty }(I)}\right) e_k\Big ]_i, \end{aligned}$$

which coincides with (7.10), up to replacing \(\omega \) by the \(\mathcal {K_\infty }\)-function \(r \mapsto \omega (\frac{1}{2}r)\).

“\(\Leftarrow \)”: Let (7.10) hold with a certain \(\eta _1\in \mathcal {K_\infty }\) and a corresponding \(\vec {\eta }_1\). By Lemma 7.8(i), one can choose \(\eta \in \mathcal {K_\infty }\) such that \(\eta \le \eta _1\) and \(\mathrm {id}-\eta \in \mathcal {K_\infty }\). Then, (7.10) holds with this \(\eta \) and a corresponding \(\vec {\eta }\), i.e., for all \(k\in I\) we have

$$\begin{aligned} \exists i\in I:\quad A(x)_i < x_i - \eta (x_i) - \big [\omega ( \Vert x\Vert _{\ell _{\infty }(I)}) e_k\big ]_i. \end{aligned}$$

As \(\Vert x\Vert _{\ell _{\infty }(I)}\ge x_j\) for any \(j\in I\), this implies that for all \(j,k\in I\) it holds that

$$\begin{aligned} \exists i\in I:\quad A(x)_i < x_i - \eta (x_i) - \big [\omega ( x_j) e_k\big ]_i, \end{aligned}$$

and thus

$$\begin{aligned} \exists i\in I:\quad \big [ A(x) + \omega ( x_j) e_k\big ]_i < (\mathrm {id}-\eta )(x_i). \end{aligned}$$

As \(\eta \in \mathcal {K_\infty }\) satisfies \(\mathrm {id}-\eta \in \mathcal {K_\infty }\), by Lemma 7.8(ii) there is \(\rho \in \mathcal {K_\infty }\) such that \((\mathrm {id}-\eta )^{-1} = \mathrm {id}+ \rho \), and thus for all \(j,k\in I\) property (7.11) holds, which shows that A satisfies the robust strong small-gain condition. \(\square \)

Specialized to the strong small-gain condition, Proposition 7.9 reads as follows.

Corollary 7.10

A nonlinear operator \(A:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\) satisfies the strong small-gain condition if and only if there are \(\eta \in \mathcal {K_\infty }\) and an operator \(\vec {\eta }:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\), defined via (7.9) such that

$$\begin{aligned} A(x)\not \ge x - \vec {\eta }(x) \quad \hbox { for all}\ x\in \ell _{\infty }^+(I){\setminus }\{0\}. \end{aligned}$$

The next proposition shows that the uniform small-gain condition is at least not weaker than the robust strong small-gain condition.

Proposition 7.11

Let \(A:\ell _{\infty }^+(I)\rightarrow \ell _{\infty }^+(I)\) be a nonlinear operator. If A satisfies the uniform small-gain condition, then A satisfies the robust strong small-gain condition.

Proof

As A satisfies the uniform small-gain condition with \(\eta \), from the proof of Proposition 7.4 with \(z:= \mathbf{1}\), we see that for all \(x \in \ell _{\infty }^+(I) {\setminus } \{0\}\)

$$\begin{aligned} A(x) \not \ge x - \frac{1}{2\Vert \mathbf{1}\Vert _{\ell _{\infty }(I)}}\eta (\Vert x\Vert _{\ell _{\infty }(I)})\mathbf{1} = x - \frac{1}{2}\eta (\Vert x\Vert _{\ell _{\infty }(I)})\mathbf{1}. \end{aligned}$$

For any \(x\in \ell _{\infty }^+(I)\) and any \(k\in I\), it holds that

$$\begin{aligned} \frac{1}{2}\eta (\Vert x\Vert _{\ell _{\infty }(I)})\mathbf{1}&=\frac{1}{4}\eta (\Vert x\Vert _{\ell _{\infty }(I)})\mathbf{1} + \frac{1}{4}\eta (\Vert x\Vert _{\ell _{\infty }(I)})\mathbf{1}\\&\ge \frac{1}{4}\vec {\eta }(x) + \frac{1}{4}\eta (\Vert x\Vert _{\ell _{\infty }(I)})e_k, \end{aligned}$$

and by Proposition 7.9, A satisfies the robust strong small-gain condition. \(\square \)

7.3 The finite-dimensional case

The case of a finite-dimensional X is particularly important as it is a key to the stability analysis of finite networks.

Proposition 7.12

Assume that \((X,X^+) = ({\mathbb {R}}^n,{\mathbb {R}}^n_+)\) for some \(n\in {\mathbb {N}}\), where \({\mathbb {R}}^n\) is equipped with the maximum norm \(\Vert \cdot \Vert \) and \({\mathbb {R}}^n_+\) denotes the standard positive cone in \({\mathbb {R}}^n\). Further assume that the operator A is continuous and monotone. Then, the following statements are equivalent:

  1. (i)

    System (5.1) has the MLIM property.

  2. (ii)

    The operator \(\mathrm {id}- A\) has the MBI property.

  3. (iii)

    The uniform small-gain condition holds: There is an \(\eta \in \mathcal {K_\infty }\) such that \(\mathrm {dist}(A(x) - x,X^+) \ge \eta (\Vert x\Vert )\) for all \(x \in X^+\).

  4. (iv)

    There is an \(\eta \in \mathcal {K_\infty }\) such that

    $$\begin{aligned} A(x) \not \ge x - \eta (\Vert x\Vert )\mathbf{1} \quad \hbox { for all}\ x \in X^+ {\setminus }\{0\}. \end{aligned}$$

Additionally, if A is either \(\Gamma _\boxplus \) or \(\Gamma _\otimes \), then the above conditions are equivalent to

  1. (v)

    A satisfies the robust strong small-gain condition.

  2. (vi)

    A satisfies the strong small-gain condition.

Proof

(i) \(\Rightarrow \) (ii). Follows from Proposition 5.3.

(ii) \(\Leftrightarrow \) (iii) \(\Leftrightarrow \) (iv). Follows from Propositions 7.1 and 7.4.

(ii) \(\Rightarrow \) (i). This follows from Proposition 5.4 since the cone \({\mathbb {R}}^n_+\) has the Levi property.

(iv) \(\Rightarrow \) (v). Follows by Proposition 7.11.

(v) \(\Rightarrow \) (vi). Clear.

(vi) \(\Rightarrow \) (ii). Follows by [42, Thm. 6.1]. \(\square \)

Remark 7.13

The class of operators for which the equivalence between (i)–(iv) and (v), (vi) can be shown can be made considerably larger using the monotone aggregation functions formalism, see [42, Thm. 6.1]. However, the proof of this implication in [18, Lem. 13] uses more structure of the gain operator than merely monotonicity. Thus, the question of whether this implication is valid for general monotone A is still open.

Remark 7.14

An interesting research direction could be the development of small-gain theorems for the case where the subsystems receive the outputs of other subsystems, rather than their full states, as inputs (so-called IOS small-gain theorems). For finite networks, such trajectory-based results have been reported in [24] for couplings of two systems, and in [25, 41] for any finite number of finite-dimensional systems. The authors are not aware of such trajectory-based results for networks with infinite-dimensional components and/or infinite networks.

7.4 Systems with linear gains

Here, we show that in the case of linear and sup-linear gain operators the MBI and MLIM properties are equivalent and can be characterized via a spectral radius condition.

Definition 7.15

Let \((X,X^+)\) be an ordered Banach space. System (5.1) is exponentially input-to-state stable (eISS) if there are \(M\ge 1\), \(a\in (0,1)\) and \(\gamma \in \mathcal {K_\infty }\) such that for every \(u \in \ell _{\infty }({\mathbb {Z}}_+,X^+)\) and any solution \(x(\cdot ) = (x(k))_{k\in {\mathbb {Z}}_+}\) of (5.1) it holds that

$$\begin{aligned} \Vert x(k)\Vert _X \le M\Vert x(0)\Vert _X a^k + \gamma (\Vert u\Vert _{\infty }) \quad \hbox { for all}\ k \in {\mathbb {Z}}_+. \end{aligned}$$
(7.13)

For linear systems, we obtain the following result, which we use to formulate an efficient small-gain theorem in summation formulation, see Corollary 6.6.

Proposition 7.16

Let \((X,X^+)\) be an ordered Banach space with a generating and normal cone \(X^+\). Let the operator \(A:X^+ \rightarrow X^+\) be the restriction to \(X^+\) of a positive linear operator on X. Then the following statements are equivalent:

  1. (i)

    System (5.1) is exponentially ISS.

  2. (ii)

    System (5.1) satisfies the MLIM property.

  3. (iii)

    The operator \(\mathrm {id}-A\) satisfies the MBI property.

  4. (iv)

    The spectral radius of A satisfies \(r(A) < 1\).

Proof

The implication “(i) \(\Rightarrow \) (ii)” is trivial. By Proposition 5.3, (ii) implies (iii).

(iii) \(\Rightarrow \) (iv). It is easy to check that if A is homogeneous of degree one and \(\mathrm {id}- A\) satisfies the MBI property with a certain \(\xi \in \mathcal {K_\infty }\), then \(\mathrm {id}- A\) satisfies the MBI property with \(r\mapsto \xi (1)r\) instead of \(\xi \). The application of [19, Thm. 3.3] shows (iv).

(iv) \(\Rightarrow \) (i). Follows from Proposition A.1. \(\square \)
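The eISS estimate (7.13) for a positive linear operator with spectral radius below one can be illustrated on a finite truncation of the comparison system (5.1), taken with equality. In the following minimal Python sketch, the random matrix, its rescaling to spectral radius 0.8 and the input bound are illustrative assumptions.

```python
import numpy as np

# Discrete comparison system w(k+1) = A w(k) + v(k) with a positive linear A.
rng = np.random.default_rng(3)
N = 30
A = rng.uniform(0.0, 1.0, size=(N, N))
A *= 0.8 / max(abs(np.linalg.eigvals(A)))      # rescale so that r(A) = 0.8 < 1

w = rng.uniform(0.0, 5.0, size=N)              # nonnegative initial condition
v_bound = 0.3                                  # sup-norm bound on the nonnegative input
norms = []
for _ in range(80):
    norms.append(w.max())
    w = A @ w + rng.uniform(0.0, v_bound, size=N)

# ||w(k)||_∞ decays geometrically into a neighbourhood of the origin whose size
# is proportional to the input bound, in accordance with the eISS estimate (7.13).
gain_bound = v_bound * np.linalg.inv(np.eye(N) - A).sum(axis=1).max()
print(norms[0], norms[-1], gain_bound)
```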

For sup-linear systems, MBI is again equivalent to eISS, and the following holds:

Proposition 7.17

Assume that the gains \(\gamma _{ij}\), \((i,j) \in I^2\), are all linear and that the associated gain operator \(\Gamma _{\otimes }\) is well-defined. Then, the following statements are equivalent:

  1. (i)

    The operator \(\mathrm {id}- \Gamma _{\otimes }\) satisfies the MBI property.

  2. (ii)

    There are \(\lambda \in (0,1)\) and \(s_0 \in \mathrm {int}(\ell _{\infty }^+(I))\) such that

    $$\begin{aligned} \Gamma _{\otimes }(s_0) \le \lambda s_0. \end{aligned}$$
    (7.14)
  3. (iii)

    The spectral radius of \(\Gamma _{\otimes }:\ell _{\infty }^+(I) \rightarrow \ell _{\infty }^+(I)\) satisfies

    $$\begin{aligned} r(\Gamma _{\otimes }) = \lim _{n\rightarrow \infty } \sup _{s \in \ell _{\infty }^+(I) ,\; \Vert s\Vert _{\ell _{\infty }} = 1} \Vert \Gamma _{\otimes }^n(s)\Vert _{\ell _{\infty }(I)}^{1/n} = \lim _{n\rightarrow \infty } \Vert \Gamma _{\otimes }^n(\mathbf{1})\Vert _{\ell _{\infty }(I)}^{1/n} < 1. \end{aligned}$$
  4. (iv)

    The system (5.1) with \(A = \Gamma _{\otimes }\) is eISS.

  5. (v)

    The system (5.1) with \(A = \Gamma _{\otimes }\) has the MLIM property.

Proof

By Proposition A.1, (iii) is equivalent to (iv). Clearly, (iv) implies (v). By Proposition 5.3, (v) implies (i).

(i) \(\Rightarrow \) (ii). By Proposition 7.1, the MBI property of \(\mathrm {id}- \Gamma _{\otimes }\) is equivalent to the uniform small-gain condition. Then Proposition 7.4 shows that

$$\begin{aligned} \Gamma _{\otimes }(s) \not \ge s - \eta (\Vert s\Vert _{\ell _{\infty }})\mathbf{1} \quad \hbox { for all}\ s \in \ell _{\infty }^+(I) {\setminus } \{0\} \end{aligned}$$

for some \(\eta \in \mathcal {K_\infty }\). In particular,

$$\begin{aligned} \Gamma _{\otimes }\left( \frac{s}{\Vert s\Vert _{\ell _{\infty }}}\right) \not \ge \frac{s}{\Vert s\Vert _{\ell _{\infty }}} - \eta (1)\mathbf{1} \quad \hbox { for all}\ s \in \ell _{\infty }^+(I) {\setminus } \{0\}. \end{aligned}$$

Multiplying this inequality by \(\Vert s\Vert _{\ell _{\infty }}\), putting \(\eta := \eta (1)\) and using the homogeneity of degree one of \(\Gamma _{\otimes }\) yields

$$\begin{aligned} \Gamma _{\otimes }(s) \not \ge s - \eta \Vert s\Vert _{\ell _{\infty }} \mathbf{1} \quad \hbox { for all}\ s \in \ell _{\infty }^+(I) {\setminus } \{0\}. \end{aligned}$$

Then for any \(s \in \ell _{\infty }^+(I){\setminus }\{0\}\) and any \(\varepsilon > 0\) we have

$$\begin{aligned} (1 + \varepsilon )\Gamma _{\otimes }(s)&\not \ge (1 + \varepsilon )(s - \eta \Vert s\Vert _{\ell _{\infty }} \mathbf{1}) = s + \varepsilon s - (1 + \varepsilon ) \eta \Vert s\Vert _{\ell _{\infty }} \mathbf{1}. \end{aligned}$$

As \(s + \varepsilon s - (1 + \varepsilon ) \eta \Vert s\Vert _{\ell _{\infty }} \mathbf{1} \le s - [(1 + \varepsilon )\eta - \varepsilon ] \Vert s\Vert _{\ell _{\infty }}{} \mathbf{1}\), we have

$$\begin{aligned} (1 + \varepsilon )\Gamma _{\otimes }(s)&\not \ge s - [(1 + \varepsilon )\eta - \varepsilon ] \Vert s\Vert _{\ell _{\infty }}{} \mathbf{1}. \end{aligned}$$

Choosing \(\varepsilon >0\) so small that \((1 + \varepsilon )\eta - \varepsilon > 0\), Proposition 7.11 implies that \((1 + \varepsilon )\Gamma _{\otimes }\) satisfies the robust strong small-gain condition. By Lemma B.5, the operator

$$\begin{aligned} Q^{\varepsilon }(s) := \sup _{k \in {\mathbb {Z}}_+} (1 + \varepsilon )^k \Gamma _{\otimes }^k(s) \quad \hbox { for all}\ s \in \ell _{\infty }^+(I) \end{aligned}$$

is well-defined and satisfies

$$\begin{aligned} \Gamma _{\otimes }(Q^{\varepsilon }(s)) \le \frac{1}{1 + \varepsilon }Q^{\varepsilon }(s) \quad \hbox { for all}\ s \in \ell _{\infty }^+(I). \end{aligned}$$

In particular, this holds for \(s = \mathbf{1}\). Since \(s_0 := Q^{\varepsilon }(\mathbf{1}) \ge \mathbf{1}\), we have \(s_0 \in \mathrm {int}(\ell _{\infty }^+(I))\), and (7.14) holds with \(\lambda := \frac{1}{1+\varepsilon } < 1\).

(ii) \(\Rightarrow \) (iii). By monotonicity and homogeneity of degree one of \(\Gamma _{\otimes }\), we have

$$\begin{aligned} \Gamma _{\otimes }^k(s_0) \le \lambda ^k s_0 \quad \hbox { for all}\ k \ge 1. \end{aligned}$$

There exists \(n \in {\mathbb {N}}\) such that any \(s \in \ell _{\infty }^+(I)\) with \(\Vert s\Vert _{\ell _{\infty }} = 1\) satisfies \(s \le ns_0\). Hence,

$$\begin{aligned} \Gamma _{\otimes }^k(s) \le \Gamma _{\otimes }^k(ns_0) = n \Gamma _{\otimes }^k(s_0) \le n \lambda ^k s_0 \quad \hbox { for all}\ k \ge 1,\ \Vert s\Vert _{\ell _{\infty }} = 1. \end{aligned}$$

This implies \(r(\Gamma _{\otimes }) \le \lambda < 1\), which completes the proof. \(\square \)

Remark 7.18

The special form of the operator \(\Gamma _\otimes \) is used in Proposition 7.17 only for the proof of the implication (i) \(\Rightarrow \) (ii). The remaining implications are valid for considerably more general types of operators. Note that if \(s_0\) is as in item (ii), then \(ts_0\) also satisfies all conditions in item (ii), for any \(t>0\) and thus we can construct a path of strict decay \(t\mapsto t s_0\) for the gain operator \(\Gamma _\otimes \), which is an important ingredient for the proof of the Lyapunov-based ISS small-gain theorem, see [17].