
Characterizing the Codimension of Zero Singularities for Time-Delay Systems

A Link with Vandermonde and Birkhoff Incidence Matrices

Published in Acta Applicandae Mathematicae

Abstract

The analysis of time-delay systems mainly relies on detecting and understanding the bifurcations of spectral values crossing the imaginary axis. This paper deals with the zero singularity, essentially when the zero spectral value is multiple. The simplest case of such a configuration is characterized by an algebraic multiplicity of two and a geometric multiplicity of one, known as the Bogdanov-Takens singularity. Moreover, in some cases the codimension of the zero spectral value exceeds the number of coupled scalar differential equations. Nevertheless, to the best of the authors' knowledge, the bounds on such a multiplicity have not been deeply investigated in the literature. It is worth mentioning that such information is crucial for nonlinear analysis purposes, since the dimension of the state projected on the center manifold is none other than the sum of the dimensions of the generalized eigenspaces associated with spectral values with zero real parts. Motivated by control-oriented problems, this paper provides an answer to this question for time-delay systems, taking into account the algebraic constraints on the parameters that may occur in applications. We emphasize the link between this problem and the incidence matrices associated with the Birkhoff interpolation problem. In this context, symbolic algorithms for the LU-factorization of functional confluent Vandermonde matrices, as well as of some classes of bivariate functional Birkhoff matrices, are also proposed.


Notes

  1. The degree of a given quasi-polynomial is the sum of the degrees of the polynomials involved plus the number of polynomials involved minus one. Further discussion of this notion can be found in [21].

References

  1. Boussaada, I., Irofti, D., Niculescu, S.-I.: Computing the codimension of the singularity at the origin for delay systems in the regular case: A Vandermonde-based approach. In: 13th European Control Conference, June 24–27, 2014, Strasbourg, France, pp. 97–102 (2014)

  2. Boussaada, I., Niculescu, S.-I.: Computing the codimension of the singularity at the origin for delay systems: The missing link with Birkhoff incidence matrices. In: 21st International Symposium on Mathematical Theory of Networks and Systems, July 7–11, 2014, Groningen, The Netherlands, pp. 1699–1706 (2014)

  3. Bini, D., Boito, P.: A fast algorithm for approximate polynomial GCD based on structured matrix computations. In: Bini, D., Mehrmann, V., Olshevsky, V., Tyrtyshnikov, E., van Barel, M. (eds.) Numerical Methods for Structured Matrices and Applications. Operator Theory: Advances and Applications, vol. 199, pp. 155–173. Birkhäuser, Basel (2010)

  4. Diekmann, O., Gils, S.V., Lunel, S.V., Walther, H.: Delay Equations: Functional, Complex, and Nonlinear Analysis. Applied Mathematical Sciences, vol. 110. Springer, New York (1995)

  5. Bellman, R., Cooke, K.L.: Differential-Difference Equations. Academic Press, New York (1963)

  6. Ahlfors, L.V.: Complex Analysis. McGraw-Hill, New York (1979)

  7. Levin, B.J., Boas, R.P.: Distribution of Zeros of Entire Functions. Translations of Mathematical Monographs. Am. Math. Soc., Providence (1964); translated from the Russian: Raspredelenie kornej celyh funkcij

  8. Michiels, W., Niculescu, S.-I.: Stability and Stabilization of Time-Delay Systems. Advances in Design and Control, vol. 12. SIAM, Philadelphia (2007)

  9. Hale, J.K., Huang, W.: Period doubling in singularly perturbed delay equations. J. Differ. Equ. 114, 1–23 (1994)

  10. Boussaada, I., Mounier, H., Niculescu, S.-I., Cela, A.: Control of drilling vibrations: A time-delay system approach. In: MED 2012, 20th Mediterranean Conference on Control and Automation, Barcelona (2012), 5 pp.

  11. Marquez, M.S., Boussaada, I., Mounier, H., Niculescu, S.-I.: Analysis and Control of Oilwell Drilling Vibrations. Advances in Industrial Control. Springer, Berlin (2015)

  12. Campbell, S., Yuan, Y.: Zero singularities of codimension two and three in delay differential equations. Nonlinearity 22(11), 2671 (2008)

  13. Sieber, J., Krauskopf, B.: Bifurcation analysis of an inverted pendulum with delayed feedback control near a triple-zero eigenvalue singularity. Nonlinearity 17, 85–103 (2004)

  14. Boussaada, I., Morarescu, I.-C., Niculescu, S.-I.: Inverted pendulum stabilization: Characterisation of codimension-three triple zero bifurcation via multiple delayed proportional gains. Syst. Control Lett. 82, 1–8 (2015)

  15. Pólya, G., Szegö, G.: Problems and Theorems in Analysis. Volume I: Series, Integral Calculus, Theory of Functions. Springer, New York (1972)

  16. Hassard, B.: Counting roots of the characteristic equation for linear delay-differential systems. J. Differ. Equ. 136(2), 222–235 (1997)

  17. Guckenheimer, J., Holmes, P.: Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, Berlin (2002)

  18. Carr, J.: Applications of Centre Manifold Theory. Springer, Berlin (1981)

  19. Kuznetsov, Y.: Elements of Applied Bifurcation Theory, 2nd edn. Applied Mathematical Sciences, vol. 112. Springer, New York (1998)

  20. Berenstein, C.A., Gay, R.R.: Complex Analysis and Special Topics in Harmonic Analysis. Springer, New York (1995)

  21. Wielonsky, F.: A Rolle's theorem for real exponential polynomials in the complex domain. J. Math. Pures Appl. 4, 389–408 (2001)

  22. Marden, M.: Geometry of Polynomials. American Mathematical Society Mathematical Surveys. Am. Math. Soc., Providence (1966)

  23. Björck, A., Elfving, T.: Algorithms for confluent Vandermonde systems. Numer. Math. 21, 130–137 (1973)

  24. Gautschi, W.: On inverses of Vandermonde and confluent Vandermonde matrices. Numer. Math. 4, 117–123 (1963)

  25. Gautschi, W.: On inverses of Vandermonde and confluent Vandermonde matrices II. Numer. Math. 5, 425–430 (1963)

  26. Gonzalez-Vega, L.: Applying quantifier elimination to the Birkhoff interpolation problem. J. Symb. Comput. 22(1), 83–104 (1996)

  27. Kailath, T.: Linear Systems. Prentice-Hall Information and System Sciences Series. Prentice Hall International, Englewood Cliffs (1998)

  28. Ha, T., Gibson, J.: A note on the determinant of a functional confluent Vandermonde matrix and controllability. Linear Algebra Appl. 30, 69–75 (1980)

  29. Respondek, J.S.: Numerical recipes for the high efficient inverse of the confluent Vandermonde matrices. Appl. Math. Comput. 218(5), 2044–2054 (2011)

  30. Niculescu, S.-I., Michiels, W.: Stabilizing a chain of integrators using multiple delays. IEEE Trans. Autom. Control 49(5), 802–807 (2004)

  31. Lorentz, G.G., Zeller, K.L.: Birkhoff interpolation. SIAM J. Numer. Anal. 8(1), 43–48 (1971)

  32. Rouillier, F., Din, M., Schost, E.: Solving the Birkhoff interpolation problem via the critical point method: An experimental study. In: Richter-Gebert, J., Wang, D. (eds.) Automated Deduction in Geometry. LNCS, vol. 2061, pp. 26–40. Springer, Berlin (2001)

  33. Melkemi, L., Rajeh, F.: Block LU-factorization of confluent Vandermonde matrices. Appl. Math. Lett. 23(7), 747–750 (2010)

  34. Respondek, J.: Dynamic data structures in the incremental algorithms operating on a certain class of special matrices. In: Murgante, B., Misra, S., Rocha, A., Torre, C., Rocha, J., Falcao, M., Taniar, D., Apduhan, B., Gervasi, O. (eds.) Computational Science and Its Applications, ICCSA, 2014, Strasbourg, France. Lecture Notes in Computer Science, vol. 8584, pp. 171–185. Springer, Berlin (2014)

  35. Olver, P.J.: On multivariate interpolation. Stud. Appl. Math. 116, 201–240 (2006)

  36. Hou, S.-H., Pang, W.-K.: Inversion of confluent Vandermonde matrices. Comput. Math. Appl. 43(12), 1539–1547 (2002)

  37. Respondek, J.S.: On the confluent Vandermonde matrix calculation algorithm. Appl. Math. Lett. 24(2), 103–106 (2011)

  38. Cooke, K.L.: Stability analysis for a vector disease model. Rocky Mt. J. Math. 9, 31–42 (1979)

  39. Ruan, S.: Delay differential equations in single species dynamics. In: Delay Differential Equations and Applications. Fields Inst. Commun., vol. 29, pp. 477–517. Springer, Berlin (2006)

  40. Fantoni, I., Lozano, R.: Non-linear Control for Underactuated Mechanical Systems. Springer, Berlin (2001)

  41. Quanser: Control rotary challenges. http://www.quanser.com/english/html/challenges

  42. Oruc, H.: Factorization of the Vandermonde matrix and its applications. Appl. Math. Lett. 20(9), 982–987 (2007)

  43. Melkemi, L.: Confluent Vandermonde matrices using Sylvester's structures. Research Report 98-16, École Normale Supérieure de Lyon, pp. 1–14 (1998)

  44. Cox, D., Little, J., O'Shea, D.: Ideals, Varieties, and Algorithms. An Introduction to Computational Algebraic Geometry and Commutative Algebra. Undergraduate Texts in Mathematics. Springer, New York (2007)

  45. Atay, F.M.: Balancing the inverted pendulum using position feedback. Appl. Math. Lett. 12(5), 51–56 (1999)

  46. Sieber, J., Krauskopf, B.: Extending the permissible control loop latency for the controlled inverted pendulum. Dyn. Syst. 20(2), 189–199 (2005)

  47. Boussaada, I., Morarescu, I.-C., Niculescu, S.-I.: Inverted pendulum stabilization via a Pyragas-type controller: Revisiting the triple zero singularity. In: Proceedings of the 19th IFAC World Congress, 2014, Cape Town, pp. 6806–6811 (2015)

  48. Kharitonov, V., Niculescu, S.-I., Moreno, J., Michiels, W.: Static output feedback stabilization: Necessary conditions for multiple delay controllers. IEEE Trans. Autom. Control 50(1), 82–86 (2005)

  49. Landry, M., Campbell, S., Morris, K., Aguilar, C.O.: Dynamics of an inverted pendulum with delayed feedback control. SIAM J. Appl. Dyn. Syst. 4(2), 333–351 (2005)

  50. Lorentz, R.: Multivariate Birkhoff Interpolation. Lecture Notes in Mathematics. Springer, Berlin (1992)

Acknowledgements

We would like to thank the anonymous Referee and the Corresponding Editor for carefully reading our manuscript and for giving comments and suggestions that helped improve the overall quality of the paper. We wish to thank Alban Quadrat (Inria Lille, France) for fruitful discussions on Vandermonde matrices. We would like to thank Jean-Marie Strelcyn (Université Paris 13, France) for discussions and valuable bibliographical suggestions. Last but not least, we thank Karim L. Trabelsi (IPSA Paris, France) for carefully reading the manuscript and for valuable remarks.

Correspondence to Islam Boussaada.

Additional information

Some of the results proposed in this work have been presented at the 13th European Control Conference (June 24–27, 2014, Strasbourg, France) [1] and at the 21st International Symposium on Mathematical Theory of Networks and Systems (July 7–11, 2014, Groningen, The Netherlands) [2].

Appendix

In this section, we first summarize the main notation in Table 1. Then, for the sake of self-containedness, we recall selected results from the literature. Finally, some useful auxiliary lemmas are presented and proved, and the proofs of Theorems 4.4 and 4.6 are provided.

Table 1 Table of the main notations

Here, we report some useful results from the mentioned literature. The main theorem from [16] emphasizes the link between \(\mathbf{card}(\chi_{+})\) and \(\mathbf{card}(\chi_{0})\), both counted with multiplicity.

Theorem 8.1

(Hassard [16, p. 223])

Consider the quasipolynomial function \(\Delta\) defined by (4). Let \(\rho_{1},\ldots,\rho_{r}\) be the positive roots of \(\mathcal{R}(y)= \Re(i^{n} \Delta(i y))\), counted by their multiplicities and ordered so that \(0<\rho_{1}\leq\cdots\leq\rho_{r}\). For each \(j=1,\ldots,r\) such that \(\Delta(i \rho_{j})=0\), assume that the multiplicity of \(i\rho_{j}\) as a zero of \(\Delta(\lambda,\tau)\) is the same as the multiplicity of \(\rho_{j}\) as a root of \(\mathcal{R}(y)\). Then \(\mathbf{card}(\chi_{+})\) is given by the formula:

$$ \mathbf{card}(\chi_{+})=\frac{n-\mathbf{card}(\chi_{0})}{2}+ \frac{(-1)^{r}}{2}\operatorname{sgn}\mathcal{I}^{(\mu)}(0)+\sum _{j=1} ^{r} \operatorname{sgn}\mathcal{I}( \rho_{j}), $$
(41)

where \(\mu\) designates the multiplicity of the zero spectral value of \(\Delta(\lambda,\tau)=0\) and \(\mathcal{I}(y)=\Im(i^{-n}\Delta(iy))\). Furthermore, \(\mathbf{card}( \chi_{+})\) is odd (respectively, even) if \(\Delta^{(\mu)}(0)<0\) (respectively, \(\Delta^{(\mu)}(0)>0\)). If \(\mathcal{R}(y)=0\) has no positive zeros, set \(r=0\) and omit the summation term in the expression of \(\mathbf{card}(\chi_{+})\). If \(\lambda=0\) is not a root of the characteristic equation, set \(\mu=0\) and interpret \(\mathcal{I}^{(0)}(0)\) as \(\mathcal{I}(0)\) and \(\Delta^{(0)}(0)\) as \(\Delta(0)\).

The following result from [15] provides valuable information yielding a first estimate of the bound for the codimension of the zero spectral value.

Proposition 8.2

(Pólya-Szegö [15, p. 144])

Let \(\tau_{1}, \ldots, \tau_{N}\) denote real numbers such that

$$ \tau_{1}< \tau_{2}< \cdots < \tau_{N}, $$

and \(d_{1}, \ldots, d_{N}\) positive integers satisfying

$$ d_{1}\geq1, d_{2}\geq1 \ldots d_{N}\geq1, \qquad d_{1}+d_{2}+ \cdots+d_{N}=D+N. $$

Let \(f_{i,j}(s)\) stand for the function \(f_{i,j}(s)=s^{j-1} e^{\tau _{i} s}\), for \(1\leq j\leq d_{i}\) and \(1\leq i\leq N\).

Let \(\sharp\) be the number of zeros of the function

$$ f(s)=\sum_{1\leq i\leq N, 1\leq j\leq d_{i}}c_{i,j} f_{i,j}(s), $$

that are contained in the horizontal strip \(\alpha\leq\Im(s) \leq\beta\).

Assuming that

$$ \sum_{1\leq k\leq d_{1}}|c_{1,k}|>0, \ldots, \sum _{1\leq k\leq d _{N}}|c_{N,k}|>0, $$

then

$$ \frac{ ( \tau_{N}-\tau_{1} ) ( \beta-\alpha ) }{2 \pi}-D+1\leq\sharp\leq\frac{ ( \tau_{N}-\tau_{1} ) ( \beta-\alpha ) }{2 \pi}+D+N-1. $$

Setting \(\alpha=\beta=0\), the above proposition yields the bound \(\sharp_{\mathit{PS}}\leq D+N-1\), where \(D\) stands for the sum of the degrees of the polynomials involved in the quasipolynomial function \(f\) and \(N\) designates the number of such polynomials. This bound is sharp in the case of complete polynomials.
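As an illustration, the bounds of Proposition 8.2 are easy to evaluate numerically. The sketch below is only illustrative: the helper name and the encoding of the quasipolynomial by the list of its polynomial degrees are assumptions of this sketch, not the paper's notation. With \(\alpha=\beta=0\), the upper bound reduces to \(D+N-1\), i.e. the degree of the quasipolynomial in the sense of footnote 1.

```python
from math import pi

def ps_strip_bounds(degrees, tau_span, strip_height):
    """Polya-Szego bounds (Proposition 8.2) on the number of zeros of
    f(s) = sum_i p_i(s) exp(tau_i s) in the strip alpha <= Im(s) <= beta.
    degrees      : list of deg(p_1), ..., deg(p_N)  (encoding assumed here)
    tau_span     : tau_N - tau_1
    strip_height : beta - alpha
    """
    D, N = sum(degrees), len(degrees)
    base = tau_span * strip_height / (2 * pi)
    return base - D + 1, base + D + N - 1

# alpha = beta = 0: upper bound D + N - 1 (degree of the quasipolynomial)
lower, upper = ps_strip_bounds([2, 0, 1], 0, 0)
assert upper == (2 + 0 + 1) + 3 - 1 == 5
```

For a nonzero strip, the two returned values bracket the zero count between \((\tau_N-\tau_1)(\beta-\alpha)/2\pi - D + 1\) and the same quantity plus \(D+N-1\).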

In the sequel, we present some useful lemmas as well as the proofs of the claimed theorems.

Lemma 1

Zero is a root of \(\Delta^{(k)}(\lambda)\) for \(k\geq0\) if, and only if, the coefficients of \(P_{M^{j}}\) for \(0\leq j\leq\tilde{N}_{N,n}\) satisfy the following assertion

$$ a_{0,k}=-\sum_{i\in S_{N,n}} \Biggl[ a_{{i,k}}+\sum_{l=0}^{k-1} { \frac{a_{ {i,l}}{\sigma_{{i}}}^{k-l}}{ ( k-l ) !}} \Biggr]. $$
(A.1)

Proof

We define the family \(\nabla_{k}\) for all \(k\geq0\) by

$$ \nabla_{k}(\lambda)=\sum _{i=0}^{\tilde{N}_{N,n}}{\frac{d ^{k}}{d{\lambda}^{k}}}P_{{M^{i}}} ( \lambda ) +\sum_{j=0} ^{k-1} \Biggl( {k \choose j}\sum_{i=1}^{\tilde{N}_{N,n}}{ \sigma_{{i}}} ^{k-j}{\frac{d^{j}}{d{\lambda}^{j}}} {P}_{{M^{i}}} ( \lambda ) \Biggr), $$
(A.2)

Here, \(M^{0}\triangleq0\) and \({\frac{d^{0}}{d{\lambda}^{0}}}f( \lambda)\triangleq f(\lambda)\). Obviously, the family \(\nabla_{k}\) so defined is polynomial, since the \(P_{M^{i}}\) and their derivatives are polynomials. Moreover, zero is a root of \(\Delta^{(k)}(\lambda)\) for \(k\geq0\) if, and only if, zero is a root of \(\nabla_{k}(\lambda)\). This can be proved by induction. More precisely, differentiating \(\Delta(\lambda,\tau)\) \(k\) times, the following recursive formula is obtained:

$$ \Delta^{(k)}(\lambda)=\sum _{i=0}^{\tilde{N}_{N,n}}{\frac{d^{k}}{d {\lambda}^{k}}}P_{{M^{i}}} ( \lambda ) e^{\sigma_{i}\lambda } +\sum_{j=0}^{k-1} \Biggl( {k\choose j}\sum_{i=1}^{\tilde{N}_{N,n}} { \sigma_{{i}}}^{k-j}{\frac{d^{j}}{d{\lambda}^{j}}} { P}_{{M^{i}}} ( \lambda ) e^{\sigma_{i}\lambda} \Biggr). $$

Since only the zero root is of interest, we may set \(e^{\sigma_{i} \lambda}=1\), which defines the polynomial functions \(\nabla_{k}\). Moreover, a careful inspection of the quantities in (A.2), together with the substitution \({\frac{d^{k}}{d{\lambda}^{k}}}P_{{i}}(0)=k! a_{i,k}\), leads to formula (A.1). □
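The assertion of Lemma 1 can be checked numerically on a concrete example. In the sketch below, the coefficients `a[i][j]` of the polynomials \(P_i\) and the exponents `sigma[i]` are arbitrary illustrative data (not taken from the paper); \(\Delta^{(k)}(0)\) is evaluated via the Leibniz rule, and we verify that it vanishes exactly when \(a_{0,k}\) equals the right-hand side of (A.1).

```python
from fractions import Fraction as F
from math import comb, factorial

# Illustrative quasipolynomial Delta(l) = P_0(l) + P_1(l) e^{s_1 l} + P_2(l) e^{s_2 l}
# with P_i(l) = a[i][0] + a[i][1] l; the data is arbitrary, sigma_0 = 0 by convention.
sigma = [F(0), F(-1), F(2)]
a = [[F(1), F(3)], [F(2), F(-1)], [F(1), F(4)]]

def delta_deriv0(k):
    """Delta^{(k)}(0), using d^k/dl^k [l^j e^{s l}]|_{l=0} = C(k,j) j! s^{k-j} (j <= k)."""
    return sum(a[i][j] * comb(k, j) * factorial(j) * sigma[i] ** (k - j)
               for i in range(len(a)) for j in range(len(a[i])) if j <= k)

def rhs_A1(k):
    """Right-hand side of (A.1): the value of a[0][k] making Delta^{(k)}(0) vanish."""
    tot = F(0)
    for i in range(1, len(a)):
        if k < len(a[i]):
            tot += a[i][k]
        tot += sum(a[i][l] * sigma[i] ** (k - l) / factorial(k - l)
                   for l in range(min(k, len(a[i]))))
    return -tot

# with generic data, zero is not a root of Delta
assert delta_deriv0(0) != 0
# enforcing (A.1) for k = 0 and k = 1 makes 0 a root of Delta and of Delta'
a[0][0], a[0][1] = rhs_A1(0), rhs_A1(1)
assert delta_deriv0(0) == 0 and delta_deriv0(1) == 0
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity in the zero tests.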

Here, we prove the results given in Sect. 4.2.1, that is, we consider the incidence vector:

$$ \mathcal{V}=(\underbrace{x_{1},\ldots,x_{1}}_{d_{1}}, \underbrace{ \star,\ldots,\star}_{d_{*}},x_{2}). $$

The right-hand side of the last equality from (23) defining \(U_{i,d_{1}+1}\) for \(2\leq i\leq d_{1}+1\) can also be written as follows.

Lemma 2

For \(2\leq i\leq d_{1}+1\) the following equality is satisfied:

$$ \varUpsilon_{i,d_{1}+1}-(i-1) \int_{0}^{x_{1}} U_{i-1,d_{1}+1}(y,x_{2})dy= \sum_{k=0}^{i-1}{i-1\choose k}(-1)^{i-1-k}x_{1}^{i-1-k} \varUpsilon_{k+1,d_{1}+1}. $$

Proof of Lemma 2

First, one has \(U_{2,d_{1}+1}= \varUpsilon_{2,d_{1}+1}-x_{1} \varUpsilon_{1,d_{1}+1}=\varUpsilon_{2,d_{1}+1}- \int_{0}^{x_{1}}U_{1,d_{1}+1}(y, x_{2})dy\) since \(U_{1,d_{1}+1}= \varUpsilon_{1,d_{1}+1}(x_{2})\).

Now, assume that for \(2\leq i\leq p\), where \(p< d_{1}+1\), the following equality is satisfied:

$$ \sum_{l=0}^{i-1}{i-1\choose l} (-1)^{i-1-l} x_{1}^{i-1-l} \varUpsilon_{l+1,d_{1}+1}= \varUpsilon_{i,d_{1}+1}-(i-1) \int_{0}^{x_{1}}U _{i-1,d_{1}+1}(y,x_{2}) dy. $$

One has to show that for \(i=p+1\):

$$ \sum_{l=0}^{p}{p\choose l} (-1)^{p-l} x_{1}^{p-l} \varUpsilon_{l+1,d_{1}+1}= \varUpsilon_{p+1,d_{1}+1}-(p) \int_{0}^{x_{1}}U _{p,d_{1}+1}(y,x_{2}) dy. $$

Indeed,

$$ \textstyle\begin{cases} \displaystyle-\int_{0}^{x_{1}}p U_{p,d_{1}+1}(y,x_{2}) dy=- \int_{0}^{x_{1}}p \sum_{l=0}^{p-1}{p-1 \choose l} (-1)^{p-1-l} y^{p-1-l} \varUpsilon_{l+1,d_{1}+1} dy, \\ \displaystyle\phantom{-\int_{0}^{x_{1}}p U_{p,d_{1}+1}(y,x_{2}) dy}=-\sum_{l=0}^{p-1}\frac{p!}{l! (p-l-1)!} (-1)^{p-1-l} \varUpsilon_{l+1,d_{1}+1} \int_{0}^{x_{1}}y^{p-1-l}dy, \\ \displaystyle\phantom{-\int_{0}^{x_{1}}p U_{p,d_{1}+1}(y,x_{2}) dy}=\sum_{l=0}^{p-1}{p\choose l} (-1)^{p-l} x_{1}^{p-l} \varUpsilon_{l+1,d_{1}+1}. \end{cases} $$

 □

Proof of Theorem 4.4

The only difference between algorithms (23) and (20) lies in the definition of the last column of the matrix \(U\). Thus, one has to show that for any \(2\leq i\leq d_{1}+1\) the following equality holds: \(\varUpsilon_{i,d_{1}+1}=\sum_{k=1}^{i}L_{i,k}U _{k,d_{1}+1}\). By definition, one has:

$$ \textstyle\begin{cases} \displaystyle\varUpsilon_{2,d_{1}+1} =\sum_{k=1}^{2} L_{2,k} U_{k,d_{1}+1} \\ \phantom{\varUpsilon_{2,d_{1}+1}}=L_{2,1} U_{1,d_{1}+1}+L_{2,2} U_{2,d_{1}+1} \\ \phantom{\varUpsilon_{2,d_{1}+1}}= x_{1} \varUpsilon_{1,d_{1}+1}+U_{2,d_{1}+1}. \end{cases} $$
(42)

Now, assume that for \(2\leq i\leq p\), where \(p< d_{1}+1\), the following equality is satisfied:

$$ U_{i,d_{1}+1}=\varUpsilon_{i,d_{1}+1}-(i-1) \int_{0}^{x_{1}}U_{i-1,d_{1}+1}(y,x _{2}) dy, $$

or equivalently, from Lemma 2

$$ U_{i,d_{1}+1}=\sum_{l=0}^{i-1}{i-1 \choose l} (-1)^{i-1-l} x _{1}^{i-1-l} \varUpsilon_{l+1,d_{1}+1}. $$

It remains to show that the last equality from (23) holds for \(U_{p+1,d_{1}+1}\) when \(p< d_{1}+1\). Indeed, by definition

$$ U_{p+1,d_{1}+1}=\varUpsilon_{p+1,d_{1}+1}-\sum_{k=1}^{p}L_{p+1,k}U_{k,d _{1}+1}. $$

Moreover (by the same arguments as those given in the proof of Lemma 6, presented in the sequel), one has \(L_{p+1,k}= \frac{1}{k-1}\frac{\partial L_{p+1,k-1}}{\partial x_{1}}\). Thus, \(L_{p+1,k}=\frac{1}{(k-1)!}\frac{\partial^{k-1} L_{p+1,1}}{\partial x _{1}^{k-1}}=\frac{1}{(k-1)!}\frac{\partial^{k-1}x_{1}^{p} }{\partial x_{1}^{k-1}}=\frac{p! x_{1}^{p-k+1}}{(p-k+1)! (k-1)!}\), so that:

$$ L_{p+1,k}={p\choose k-1} x_{1}^{p-(k-1)}. $$
(43)

Now, by the definition of \(U_{p+1,d_{1}+1}\), using (43) as well as the induction hypothesis, we obtain

$$ \textstyle\begin{cases} \displaystyle U_{p+1,d_{1}+1}=\varUpsilon_{p+1,d_{1}+1}- \sum_{\l=1}^{p}L_{p+1,\l}U _{\l,d_{1}+1} \\ \displaystyle\phantom{U_{p+1,d_{1}+1}} =\varUpsilon_{p+1,d_{1}+1}-\sum_{\l=1}^{p} \sum_{l=0}^{\l-1} {\l-1\choose l} {p\choose \l-1} (-1)^{\l-l-1} x_{1}^{\l-l-1} x _{1}^{p-(\l-1)} \varUpsilon_{l+1,d_{1}+1} \\ \displaystyle \phantom{U_{p+1,d_{1}+1}} =\varUpsilon_{p+1,d_{1}+1}-\sum_{\l=1}^{p} \sum_{l=0}^{\l-1} {\l-1\choose l} {p\choose \l-1} (-1)^{\l-1-l} x_{1}^{p-l} \varUpsilon_{l+1,d_{1}+1}. \end{cases} $$

Thus, one has to prove that

$$\begin{aligned} \sum_{k=0}^{p-1}{p\choose k} (-1)^{p-k} x_{1}^{p-k} \varUpsilon_{k+1,d_{1}+1}=- \sum_{\l=1}^{p}\sum _{l=0}^{\l-1} {\l-1\choose l} {p\choose \l-1} (-1)^{\l-1-l} x_{1}^{p-l} \varUpsilon_{l+1,d_{1}+1}. \end{aligned}$$
(44)

Recall that both sides of (44) are polynomials in \(x_{1}\) and \(x_{2}\), and that the only quantities depending on \(x_{2}\) are \((\varUpsilon_{k,d_{1}+1})_{1\leq k\leq p}\). Since \(\deg( \varUpsilon_{k,d_{1}+1})\neq\deg(\varUpsilon_{k',d_{1}+1})\) for \(k\neq k'\), it suffices to examine the equality of the coefficients of \(\varUpsilon_{m+1,d_{1}+1}\) on both sides for an arbitrarily chosen \(0\leq m\leq p-1\). Consider therefore the coefficient of \(x_{1}^{p-m} \varUpsilon_{m+1,d_{1}+1}\) on each side of (44). One easily checks that \(\sum_{\l=m+1}^{p}{\l-1\choose m} {p\choose \l-1} ( -1 ) ^{\l-m}= ( -1 ) ^{p-m}{p\choose m}\) is always satisfied, which ends the proof. □
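The final combinatorial identity can also be verified mechanically. In the sketch below, the summation starts at \(\l=m+1\), since the terms with \(\l\leq m\) vanish because \({\l-1\choose m}=0\) there:

```python
from math import comb

def identity_holds(p, m):
    """Check  sum_{l=m+1}^{p} C(l-1, m) C(p, l-1) (-1)^(l-m) == (-1)^(p-m) C(p, m).
    Terms with l <= m vanish since C(l-1, m) = 0 for l - 1 < m."""
    lhs = sum(comb(l - 1, m) * comb(p, l - 1) * (-1) ** (l - m)
              for l in range(m + 1, p + 1))
    return lhs == (-1) ** (p - m) * comb(p, m)

# exhaustive check over a range of (p, m), 0 <= m <= p - 1
assert all(identity_holds(p, m) for p in range(1, 12) for m in range(p))
```

Such a finite check is of course no proof, but it is a quick sanity test of the identity used to conclude the argument.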

In what follows, we propose some lemmas exhibiting interesting properties of functional Birkhoff matrices; these will be useful for the analytical proof of Theorem 4.6.

Lemma 3

Equation (30) is equivalent to:

$$ U_{i,j}=\sum_{l=0}^{i-1}{i-1 \choose l} (-1)^{l} x_{1}^{l} \varUpsilon_{i-l,j} \quad\textit{for } j=d_{1}+d_{2}^{-}+1 \textit{ and } 2 \leq i\leq d_{1}+1. $$
(45)

Proof of Lemma 3

The equality (45) follows directly by induction. First, one checks that

$$ \varUpsilon_{2,d_{1}+d_{2}^{-}+1}=U_{2,d_{1}+d_{2}^{-}+1}+x_{1} \varUpsilon_{1,d_{1}+d_{2}^{-}+1}. $$

Indeed,

$$ \textstyle\begin{cases} \displaystyle\varUpsilon_{2,d_{1}+d_{2}^{-}+1}=\sum _{k=1}^{2} L_{2,k} U_{k,d_{1}+d _{2}^{-}+1} \\ \displaystyle\phantom{\varUpsilon_{2,d_{1}+d_{2}^{-}+1}} =L_{2,1} U_{1,d_{1}+d_{2}^{-}+1}+L_{2,2} U_{2,d_{1}+d_{2}^{-}+1} \\ \displaystyle\phantom{\varUpsilon_{2,d_{1}+d_{2}^{-}+1}} = x_{1} \varUpsilon_{1,d_{1}+d_{2}^{-}+1}+U_{2,d_{1}+d_{2}^{-}+1}, \end{cases} $$
(46)

since \(L_{2,2}=1\). Now, assume that

$$\begin{aligned} U_{i,j}=\sum_{l=0}^{i-1}{i-1 \choose l} (-1)^{l} x_{1}^{l} \varUpsilon_{i-l,j} \quad\text{for } j=d_{1}+d_{2}^{-}+1 \mbox{ and } 2 \leq i\leq p \mbox{ and }p< d_{1}+1. \end{aligned}$$
(47)

From Eq. (30) one has

$$ U_{p+1,d_{1}+d_{2}^{-}+1}=\varUpsilon_{p+1,d_{1}+d_{2}^{-}+1}-p \int _{0}^{x_{1}}U_{p,d_{1}+d_{2}^{-}+1}(y,x_{2})dy. $$

Using (47), one has,

$$ \textstyle\begin{cases} \displaystyle U_{p+1,d_{1}+d_{2}^{-}+1}\\ \displaystyle \quad= \varUpsilon_{p+1,d_{1}+d_{2}^{-}+1} \\ \displaystyle\qquad{}-p \int_{0}^{x_{1}} \Biggl( \varUpsilon_{p,d_{1}+d_{2}^{-}+1}(y,x_{2})+ \sum_{l=1}^{p-1}{{p-1}\choose l} (-1)^{l} y^{l} \varUpsilon_{p-l,d_{1}+d_{2}^{-}+1}(y,x_{2}) \Biggr) dy \\ \displaystyle\quad=\varUpsilon_{p+1,d_{1}+d_{2}^{-}+1}-p \varUpsilon_{p,d_{1}+d_{2}^{-}+1}x _{1}+ \sum_{l=1}^{p-1}p {{p-1}\choose l} (-1)^{l} \varUpsilon_{p-l,d_{1}+d_{2}^{-}+1} \int_{0}^{x_{1}}y^{l}dy \\ \displaystyle\quad =\sum_{l=0}^{p}{p\choose l} (-1)^{l} x_{1}^{l} \varUpsilon_{p+1-l,d_{1}+d_{2}^{-}+1}, \end{cases} $$

which ends the proof. □

Lemma 4

$$\begin{aligned} \varUpsilon_{i+1,j}=x_{2} \varUpsilon_{i,j}+\bigl(d_{2}^{-}+d^{*} \bigr) \int_{0} ^{x_{2}}\varUpsilon_{i,j}(y)dy \quad\textit{for } j=d_{1}+d_{2}^{-}+1 \textit{ and } 1\leq i\leq d_{1}+d_{2}^{-}. \end{aligned}$$
(48)

Proof of Lemma 4

Let us consider the coalescence [50] confluent Vandermonde matrix \(\hat{\varUpsilon}\), which regularizes the considered Birkhoff matrix \(\varUpsilon\). That is, \(\hat{\varUpsilon}\) is the rectangular matrix associated with the incidence vector

$$ \mathcal{V}=(\underbrace{x_{1},\ldots,x_{1}}_{d_{1}}, \underbrace{x _{2},\ldots,x_{2}}_{d_{2}^{-}}, \underbrace{x_{2},\ldots,x_{2}}_{d _{*}},x_{2}). $$

Here, the “stars” ⋆ in (32) are simply replaced by \(x_{2}\). Thus, \(\varUpsilon\) and \(\hat{\varUpsilon}\) have the same number of rows, but the number of columns of \(\hat{\varUpsilon}\) exceeds that of \(\varUpsilon\) by \(d^{*}\). We point out that \(\varUpsilon_{i+1,d_{1}+d_{2}^{-}+1}\) is nothing but \(\hat{\varUpsilon} _{i+1,d_{1}+d_{2}^{-}+1+d^{*}}\). This means that the term \((d_{2}^{-}+d ^{*}) \int_{0}^{x_{2}}\varUpsilon_{i,j}\) in (48) is exactly \(\hat{\varUpsilon}_{i+1,d_{1}+d_{2}^{-}+d^{*}}\). Thus, equality (48) becomes

$$ \hat{\varUpsilon}_{i+1,j}=x_{2} \hat{\varUpsilon}_{i,j}+ \hat{\varUpsilon} _{i,j-1} \quad\text{for } j=d_{1}+d_{2}^{-}+1+d^{*} \mbox{ and } 1 \leq i\leq d_{1}+d_{2}^{-}. $$

This last equality can easily be proved by a 2-D recurrence in terms of \(\hat{\varUpsilon}\) (a regular matrix), as in the proof of Theorem 4.1, showing that it applies even for \(d_{1}+2\leq j\leq d _{1}+d_{2}^{-}+1+d^{*}\). □
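For intuition on the coalescence step, the sketch below builds a square confluent Vandermonde matrix for given nodes and multiplicities and checks its determinant, by exact arithmetic, against the classical closed form \(\prod_{i<j}(x_{j}-x_{i})^{d_{i}d_{j}}\) (up to factorial factors, which are trivial for multiplicities \(\leq2\)). The row/column convention used here is one common choice and may differ from the paper's normalization in (32).

```python
from fractions import Fraction as F
from math import factorial

def confluent_vandermonde(nodes, mults):
    """Square confluent Vandermonde matrix: for a node x of multiplicity d,
    the d rows are the derivatives of order 0..d-1 of (1, x, ..., x^{n-1})
    evaluated at x (assumed convention for this sketch)."""
    n = sum(mults)
    rows = []
    for x, d in zip(nodes, mults):
        for k in range(d):  # k-th derivative row
            rows.append([F(factorial(j), factorial(j - k)) * x ** (j - k)
                         if j >= k else F(0) for j in range(n)])
    return rows

def det(mat):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [row[:] for row in mat]
    n, sign, d = len(m), 1, F(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for j in range(c, n):
                m[r][j] -= f * m[c][j]
    return sign * d

# nodes 1 (double) and 3: det = (3 - 1)^{2*1} = 4
assert det(confluent_vandermonde([F(1), F(3)], [2, 1])) == 4
# nodes 0 (double), 2, -1: det = (2-0)^2 (-1-0)^2 (-1-2)^1 = -12
assert det(confluent_vandermonde([F(0), F(2), F(-1)], [2, 1, 1])) == -12
```

The nonvanishing of such determinants for distinct nodes is what makes the coalesced matrix \(\hat{\varUpsilon}\) "regular" in the sense used above.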

The following lemma provides another way of defining the components of \(U\) given by (30).

Lemma 5

For all \(i=1,\ldots,d_{1}\) and \(j=d_{1}+d_{2}^{-}+1\) the following equality applies

$$ U_{i+1,j^{*}}=(x_{2}-x_{1}) U_{i,j^{*}}+\bigl(d_{2}^{-}+d^{*}\bigr) \int_{0} ^{x_{2}}U_{i,j^{*}}(y)dy. $$
(49)

Proof of Lemma 5

Set

$$\begin{aligned} \mathcal{I}_{k}=U_{k+1,j^{*}}+(x_{1}-x_{2}) U_{k,j^{*}}-\bigl(d_{2}^{-}+d ^{*}\bigr) \int_{0}^{x_{2}}U_{k,j^{*}}(y)dy, \end{aligned}$$

where \(j^{*}=d_{1}+d_{2}^{-}+1+d^{*}\) and \(1\leq k\leq d_{1}+1\).

Substituting Eq. (45) from Lemma 3 into \(\mathcal{I}_{k}\), one obtains

$$\begin{aligned} \mathcal{I}_{k} =&\sum_{l=0}^{k}{k \choose l}(-1)^{l}x_{1}^{l} \varUpsilon_{k+1-l,j^{*}} \\ &{}-\sum_{l=0}^{k-1}{k-1\choose l}(-1)^{l}x_{1}^{l} \biggl( (x_{2}-x _{1})\varUpsilon_{k-l,j^{*}}+\bigl(d_{2}^{-}+d^{*} \bigr) \int_{0}^{x_{2}} \varUpsilon_{k-l,j^{*}}(y)dy \biggr). \end{aligned}$$

Using Lemma 4, one obtains

$$\begin{aligned} \mathcal{I}_{k} = &\sum_{l=1}^{k-1}(-1)^{l} x_{1}^{l} \biggl[ \biggl( {k\choose l}- {k-1\choose l} \biggr) \varUpsilon_{k+1-l,j^{*}}+x_{1}{k-1\choose l} \varUpsilon_{k-l,j^{*}} \biggr] \\ &{}+(-1)^{k} x_{1}^{k} \varUpsilon_{1,j^{*}}+x_{1} \varUpsilon_{k,j^{*}} \\ = &\sum_{l=1}^{k-1}(-1)^{l} x_{1}^{l} \biggl( {k-1\choose l-1} \varUpsilon_{k+1-l,j^{*}}+x_{1}{k-1 \choose l}\varUpsilon_{k-l,j^{*}} \biggr) +(-1)^{k} x_{1}^{k} \varUpsilon_{1,j^{*}} +x_{1} \varUpsilon_{k,j^{*}} \end{aligned}$$

which, as expected, is identically zero; this ends the proof. □

The following lemma provides a differential relation between the coefficients of the matrix \(L\).

Lemma 6

For all \(1\leq k\leq p\) the following equality holds

$$ \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}=k L_{d_{1}+p,d _{1}+k+1} $$
(50)

This relation applies when \(\varUpsilon_{i,j}\) and \(\varUpsilon_{i,j-1}\) belong to the same variable block. We emphasize that such a property is inherited by the expressions of \(L\) defined in (28).

Proof of Lemma 6

The proof is based on a 2-D recurrence. First, one easily checks that for \(p=2\) and \(k=1\)

$$ L_{d_{1}+2,d_{1}+2}=\frac{\partial L_{d_{1}+2,d_{1}+1}}{\partial x _{2}} $$

since by definition of \(L\) one has \(L_{d_{1}+2,d_{1}+1}=L_{d_{1}+1,d _{1}}+x_{2} L_{d_{1}+1,d_{1}+1}=L_{d_{1}+1,d_{1}}+x_{2}\) and \(\frac{\partial L_{d_{1}+1,d_{1}}}{\partial x_{2}}=0\). Assuming that

$$ L_{d_{1}+p,d_{1}+2}=\frac{\partial L_{d_{1}+p,d_{1}+1}}{\partial x _{2}}, $$

and again, using the definition of \(L\), one obtains,

$$\begin{aligned} L_{d_{1}+p+1,d_{1}+2} &=L_{d_{1}+p,d_{1}+1}+x_{2}L_{d_{1}+p,d_{1}+2}, \\ L_{d_{1}+p+1,d_{1}+1} &=L_{d_{1}+p,d_{1}}+x_{2}L_{d_{1}+p,d_{1}+1}, \end{aligned}$$

which as expected gives:

$$\begin{aligned} \frac{\partial L_{d_{1}+p+1,d_{1}+1}}{\partial x_{2}} &=L_{d_{1}+p,d _{1}+1}+x_{2}\frac{\partial L_{d_{1}+p,d_{1}+1}}{\partial x_{2}} \\ &=L_{d_{1}+p,d_{1}+1}+x_{2}L_{d_{1}+p,d_{1}+2}=L_{d_{1}+p+1,d_{1}+2}. \end{aligned}$$

Assume that for any \(2< p< d_{2}^{-}+1\) and \(k=1,\ldots,p-1\) one has

$$ \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}=k L_{d_{1}+p,d _{1}+k+1}. $$

One has to prove the following equalities:

$$ \textstyle\begin{cases} \displaystyle\frac{\partial L_{d_{1}+p+1,d_{1}+k}}{\partial x_{2}} =k L_{d_{1}+p+1,d _{1}+k+1}, \\ \displaystyle\frac{\partial L_{d_{1}+p,d_{1}+k+1}}{\partial x_{2}} =(k+1) L_{d _{1}+p,d_{1}+k+2}, \\ \displaystyle\frac{\partial L_{d_{1}+p+1,d_{1}+k+1}}{\partial x_{2}} =(k+1) L_{d _{1}+p+1,d_{1}+k+2}. \end{cases} $$
(51)

Let us consider the first equality of (51), using the definition of \(L\), which asserts that

$$\begin{aligned} L_{d_{1}+p+1,d_{1}+k+1} &=L_{d_{1}+p,d_{1}+k}+x_{2}L_{d_{1}+p,d_{1}+k+1} \\ L_{d_{1}+p+1,d_{1}+k} &=L_{d_{1}+p,d_{1}+k-1}+x_{2}L_{d_{1}+p,d_{1}+k}. \end{aligned}$$

This gives

$$\begin{aligned} \frac{\partial L_{d_{1}+p+1,d_{1}+k}}{\partial x_{2}} &=\frac{\partial L_{d_{1}+p,d_{1}+k-1}}{\partial x_{2}}+x_{2}\frac{\partial L_{d_{1}+p,d _{1}+k}}{\partial x_{2}}+L_{d_{1}+p,d_{1}+k} \\ &=(k-1)L_{d_{1}+p,d_{1}+k}+k L_{d_{1}+p,d_{1}+k+1}+L_{d_{1}+p,d_{1}+k} \\ &=k L_{d_{1}+p+1,d_{1}+k+1}. \end{aligned}$$

In the same way, the remaining two equalities from (51) are obtained:

$$\begin{aligned} \frac{\partial L_{d_{1}+p,d_{1}+k+1}}{\partial x_{2}} &=\frac{\partial ( L_{d_{1}+p-1,d_{1}+k}+x_{2}L_{d_{1}+p-1,d_{1}+k+1} ) }{ \partial x_{2}} \\ &=\frac{\partial L_{d_{1}+p-1,d_{1}+k}}{\partial x_{2}}+x_{2}\frac{ \partial L_{d_{1}+p-1,d_{1}+k+1}}{\partial x_{2}}+L_{d_{1}+p-1,d_{1}+k+1} \\ &=kL_{d_{1}+p-1,d_{1}+k+1}+(k+1) x_{2}L_{d_{1}+p-1,d_{1}+k+2}+L_{d _{1}+p-1,d_{1}+k+1} \\ &=(k+1) L_{d_{1}+p,d_{1}+k+2}, \end{aligned}$$

and

$$\begin{aligned} \frac{\partial L_{d_{1}+p+1,d_{1}+k+1}}{\partial x_{2}} &=\frac{\partial ( L_{d_{1}+p,d_{1}+k}+x_{2}L_{d_{1}+p,d_{1}+k+1} ) }{\partial x_{2}} \\ &=\frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}+x_{2}\frac{ \partial L_{d_{1}+p,d_{1}+k+1}}{\partial x_{2}}+L_{d_{1}+p,d_{1}+k+1} \\ &=kL_{d_{1}+p,d_{1}+k+1}+(k+1) x_{2}L_{d_{1}+p,d_{1}+k+2}+L_{d_{1}+p,d _{1}+k+1} \\ &=(k+1) L_{d_{1}+p+1,d_{1}+k+2}, \end{aligned}$$

which ends the proof. □
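As a sanity check, the identities of Lemma 6 can be verified symbolically in SymPy. The sketch below ASSUMES, for illustration only, the closed form \(L_{d_{1}+p,d_{1}+k}=h_{p-k}(x_{1},\ldots,x_{1},x_{2},\ldots,x_{2})\) (the complete homogeneous symmetric polynomial with \(x_{1}\) repeated \(d_{1}\) times and \(x_{2}\) repeated \(k\) times); this form is consistent with the recurrence \(L_{d_{1}+p+1,d_{1}+k+1}=L_{d_{1}+p,d_{1}+k}+x_{2}L_{d_{1}+p,d_{1}+k+1}\) used above, but is not restated verbatim in the paper.

```python
# Hedged symbolic check of Lemma 6 (a sketch under an assumed closed form,
# not the paper's construction).
from itertools import combinations_with_replacement
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def h(n, variables):
    """Complete homogeneous symmetric polynomial of degree n."""
    if n < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(variables, n)])

def L(d1, p, k):
    # assumed closed form of the entry L_{d1+p, d1+k}
    return h(p - k, [x1]*d1 + [x2]*k)

d1 = 3
for p in range(1, 6):
    for k in range(1, p + 1):
        # derivative identity: dL_{d1+p,d1+k}/dx2 == k * L_{d1+p,d1+k+1}
        assert sp.expand(sp.diff(L(d1, p, k), x2) - k*L(d1, p, k + 1)) == 0
        # recurrence: L_{d1+p+1,d1+k+1} == L_{d1+p,d1+k} + x2*L_{d1+p,d1+k+1}
        assert sp.expand(L(d1, p + 1, k + 1) - L(d1, p, k) - x2*L(d1, p, k + 1)) == 0
print("Lemma 6 identities verified on small cases")
```

Both assertions pass for all tested \((p,k)\), matching the inductive argument above.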

Proof of Theorem 4.6

The only change in (26)–(31) compared with (20) is the definition of column \(d_{1}+d_{2}^{-}+1\) of \(U\). Moreover, this column is only involved in computing column \(d_{1}+d_{2}^{-}+1\) of \(\varUpsilon\). Thus, it remains to show equalities (30) and (31); this will be done by induction. Equation (30) follows directly from Lemma 3 by induction.
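For intuition only, the factorization \(\varUpsilon=LU\) manipulated throughout this proof can be illustrated on a small, non-functional confluent Vandermonde matrix. The 3×3 toy below, with a double node \(a\) and a simple node \(b\), is an illustrative assumption and not the paper's bivariate functional matrix:

```python
# Toy illustration (not the paper's functional matrix): LU factorization of a
# small confluent Vandermonde matrix with double node `a` and simple node `b`.
import sympy as sp

a, b = sp.symbols('a b')

# Rows: values at a, first derivatives at a, values at b (confluent structure).
V = sp.Matrix([
    [1, a, a**2],
    [0, 1, 2*a],
    [1, b, b**2],
])

Lm, Um, perm = V.LUdecomposition()
assert perm == []                                  # no pivoting was needed
assert (Lm*Um - V).expand() == sp.zeros(3, 3)      # V = L*U
# L is unit lower triangular with polynomial entries in a and b, mirroring the
# structured entries L_{i,j} manipulated in the proof.
assert all(Lm[i, i] == 1 for i in range(3))
print(Lm)
```

Here the last pivot of \(U\) collapses to \((b-a)^{2}\), showing how node confluence concentrates in the \(U\) factor while \(L\) stays unit lower triangular, the structure the equalities (30)–(31) quantify.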

Let us now focus on (31) and denote \(j^{*}=d_{1}+d_{2}^{-}+1\). First, let us check that

$$\begin{aligned} U_{d_{1}+2,j^{*}}= &\bigl(d_{2}^{-}+d^{*}\bigr) \int_{0}^{x_{2}}U_{d_{1}+1,j ^{*}}(x_{1},y)dy. \end{aligned}$$

On the one hand, using Lemma 4, one has

$$\begin{aligned} \varUpsilon_{d_{1}+2,j^{*}} &=x_{2} \varUpsilon_{d_{1}+1,j^{*}}+ \bigl(d_{2}^{-}+d^{*}\bigr) \int_{0}^{x_{2}}\varUpsilon_{d_{1}+1,j^{*}}(y)dy, \\ &=x_{2} \sum_{k=1}^{d_{1}+1}L_{d_{1}+1,k}U_{k,j^{*}}+ \bigl(d_{2}^{-}+d ^{*}\bigr) \int_{0}^{x_{2}}\sum_{k=1}^{d_{1}+1}L_{d_{1}+1,k}U_{k,j^{*}}(y) dy. \end{aligned}$$

Since, by definition, \(L_{d_{1}+1,d_{1}+1}=1\) and \(L_{d_{1}+1,k}=L _{d_{1}+1,k}(x_{1})\) for \(k\in\{1,\ldots,d_{1}\}\), it follows that

$$\begin{aligned} \varUpsilon_{d_{1}+2,j^{*}} = &x_{2}\sum_{k=1}^{d_{1}+1}L_{d_{1}+1,k}U_{k,j ^{*}}+ \bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=1}^{d_{1}}L_{d_{1}+1,k} \int_{0}^{x_{2}}U _{k,j^{*}}(y)dy \\ &{}+\bigl(d_{2}^{-}+d^{*}\bigr) \int_{0}^{x_{2}}U_{d_{1}+1,j^{*}}(y)dy. \end{aligned}$$

On the other hand, by the definition of \(\varUpsilon\),

$$\begin{aligned} \varUpsilon_{d_{1}+2,j^{*}} &=\sum_{k=1}^{d_{1}+2}L_{d_{1}+2,k}U_{k,j^{*}}=U _{d_{1}+2,j^{*}}+\sum_{k=1}^{d_{1}+1}L_{d_{1}+2,k}U_{k,j^{*}}. \end{aligned}$$

To prove (31) for \(i=d_{1}+2\), one has to show that

$$\begin{aligned} \sum_{k=1}^{d_{1}+1}L_{d_{1}+2,k}U_{k,j^{*}}=x_{2} \sum_{k=1}^{d_{1}+1}L _{d_{1}+1,k}U_{k,j^{*}}+ \bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=1}^{d_{1}}L_{d_{1}+1,k} \int_{0}^{x_{2}}U_{k,j^{*}}(y)dy, \end{aligned}$$

or, equivalently, that

$$\begin{aligned} \sum_{k=1}^{d_{1}+1} ( L_{d_{1}+2,k}-x_{2} L_{d_{1}+1,k} ) U _{k,j^{*}}-\bigl(d_{2}^{-}+d^{*} \bigr)\sum_{k=1}^{d_{1}}L_{d_{1}+1,k} \int_{0} ^{x_{2}}U_{k,j^{*}}(y)dy=0 . \end{aligned}$$
(52)

Using Eq. (28), one obtains

$$\begin{aligned} &L_{d_{1}+2,k}-x_{2} L_{d_{1}+1,k}=L_{d_{1}+1,k-1}+ ( x_{1}-x_{2} ) L_{d_{1}+1,k}, \quad\text{for } k=1,\ldots,d_{1}, \\ &L_{d_{1}+2,d_{1}+1}-x_{2} L_{d_{1}+1,d_{1}+1}=L_{d_{1}+1,d_{1}}. \end{aligned}$$

Thus, the left-hand side of (52) becomes

$$\begin{aligned} &\sum_{k=1}^{d_{1}}L_{d_{1}+1,k}U_{k+1,j^{*}}+(x_{1}-x_{2}) \sum_{k=1} ^{d_{1}}L_{d_{1}+1,k}U_{k,j^{*}} -\bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=1}^{d_{1}}L_{d_{1}+1,k} \int_{0}^{x_{2}}U _{k,j^{*}}(y)dy \\ &\quad= \sum_{k=1}^{d_{1}}L_{d_{1}+1,k} \biggl( U_{k+1,j^{*}}+(x_{1}-x_{2}) U _{k,j^{*}}- \bigl(d_{2}^{-}+d^{*}\bigr) \int_{0}^{x_{2}}U_{k,j^{*}}(y)dy \biggr). \end{aligned}$$

Lemma 5 asserts that for all \(k=1,\ldots,d_{1}\) and \(j^{*}=d_{1}+d_{2}^{-}+1\) one has

$$\begin{aligned} U_{k+1,j^{*}}+(x_{1}-x_{2}) U_{k,j^{*}}-\bigl(d_{2}^{-}+d^{*}\bigr) \int_{0} ^{x_{2}}U_{k,j^{*}}(y)dy=0 \end{aligned}$$
(53)

which implies that (31) holds for \(i=d_{1}+2\).

Assume now that (31) is satisfied for \(i=d_{1}+2, \ldots,d_{1}+p\), where \(1< p< d_{2}^{-}+d^{*}\). It remains to prove that (31) is satisfied for \(i=d_{1}+p+1\).

By the same argument as for \(i=d_{1}+2\), one has

$$\begin{aligned} \varUpsilon_{d_{1}+p+1,j^{*}} =&x_{2} \varUpsilon_{d_{1}+p,j^{*}}+ \bigl(d_{2}^{-}+d ^{*}\bigr) \int_{0}^{x_{2}}\varUpsilon_{d_{1}+p,j^{*}}(y)dy, \\ =&x_{2}\sum_{k=1}^{d_{1}+p}L_{d_{1}+p,k}U_{k,j^{*}}+ \bigl(d_{2}^{-}+d^{*}\bigr) \sum _{k=1}^{d_{1}+p-1} \int_{0}^{x_{2}}L_{d_{1}+p,k}U_{k,j^{*}}(y)dy \\ &{}+\bigl(d_{2}^{-}+d^{*}\bigr) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy \\ =&x_{2}\sum_{k=1}^{d_{1}+p}L_{d_{1}+p,k}U_{k,j^{*}}+ \bigl(d_{2}^{-}+d^{*}\bigr) \sum _{k=1}^{d_{1}+p-1} \int_{0}^{x_{2}}L_{d_{1}+p,k}U_{k,j^{*}}(y)dy \\ &{}+(p-1) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy+ \bigl(d_{2}^{-}+d^{*}-p+1\bigr) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy. \end{aligned}$$

On the other hand, we obtain

$$\begin{aligned} \varUpsilon_{d_{1}+p+1,j^{*}} &=\sum_{k=1}^{d_{1}+p}L_{d_{1}+p+1,k}U_{k,j ^{*}}+U_{d_{1}+p+1,j^{*}}. \end{aligned}$$

Hence, we have to prove that

$$\begin{aligned} &\sum_{k=1}^{d_{1}+p}L_{d_{1}+p+1,k}U_{k,j^{*}}-x_{2} \sum_{k=1}^{d _{1}+p}L_{d_{1}+p,k}U_{k,j^{*}}- \bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=1}^{d_{1}+p-1} \int_{0}^{x_{2}}L_{d_{1}+p,k}U_{k,j^{*}}(y)dy \\ &\quad=(p-1) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy. \end{aligned}$$

Now, using the result from Lemma 5, one has to prove that

$$\begin{aligned} &\sum_{k=d_{1}+2}^{d_{1}+p}L_{d_{1}+p+1,k}U_{k,j^{*}}-x_{2} \sum_{k=d_{1}+2}^{d_{1}+p}L_{d_{1}+p,k}U_{k,j^{*}} \end{aligned}$$
(54)
$$\begin{aligned} &\qquad{}-\bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=d_{1}+1}^{d_{1}+p-1} \int_{0}^{x_{2}}L_{d _{1}+p,k}U_{k,j^{*}}(y)dy \end{aligned}$$
(55)
$$\begin{aligned} &\quad=(p-1) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy. \end{aligned}$$
(56)

Using Eq. (28), one obtains

$$\begin{aligned} &L_{d_{1}+p+1,k}-x_{2} L_{d_{1}+p,k}=L_{d_{1}+p,k-1}, \text{ for } k=d_{1}+2,\ldots,d_{1}+p. \end{aligned}$$

Finally, Eq. (54) becomes

$$\begin{aligned} E =&\sum_{k=1}^{p-1}L_{d_{1}+p,d_{1}+k}U_{d_{1}+k+1,j^{*}}- \bigl(d_{2}^{-}+d ^{*}\bigr)\sum _{k=1}^{p-1} \int_{0}^{x_{2}}L_{d_{1}+p,d_{1}+k}U_{d_{1}+k,j ^{*}}(y)dy \\ &{}-(p-1) \int_{0}^{x_{2}}U_{d_{1}+p,j^{*}}(y)dy=0. \end{aligned}$$
(57)

Differentiating \(E\) given in (57) with respect to the variable \(x_{2}\) one obtains

$$\begin{aligned} \frac{\partial E}{\partial x_{2}} = &\sum_{k=1}^{p-1} \biggl( \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}U_{d_{1}+k+1,j^{*}}+L_{d_{1}+p,d _{1}+k}\frac{\partial U_{d_{1}+k+1,j^{*}}}{\partial x_{2}} \biggr) \\ &{}-\bigl(d_{2}^{-}+d^{*}\bigr)\sum _{k=1}^{p-1}L_{d_{1}+p,d_{1}+k}U_{d_{1}+k,j ^{*}}-(p-1)U_{d_{1}+p,j^{*}} \\ = &\sum_{k=1}^{p-1} \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}U_{d_{1}+k+1,j^{*}} \\ &{}+\sum_{k=1}^{p-1}L_{d_{1}+p,d_{1}+k} \biggl( \frac{\partial U_{d_{1}+k+1,j ^{*}}}{\partial x_{2}}-\bigl(d_{2}^{-}+d^{*} \bigr)U_{d_{1}+k,j^{*}} \biggr) -(p-1)U _{d_{1}+p,j^{*}} \end{aligned}$$

Using the induction hypothesis, one obtains

$$\begin{aligned} \frac{\partial E}{\partial x_{2}} = &\sum_{k=1}^{p-1} \frac{\partial L _{d_{1}+p,d_{1}+k}}{\partial x_{2}}U_{d_{1}+k+1,j^{*}}-(p-1)U_{d_{1}+p,j ^{*}} \\ &{}+\sum_{k=1}^{p-1}L_{d_{1}+p,d_{1}+k} \bigl( \bigl(d_{2}^{-}+d^{*}-(k-1)\bigr)U _{d_{1}+k,j^{*}}-\bigl(d_{2}^{-}+d^{*} \bigr)U_{d_{1}+k,j^{*}} \bigr) \\ = &\sum_{k=1}^{p-1} \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}U_{d_{1}+k+1,j^{*}}- \sum_{k=2}^{p-1}(k-1) L_{d_{1}+p,d_{1}+k} U_{d_{1}+k,j^{*}}-(p-1)U _{d_{1}+p,j^{*}} \\ = &\sum_{k=1}^{p-2} \biggl( \frac{\partial L_{d_{1}+p,d_{1}+k}}{\partial x_{2}}-k L_{d_{1}+p,d_{1}+1+k} \biggr) U_{d_{1}+1+k,j^{*}} \\ &{}+ \biggl( \frac{\partial L_{d_{1}+p,d_{1}+p-1}}{\partial x_{2}}-(p-1) \biggr) U _{d_{1}+p,j^{*}}\equiv0, \end{aligned}$$

which is zero, as expected, since Lemma 6 asserts that each of the parenthesized factors is identically zero; this ends the proof. □
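The consequences of Eq. (28) used in this proof, namely \(L_{d_{1}+2,k}-x_{2}L_{d_{1}+1,k}=L_{d_{1}+1,k-1}+(x_{1}-x_{2})L_{d_{1}+1,k}\) for \(k\le d_{1}\), the boundary case \(L_{d_{1}+2,d_{1}+1}-x_{2}L_{d_{1}+1,d_{1}+1}=L_{d_{1}+1,d_{1}}\), and \(L_{d_{1}+p+1,k}-x_{2}L_{d_{1}+p,k}=L_{d_{1}+p,k-1}\) for \(k=d_{1}+2,\ldots,d_{1}+p\), can likewise be checked symbolically. The sketch below ASSUMES, for illustration only, that \(L_{i,j}\) is the complete homogeneous symmetric polynomial \(h_{i-j}\) in \(x_{1}\) with multiplicity \(\min(j,d_{1})\) and \(x_{2}\) with multiplicity \(\max(0,j-d_{1})\), a closed form consistent with the recurrences above but not restated from the paper.

```python
# Hedged symbolic check of the consequences of Eq. (28) used in the proof,
# under an ASSUMED complete-homogeneous closed form for the entries of L.
from itertools import combinations_with_replacement
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def h(n, variables):
    """Complete homogeneous symmetric polynomial of degree n."""
    if n < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(variables, n)])

def L(d1, i, j):
    # assumed closed form: x1 with multiplicity min(j, d1), x2 with max(0, j-d1)
    return h(i - j, [x1]*min(j, d1) + [x2]*max(0, j - d1))

d1, pmax = 3, 4
# base case: L_{d1+2,k} - x2 L_{d1+1,k} = L_{d1+1,k-1} + (x1 - x2) L_{d1+1,k}
for k in range(1, d1 + 1):
    lhs = L(d1, d1 + 2, k) - x2*L(d1, d1 + 1, k)
    rhs = L(d1, d1 + 1, k - 1) + (x1 - x2)*L(d1, d1 + 1, k)
    assert sp.expand(lhs - rhs) == 0
# boundary term: L_{d1+2,d1+1} - x2 L_{d1+1,d1+1} = L_{d1+1,d1}
assert sp.expand(L(d1, d1 + 2, d1 + 1) - x2*L(d1, d1 + 1, d1 + 1)
                 - L(d1, d1 + 1, d1)) == 0
# induction step: L_{d1+p+1,k} - x2 L_{d1+p,k} = L_{d1+p,k-1}
for p in range(2, pmax + 1):
    for k in range(d1 + 2, d1 + p + 1):
        assert sp.expand(L(d1, d1 + p + 1, k) - x2*L(d1, d1 + p, k)
                         - L(d1, d1 + p, k - 1)) == 0
print("identities derived from Eq. (28) verified on small cases")
```

All three families of identities pass on the tested small cases, in agreement with the base case and induction step carried out above.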

Cite this article

Boussaada, I., Niculescu, SI. Characterizing the Codimension of Zero Singularities for Time-Delay Systems. Acta Appl Math 145, 47–88 (2016). https://doi.org/10.1007/s10440-016-0050-9
