
Testing structural identifiability by a simple scaling method

  • Mario Castro ,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Writing – original draft, Writing – review & editing

    marioc@comillas.edu

    ☯ Both authors also contributed equally to this work.

    Affiliations Grupo Interdisciplinar de Sistemas Complejos (GISC), Madrid, Spain, Instituto de Investigación Tecnológica (IIT), Universidad Pontificia Comillas, Madrid, E28015, Spain

  • Rob J. de Boer

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    ☯ Both authors also contributed equally to this work.

    Affiliation Theoretical Biology and Bioinformatics, Utrecht University, Utrecht, The Netherlands

Abstract

Successful mathematical modeling of biological processes relies on the expertise of the modeler to capture the essential mechanisms in the process at hand and on the ability to extract useful information from empirical data. A model is said to be structurally unidentifiable if different quantitative sets of parameters provide the same observable outcome. This is typical (but not exclusive) of partially observed problems in which only a few variables can be experimentally measured. Most of the available methods to test the structural identifiability of a model are either mathematically too complex for the general practitioner to apply, or require involved calculations or numerical computation for complex non-linear models. In this work, we present a new analytical method to test the structural identifiability of models based on ordinary differential equations, which exploits the invariance of the equations under scaling transformations of their parameters. The method is based on rigorous mathematical results but is easy and quick to apply, even when testing the identifiability of sophisticated highly non-linear models. We illustrate our method by example and compare its performance with other existing methods in the literature.

Author summary

Theoretical Biology is a useful approach to explain biological phenomena, generate hypotheses, or discriminate among competing theories. A well-formulated model has to be complex enough to capture the relevant mechanisms of the problem, and simple enough to be fitted to data. Structural identifiability tests aim to recognize, in advance, whether the structure of the model allows its parameters to be determined, even with unlimited high-quality data. Available methods require advanced mathematical skills, or are too costly for high-dimensional non-linear models. We propose an analytical method based on scale invariance of the equations. It provides definite answers to the structural identifiability problem while being simple enough to be performed in a few lines of calculations without any computational aid. It compares favorably with other existing methods.

This is a PLOS Computational Biology Methods paper.

Introduction

Mathematical models contribute to our understanding of Biology in several ways, ranging from the quantification of biological processes to reconciling conflicting experiments [1]. In many cases, this requires formulating a mathematical model and extracting quantitative estimates of its parameters from the experimental data. Parameters are typically unknown constants that change the behavior of the model. While it is usually recognized that parameter estimation requires the availability of sufficient informative data, sometimes it is not possible to estimate all parameters due to the structure of the model (whatever the quantity or quality of the data), even with large amounts of noiseless observations. This inability relates to the concept of ‘structural identifiability’, introduced decades ago by Bellman and Åström [2, 3], as opposed to ‘practical identifiability’, which depends on limitations set by the data. Practical identifiability has important consequences that can lead to questionable interpretations of the data, and has caused some recent controversy [4, 5]. Structural unidentifiability, in contrast, poses an unsolvable limitation, as it is unrelated to the resolution of the experimental data collection or the number of observations.

Structural identifiability is a necessary condition for model fitting and should be tested before any attempt to extract information about the parameters, and as a test of the applicability of the model itself. Importantly, the quality of the fit does not guarantee that the estimated parameters are meaningful. In practice, this is both uncontrolled and misleading, as many fitting tools provide information about the goodness of fit but do not check sensitivity or identifiability. Structural identifiability can be qualified as global or local [6–10]. Global structural identifiability refers to the ability to estimate a unique set of parameters, while local (or simply, structural) identifiability means that parameters can be determined uniquely only within a limited subset of the parameter space. In practical terms, these definitions can be translated into the language of sensitivity analysis, as identifiability requires that (i) the columns of the sensitivity matrix are linearly independent, and (ii) each of its columns has at least one large entry [11, 12].

Traditionally, work primarily focused on linear systems of ordinary differential equations (ODEs) [2, 3, 13]. For non-linear models, those methods cannot be applied, so many methods have been proposed in the literature to address structural identifiability. Early attempts were based on power series expansions of the original non-linear system [14], the similarity transformation method [15–17] or the so-called direct-test method proposed by Denis-Vidal and Joly-Blanchard [18, 19]. These methods exploit the definition of identifiability either analytically [18] or numerically [20–25], but they are not generically suitable for high-dimensional problems. Xia and Moog [6, 26] proposed an alternative to these classical methods based on the implicit function theorem, but this method also becomes involved when applied to complex models [27].

Another approach that is becoming mainstream is based on the framework of differential algebra [28–31]. These methods are also difficult to apply, requiring advanced mathematical skills and, in some cases, replacing highly non-linear terms with polynomial approximations that simplify the analysis. On the positive side, they are based on rigorous mathematical theories, are suitable for non-linear models and, more importantly, they can be coded using existing symbolic computational libraries. In this regard, it is worth mentioning DAISY [32], GenSSI [33], COMBOS [34] or, more recently, SIAN [35].

In almost all cases, the major disadvantage of these methods is the difficulty of applying them to even a few differential equations, hence requiring advanced mathematical skills and/or dedicated numerical or symbolic software (that is frequently unable to handle the complexity of the problem). This explains why, despite the huge volume of publications in the field of theoretical biology, only a few address parameter identifiability explicitly. In this paper, we introduce a simple method to assess local structural identifiability of ODE models that reduces the complexity of existing methods and can bring identifiability testing to a broader audience. Our method is based on simple scaling transformations and the solution of simple sparse systems of equations. Identifiability for stochastic models [36] is out of the scope of our work.

Method

A couple of motivating examples

Consider a simple death model in which the death rate is the product of two parameters λ1 and λ2, namely

(1) dx/dt = −λ1λ2 x,

with the solution

(2) x(t) = x0 exp(−λ1λ2 t).

It is evident that from an experiment only the product λ1λ2 can be inferred, and not either of the two parameters independently. Following the ‘actionable’ definition in Ref. [11], local structural identifiability is directly linked to the linear independence of the columns of the sensitivity matrix, Sij, of the variable xi with respect to parameter λj,

(3) Sij = ∂xi/∂λj.

Here, we will work with a related (dimensionless) quantity called the relative sensitivity, or simply the elasticity matrix K, with elements Kij given by

(4) Kij = ∂ln xi/∂ln λj = (λj/xi) ∂xi/∂λj.

The logarithm in the definition of the elasticity matrix provides a clear-cut interpretation of its coefficients. Thus, if Kij = 1, a 10% increase in λj implies a 10% increase in xi, and if Kij = 0.5, that very same increase in λj translates only to a 5% increase in xi.

For Eq (1), the elasticity matrix would be simply a 1 × 2 matrix, with

(5) K11 = K12 = −λ1λ2 t.
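
As an illustration (our own sketch, assuming sympy, and not code from the original analysis), Eq (5) can be reproduced symbolically, making the linear dependence of the two columns explicit:

import sympy as sp

t, x0, lam1, lam2 = sp.symbols('t x0 lambda1 lambda2', positive=True)

# Exact solution of the death model, Eq (2): x(t) = x0 * exp(-lambda1*lambda2*t)
x = x0 * sp.exp(-lam1 * lam2 * t)

# Elasticity, Eq (4): K_1j = d ln x / d ln lambda_j = (lambda_j / x) * dx/dlambda_j
K = [sp.simplify(lam * sp.diff(x, lam) / x) for lam in (lam1, lam2)]
print(K)                                  # [-lambda1*lambda2*t, -lambda1*lambda2*t], i.e. Eq (5)
print(sp.simplify(K[0] - K[1]) == 0)      # True: identical columns, hence linearly dependent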

We now propose to multiply λ1 by a generic scale factor u, and to divide λ2 by the same factor, such that the solution remains invariant. Indeed, the scaled solution of Eq (2) does not depend on that scale factor u,

(6) x(t) = x0 exp(−(uλ1)(λ2/u) t) = x0 exp(−λ1λ2 t),

and, also, differentiating it with respect to u and applying the chain rule (evaluated at u = 1),

(7) λ1 ∂x/∂λ1 − λ2 ∂x/∂λ2 = dx/du = 0,

where the last equality follows from Eq (6).

Rearranging Eq (7) and dividing by x,

(8) K11 = K12,

so the two columns of the elasticity matrix are linearly dependent and, accordingly, λ1 and λ2 are unidentifiable. In this particular case, the exact solution confirms this result: from Eq (5), K11 = K12 = −λ1λ2 t.

In this case we had complete knowledge of the solution and, consequently, it was straightforward to find the right way to introduce the scaling u. Fortunately, this simple scaling calculation can also be performed directly on Eq (1). Introducing two unknown scaling factors, u1 and u2, into that equation,

dx/dt = −(u1λ1)(u2λ2) x.

Requiring that this remains identical (or, more formally, invariant) to Eq (1), i.e., λ1λ2 x = u1λ1 u2λ2 x, we conclude that u1u2 = 1. The fact that u1 and u2 cannot be solved individually also means that the real values of λ1 and λ2 cannot be determined, namely both parameters are unidentifiable.

Next consider a death model with immigration:

(9) dx/dt = λ1 − λ2 x.

In this case, to leave the system invariant we need to find u1 and u2 such that

u1λ1 − u2λ2 x = λ1 − λ2 x

for all values of x at any time. Rearranging the latter equation,

λ1(u1 − 1) = λ2(u2 − 1) x(t),

where the left-hand side of the last equation is a constant and the right-hand side depends on time. Hence the only possible solution of the latter equation is u1 = u2 = 1, implying that both λ1 and λ2 are locally identifiable. Notice the difference with the preceding case, Eq (1), in which an infinite number of combinations of the scaling factors satisfy the invariance condition.
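
The same invariance conditions can be solved symbolically. The following sketch, assuming sympy, reproduces both conclusions: only the product λ1λ2 is constrained in Eq (1), whereas Eq (9) forces u1 = u2 = 1.

import sympy as sp

x, lam1, lam2, u1, u2 = sp.symbols('x lambda1 lambda2 u1 u2', positive=True)

# Pure death model, Eq (1): require lambda1*lambda2*x == (u1*lambda1)*(u2*lambda2)*x for all x
death = sp.Eq(lam1 * lam2 * x, (u1 * lam1) * (u2 * lam2) * x)
print(sp.solve(death, u2))      # expected [1/u1]: only u1*u2 = 1 is fixed, lambda1 and lambda2 unidentifiable

# Death model with immigration, Eq (9): require lambda1 - lambda2*x == u1*lambda1 - u2*lambda2*x for all x
mismatch = sp.Poly((lam1 - lam2 * x) - (u1 * lam1 - u2 * lam2 * x), x)
print(sp.solve(mismatch.coeffs(), [u1, u2]))   # expected: unique solution u1 = 1, u2 = 1, both identifiable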

These simple examples illustrate how scaling invariance of the model equations can be used to determine whether the parameters are unidentifiable or not. We prove this result more rigorously in S1 Text.

Description of the method

Let us define a general ODE model characterized by the time evolution of n variables, xi(t), depending on m parameters λj,

(10) dxi/dt = fi(x1, …, xn; λ1, …, λm), i = 1, …, n,

(11) xi(0) = xi,0,

where the functions fi depend on the specific details of the problem at hand and xi,0 are the initial conditions. We need to distinguish between those variables that can be observed (measured) in the experiment, x1, …, xr, and those which cannot (they are often referred to as latent variables), xr+1, …, xn.

As we will prove below, the simplicity of our method relies on the ability to decompose the functions fi into a sum of M functionally independent summands, fik,

(12) fi(x1, …, xn; λ1, …, λm) = Σ_{k=1…M} fik(x^{ik}; λ^{ik}),

having the property that fik is functionally independent of fil for every k ≠ l. For brevity, we denote by x^{ik} and λ^{ik} the subsets of variables and parameters included in the function fik.

The notion of linearly independent functions and how to test it is summarized in S1 Text. However, a simple definition would be: if f1(x1, x2, …), …, fn(x1, x2, …) are linearly independent functions, then the only solution of the equation

(13) a1 f1(x1, x2, …) + … + an fn(x1, x2, …) = 0, for all x1, x2, …,

is a1 = … = an = 0.

Typical examples of functionally independent functions are summarized in Table 1. For instance, f11 = a x1, f12 = b x1 x3, f13 = (c + x4)^(−1) are functionally independent, whereas examples of dependent functions would be f11 = a x1 x2 and f12 = b x1 x2. Note that it is not required that fij and fkj are independent (as they appear in different equations). For instance, the example in Eq (9) can be decomposed into polynomials of degree 0 (a constant) and 1 (a linear function), namely

f1 = f11 + f12, with f11 = λ1 and f12 = −λ2 x.
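
Independence can be checked with the (generalized) Wronskian criterion summarized in S1 Text. A small sketch, assuming sympy and its wronskian helper, applied to the decomposition of Eq (9) and to one of the motifs of Table 1:

import sympy as sp

x, t, lam1, lam2 = sp.symbols('x t lambda1 lambda2', positive=True)

# Decomposition of Eq (9): f11 = lambda1 (degree 0) and f12 = -lambda2*x (degree 1)
print(sp.wronskian([lam1, -lam2 * x], x))     # -lambda1*lambda2, nonzero -> independent

# A motif from Table 1: two exponentials are independent whenever lambda1 != lambda2
w = sp.simplify(sp.wronskian([sp.exp(-lam1 * t), sp.exp(-lam2 * t)], t))
print(w)                                      # proportional to (lambda1 - lambda2)*exp(-(lambda1 + lambda2)*t)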

Table 1. A collection of frequently used linearly independent functions: All the functions listed in the table are independent of each other (whether of the same or of a different type).

We assume that λ1 ≠ λ2 in all of the cases.

https://doi.org/10.1371/journal.pcbi.1008248.t001

We summarize our method in Box 1.

Box 1: Summary of the scale invariance local structural identifiability method introduced in this work

  1. Scale all parameters and all unobserved variables by unknown scaling factors, u:
     λj → u_{λj} λj (j = 1, …, m), xi → u_{xi} xi (i = r + 1, …, n),
     and substitute them into Eq (15) below.
  2. Equate each functionally independent function, fik, to its scaled version. Namely,
     (14) fik(x^{ik}; λ^{ik}) = (1/u_{xi}) fik(u_x x^{ik}; u_λ λ^{ik}),
     where u_x x^{ik} and u_λ λ^{ik} denote the element-wise scaled subsets, u_{xi} = 1 for 1 ≤ i ≤ r, and the prefactor 1/u_{xi} on the right-hand side of the equation comes from the scaling of dxi/dt. From Eq (11) it follows that the initial conditions scale as xi,0 → u_{xi} xi,0.
  3. From Eq (14), find combinations of the scaling factors u that leave the system invariant. Hereafter, we will denote these as the identifiability equations of the model (see Eq (24) below).
  4. Only the parameters λi whose scaling factors admit only the solution u_{λi} = 1 are identifiable. Only the variables xi with u_{xi} = 1 are observable. Otherwise, parameters whose scaling factors are coupled form identifiable groups but cannot be identified independently.

In summary, our method reduces the complexity of finding identifiable parameters to finding which scaling factors do not satisfy the trivial solution ui = 1. In the literature, when the scaling factor related to one of the latent variables xr+1, …, xn admits only the solution u_{xk} = 1, then xk is said to be observable [10]. Thus, our method addresses identifiability and observability at the same time. Additionally, irreducible equations involving two or more parameters provide the so-called identifiable groups of variables that cannot be fitted independently. In the case of the pure death model above, the identifiability equation u1u2 = 1 is a signature of the identifiable group λ1λ2, whose individual factors are unidentifiable. This is interesting, as groups involving latent variables (for instance, the group λ2λ3 in the example below, which arises through the scaling of the latent variable x2) would inform future experiments aimed at observing that variable and decoupling that group.
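
For right-hand sides that are polynomial in the state variables, the steps of Box 1 can be prototyped in a few lines of symbolic code. The sketch below is our own illustration, assuming sympy; the helper name scaling_equations and the restriction to polynomial right-hand sides (so that matching monomial coefficients implements the summand-by-summand comparison of Eq (14)) are simplifying assumptions, not requirements of the method.

import sympy as sp

def scaling_equations(rhs, states, params, observed):
    # Box 1, restricted to right-hand sides polynomial in the states: scale every
    # parameter and every latent state, require the scaled right-hand side of each
    # equation (divided by the scale factor of its own state) to equal the original
    # one, and return the conditions obtained by matching monomial coefficients.
    u_par = {p: sp.Symbol(f'u_{p}', positive=True) for p in params}
    u_var = {s: (sp.Integer(1) if s in observed else sp.Symbol(f'u_{s}', positive=True))
             for s in states}
    sub = {p: u_par[p] * p for p in params}
    sub.update({s: u_var[s] * s for s in states})
    eqs = []
    for s_i, f_i in zip(states, rhs):
        mismatch = sp.expand(f_i.subs(sub, simultaneous=True) / u_var[s_i] - f_i)
        eqs += sp.Poly(mismatch, *states).coeffs()   # one condition per independent summand
    unknowns = list(u_par.values()) + [u for u in u_var.values() if u != 1]
    return eqs, unknowns

x, lam1, lam2 = sp.symbols('x lambda1 lambda2', positive=True)

# Death model with immigration, Eq (9): x observed
eqs, unknowns = scaling_equations([lam1 - lam2 * x], [x], [lam1, lam2], observed=[x])
print(sp.solve(eqs, unknowns, dict=True))   # expected: [{u_lambda1: 1, u_lambda2: 1}]

# Pure death model, Eq (1): x observed
eqs, unknowns = scaling_equations([-lam1 * lam2 * x], [x], [lam1, lam2], observed=[x])
print(sp.solve(eqs, unknowns, dict=True))   # expected: [{u_lambda1: 1/u_lambda2}] -> group lambda1*lambda2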

It is also worth mentioning that our identifiability test (illustrated by example in S1 Text) provides a simple way to find a type of symmetry that is related to scale invariance. More sophisticated methods have been introduced in the literature to address other symmetries [37–39] using the theory of Lie group transformations; however, that approach involves complex calculations assisted by symbolic computations.

Results

The main result

Now we are equipped to prove the main result of the paper. We will proceed in two steps: firstly, we will show how Eq (14) is translated into a set of equations for the scaling factors u. Secondly, we will connect the elasticity matrix with the solution of the identifiability equations and the identifiability of the parameters.

Consider a model described by a set of n ordinary differential equations (ODEs),

(15) dxi/dt = Σ_{k=1…M} fik(x^{ik}; λ^{ik}), i = 1, …, n,

where fik is functionally independent of fil for every k ≠ l (namely, they satisfy the generalized Wronskian theorem; see S1 Text). For the sake of simplicity, we denote by x^{ik} and λ^{ik} the subsets of variables and parameters of the function fik.

Motivated by Eqs (1)–(5), we seek scalings of the parameters that leave the system invariant. As we prove below, this invariance (or lack thereof) is related to the identifiability of the parameters. Hence, if we define the following scaling transformation:

(16) λj → u_{λj} λj (j = 1, …, m), xi → u_{xi} xi (i = r + 1, …, n)

(where the variables x1, …, xr are unmodified as we can measure them in the experiment), we can write the following set of re-scaled equations:

(17) dxi/dt = Σ_{k=1…M} fik(u_x x^{ik}; u_λ λ^{ik}), i = 1, …, r,

(18) xi(0) = xi,0, i = 1, …, r,

(19) d(u_{xi} xi)/dt = Σ_{k=1…M} fik(u_x x^{ik}; u_λ λ^{ik}), i = r + 1, …, n,

(20) u_{xi} xi(0) = u_{xi} xi,0, i = r + 1, …, n,

where M is the number of functionally independent summands in each equation and u_x x^{ik}, u_λ λ^{ik} denote the element-wise scaled subsets. It is convenient to rewrite Eq (19) as

(21) dxi/dt = (1/u_{xi}) Σ_{k=1…M} fik(u_x x^{ik}; u_λ λ^{ik}), i = r + 1, …, n,

to perform the scale invariance analysis below in a simpler way.

If the solution is invariant under this transformation, then the right-hand sides of Eq (15) and, consequently, of Eqs (17)–(21) should be equal. Besides, by the functional linear independence of the functions fik we can split each summand. Thus,

(22) fik(x^{ik}; λ^{ik}) = fik(u_x x^{ik}; u_λ λ^{ik}), i = 1, …, r,

and

(23) fik(x^{ik}; λ^{ik}) = (1/u_{xi}) fik(u_x x^{ik}; u_λ λ^{ik}), i = r + 1, …, n.

This new set of equations is much easier to solve than the one that we would obtain from Eqs (17)–(19) (which would be equivalent to the so-called direct-test method [18]). Eqs (22) and (23) admit the trivial solution u_{λj} = u_{xi} = 1. Alternatively, some of the parameters are functionally related to each other. Generically, these relations can be written as

(24) u_{λk} = g_k(u_{m1}, u_{m2}, …).

Note that, for each parameter k, the scaling will depend only on a subset of all the scaling factors, u_{m1}, u_{m2}, … We denote Eq (24) the identifiability equations of the model. A third possibility would be that some scaling factors take fixed values different from 1. We discuss that case below.

Let us now connect the identifiability equations with the concept of local structural identifiability. If we take the partial derivative of the invariant solution, xi(t; u_{λ1}λ1, …, u_{λm}λm) = xi(t; λ1, …, λm), with respect to u_{λk}, by the chain rule it follows that

(25) Σ_m γmk λm ∂xi/∂λm = 0,

where, for convenience, we have defined γmk = ∂u_{λm}/∂u_{λk}, evaluated at the trivial solution u = 1 (so that γkk = 1). Finally, dividing Eq (25) by xi,

(26) Σ_m γmk Kim = 0, that is, Kik = −Σ_{m≠k} γmk Kim,

where Kim are the elements of the elasticity matrix defined in Eq (4). Eq (26) implies that Kik can be written as a linear combination of the other column(s) of the elasticity matrix. According to our discussion in the Introduction (see also Refs. [11, 12]) this means that λk is not identifiable.

Summarising, for each parameter λk either the identifiability equations admit only the solution u_{λk} = 1, or λk is not identifiable. The adjective “local” follows because the method relies on the continuity of the derivative of xi(t) with respect to λk to derive Eq (25). Thus, it is unable to capture discrete transformations such as the one discussed for Model 8 in S1 Text, which, as we anticipated above, corresponds to the third possible type of solution of the identifiability Eq (24).
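
The link in Eq (26) between invariance and rank deficiency of the elasticity matrix can also be checked numerically when no closed-form solution is available. The following sketch is our own illustration, assuming numpy and scipy; the helper name elasticity and the parameter values are arbitrary choices. It builds the elasticity matrix of the two motivating models by central finite differences in log-parameter space and inspects its rank.

import numpy as np
from scipy.integrate import solve_ivp

def elasticity(rhs, x0, params, t_eval, h=1e-3):
    # Finite-difference elasticity K[t, j] = d ln x(t) / d ln lambda_j for a scalar observable x
    base = np.log(params)
    cols = []
    for j in range(len(params)):
        logs = []
        for s in (+1.0, -1.0):
            p = np.exp(base + s * h * np.eye(len(params))[j])
            sol = solve_ivp(lambda t, x: rhs(x, p), (0.0, t_eval[-1]), x0,
                            t_eval=t_eval, rtol=1e-10, atol=1e-12)
            logs.append(np.log(sol.y[0]))
        cols.append((logs[0] - logs[1]) / (2 * h))
    return np.column_stack(cols)

t_eval = np.linspace(0.1, 5.0, 40)

# Pure death model, Eq (1): the two columns coincide -> rank 1, unidentifiable direction
K1 = elasticity(lambda x, p: [-p[0] * p[1] * x[0]], [1.0], [0.7, 1.3], t_eval)
print(np.linalg.matrix_rank(K1, tol=1e-2))   # expected: 1

# Death model with immigration, Eq (9): independent columns -> rank 2, both identifiable
K2 = elasticity(lambda x, p: [p[0] - p[1] * x[0]], [1.0], [0.7, 1.3], t_eval)
print(np.linalg.matrix_rank(K2, tol=1e-2))   # expected: 2

A rank-deficient elasticity matrix flags at least one unidentifiable direction, in agreement with the scaling analysis above.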

Example: An unidentifiable nonlinear model [16]

Here we show how to apply our method to a nonlinear model introduced in Ref. [16], defined by Eqs (27)–(31); this model is mathematically equivalent to Model 2 in S1 Text, where it is analyzed in detail. Following Box 1:

  1. We re-scale the non-observed variables and the parameters,
     (32) xi → u_{xi} xi (for the unobserved variables), λj → u_{λj} λj,
     as x1 is observed (so, u_{x1} = 1).
  2. We identify the functionally independent functions in each equation and, from Eq (14), equate each of them to its scaled version.
  3. Manipulating the previous equations, we obtain the identifiability equations of the model, Eq (33).
  4. As the system has more than one solution besides the trivial one (u_{λj} = u_{x2} = 1), it follows that the model is unidentifiable. Moreover, Eq (33) allows one to conclude that (i) if x2 were to be observed (u_{x2} = 1), all the parameters would be identifiable, and (ii) the combination λ2λ3 is identifiable since, for any scaling of x2, the condition u_{λ2} u_{λ3} = 1 is always fulfilled, and hence λ2λ3 is an identifiable group (see the illustrative sketch below).
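
Since this model is analyzed in detail as Model 2 in S1 Text, the sketch below applies the same manipulations to a purely illustrative linear two-compartment model of our own choosing (it is not the model of Ref. [16]), with x1 observed and x2 latent, assuming sympy; it shows how the scaling of a latent variable generates an identifiable group.

import sympy as sp

x1, x2, l1, l2, l3, l4 = sp.symbols('x1 x2 lambda1 lambda2 lambda3 lambda4', positive=True)
ux2, u1, u2, u3, u4 = sp.symbols('u_x2 u_lambda1 u_lambda2 u_lambda3 u_lambda4', positive=True)

# Illustrative model: dx1/dt = -lambda1*x1 + lambda2*x2, dx2/dt = lambda3*x1 - lambda4*x2
f1 = -l1 * x1 + l2 * x2
f2 = l3 * x1 - l4 * x2

sub = {l1: u1 * l1, l2: u2 * l2, l3: u3 * l3, l4: u4 * l4, x2: ux2 * x2}
inv1 = sp.expand(f1.subs(sub, simultaneous=True) - f1)          # observed equation: no prefactor
inv2 = sp.expand(f2.subs(sub, simultaneous=True) - ux2 * f2)    # latent equation, as in Eq (19)

eqs = sp.Poly(inv1, x1, x2).coeffs() + sp.Poly(inv2, x1, x2).coeffs()
sol = sp.solve(eqs, [u1, u2, u3, u4], dict=True)
print(sol)   # expected: [{u_lambda1: 1, u_lambda2: 1/u_x2, u_lambda3: u_x2, u_lambda4: 1}]
print([sp.simplify(s[u2] * s[u3]) for s in sol])   # expected: [1] -> lambda2*lambda3 is an identifiable group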

Comparison with other methods

We have applied the method outlined in Box 1 to 13 different models defined and analyzed in detail in S1 Text. The choice is based on two criteria: on the one hand, models 1–5 are included for pedagogical purposes; they are simple enough to illustrate the novel method, and most of the existing methods also provide the same definite answers. On the other hand, models 6–13 were chosen because they have previously been analyzed using the methods summarized in the Introduction and in Table 2. This allows us to put our method in direct competition with those methods and to highlight their merits and limitations.

Table 2. List of current methods testing structural identifiability.

We introduce here the acronyms referred to in Table 3.

https://doi.org/10.1371/journal.pcbi.1008248.t002

The results of this comparison are summarized in Table 3, which is an extension of a similar table in Ref. [7]. The column Not Conclusive/Not Applicable groups different situations in which a particular method does not provide a conclusive answer (or no answer at all). In general, it captures the fact that many of these methods are computationally demanding (after several hours they do not provide any answer) or that the computations do not converge numerically. For instance, in some implementations of the Differential Algebra method [32], when the number of observables is lower than the number of parameters, the computation requires the evaluation of high-order derivatives of the functions fi in Eq (10), which can be computationally prohibitive. In other cases, some criterion of applicability is not fulfilled (for instance, the observability rank condition for the similarity transformation method), or the method cannot be completed because it involves the solution of high-degree polynomial or transcendental equations (Direct Test method). These limitations are summarized succinctly in the Cons column of Table 2.

Table 3. Summary of models compared in the literature: The number in brackets in the Model Name column corresponds to the number of observed variables.

Model Numbers correspond to those in Table A in S1 Text. The acronyms for the methods are summarized in Table 2. This table is an extension of Table 1 in Ref. [7].

https://doi.org/10.1371/journal.pcbi.1008248.t003

Discussion and conclusions

Table 3 shows that our method can handle any of these complex models and provides a local structural identifiability criterion that is compatible with those methods capable of producing an answer. Thus, our method is widely applicable. It is worth noting that in several cases our scaling method provides a conclusive answer where other, more complicated methods cannot (rightmost column in the table). As any globally structurally identifiable model is also locally identifiable, our results are compatible with those methods that can address that distinction.

Table 3 also highlights the large discrepancies among methods. These conflicting conclusions are rather discomforting and deserve deeper clarification. The main source of conflict arises when comparing the Taylor series and the Generating series methods, as they transform the original problem into an approximate one. Also, they (rightly) incorporate the initial conditions into the computation, while some implementations of the Differential Algebra (DA) method do not (see the DAISY implementation [32]), which can lead to different conclusions. Regarding the DA method, in some instances random values are used for the parameters to handle the complexity of some models, which, if those values are not properly explored, can lead to wrong conclusions.

So overall, we can distinguish three sources of discrepancy: local vs global structural identifiability (which is not an incompatibility, as global implies local and our method is restricted to the latter); conclusive vs not conclusive (which favors our method, as it is not limited by any computational constraint); and, most concerning, incompatible conclusions. Here, our method is compatible with the conclusions of DA and hybrid methods such as Reaction network theory or STRIKE-GOLDD. As we mentioned in the Introduction, Differential Algebra methods (and extensions) are considered the most reliable (when computable), and our method either agrees or provides an answer where the other methods cannot. The discrepancies with other methods are due to limitations or uncontrolled approximations when they are applied to complex problems, and have already been raised by other authors [7].

From the viewpoint of performance, it is worth emphasizing that we have performed our tests by hand, as illustrated in S1 Text, and that, after some practice (and exploiting some recurrent motifs, such as sums of different parameters or coefficients related to diagonal terms in the system of equations), the calculations can be made in a few minutes. This contrasts with the most sophisticated methods which, by hand, can fill several pages [27] or take hours using symbolic computation packages.

Together, broad applicability and simplicity are the main signatures of our method and this may attract the interest of mathematical modelers and spread the culture of checking structural identifiability as a mandatory step when fitting experimental data.

We would like to highlight a connection with the so-called Buckingham Π theorem of dimensional analysis [48]. In some sense, the scale invariance property is related to the principle of dimensional homogeneity, i.e., to the constraints that it imposes on how the independent variables can enter the equations in combination with the parameters. Our identifiability equations are therefore similar to finding the so-called Π-groups in that theorem.

A limitation of the method is that it is restricted to testing local identifiability. This is implicit in the differentiability of the elasticity matrix which, by definition, is a local operation. Discrete symmetries are not captured, and more sophisticated methods (based on Lie group transformations [39]) are required. However, simple manipulation of the equations to remove the latent variables can improve the explanatory power of the method and might capture those discrete symmetries (see Sec. 3.8 of S1 Text). We leave that extension for future developments.

Finally, in this work we have chosen to solve the scaling factor equations directly, as this is easy to perform with pen and paper. However, if we were to redefine the scaling factors as ui = exp(wi), the new factors wi would obey a linear system of homogeneous equations. It is therefore expected that the problem of identifiability is related to the rank of the matrix defining that linear system of equations. In that regard, the theorems presented in S1 Text could be supplemented with generic results on homogeneous systems of equations. Thus, our results provide a solid ground for the method and indicate an avenue for further development in other systems, like delay-differential or partial differential equations.
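
A minimal sketch of this reformulation, assuming sympy: for the two motivating models, the substitution ui = exp(wi) turns the identifiability equations into homogeneous linear systems whose null spaces count the unidentifiable directions.

import sympy as sp

# Pure death model, Eq (1): u1*u2 = 1  ->  w1 + w2 = 0
print(sp.Matrix([[1, 1]]).nullspace())           # one basis vector -> one unidentifiable combination (the group lambda1*lambda2)

# Death model with immigration, Eq (9): u1 = u2 = 1  ->  w1 = 0, w2 = 0
print(sp.Matrix([[1, 0], [0, 1]]).nullspace())   # empty null space -> only the trivial solution, both parameters identifiable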

Another open question is the identifiability problem of mixed-effects models, where parameters are not fixed quantities for each observation but, rather, are drawn from a meta-distribution linking different subjects [49]. For instance, if one considers the simple model dx/dt = −(a + b)x, then a and b are not identifiable. However, if they are assumed to be drawn from, say, two exponential distributions with different means μa and μb, then the distribution of λ ≡ a + b is given by p(λ) = (exp(−λ/μa) − exp(−λ/μb))/(μa − μb), which is formed by two linearly independent functions (if μa ≠ μb), and so μa and μb are identifiable, as the unique solution of the identifiability equations is u_{μa} = u_{μb} = 1 (because of the exponential). This kind of model needs further analysis, but it seems amenable to our approach.

Lastly, while we emphasize the simplicity of the method, it is also amenable to implementation using symbolic computation packages, particularly for systems with a large number of equations/reactions.

Supporting information

S1 Text. The theorems supporting the method and a catalogue of models with a detailed computation of the identifiability equations that were used to build Table 3.

https://doi.org/10.1371/journal.pcbi.1008248.s001

(TEX)

Acknowledgments

This work was initiated during summer visits of the authors to the Los Alamos National Laboratory, and we thank Nick Hengartner and Alan Perelson (LANL) for their hospitality and helpful comments on this work, and the Santa Fe Institute for supporting the summer visits of RdB.

References

  1. Castro M, Lythe G, Molina-París C, Ribeiro RM. Mathematics in Modern Immunology. Interface focus. 2016;6(2):20150093. pmid:27051512
  2. Bellman R, Åström KJ. On structural identifiability. Mathematical Biosciences. 1970;7(3-4):329–339.
  3. Jacquez JA, et al. Compartmental analysis in biology and medicine. New York, Elsevier Pub. Co.; 1972.
  4. Balsa-Canto E, Alonso-del Real J, Querol A. Mixed growth curve data do not suffice to fully characterize the dynamics of mixed cultures. Proceedings of the National Academy of Sciences. 2020;117(2):811.
  5. Ram Y, Obolski U, Feldman MW, Berman J, Hadany L. Reply to Balsa-Canto et al.: Growth models are applicable to growth data, not to stationary-phase data. Proceedings of the National Academy of Sciences. 2020;117(2):814–815.
  6. Miao H, Xia X, Perelson AS, Wu H. On the identifiability of nonlinear ODE models and applications in viral dynamics. SIAM review. 2011;53(1):3–39. pmid:21785515
  7. Chis OT, Banga JR, Balsa-Canto E. Structural identifiability of systems biology models: a critical comparison of methods. PloS one. 2011;6(11):e27755. pmid:22132135
  8. Villaverde AF, Barreiro A. Identifiability of large nonlinear biochemical networks. Match Commun Math Comput Chem (Mulheim an der Ruhr, Germany). 2016;76(2):259–276.
  9. Villaverde AF, Barreiro A, Papachristodoulou A. Structural Identifiability of Dynamic Systems Biology Models. PLoS Computational Biology. 2016;12(10):1–22. pmid:27792726
  10. Villaverde AF, Tsiantis N, Banga JR. Full observability and estimation of unknown inputs, states and parameters of nonlinear biological models. Journal of the Royal Society Interface. 2019;16(156):20190043. pmid:31266417
  11. Jaqaman K, Danuser G. Linking data to models: data regression. Nature Reviews Molecular Cell Biology. 2006;7(11):813. pmid:17006434
  12. Komorowski M, Costa MJ, Rand DA, Stumpf MP. Sensitivity, robustness, and identifiability in stochastic chemical kinetics models. Proceedings of the National Academy of Sciences. 2011;108(21):8645–8650. pmid:21551095
  13. Walter E, Lecourtier Y. Unidentifiable compartmental models: What to do? Mathematical Biosciences. 1981;56(1-2):1–25.
  14. Pohjanpalo H. System identifiability based on the power series expansion of the solution. Mathematical Biosciences. 1978;41(1-2):21–33.
  15. Vajda S, Rabitz H. State isomorphism approach to global identifiability of nonlinear systems. IEEE Transactions on Automatic Control. 1989;34(2):220–223.
  16. Vajda S, Godfrey KR, Rabitz H. Similarity transformation approach to identifiability analysis of nonlinear compartmental models. Mathematical Biosciences. 1989;93(2):217–248. pmid:2520030
  17. Chappell MJ, Godfrey KR. Structural identifiability of the parameters of a nonlinear batch reactor model. Mathematical Biosciences. 1992;108(2):241–251. pmid:1547364
  18. Denis-Vidal L, Joly-Blanchard G. An easy to check criterion for (un) indentifiability of uncontrolled systems and its applications. IEEE Transactions on Automatic Control. 2000;45(4):768–771.
  19. Raksanyi A, Lecourtier Y, Walter E, Venot A. Identifiability and distinguishability testing via computer algebra. Mathematical Biosciences. 1985;77(1-2):245–266.
  20. Walter E, Braems I, Jaulin L, Kieffer M. Guaranteed numerical computation as an alternative to computer algebra for testing models for identifiability. In: Numerical Software with Result Verification. Springer; 2004. p. 124–131.
  21. Maiwald T, Hass H, Steiert B, Vanlier J, Engesser R, Raue A, et al. Driving the model to its limit: profile likelihood based model reduction. PloS one. 2016;11(9):e0162366. pmid:27588423
  22. Villaverde AF, Barreiro A, Papachristodoulou A. Structural Identifiability Analysis via Extended Observability and Decomposition. IFAC-PapersOnLine. 2016;49(26):171–177.
  23. Kreutz C. An easy and efficient approach for testing identifiability. Bioinformatics. 2018;34(11):1913–1921. pmid:29365095
  24. Tönsing C, Timmer J, Kreutz C. Profile likelihood-based analyses of infectious disease models. Statistical methods in medical research. 2018;27(7):1979–1998. pmid:29512437
  25. Stigter JD, Molenaar J. A fast algorithm to assess local structural identifiability. Automatica. 2015;58:118–124.
  26. Xia X, Moog CH. Identifiability of nonlinear systems with application to HIV/AIDS models. IEEE transactions on automatic control. 2003;48(2):330–336.
  27. Koelle K, Farrell AP, Brooke CB, Ke R. Within-host infectious disease models accommodating cellular coinfection, with an application to influenza. Virus evolution. 2019;5(2):vez018. pmid:31304043
  28. Walter E, Pronzato L. On the identifiability and distinguishability of nonlinear parametric models. Mathematics and computers in simulation. 1996;42(2-3):125–134.
  29. Ljung L, Glad T. On global identifiability for arbitrary model parametrizations. Automatica. 1994;30(2):265–276.
  30. Ollivier F. Identifiabilité et identification: du Calcul Formel au Calcul Numérique? In: ESAIM: Proceedings. vol. 9. EDP Sciences; 2000. p. 93–99.
  31. Meshkat N, Eisenberg M, DiStefano JJ III. An algorithm for finding globally identifiable parameter combinations of nonlinear ODE models using Gröbner Bases. Mathematical Biosciences. 2009;222(2):61–72. pmid:19735669
  32. Bellu G, Saccomani MP, Audoly S, D’Angiò L. DAISY: A new software tool to test global identifiability of biological and physiological systems. Computer methods and programs in biomedicine. 2007;88(1):52–61. pmid:17707944
  33. Chiş O, Banga JR, Balsa-Canto E. GenSSI: a software toolbox for structural identifiability analysis of biological models. Bioinformatics. 2011;27(18):2610–2611. pmid:21784792
  34. Meshkat N, Kuo CEz, DiStefano J III. On finding and using identifiable parameter combinations in nonlinear dynamic systems biology models and COMBOS: a novel web implementation. PLoS One. 2014;9(10):e110261. pmid:25350289
  35. Hong H, Ovchinnikov A, Pogudin G, Yap C. SIAN: software for structural identifiability analysis of ODE models. Bioinformatics. 2019;35(16):2873–2874. pmid:30601937
  36. Brouwer AF, Meza R, Eisenberg MC. Parameter estimation for multistage clonal expansion models from cancer incidence data: A practical identifiability analysis. PLoS computational biology. 2017;13(3):e1005431. pmid:28288156
  37. Yates JWT, Evans ND, Chappell MJ. Structural identifiability analysis via symmetries of differential equations. Automatica. 2009;45(11):2585–2591.
  38. Anguelova M, Karlsson J, Jirstrand M. Minimal output sets for identifiability. Mathematical Biosciences. 2012;239(1):139–153. pmid:22609467
  39. Merkt B, Timmer J, Kaschek D. Higher-order Lie symmetries in identifiability and predictability analysis of dynamic models. Physical Review E. 2015;92(1):012920. pmid:26274260
  40. Craciun G, Kim J, Pantea C, Rempala GA. Statistical model for biochemical network inference. Communications in Statistics-Simulation and Computation. 2013;42(1):121–137. pmid:23125476
  41. Davidescu FP, Jørgensen SB. Structural parameter identifiability analysis for dynamic reaction networks. Chemical Engineering Science. 2008;63(19):4754–4762.
  42. Locke J, Millar A, Turner M. Modelling genetic networks with noisy and varied experimental data: the circadian clock in Arabidopsis thaliana. Journal of theoretical biology. 2005;234(3):383–393. pmid:15784272
  43. Wu H, Zhu H, Miao H, Perelson AS. Parameter identifiability and estimation of HIV/AIDS dynamic models. Bulletin of mathematical biology. 2008;70(3):785–799. pmid:18247096
  44. Ho DD, Neumann AU, Perelson AS, Chen W, Leonard JM, Markowitz M. Rapid turnover of plasma virions and CD4 lymphocytes in HIV-1 infection. Nature. 1995;373(6510):123–126. pmid:7816094
  45. Bartl M, Kötzing M, Kaleta C, Schuster S, Li P. Just-in-time activation of a glycolysis inspired metabolic network-solution with a dynamic optimization approach. In: Crossing Borders within the ABC: Automation, Biomedical Engineering and Computer Science. vol. 55; 2010. p. 217–222.
  46. Lipniacki T, Paszek P, Brasier AR, Luxon B, Kimmel M. Mathematical model of NF-κB regulatory module. Journal of theoretical biology. 2004;228(2):195–215. pmid:15094015
  47. Domurado M, Domurado D, Vansteenkiste S, De Marre A, Schacht E. Glucose oxidase as a tool to study in vivo the interaction of glycosylated polymers with the mannose receptor of macrophages. Journal of controlled release. 1995;33(1):115–123.
  48. Buckingham E. Illustrations of the use of dimensional analysis on physically similar systems. Physics Review. 1914;4(4):354–377.
  49. Lavielle M, Aarons L. What do we mean by identifiability in mixed effects models? Journal of pharmacokinetics and pharmacodynamics. 2016;43(1):111–122. pmid:26660913