
Identifiability analysis

From Wikipedia, the free encyclopedia


Identifiability analysis is a group of methods, based on mathematical statistics, that can be used to estimate how well model parameters are determined by the amount and quality of experimental data.[1] These methods therefore explore not only the identifiability of a model as a theoretical property, but also how well the model is determined by a particular experimental data set or, more generally, by a given experimental configuration.

Introduction

Assume a model is defined and regression analysis, or any other model-fitting procedure, can be performed to obtain the parameter values that minimize the difference between the modeled and experimental data. The goodness of fit, which represents this minimal difference in some chosen measure, does not reveal how reliable the parameter estimates are, nor is it a sufficient criterion to prove that the model was chosen correctly. For example, if the experimental data are noisy or the number of data points is insufficient, changing the best-fit parameter values by orders of magnitude may not significantly affect the quality of fit. To address these issues, identifiability analysis can be applied as an important step to ensure the correct choice of model and a sufficient amount of experimental data. Its purpose is either to provide quantified evidence that the model was chosen correctly and that the acquired experimental data are adequate, or to serve as an instrument for detecting non-identifiable and sloppy parameters, thereby helping to plan experiments and to build and improve the model at an early stage.
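A minimal sketch of this effect, using an assumed toy model y = a·b·x in which only the product a·b enters the prediction (the model, data, and noise level are illustrative assumptions, not taken from the cited literature), shows that rescaling the fitted values of a and b by orders of magnitude leaves the quality of fit unchanged:

```python
import numpy as np

# Assumed toy model: y = a * b * x, in which only the product a*b enters
# the prediction, so a and b are not individually identifiable.
def model(x, a, b):
    return a * b * x

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y_obs = model(x, 2.0, 3.0) + rng.normal(scale=0.05, size=x.size)

def sse(a, b):
    # Sum of squared residuals, used here as the goodness-of-fit measure.
    return np.sum((y_obs - model(x, a, b)) ** 2)

# Two parameter sets that differ by orders of magnitude in a and b but
# share the same product a*b give exactly the same quality of fit.
print(sse(2.0, 3.0))        # nominal best-fit values
print(sse(2000.0, 0.003))   # a scaled up 1000x, b scaled down 1000x
```

Because the fit cannot distinguish between such parameter sets, the individual values of a and b remain undetermined no matter how much data of this kind is collected.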

Structural and practical identifiability analysis

Structural identifiability analysis is a particular type of analysis in which the model structure itself is investigated for non-identifiability. Recognized non-identifiabilities may be removed analytically, for example by substituting the non-identifiable parameters with combinations of them. Overloading the model with independent parameters, when it is fitted to a finite experimental data set, may provide a good fit to the experimental data at the price of making the fitting results insensitive to changes in parameter values, therefore leaving the parameter values undetermined. Structural methods are also referred to as a priori methods, because this analysis can be performed before any fitting score function is calculated, by comparing the number of degrees of freedom of the model with the number of independent experimental conditions that can be varied.
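As an illustration of removing a structural non-identifiability by substituting a parameter combination (an assumed toy model, not one from the cited literature), consider

```latex
y(t) = \theta_1 \theta_2 \, e^{-\theta_3 t}.
```

Only the product \theta_1 \theta_2 affects the output, so \theta_1 and \theta_2 cannot be determined individually from any data set. Substituting the combination \phi = \theta_1 \theta_2 gives the reparameterized model

```latex
y(t) = \phi \, e^{-\theta_3 t}, \qquad \phi = \theta_1 \theta_2,
```

in which both remaining parameters are structurally identifiable.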

Practical identifiability analysis can be performed by exploring the fit of an existing model to experimental data. Once a fit in some measure has been obtained, parameter identifiability analysis can be performed either locally, near a given point (usually near the parameter values providing the best model fit), or globally, over an extended parameter space. A common example of practical identifiability analysis is the profile likelihood method.
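A minimal sketch of the profile likelihood idea, under an assumed exponential-decay model and synthetic data (the model, noise level, and grid are illustrative choices, not prescribed by the method): the parameter of interest is fixed on a grid of values, the remaining parameters are re-optimized at each grid point, and the resulting profile of the objective is inspected.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed data and model: y = a * exp(-b * t) with Gaussian noise; the sum
# of squared residuals serves as the objective to be profiled.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 30)
y_obs = 2.0 * np.exp(-1.5 * t) + rng.normal(scale=0.05, size=t.size)

def sse(a, b):
    return np.sum((y_obs - a * np.exp(-b * t)) ** 2)

# Profile for parameter b: fix b on a grid, re-optimize the remaining
# (nuisance) parameter a at each grid point, and record the minimum found.
b_grid = np.linspace(0.5, 3.0, 26)
profile = []
for b_fixed in b_grid:
    res = minimize_scalar(lambda a: sse(a, b_fixed),
                          bounds=(0.0, 10.0), method="bounded")
    profile.append(res.fun)

# A parameter is practically identifiable when its profile rises clearly on
# both sides of the minimum; a flat profile indicates non-identifiability.
print(b_grid[int(np.argmin(profile))])
```

Here the sum of squared residuals stands in for the negative log-likelihood, which is equivalent up to constants under the Gaussian-noise assumption.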


References

Citations

  1. ^ Cobelli, C.; DiStefano, J. (1980), "Parameter and structural identifiability concepts and ambiguities: a critical review and analysis", Am. J. Physiol. Regul. Integr. Comp. Physiol., 239, 7–24.
