Model calibration and uncertainty
Uncertainty can enter numerical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:
- Parameter uncertainty, which comes from model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown: they cannot be controlled in physical experiments, and their values cannot be exactly inferred by statistical methods.
- Parametric heterogeneity, which comes from the variability of input variables of the model.
- Structural uncertainty, also known as model inadequacy, model bias, or model discrepancy, which comes from the lack of knowledge of the underlying true physics.
- Experimental uncertainty, also known as observation error, which comes from the variability of experimental measurements.
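The interplay of these sources can be illustrated with a minimal synthetic example. The linear model, parameter values, and noise level below are purely illustrative, not taken from any particular application:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" physics (unknown in practice): y = a*x + b*x**2.
a_true, b_true = 2.0, 0.5
x = np.linspace(0.0, 1.0, 20)
truth = a_true * x + b_true * x**2

# Experimental uncertainty: measurements carry observation error.
sigma = 0.05
observations = truth + rng.normal(0.0, sigma, size=x.size)

# Structural uncertainty: suppose our model omits the quadratic term,
# y_model = a*x. Even the best-fitting a leaves a systematic discrepancy.
# Parameter uncertainty: a is not known; it must be estimated from the
# noisy observations (here by ordinary least squares).
a_hat = np.sum(x * observations) / np.sum(x * x)
discrepancy = truth - a_hat * x  # bias that no choice of a can remove
```

Because the fitted model has the wrong structure, the residual discrepancy does not vanish as the observation error shrinks: it reflects model inadequacy rather than noise.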
In broad terms, inverse modelling refers to the process of gathering information about the model from measurements of what is being modeled. This includes two related concepts: model identification and parameter estimation. The latter is often used synonymously with calibration. Model identification applies to methods for finding the nature (features) of the model, such as the governing equations, boundary conditions, time regime, or heterogeneity patterns. Parameter estimation, by contrast, is restricted to assigning values to the properties that characterize those features.
Model calibration can be done manually, i.e., by a trial-and-error process, or automatically, which frees the modeller from the burden of fine-tuning a large number of parameters. Our automatic calibration strategy is based on maximum likelihood theory. We use the stand-alone calibration software PEST for this purpose.
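Under the common assumption of independent Gaussian observation errors, maximizing the likelihood is equivalent to minimizing a weighted sum of squared residuals, which is the form of objective function PEST works with. The sketch below illustrates this equivalence on a synthetic exponential-decay problem; the model, parameter values, and the simple grid search (standing in for PEST's Gauss-Levenberg-Marquardt iterations) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experiment": exponential decay with known observation noise.
k_true, sigma = 1.5, 0.02
t = np.linspace(0.1, 2.0, 15)
data = np.exp(-k_true * t) + rng.normal(0.0, sigma, size=t.size)

def neg_log_likelihood(k):
    # For independent Gaussian errors, -log L equals (up to a constant)
    # half the weighted sum of squared residuals.
    residuals = data - np.exp(-k * t)
    return 0.5 * np.sum((residuals / sigma) ** 2)

# A dense grid search stands in for a proper gradient-based optimizer.
ks = np.linspace(0.5, 3.0, 2501)
k_hat = ks[np.argmin([neg_log_likelihood(k) for k in ks])]
```

The estimate `k_hat` is the maximum likelihood value of the decay rate; in a real calibration the weights would come from the assumed error structure of each measurement rather than a single common `sigma`.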