While mechanistic models tend to be detailed, they are less detailed than the real systems they seek to describe, so judgements must be made about the appropriate level of detail during model development. These judgements are difficult to test, and consequently models can easily become over-parameterised, potentially increasing uncertainty in predictions. The work described here is a step towards addressing these difficulties. We propose and implement a method that explores a family of simpler models obtained by replacing model variables with constants (model reduction by variable replacement). The procedure iteratively searches the simpler model formulations and compares models in terms of their ability to predict observed data, evaluated within a Bayesian framework. The results can be summarised as posterior model probabilities, and as replacement probabilities for individual variables that lend themselves to mechanistic interpretation. This provides powerful diagnostic information to support model development, and can identify areas of model over-parameterisation with implications for the interpretation of model results. We present the application of the method to three example models. In each case, reduced models are identified that outperform the original full model when compared to observations, suggesting that some over-parameterisation occurred during model development. We argue that the proposed approach is relevant to anyone involved in the development or use of process-based mathematical models, especially those in which understanding is encoded via empirically based relationships.
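The core computation described above — scoring every reduced model and summarising the results as posterior model probabilities and per-variable replacement probabilities — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the function names are invented, the search here is exhaustive enumeration rather than the paper's iterative procedure, and the Bayesian comparison is abstracted into a user-supplied `log_evidence` function (which in practice might be approximated, e.g. via BIC) with a uniform prior over models.

```python
import itertools
import math

def candidate_models(variables):
    """Enumerate reduced models: each model is identified by the set of
    variables replaced with constants (the empty set is the full model)."""
    for r in range(len(variables) + 1):
        for replaced in itertools.combinations(variables, r):
            yield frozenset(replaced)

def posterior_probabilities(variables, log_evidence):
    """Posterior model probabilities under a uniform model prior.

    `log_evidence(model)` returns the log marginal likelihood (or an
    approximation to it) for a candidate model; weights are normalised
    in a numerically stable way by subtracting the maximum."""
    models = list(candidate_models(variables))
    logs = [log_evidence(m) for m in models]
    top = max(logs)
    weights = [math.exp(l - top) for l in logs]
    total = sum(weights)
    return {m: w / total for m, w in zip(models, weights)}

def replacement_probabilities(probs, variables):
    """Marginal posterior probability that each variable is replaced,
    summed over all models in which that variable is held constant."""
    return {v: sum(p for m, p in probs.items() if v in m)
            for v in variables}

# Toy demo: evidence favours models in which variable "b" is replaced.
probs = posterior_probabilities(["a", "b"],
                                lambda m: 1.0 if "b" in m else 0.0)
rp = replacement_probabilities(probs, ["a", "b"])
# rp["b"] = 1 / (1 + e**-1), roughly 0.73, signalling that "b" is a
# candidate for replacement; rp["a"] stays at 0.5 (evidence is neutral).
```

A replacement probability well above one half for a variable suggests the data do not support its dynamic treatment, which is the diagnostic signal of over-parameterisation the method exploits.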