This letter details the collective views of a number of independent researchers on the technical assessment and evaluation of environmental models and software. Its purpose is to stimulate debate and initiate action leading to improved quality in model development and evaluation, thereby increasing the likelihood that model use produces positive outcomes. Accordingly, we emphasize the relationship between the model evaluation process and credibility with stakeholders (including funding agencies), with a view to ensuring continued support for modelling efforts.

Many journals, including EM&S, publish the results of environmental modelling studies and must judge the work and the submitted papers solely on the material the authors have chosen to present and on how they present it. There is considerable variation in how this is done, with a consequent risk of considerable variation in the quality and usefulness of the resulting publications. Part of the problem is that the review process is reactive, responding to the submitted manuscript. In this letter, we attempt to be proactive and offer guidelines for researchers, authors and reviewers on what constitutes best practice in presenting environmental modelling results. This is a distinctive contribution to the organisation and practice of model-based research and the communication of its results, one that will benefit the entire environmental modelling community.

For a start, our view is that the community of environmental modellers should share a common vision of the minimum standards an environmental model must meet. A common vision of what a good model should be is expressed in various guidelines on Good Modelling Practice. Those guidelines prompt modellers to codify their practice and to be more rigorous in their model testing. Our statement within this letter addresses another aspect of the issue: it prompts professional journals to codify the peer-review process.
Introducing a more formalized approach to peer review may discourage reviewers from accepting invitations to review, given the additional time and labour it requires; the burden of demonstrating model credibility should therefore shift to the authors. Here we discuss how to reduce this burden by selecting realistic evaluation criteria, and we conclude by advocating the use of standardized evaluation tools, as this is a key issue that needs to be tackled.