Presenters-0108

CSDMS 2015 Annual Meeting - Models meet data, data meet models


Testing model analysis frameworks



Mary Hill

University of Kansas, United States
mchill@ku.edu

Abstract
Model analysis frameworks specify ideas by which models and data are combined to simulate a system of interest. A given modeling framework will provide methods for model parameterization, data and model error characterization, sensitivity analysis (including identifying observations and parameters important to calibration and prediction), uncertainty quantification, and so on. Some model analysis frameworks suggest a narrow range of methods, while others try to place a broader range of methods in context. Testing is required to understand how well a model analysis framework is likely to work in practice. Models are commonly constructed to produce predictions, and here the accuracy and precision of predictions are considered.

The design of meaningful tests depends in part on the timing of system dynamics. In some circumstances the predicted quantity is readily measured and changes quickly, as for weather (temperature, wind, and precipitation), floods, and hurricanes. In such cases, meaningful tests involve comparing predictions with measured values, and tests can be conducted daily, hourly, or even more frequently. Benchmarking tests in rainfall-runoff modeling, such as HEPEX, are in this category. The theoretical rating curves of Kean and Smith hold promise for high-flow predictions. Though often challenged by measurement difficulties, short-timeframe systems provide the simplest circumstance for conducting meaningful tests of model analysis frameworks.
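
As a minimal illustration of this kind of short-timeframe test (a sketch only; the data, variable names, and metrics below are hypothetical and not part of the presented work), predictions and measurements can be compared with simple accuracy and precision statistics:

import numpy as np

def prediction_errors(predicted, observed):
    # Compare model predictions with measurements using simple error statistics.
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    residuals = predicted - observed
    return {
        "bias": residuals.mean(),                   # systematic error (accuracy)
        "rmse": np.sqrt((residuals ** 2).mean()),   # typical error magnitude (precision)
        "nse": 1.0 - (residuals ** 2).sum() / ((observed - observed.mean()) ** 2).sum(),
    }

# Hypothetical daily streamflow (m^3/s): one forecast and one measurement per day.
forecast = [12.1, 15.3, 20.8, 18.2, 14.9]
measured = [11.5, 16.0, 22.4, 17.8, 14.1]
print(prediction_errors(forecast, measured))

Such a comparison can be repeated each day or hour as new measurements arrive, which is what makes short-timeframe systems comparatively easy to test.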

If measurements are not readily available and/or the system responds to changes over decades or centuries, as generally occurs for climate change, saltwater intrusion of groundwater systems, and dewatering of aquifers, prediction accuracy needs to be evaluated in other ways. For example, in recent work two approaches were used to assess the likely accuracy of different methods used to construct models of groundwater systems (including parameterization methods): (1) comparing the results of complex and simple models and (2) cross-validation experiments. These and other tests can require massive computational resources for any but the simplest of problems. In this talk we discuss the importance of model framework testing in these longer-term circumstances and provide examples of tests from several recent publications. We further suggest that for these long-term systems, the design and performance of such tests are essential for the responsible development of model frameworks, are critical if models of these environmental systems are to provide enduring insights, and are one of the most important uses of high performance computing in natural resource evaluation.
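
As a minimal sketch of the second approach (cross-validation), the example below calibrates a stand-in model with one observation held out at a time and records the prediction error at the held-out point; the straight-line "model" and data are illustrative placeholders only, since the groundwater models discussed here are far more expensive to calibrate:

import numpy as np

def leave_one_out_errors(x, y, fit, predict):
    # Leave-one-out cross-validation: calibrate on all observations but one,
    # then record the prediction error at the held-out observation.
    errors = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        params = fit(x[keep], y[keep])                # calibrate without observation i
        errors.append(predict(params, x[i]) - y[i])   # error at the held-out point
    return np.array(errors)

# Stand-in "model": a straight line fitted by least squares (placeholder for a
# groundwater model calibration, which would require far more computation).
fit = lambda x, y: np.polyfit(x, y, 1)
predict = lambda params, xi: np.polyval(params, xi)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1, 9.9])
errs = leave_one_out_errors(x, y, fit, predict)
print("RMSE of held-out predictions:", np.sqrt((errs ** 2).mean()))

The computational burden noted above arises because each held-out calibration of a realistic model can itself be expensive, and the procedure repeats that calibration many times.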



Please acknowledge the original contributors when you are using this material. If there are any copyright issues, please let us know (CSDMSweb@colorado.edu) and we will respond as soon as possible.

Of interest for:
  • Cyberinformatics and Numerics Working Group
  • Hydrology Focus Research Group