The DAKOTA project began in 1994 with the primary objective of providing reusable software interfaces to design optimization tools. Over nearly 20 years of development, it has grown into an open source toolkit supporting a broad range of iterative analyses, typically focused on high-fidelity modeling and simulation on high-performance computers. Today, DAKOTA provides a delivery vehicle for uncertainty quantification research for both the NNSA and the Office of Science, enabling an emphasis on predictive science for stockpile stewardship, energy, and climate mission areas.
Starting with an overview of the DAKOTA architecture, this presentation will introduce processes for setting up iterative analyses, interfacing with computational simulations, and managing high-fidelity workflows. Algorithmic capabilities in optimization, calibration, sensitivity analysis, and uncertainty quantification (UQ) will be briefly surveyed, with special emphasis given to UQ. Core UQ capabilities include random sampling methods, local and global reliability methods, stochastic expansion methods, and epistemic interval propagation methods. This UQ foundation enables a variety of higher-level analyses, including design under uncertainty, mixed aleatory-epistemic UQ, and Bayesian inference.
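To give a flavor of the simplest of these capabilities, the sketch below illustrates random-sampling UQ in generic Python (not DAKOTA's own input syntax or API): aleatory input distributions are sampled, propagated through a stand-in model, and summarized by response statistics. The `model` function and its distributions are hypothetical placeholders for an expensive simulation code.

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical response function standing in for an expensive
    # simulation; in practice DAKOTA would invoke an external code here.
    return x1 ** 2 + 3.0 * x2

random.seed(42)
N = 10_000

# Propagate assumed aleatory input uncertainty via Monte Carlo sampling:
# x1 ~ Normal(1.0, 0.1), x2 ~ Uniform(0, 1).
responses = [
    model(random.gauss(1.0, 0.1), random.uniform(0.0, 1.0))
    for _ in range(N)
]

# Moment statistics of the propagated response distribution.
mean = statistics.fmean(responses)
stdev = statistics.stdev(responses)
```

More advanced methods (reliability, stochastic expansions, interval propagation) replace the brute-force sampling loop with structured approximations, but the propagate-then-summarize pattern is the same.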