Direct and Inverse Problems for PDEs with Random Coefficients - Abstract
Heinkenschloss, Matthias
To numerically solve PDE-constrained optimization problems with uncertain data, it is crucial to keep the number of samples small or to reduce the per-sample cost of evaluating the contributions to the objective function and its gradient. In this talk I will present an approach based on the integration of a trust-region algorithm with adaptive sparse grids. The algorithm adaptively builds two separate sparse grids: one to generate optimization models for the computation of the optimization step, and one to approximate the objective function in order to decide whether to accept the step. The quality of the adaptive sparse-grid models is controlled by the trust-region algorithm. Conditions on inexact function and gradient evaluations from previous trust-region frameworks are extended to allow the rigorous use of asymptotic (discretization) error estimates for the objective function and gradient approximations. For problems that depend smoothly on the random variables, this algorithm often generates adaptive sparse grids that contain significantly fewer points than the high-fidelity grids, which leads to a dramatic reduction in the computational cost. It is less clear, however, how efficient this sampling strategy is for optimization problems that do not depend smoothly on the random variables. This is, e.g., the case when semi-deviation or CVaR risk measures are used in the objective. I will present some observations for this case and possible alternatives.
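The core mechanism described above can be illustrated with a minimal sketch: a trust-region loop in which both the step-computing model and the acceptance-test objective are inexact approximations whose accuracy is tied to the trust-region radius, mimicking how the adaptive sparse grids would be refined. All names, the one-dimensional toy objective, and the specific inexactness model below are illustrative assumptions, not the algorithm from the talk.

```python
# Hedged sketch: trust-region loop with inexact model and inexact
# acceptance-test evaluations. The "sparse grid" approximations are
# stood in for by a bounded perturbation of size <= tol, where tol is
# tightened proportionally to the trust-region radius (a common form
# of inexactness condition in trust-region frameworks).

def f_true(x):
    # Stand-in for the expensive sampled objective (toy quadratic).
    return (x - 2.0) ** 2

def f_inexact(x, tol):
    # Bounded inexactness standing in for sparse-grid error.
    return f_true(x) + 0.5 * tol

def grad_inexact(x, tol):
    # Inexact gradient with error bounded by tol (illustrative).
    return 2.0 * (x - 2.0) + 0.5 * tol

def trust_region(x, radius=1.0, eta=0.1, kappa=0.1, max_iter=100):
    for _ in range(max_iter):
        tol = kappa * radius                     # tighten grids as radius shrinks
        g = grad_inexact(x, tol)
        # Minimize the quadratic model f(x) + g*s + s^2 over |s| <= radius.
        s = max(-radius, min(radius, -g / 2.0))
        pred = -(g * s + s * s)                  # predicted reduction (model grid)
        ared = f_inexact(x, tol) - f_inexact(x + s, tol)  # actual reduction (evaluation grid)
        rho = ared / pred if pred > 0 else -1.0
        if rho >= eta:                           # accept step, enlarge region
            x += s
            radius = min(2.0 * radius, 10.0)
        else:                                    # reject step, shrink region
            radius *= 0.5
        if radius < 1e-10:
            break
    return x
```

Because the permitted inexactness shrinks with the radius, rejected steps force both approximations to tighten, which is the sketch's analogue of the trust-region algorithm driving the adaptive refinement of the two sparse grids.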