Direct and Inverse Problems for PDEs with Random Coefficients - Abstract

Ullmann, Sebastian

Adaptive finite element POD for uncertainty quantification

joint work with Jens Lang
Surrogate models can be used to reduce the computational cost for uncertainty quantification in the context of parabolic PDEs with stochastic data. Projection based reduced-order modeling provides surrogates which inherit the spatial structure of the solution and as well as the underlying physics. We focus on reduced-order models obtained by a Galerkin projection onto a proper orthogonal decomposition (POD) of samples of the solution. Standard techniques assume that all samples use one and the same spatial mesh. In this study, we provide a generalization for unsteady adaptive finite element simulations. This means the mesh can change from time step to time step and, in the case of a stochastic sampling method, from realization to realization. Generalizing POD to a space adaptive setting is important for two reasons: Firstly, for problems with varying local features, adaptivity can save time and memory in the snapshot computation and set-up of the reduced-order model. Secondly, we envision applications where a given adaptive finite element code or given adaptive snapshot data have to be used. In this context one could just transfer all snapshots to a single common mesh and subsequently apply standard techniques. In general, however, this can be computationally infeasible or lead to additional approximation errors. We limit our scope to snapshots obtained with nested refinement strategies, in particular newest vertex bisection based on some fixed initial triangulation. For any subset of such snapshots, by refinement one can find a mesh on which all members of the subset can be represented exactly. Therefore, one way of creating a POD-Galerkin model is to construct a mesh on which all snapshots can be represented exactly, interpolate them onto this mesh, and proceed with standard techniques. We present an alternative approach for PDEs with polynomial non-linearities of maximum degree $N$, where it suffices to form the common meshes of all $(N+1)$-tuples of snapshots. 
This method is more efficient than the first approach when the common mesh of all snapshots contains many more nodes than each individual snapshot mesh. As a numerical test case we study a viscous Burgers equation with smooth initial data multiplied by a normally distributed random variable. The outputs of interest are statistics of the solution at the final time. The non-linearity of the equation leads to a non-trivial input-output relationship. For simplicity we use Monte Carlo sampling to discretize the stochastic dimension. Neglecting time discretization aspects, the resulting error is due to the snapshot sampling, the finite element discretization, the POD truncation, and the final Monte Carlo sampling.
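The structure of such a test case can be mimicked with a deliberately simple sketch: an explicit finite-difference discretization of viscous Burgers with initial data $\xi \sin(\pi x)$, where $\xi$ is normally distributed, and a small Monte Carlo loop for the final-time statistics. All numerical parameters (grid size, viscosity, sample count, distribution of $\xi$) are illustrative assumptions, not those of the study, and the finite-difference scheme stands in for the adaptive finite element discretization.

```python
import numpy as np

def burgers_final(xi, nx=64, nt=800, nu=0.05, T=1.0):
    """Solve u_t + u u_x = nu u_xx on [0,1] with homogeneous Dirichlet BCs
    and initial data xi*sin(pi x), by explicit central finite differences.
    Illustrative sketch only; stability requires nu*dt/dx^2 <= 1/2."""
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    dt = T / nt
    u = xi * np.sin(np.pi * x)
    for _ in range(nt):
        ux = (u[2:] - u[:-2]) / (2.0 * dx)
        uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u[1:-1] += dt * (nu * uxx - u[1:-1] * ux)
        u[0] = u[-1] = 0.0
    return u

# Monte Carlo estimate of mean and variance of the final-time solution,
# with a normally distributed random amplitude (assumed parameters).
rng = np.random.default_rng(1)
samples = np.array([burgers_final(1.0 + 0.1 * rng.standard_normal())
                    for _ in range(20)])
mean_u = samples.mean(axis=0)
var_u = samples.var(axis=0)
```

Because the map $\xi \mapsto u(\cdot, T)$ is nonlinear, the sample mean differs from the solution for the mean input, which is the non-trivial input-output relationship mentioned above.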