WIAS Preprint No. 2566, (2018)

An adaptive stochastic Galerkin tensor train discretization for randomly perturbed domains



Authors

  • Eigel, Martin
  • Marschall, Manuel
    ORCID: 0000-0003-0648-1936
  • Multerer, Michael

2010 Mathematics Subject Classification

  • 35R60 47B80 60H35 65C20 65N12 65N22 65J10

Keywords

  • Partial differential equations with random coefficients, tensor representation, tensor train, uncertainty quantification, stochastic finite element methods, log-normal, adaptive methods, ALS, low-rank, reduced basis methods

DOI

10.20347/WIAS.PREPRINT.2566

Abstract

A linear PDE problem for randomly perturbed domains is considered in an adaptive Galerkin framework. The perturbation of the domain's boundary is described by a vector-valued random field depending on a countable number of random variables in an affine way. The corresponding Karhunen-Loève expansion is approximated by the pivoted Cholesky decomposition based on a prescribed covariance function. The examined high-dimensional Galerkin system follows from the domain mapping approach, which transfers the randomness from the domain to the diffusion coefficient and the forcing. To make this computationally feasible, the representation employs the modern tensor train format for the implicit compression of the problem. Moreover, an a posteriori error estimator is presented, which allows for the problem-dependent iterative refinement of all discretization parameters and the assessment of the achieved error reduction. The proposed approach is demonstrated on numerical benchmark problems.
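The pivoted Cholesky decomposition mentioned in the abstract builds a low-rank factorization of the covariance matrix by greedily selecting the largest remaining diagonal entry, which yields a truncated Karhunen-Loève-type approximation with a computable trace error bound. The following is a minimal generic sketch of this technique (not the authors' implementation); the function name, tolerance, and the Gaussian covariance used in the usage example are illustrative assumptions.

```python
import numpy as np

def pivoted_cholesky(C, tol=1e-8):
    """Low-rank factor L with C ~= L @ L.T, built by diagonally
    pivoted Cholesky. Stops once the trace of the residual
    C - L @ L.T falls below tol (valid since the residual of a
    positive semi-definite matrix is positive semi-definite)."""
    n = C.shape[0]
    d = np.diag(C).astype(float).copy()   # diagonal of the current residual
    L = np.zeros((n, n))
    pivots = []
    m = 0
    while d.sum() > tol and m < n:
        i = int(np.argmax(d))             # pivot: largest residual variance
        pivots.append(i)
        L[:, m] = (C[:, i] - L[:, :m] @ L[i, :m]) / np.sqrt(d[i])
        d -= L[:, m] ** 2
        d = np.maximum(d, 0.0)            # guard against round-off
        m += 1
    return L[:, :m], pivots

# Illustrative usage: a smooth (Gaussian) covariance on 50 points is
# compressed to a rank far below 50 at tolerance 1e-8.
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.2 ** 2))
L, pivots = pivoted_cholesky(C, tol=1e-8)
```

The trace stopping criterion is what makes the method attractive for KL truncation: the neglected variance is controlled without ever assembling the full eigendecomposition, and only the pivot columns of the covariance are touched.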

Download Documents

  • PDF Version of December 21, 2018 (5315 kByte)
  • PDF Version of February 20, 2019 (5791 kByte)