WIAS Preprint No. 3078, (2023)

Generative modelling with tensor train approximations of Hamilton--Jacobi--Bellman equations



Authors

  • Sommer, David
    ORCID: 0000-0002-6797-8009
  • Gruhlke, Robert
    ORCID: 0000-0003-3129-9423
  • Kirstein, Max
  • Eigel, Martin
    ORCID: 0000-0003-2687-4497
  • Schillings, Claudia

2020 Mathematics Subject Classification

  • 35F21 35Q84 62F15 65N75 65C30

Keywords

  • Generative modelling, approximate sampling, Hamilton-Jacobi-Bellman, low-rank tensors

DOI

10.20347/WIAS.PREPRINT.3078

Abstract

Sampling from probability densities is a common challenge in fields such as Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in particular, reverse-time diffusion processes that depend on the log-densities of Ornstein-Uhlenbeck forward processes are a popular sampling tool. In [5] the authors point out that these log-densities can be obtained by solving a Hamilton-Jacobi-Bellman (HJB) equation known from stochastic optimal control. While this HJB equation is usually treated with indirect methods such as policy iteration and unsupervised training of black-box architectures like neural networks, we propose instead to solve the HJB equation by direct time integration, using compressed polynomials represented in the Tensor Train (TT) format for spatial discretization. Crucially, this method is sample-free, agnostic to normalization constants, and can avoid the curse of dimensionality due to the TT compression. We provide a complete derivation of the HJB equation's action on Tensor Train polynomials and demonstrate the performance of the proposed time-step-, rank- and degree-adaptive integration method on a nonlinear sampling task in 20 dimensions.
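
For context, the following is a minimal sketch, not taken from the preprint itself, of how the log-density of an Ornstein-Uhlenbeck forward process leads to an HJB-type equation and how a TT-compressed polynomial ansatz can discretize it in space. The specific drift in $dX_t = -X_t\,dt + \sqrt{2}\,dW_t$ and the basis functions $\varphi_k$ are illustrative assumptions, not choices confirmed by the abstract.

% Sketch under illustrative assumptions: forward OU process and induced HJB-type equation.
% Forward process: dX_t = -X_t dt + \sqrt{2} dW_t with density p_t on R^d.
% Fokker--Planck: \partial_t p_t = \nabla\cdot(x\,p_t) + \Delta p_t.
% The Hopf--Cole transform v_t := -\log p_t then satisfies
\[
  \partial_t v_t \;=\; \Delta v_t \;-\; \|\nabla v_t\|^2 \;+\; x\cdot\nabla v_t \;-\; d,
  \qquad v_0 = -\log p_0,
\]
% so the score \nabla\log p_t = -\nabla v_t needed by the reverse-time diffusion is
% available from v_t without knowing the normalization constant of p_0.
% A TT-compressed polynomial ansatz (illustrative tensorized basis \varphi_k):
\[
  v_t(x) \;\approx\; \sum_{i_1,\dots,i_d} C_t[i_1,\dots,i_d]\,
  \varphi_{i_1}(x_1)\cdots\varphi_{i_d}(x_d),
  \qquad
  C_t[i_1,\dots,i_d] \;=\; \sum_{k_1,\dots,k_{d-1}}
  G^{(1)}_t[i_1,k_1]\,G^{(2)}_t[k_1,i_2,k_2]\cdots G^{(d)}_t[k_{d-1},i_d].
\]
% For bounded TT ranks and basis size, the storage of the cores G^{(m)}_t grows only
% linearly in the dimension d, which is what allows direct time integration of the
% coefficient tensor to remain tractable in, e.g., 20 dimensions.
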
