WIAS Preprint No. 3121 (2024)

Two-norm discrepancy and convergence of the stochastic gradient method with application to shape optimization



Authors

  • Dambrine, Marc
  • Geiersbach, Caroline
    ORCID: 0000-0002-6518-7756
  • Harbrecht, Helmut

2020 Mathematics Subject Classification

  • 35R60, 60H30, 35R35, 60H35, 90C15

Keywords

  • Stochastic gradient method, shape optimization, free boundary problem, two-norm discrepancy, optimization under uncertainty

DOI

10.20347/WIAS.PREPRINT.3121

Abstract

This article is dedicated to proving convergence of the stochastic gradient method for random shape optimization problems. To that end, we consider Bernoulli's exterior free boundary problem with a random interior boundary and recast it as a shape optimization problem by minimizing the expected Dirichlet energy. Restricting ourselves to the class of convex, sufficiently smooth domains of bounded curvature makes the shape optimization problem strongly convex with respect to an appropriate norm. Since this norm is weaker than the norm in which the functional is differentiable, we are confronted with the so-called two-norm discrepancy, a well-known phenomenon from optimal control. We therefore adapt the convergence theory of the stochastic gradient method to this specific setting. The theoretical findings are supported and validated by numerical experiments.
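The stochastic gradient iteration at the heart of the abstract can be illustrated on a scalar toy problem. The sketch below is generic stochastic gradient descent with diminishing step sizes applied to minimizing the expected squared distance to a random variable; it is an illustrative assumption-laden stand-in, not the shape-optimization setting or the algorithm analyzed in the paper:

```python
import random

def sgd_toy(samples, steps=5000, u0=0.0):
    """Plain stochastic gradient descent on the toy problem
    min_u E[(u - xi)^2], whose unique minimizer is E[xi].
    Each iteration draws one sample xi_n and uses the Robbins-Monro
    step size t_n = 1/(n + 1)."""
    u = u0
    for n in range(steps):
        xi = random.choice(samples)   # one random realization of xi
        grad = 2.0 * (u - xi)         # stochastic gradient of (u - xi)^2
        u -= grad / (n + 1)           # diminishing step size t_n = 1/(n+1)
    return u

random.seed(0)
data = [1.0, 2.0, 3.0]               # uniform samples, so E[xi] = 2.0
estimate = sgd_toy(data)             # converges toward 2.0
```

The strong convexity of the toy objective is what guarantees convergence of the iterates here; the paper's contribution is to obtain an analogous guarantee when convexity holds only in a norm weaker than the differentiability norm.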
