WIAS Preprint No. 2982, (2022)

Topology optimisation under uncertainties with neural networks



Authors

  • Eigel, Martin
    ORCID: 0000-0003-2687-4497
  • Haase, Marvin
  • Neumann, Johannes

2020 Mathematics Subject Classification

  • 35R60 47B80 60H35 65C20 65N22 65J10

Keywords

  • Topology optimisation, deep neural networks, model uncertainties, random fields, convolutional neural networks, recurrent neural networks

DOI

10.20347/WIAS.PREPRINT.2982

Abstract

Topology optimisation is a mathematical approach relevant to different engineering problems in which material is distributed over a defined domain in some optimal way, subject to a predefined cost function representing desired (e.g., mechanical) properties and constraints. The computation of such an optimal distribution depends on the numerical solution of some physical model (in our case linear elasticity), and robustness is achieved by introducing uncertainties into the model data, namely the forces acting on the structure and variations of the material stiffness, rendering the task high-dimensional and computationally expensive. To alleviate this computational burden, we develop two neural network (NN) architectures that are capable of predicting the gradient step of the optimisation procedure. Since state-of-the-art methods use adaptive mesh refinement, the neural networks are designed to use a sufficiently fine reference mesh such that only one training phase of the neural network suffices. As a first architecture, a convolutional neural network is adapted to the task. To include sequential information of the optimisation process, a recurrent neural network is constructed as a second architecture. A common 2D bridge benchmark is used to illustrate the performance of the proposed architectures. It is observed that the NN prediction of the gradient step clearly outperforms the classical optimisation method, in particular since larger iteration steps become viable.
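The abstract describes replacing the gradient step of an iterative topology optimisation loop with a neural network prediction on a fixed reference mesh. The following is a minimal, hedged sketch of that loop structure only; the `surrogate_gradient` function is a hypothetical placeholder (a simple smoothing operator) standing in for the trained CNN/RNN from the paper, and the mesh size, step size, and volume-constraint handling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def surrogate_gradient(density):
    """Hypothetical stand-in for the trained NN that predicts the gradient
    step from the current density field on the fixed reference mesh.
    Here: a 5-point smoothing stencil as a self-contained dummy direction."""
    padded = np.pad(density, 1, mode="edge")
    smooth = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                     + padded[1:-1, :-2] + padded[1:-1, 2:])
    return density - smooth  # descent direction towards the smoothed field

def optimise(shape=(32, 64), steps=20, step_size=0.5, volume_frac=0.4, seed=0):
    """Gradient-descent-style topology optimisation loop in which the
    physical sensitivity is replaced by the surrogate prediction."""
    rng = np.random.default_rng(seed)
    # initial density field near the target volume fraction, with noise
    rho = np.full(shape, volume_frac) + 0.01 * rng.standard_normal(shape)
    for _ in range(steps):
        g = surrogate_gradient(rho)
        rho = rho - step_size * g        # larger steps become viable with the NN
        rho = np.clip(rho, 0.0, 1.0)     # box constraint on material density
        rho *= volume_frac / rho.mean()  # crude volume-constraint rescaling
        rho = np.clip(rho, 0.0, 1.0)
    return rho
```

In the paper the predicted step comes from a CNN (or an RNN consuming the optimisation history) trained once on the reference mesh; the loop skeleton above only illustrates where that prediction enters the iteration.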
