Research Group "Stochastic Algorithms and Nonparametric Statistics"

Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" Sommer Semester 2022

19.04.2022 N.N.

26.04.2022 N.N.

03.05.2022 Jun.-Prof. Dr. Martin Redmann (Martin-Luther-Universität Halle-Wittenberg)
Solving high-dimensional optimal stopping problems using model order reduction (hybrid talk)
Solving optimal stopping problems by backward induction in high dimensions is often very complex since the computation of conditional expectations is required. Typically, such computations are based on regression, a method that suffers from the curse of dimensionality. Therefore, the objective of this presentation is to establish dimension reduction schemes for large-scale asset price models and to solve related optimal stopping problems (e.g. Bermudan option pricing) in the reduced setting, where regression is feasible. We illustrate the benefit of our approach in several numerical experiments, in which Bermudan option prices are determined.
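For orientation, the following is a minimal one-dimensional sketch of the regression-based backward induction (Longstaff-Schwartz style) that the abstract refers to, pricing a Bermudan put under geometric Brownian motion. It is not the model-order-reduction approach presented in the talk; the model parameters, the polynomial regression basis, and the path counts are assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal sketch: regression-based backward induction (Longstaff-Schwartz style)
# for a Bermudan put on a single geometric Brownian motion.
# All parameters below are illustrative assumptions, not taken from the talk.

rng = np.random.default_rng(0)

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate GBM paths; column j is the asset price at time (j+1)*dt
z = rng.standard_normal((n_paths, n_steps))
log_incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_incr, axis=1))

def payoff(s):
    return np.maximum(K - s, 0.0)

# backward induction: approximate the conditional expectation (continuation
# value) at each exercise date by a cubic polynomial regression on the price
V = payoff(S[:, -1])                       # value at maturity
for t in range(n_steps - 2, -1, -1):
    V *= disc                              # discount one step back
    itm = payoff(S[:, t]) > 0              # regress on in-the-money paths only
    if itm.any():
        coeffs = np.polyfit(S[itm, t], V[itm], deg=3)
        cont = np.polyval(coeffs, S[itm, t])
        exercise = payoff(S[itm, t]) > cont
        V_itm = V[itm]
        V_itm[exercise] = payoff(S[itm, t])[exercise]
        V[itm] = V_itm

price = disc * V.mean()                    # discount from the first exercise date to 0
print(f"Bermudan put estimate: {price:.3f}")
```

In high dimensions the regression step above is exactly what becomes infeasible (the basis grows rapidly with the state dimension), which is the motivation for performing it in a reduced-order model instead.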
10.05.2022 Prof. Vladimir Spokoiny (WIAS and HU Berlin)
Laplace's approximation in high dimension (hybrid talk)


31.05.2022 Prof. Vladimir Spokoiny (WIAS and HU Berlin)
Laplace's approximation in high dimension, Part 2 (hybrid talk)

14.06.2022 Priv.-Doz. Dr. John Schoenmakers (WIAS Berlin)
Dual randomization and empirical dual optimization for optimal stopping (online talk)
21.06.2022 Grigory Malinovsky (KAUST)
ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally! (online talk)
We introduce ProxSkip - a surprisingly simple and provably efficient method for minimizing the sum of a smooth and an expensive nonsmooth proximable function. The canonical approach to solving such problems is via the proximal gradient descent (ProxGD) algorithm, which is based on the evaluation of the gradient of the smooth part and the prox operator of the composite term in each iteration. In this work we are specifically interested in the regime in which the evaluation of prox is costly relative to the evaluation of the gradient, which is the case in many applications. ProxSkip allows for the expensive prox operator to be skipped in most iterations while preserving the iteration complexity. Our main motivation comes from federated learning, where evaluation of the gradient operator corresponds to taking a local GD step independently on all devices, and evaluation of prox corresponds to (expensive) communication in the form of gradient averaging. In this context, ProxSkip offers an effective acceleration of communication complexity. Unlike other local gradient-type methods, such as FedAvg, SCAFFOLD, S-Local-GD and FedLin, whose theoretical communication complexity is worse than, or at best matching, that of vanilla GD in the heterogeneous data regime, we obtain a provable and large improvement without any heterogeneity-bounding assumptions.
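For orientation, the sketch below applies the ProxSkip update (a gradient step every iteration, the prox applied only with probability p, together with a control variate h) to a toy single-node lasso problem. The quadratic f, the l1 regularizer, and all parameter values are assumptions chosen for illustration; the federated, multi-device setting discussed in the talk is not modeled here.

```python
import numpy as np

# Single-node sketch of the ProxSkip update on a toy lasso problem:
# f(x) = 0.5 * ||A x - b||^2 (smooth), psi(x) = lam * ||x||_1 (proximable).
# Problem data and all parameter choices are assumptions made for this sketch.

rng = np.random.default_rng(1)
n, d = 200, 50
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam = 0.1

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_psi(x, step):
    # soft thresholding = prox of step * lam * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

L = np.linalg.norm(A, 2) ** 2          # smoothness constant of f
gamma = 1.0 / L                        # stepsize
p = 0.1                                # probability of evaluating the prox

x = np.zeros(d)
h = np.zeros(d)                        # control variate
n_prox = 0
for t in range(2000):
    x_hat = x - gamma * (grad_f(x) - h)    # gradient step in every iteration
    if rng.random() < p:                    # prox evaluated only occasionally
        x_new = prox_psi(x_hat - (gamma / p) * h, gamma / p)
        n_prox += 1
    else:
        x_new = x_hat
    h = h + (p / gamma) * (x_new - x_hat)   # update the control variate
    x = x_new

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(f"objective: {obj:.4f}, prox evaluations: {n_prox} / 2000")
```

In the federated interpretation from the abstract, the gradient step corresponds to local GD steps on the devices and each prox evaluation corresponds to a communication round, so skipping the prox in most iterations is what reduces communication.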
28.06.2022 Karsten Tabelow (WIAS Berlin)
In-vivo tissue properties from magnetic resonance data (hybrid talk)
05.07.2022 Alexander Marx (WIAS Berlin)
Random interactions in the mean-field Ising model (hybrid talk)



last reviewed: June 27, 2022 by Christine Schneider