Research Group "Stochastic Algorithms and Nonparametric Statistics"

Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" SS 2020

31.03.2020 Dr. Karsten Tabelow (WIAS Berlin)
Model-based geometry reconstruction (MBGR) from TEM
07.04.2020 Prof. Dr. Vladimir Spokoiny (WIAS Berlin)
Bayesian inference for nonlinear inverse problems
I will discuss some general results for the posterior in a nonlinear inverse problem with particular focus on nonlinear regression and Gaussian mixture models.
14.04.2020 Dr. Alexandra Suvorikova (WIAS Berlin)
On some general principles behind the results on bootstrap validity in Bures-Wasserstein space
21.04.2020 Dr. Nikita Zhivotovskiy (Google Research)
Fast classification rates in online and statistical learning with abstention
28.04.2020 Darina Dvinskikh (WIAS Berlin)
SA vs SAA for population Wasserstein barycenter calculation
05.05.2020 Dr. Oleg Butkovsky (WIAS Berlin)
Stochastic sewing with controls and skew fractional Brownian motion
I will explain how sewing with controls, developed by Peter Friz and Huilin Zhang, can be extended to the stochastic setup, and show how this can be used to establish the existence of skew fractional Brownian motion and related SDEs with irregular drift.
12.05.2020 Dr. César A. Uribe (Massachusetts Institute of Technology)
Distributed inference for cooperative learning
We study the problem of cooperative inference where a group of agents interact over a network and seek to estimate a joint parameter that best explains a set of observations. Agents do not know the network topology or the observations of other agents. We explore a variational interpretation of the Bayesian posterior density, and its relation to the stochastic mirror descent algorithm, to propose a new distributed learning algorithm. We show that, under appropriate assumptions, the beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast. We provide explicit non-asymptotic bounds for the convergence rate. Moreover, we develop explicit and computationally efficient algorithms for observation models belonging to exponential families.
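To fix ideas, the following is a minimal Python sketch of a distributed (non-Bayesian) belief update over a network in the spirit of mirror-descent-type learning: each agent geometrically averages its neighbours' beliefs and multiplies in its local likelihood. The ring graph, Gaussian observation model and parameter grid are illustrative assumptions, not taken from the talk.

```python
# Sketch of distributed belief updating over a network (illustrative model choices).
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_theta, T = 5, 50, 200
thetas = np.linspace(-2.0, 2.0, n_theta)      # finite hypothesis set
theta_true = thetas[30]

# doubly stochastic mixing matrix for a ring graph (lazy Metropolis weights)
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, i] = 0.5
    A[i, (i - 1) % n_agents] = 0.25
    A[i, (i + 1) % n_agents] = 0.25

sigmas = rng.uniform(0.5, 1.5, size=n_agents)  # heterogeneous observation noise

def log_lik(x, sigma):
    """Log-likelihood of one Gaussian observation under every hypothesis."""
    return -0.5 * ((x - thetas) / sigma) ** 2 - np.log(sigma)

log_beliefs = np.zeros((n_agents, n_theta))    # uniform initial beliefs (log scale)
for t in range(T):
    x = theta_true + sigmas * rng.standard_normal(n_agents)   # local observations
    # geometric averaging of neighbours' beliefs + local Bayesian update
    log_beliefs = A @ log_beliefs + np.array([log_lik(x[i], sigmas[i])
                                              for i in range(n_agents)])
    log_beliefs -= log_beliefs.max(axis=1, keepdims=True)      # normalise for stability

beliefs = np.exp(log_beliefs)
beliefs /= beliefs.sum(axis=1, keepdims=True)
print("MAP estimate per agent:", thetas[beliefs.argmax(axis=1)])
```

Under connectivity of the mixing matrix, the beliefs of all agents concentrate on the hypothesis closest to the true parameter; the sketch only illustrates this mechanism, not the non-asymptotic rates of the talk.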
19.05.2020 Priv.-Doz. Dr. John G. M. Schoenmakers (WIAS Berlin)
Semi-tractability of optimal stopping problems via a weighted stochastic mesh algorithm
In this talk we propose a Weighted Stochastic Mesh (WSM) algorithm for approximating the value of discrete- and continuous-time optimal stopping problems. In this context we consider the tractability of such problems via a useful notion of semi-tractability and the introduction of a tractability index for a particular numerical solution algorithm. It is shown that in the discrete-time case the WSM algorithm leads to semi-tractability of the corresponding optimal stopping problem, in the sense that its complexity is bounded in order by the accuracy to the power four, times the logarithm of the dimension of the underlying Markov chain. Furthermore, we study the WSM approach in the context of continuous-time optimal stopping problems and derive the corresponding complexity bounds. Although we cannot prove semi-tractability in this case, our bounds turn out to be the tightest ones among the complexity bounds known in the literature. We illustrate our theoretical findings by a numerical example. (Joint work with Denis Belomestny and Maxim Kaledin, to appear in Math. Fin.)
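For readers unfamiliar with mesh methods, here is a minimal Python sketch of a plain stochastic-mesh backward induction for a Bermudan put on geometric Brownian motion, using Broadie-Glasserman-type average-density weights. It illustrates the general mesh idea only; it is not the weighted stochastic mesh (WSM) algorithm of the talk, whose weights and complexity analysis differ, and all model parameters are illustrative.

```python
# Sketch of a generic stochastic-mesh estimator for a Bermudan put (not the WSM algorithm).
import numpy as np

rng = np.random.default_rng(1)

S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
T, n_steps, N = 1.0, 10, 500
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate N independent paths of the underlying Markov chain
Z = rng.standard_normal((N, n_steps))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
S = np.concatenate([np.full((N, 1), S0), np.exp(logS)], axis=1)   # shape (N, n_steps + 1)

def trans_density(s_from, s_to):
    """One-step transition density of the price (lognormal), vectorised."""
    mu = np.log(s_from) + (r - 0.5 * sigma**2) * dt
    z = (np.log(s_to) - mu) / (sigma * np.sqrt(dt))
    return np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi * dt) * s_to)

payoff = lambda s: np.maximum(K - s, 0.0)

V = payoff(S[:, -1])                                  # mesh values at maturity
for t in range(n_steps - 1, 0, -1):
    # density from mesh point i at time t to mesh point j at time t+1,
    # normalised by the average density over all time-t mesh points
    P = trans_density(S[:, t][:, None], S[:, t + 1][None, :])      # (N, N)
    W = P / P.mean(axis=0, keepdims=True)
    cont = disc * (W @ V) / N                          # mesh estimate of continuation value
    V = np.maximum(payoff(S[:, t]), cont)

price = max(payoff(S0), disc * V.mean())
print("mesh estimate of the Bermudan put price:", price)
```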
02.06.2020 Dr. Sebastian Riedel (WIAS Berlin)
Runge-Kutta methods for rough differential equations
09.06.2020 No seminar due to Berlin-Oxford Meeting
13th Annual ERC Berlin-Oxford Young Researchers Meeting
16.06.2020 Dr. Valeriy Avanesov (WIAS Berlin)
Data-driven confidence bands for distributed nonparametric regression
Gaussian process regression and kernel ridge regression are popular nonparametric regression approaches. Unfortunately, they suffer from high computational complexity, rendering them inapplicable to modern massive datasets. To that end, a number of approximations have been suggested, some of them allowing for a distributed implementation. One of them is the divide-and-conquer approach, which splits the data into a number of partitions, obtains the local estimates and finally averages them. In this paper we suggest a novel, computationally efficient, fully data-driven algorithm quantifying the uncertainty of this method, yielding frequentist L_2-confidence bands. We rigorously demonstrate the validity of the algorithm. Another contribution of the paper is a minimax-optimal high-probability bound for the averaged estimator, complementing and generalizing the known risk bounds.
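The divide-and-conquer step itself is easy to illustrate. Below is a minimal Python sketch that splits the data into partitions, fits a local kernel ridge regression on each, and averages the local predictions. The data-generating model, kernel and regularisation are illustrative assumptions; the talk's actual contributions (data-driven L_2 confidence bands and the high-probability risk bound) are not reproduced here.

```python
# Sketch of divide-and-conquer kernel ridge regression (illustrative model and tuning).
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(X, Y, bandwidth=0.2):
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth**2))

def krr_fit(X, y, lam=1e-3):
    """Fit kernel ridge regression on one partition and return a predictor."""
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Xnew: rbf_kernel(Xnew, X) @ alpha

# synthetic data
n, m = 2000, 10                        # sample size and number of partitions
X = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(n)

# divide: split into m partitions; conquer: fit local estimators
parts = np.array_split(rng.permutation(n), m)
local_predictors = [krr_fit(X[idx], y[idx]) for idx in parts]

# combine: average the local estimates
Xgrid = np.linspace(0, 1, 200)
f_hat = np.mean([f(Xgrid) for f in local_predictors], axis=0)
print("max abs error on the grid:", np.max(np.abs(f_hat - np.sin(2 * np.pi * Xgrid))))
```

Each local fit costs O((n/m)^3) instead of O(n^3), which is the computational point of the divide-and-conquer construction.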
23.06.2020 Sara Mazzonetto (Universität Potsdam)
Threshold Ornstein-Uhlenbeck model: Parameter estimation
A threshold Ornstein-Uhlenbeck process is a continuous-time threshold autoregressive process. It follows Ornstein-Uhlenbeck dynamics when above or below a fixed threshold, but its coefficients may be discontinuous at that threshold. We discuss (quasi-)maximum likelihood estimation of the drift parameters, assuming both continuous-time and discrete-time observations. In the ergodic case, we derive consistency and the speed of convergence of these estimators in the long-time and high-frequency regimes. Based on these results, we develop a test for the presence of a threshold in the dynamics. Finally, we apply these statistical tools to short-term US interest rates. The talk is based on joint work with Paolo Pigato.
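The discrete-observation setting can be illustrated with a simple simulation. The Python sketch below simulates a threshold Ornstein-Uhlenbeck process with an Euler scheme and estimates the drift parameters separately in each regime by least squares on the increments, a crude quasi-likelihood proxy. The parameter values and the threshold are illustrative and not taken from the talk.

```python
# Sketch of drift estimation for a threshold Ornstein-Uhlenbeck process (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)

# drift is b_minus*(m_minus - x) below the threshold r, b_plus*(m_plus - x) above
r = 0.0
b_minus, m_minus = 2.0, -1.0
b_plus, m_plus = 0.5, 1.0
sigma, dt, n = 0.5, 1e-3, 500_000

def drift(x):
    return np.where(x < r, b_minus * (m_minus - x), b_plus * (m_plus - x))

# Euler simulation of the path
X = np.empty(n)
X[0] = 0.0
dW = np.sqrt(dt) * rng.standard_normal(n - 1)
for k in range(n - 1):
    X[k + 1] = X[k] + drift(X[k]) * dt + sigma * dW[k]

# least-squares drift estimation, regime by regime:
# regress the increments dX on (1, X) * dt restricted to each side of the threshold
dX = np.diff(X)
for name, mask in [("below threshold", X[:-1] < r), ("above threshold", X[:-1] >= r)]:
    A = np.column_stack([np.ones(mask.sum()), X[:-1][mask]]) * dt
    coef, *_ = np.linalg.lstsq(A, dX[mask], rcond=None)
    b_hat = -coef[1]
    m_hat = coef[0] / b_hat
    print(f"{name}: b_hat = {b_hat:.3f}, m_hat = {m_hat:.3f}")
```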
30.06.2020 N.N.

07.07.2020 Paul Hager (TU Berlin)
Reinforced optimal control
We extend the reinforced regression methodology from optimal stopping problems to a general class of optimal control problems. The central idea is the reinforcement of the set of basis functions with estimates obtained from previous regressions in the backward induction steps. Moreover, we propose an efficient modification of the algorithm which overcomes cost limitations for problems with a large number of possible exercise dates and a large number of possible control states.
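The reinforcement idea is easiest to see in the optimal stopping setting the method starts from. The Python sketch below runs a regression-based backward induction for a Bermudan put and, at each step, augments a small polynomial basis with the continuation-value estimate from the previously treated (later) time step. This is only an illustration of the basis-reinforcement mechanism under assumed model parameters, not the general optimal control algorithm of the talk.

```python
# Sketch of "reinforced" regression in backward induction for a Bermudan put (illustrative).
import numpy as np

rng = np.random.default_rng(4)

S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
T, n_steps, N = 1.0, 10, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

Z = rng.standard_normal((N, n_steps))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
S = np.concatenate([np.full((N, 1), S0), np.exp(logS)], axis=1)

payoff = lambda s: np.maximum(K - s, 0.0)

V = payoff(S[:, -1])          # realised values along the paths at maturity
prev_cont_fn = None           # continuation-value estimate from the later time step

for t in range(n_steps - 1, 0, -1):
    x = S[:, t]
    basis = [np.ones(N), x, x**2, payoff(x)]
    if prev_cont_fn is not None:
        basis.append(prev_cont_fn(x))          # the "reinforced" basis function
    B = np.column_stack(basis)
    beta, *_ = np.linalg.lstsq(B, disc * V, rcond=None)
    cont = B @ beta                            # estimated continuation value at time t

    # store the fitted regression so it can reinforce the basis at the next (earlier) step
    def make_cont_fn(beta=beta, fn=prev_cont_fn):
        def c(xs):
            cols = [np.ones(len(xs)), xs, xs**2, payoff(xs)]
            if fn is not None:
                cols.append(fn(xs))
            return np.column_stack(cols) @ beta
        return c
    prev_cont_fn = make_cont_fn()

    exercise = payoff(x) >= cont
    V = np.where(exercise, payoff(x), disc * V)

price = max(payoff(S0), disc * V.mean())
print("regression estimate of the Bermudan put price:", price)
```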

last reviewed: June 17, 2020 by Christine Schneider