Research Group "Stochastic Algorithms and Nonparametric Statistics"

Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" Winter Semester 2017/2018

  • Place: Weierstrass Institute for Applied Analysis and Stochastics, Room 406 (4th floor), Mohrenstraße 39, 10117 Berlin
  • Time: Tuesdays, 3:00PM - 4:00PM
17.10.17 Dr. Alexandra Suvorikova (WIAS Berlin)
Two-sample test based on 2-Wasserstein distance
24.10.17

31.10.17 Reformationstag (Reformation Day; no seminar)

07.11.17 Egor Klochkov (HU Berlin)
Invertibility of 1D random kernel matrix
14.11.17 Adrien Barrasso (École Nationale Supérieure de Techniques Avancées, Paris)
BSDEs and decoupled mild solutions of (possibly singular, path-dependent or integro-) PDEs
21.11.17 Valeriy Avanesov (WIAS Berlin)
Dynamics of high-dimensional covariance matrices
We consider the detection and localization of an abrupt break in the covariance structure of high-dimensional random data. The study proposes two novel approaches for this problem. The approaches are essentially hypothesis testing procedures, which require a proper choice of a critical level. In that regard, calibration schemes, which are in turn different non-standard bootstrap procedures, are proposed. One of the approaches relies on techniques of inverse covariance matrix estimation, motivated by applications in neuroimaging. A limitation of this approach is the sparsity assumption crucial for precision matrix estimation, on which the second approach does not rely. The description of the approaches is followed by a formal theoretical study justifying the proposed calibration schemes under mild assumptions and providing guarantees for break detection. Theoretical results for the first approach rely on guarantees for inference of precision matrix procedures; therefore, we rigorously justify adaptive inference procedures for precision matrices. All results are obtained in a truly high-dimensional (dimensionality p ≫ n) finite-sample setting. The theoretical results are supported by simulation studies, most of which are inspired by either real-world neuroimaging or financial data.
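As a hedged illustration of the break-detection idea (not the authors' procedure: the window size, the max-entry test statistic, and the absence of bootstrap calibration are all simplifying assumptions), one can scan a sliding window and compare sample covariances on either side of each candidate break point:

```python
# Illustrative sketch: localize an abrupt covariance break by comparing
# sample covariance matrices in two adjacent windows of width h.
import numpy as np

rng = np.random.default_rng(0)

p, n, h = 5, 200, 40
A = rng.standard_normal((p, p))
# synthetic data: covariance switches from I to A @ A.T at time t = 100
X = np.vstack([rng.standard_normal((100, p)),
               rng.standard_normal((100, p)) @ A.T])

def cov_break_statistic(X, t, h):
    """Max absolute entry of the difference of sample covariances
    over the windows [t-h, t) and [t, t+h)."""
    left, right = X[t - h:t], X[t:t + h]
    return np.abs(np.cov(left.T) - np.cov(right.T)).max()

stats = [cov_break_statistic(X, t, h) for t in range(h, n - h)]
t_hat = h + int(np.argmax(stats))
print("estimated break location:", t_hat)
```

In the talk's setting the critical level for declaring a break is chosen by a non-standard bootstrap; here the argmax merely localizes the most suspicious time point.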
28.11.17

05.12.17 Maxim Panov (Skolkovo Institute of Science and Technology, Moscow)
Consistent estimation of mixed memberships with successive projections
In this talk, we consider several types of accelerated randomized gradient methods: random directional search, random coordinate descent, and a randomized zero-order method. Using the concept of an inexact oracle, we present a generic theorem on the convergence rate for all three methods. Despite their random nature, these methods have complexity with the same dependence on the desired accuracy of the solution as the deterministic accelerated gradient method. Joint work with A. Gasnikov and A. Tiurin.
12.12.17 Franz Besold (HU Berlin)
Persistence diagrams
In this talk we introduce persistence diagrams. These can be used as a tool to infer topological information from noisy data. To this end, we review simplicial and singular homology. Persistence diagrams were originally defined for functions on topological spaces, but can be defined more generally using persistence modules. Stability results ensuring that close data sets have close persistence diagrams show that persistence diagrams are well-suited to deal with real-life data. Applications include data smoothing or recovering topological features of manifolds from a sampled point cloud.
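As a self-contained illustration of the simplest case, the following sketch computes the 0-dimensional persistence diagram (connected components) of a point cloud under the Vietoris-Rips filtration via a union-find over edges sorted by length. Full persistence diagrams also track higher homology (loops, voids); this toy example covers dimension 0 only:

```python
# 0-dimensional persistence of a point cloud: every component is born at
# scale 0 and dies when it merges with an older one.
import numpy as np
from itertools import combinations

def persistence_0d(points):
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    diagram = []                        # (birth, death) pairs
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, d))    # one component dies at scale d
    diagram.append((0.0, np.inf))       # the surviving component
    return diagram

# two well-separated clusters of two nearby points each
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(persistence_0d(pts))
```

The two short-lived points in the diagram (death at 0.1) are noise-scale features, while the long-lived pair records that the data forms two clusters — the separation of scales that stability results make quantitative.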
19.12.17 No Seminar

02.01.18 No Seminar

09.01.18 No Seminar

16.01.18 Prof. Alexander Gasnikov (Moscow Institute of Physics and Technology, Russia)
Recent developments of accelerated gradient descent, part I
23.01.18 Dr. Paolo Pigato (WIAS Berlin)
The exploding skew of implied volatility
We show how a simple local volatility model, with volatility function discontinuous at-the-money, is capable of reproducing the explosion of the implied volatility skew observed in empirical data. We use such a local volatility as a leverage function in a stochastic-local volatility model, obtaining good fits of the whole implied volatility surface to empirical observations.
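A hedged sketch of the kind of model the abstract describes (the strike level and the two volatility levels below are invented parameters, not those of the talk): an Euler-Maruyama simulation of dS = σ(S) S dW where the local volatility σ jumps at the money, S = K:

```python
# Euler-Maruyama simulation of a local volatility model whose volatility
# function is discontinuous at-the-money (zero rates for simplicity).
import numpy as np

rng = np.random.default_rng(2)

def sigma(s, K=100.0, sig_below=0.3, sig_above=0.15):
    """Local volatility, discontinuous at the money S = K."""
    return np.where(s < K, sig_below, sig_above)

def simulate(S0=100.0, T=1.0, n_steps=250, n_paths=10_000):
    dt = T / n_steps
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        S = S * (1.0 + sigma(S) * dW)
    return S

ST = simulate()
print("mean terminal price:", ST.mean())  # martingale: stays near S0
```

Pricing calls across strikes on such paths and inverting Black-Scholes would exhibit the steep at-the-money skew the talk analyzes; the simulation above only provides the underlying paths.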
01.02.18 Prof. Alexander Gasnikov (Moscow Institute of Physics and Technology, Russia)
Recent developments of accelerated gradient descent, part II
Please note that the talk is postponed to Thursday.
06.02.18 Dr. Mario Maurelli (WIAS and TU Berlin)
McKean-Vlasov SDEs with irregular drift: large deviations for particle approximation
McKean-Vlasov SDEs are SDEs whose coefficients depend on the law of the solution. Their interest lies in the links with nonlinear PDEs on one side (the associated Fokker-Planck equation is nonlinear) and with interacting particles on the other: the McKean-Vlasov SDE can be approximated by a system of weakly coupled SDEs. In this talk we consider McKean-Vlasov SDEs with irregular drift: though well-posedness for such an SDE is not known, we show a large deviation principle for the corresponding interacting particle system. This implies, in particular, that any limit point of the particle system solves the McKean-Vlasov SDE. The proof combines rough paths techniques and an extended Varadhan lemma. This is joint work with Thomas Holding.
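The particle approximation mentioned above can be sketched in a few lines. Note the drift chosen here, b(x, m) = -(x - m) with m the mean of the law, is a smooth textbook example assumed for illustration; the talk specifically treats irregular drifts, where this simulation would be formally the same but the analysis much harder:

```python
# Weakly interacting particle system approximating the McKean-Vlasov SDE
#   dX_t = -(X_t - E[X_t]) dt + dW_t
# The law of X_t enters only through its mean, approximated by the
# empirical mean of the particles.
import numpy as np

rng = np.random.default_rng(3)

def particle_system(n_particles=2000, T=1.0, n_steps=100):
    dt = T / n_steps
    X = rng.standard_normal(n_particles) + 2.0   # initial law N(2, 1)
    for _ in range(n_steps):
        m = X.mean()                             # empirical measure -> mean
        X = X - (X - m) * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    return X

X = particle_system()
print("empirical mean at time T:", X.mean())     # preserved near 2
```

Each particle is coupled to the others only through the empirical mean, which is the "weak coupling" the abstract refers to; the large deviation principle of the talk quantifies how the empirical measure of such a system concentrates as the number of particles grows.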
13.02.18 Dr. Martin Redmann (WIAS Berlin)
Solving parabolic rough partial differential equations using regression
In this talk, we discuss the numerical approximation of parabolic partial differential equations (PDEs) driven by a rough path which, e.g., can be the lift of a path of a fractional Brownian motion. By using the Feynman-Kac formula, the solution can be represented as the expected value of a functional of the corresponding hybrid Stratonovich-rough differential equation. A time-discretisation of this equation and a Monte Carlo regression in the spatial variable lead to an approximation of the solution to the rough PDE. We analyse the regression error and provide several numerical experiments to illustrate the performance of our method.
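The regression step can be illustrated in isolation. The sketch below is a deliberately simple stand-in: plain Brownian dynamics, a quadratic payoff, and a monomial basis replace the hybrid Stratonovich-rough equation and the basis of the talk; only the idea of projecting simulated payoffs onto spatial basis functions by least squares is retained:

```python
# Monte Carlo regression: approximate u(x) = E[f(X_T) | X_0 = x] by
# least-squares projection of simulated payoffs onto basis functions of
# the starting point x.
import numpy as np

rng = np.random.default_rng(4)

n = 5000
x0 = rng.uniform(-2.0, 2.0, n)           # sampled starting points
XT = x0 + rng.standard_normal(n)         # X_T = x0 + W_T with T = 1
y = XT**2                                # payoff f(x) = x^2

# regress on the basis {1, x, x^2}; here the exact answer is known:
# u(x) = E[(x + W_1)^2] = x^2 + 1, i.e. coefficients (1, 0, 1)
B = np.vander(x0, 3, increasing=True)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("regression coefficients:", coef)
```

In the rough PDE setting the same projection is applied at each time step of the discretized hybrid equation, and the talk's error analysis controls how the regression error propagates.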
20.02.18 Dr. Pavel Dvurechensky (WIAS Berlin)
Faster algorithms for (regularized) optimal transport
We propose an alternative to the ubiquitous Sinkhorn algorithm for solving the regularized optimal transport problem. Our approach is based on accelerated gradient descent applied to the dual problem and allows us to solve not only the entropy-regularized optimal transport problem, but also the squared-2-norm-regularized OT problem, which produces a sparser approximation of the optimal transport plan. We prove that our algorithm solves the non-regularized optimal transport problem with support size $p$ up to accuracy $\epsilon$ in $O\left(\min\left\{\frac{p^{9/4}}{\epsilon}, \frac{p^{2}}{\epsilon^2} \right\}\right)$ arithmetic operations, which is better than the state-of-the-art result $O\left(\frac{p^2}{\epsilon^3}\right)$.
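For reference, here is a minimal implementation of the baseline the abstract compares against, the Sinkhorn algorithm for entropy-regularized OT; the accelerated dual gradient method of the talk is not reproduced here, and the regularization strength and iteration count are illustrative choices:

```python
# Sinkhorn iterations for entropy-regularized optimal transport:
# alternately rescale rows and columns of K = exp(-C / gamma) so that
# the plan's marginals match a and b.
import numpy as np

def sinkhorn(a, b, C, gamma=0.05, n_iter=500):
    """Transport plan approximately minimizing <C, P> + gamma * KL term."""
    K = np.exp(-C / gamma)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# toy problem: three points on a line with distance cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))   # marginals converge to a and b
```

Entropic regularization makes every entry of the plan strictly positive; the squared-2-norm regularization discussed in the talk instead yields plans with exact zeros, hence the sparser approximation.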
27.02.18



last reviewed: February 14, 2018, Christine Schneider