Research Group "Stochastic Algorithms and Nonparametric Statistics"

Research Seminar "Mathematical Statistics" Summer Semester 2018

  • Place: Weierstrass-Institute for Applied Analysis and Stochastics, Erhard-Schmidt-Hörsaal, Mohrenstraße 39, 10117 Berlin
  • Time: Wednesdays, 10.00 a.m. - 12.30 p.m.
18.04.18 Dr. Alexandra Suvorikova (WIAS Berlin)
Gaussian process forecast with multidimensional distributional input
In this work, we focus on forecasting a Gaussian process indexed by probability distributions. We introduce a family of positive definite kernels constructed with the use of the optimal transportation distance and provide a probabilistic understanding of them. The technique allows efficient forecasting of such Gaussian processes, which opens new perspectives in Gaussian process modelling.
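To make the kernel construction concrete, here is a minimal Python sketch (my own illustration, not the authors' construction): for one-dimensional empirical distributions the 2-Wasserstein distance reduces to an L2 distance between sorted samples, and plugging it into a Gaussian shape yields a positive definite kernel; the bandwidth sigma is an arbitrary illustration choice.

```python
import numpy as np

def w2_1d(x, y):
    """2-Wasserstein distance between two 1-d empirical distributions
    with equal sample sizes: L2 distance between sorted samples."""
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

def wasserstein_rbf_gram(samples, sigma=1.0):
    """Gram matrix of k(mu, nu) = exp(-W2(mu, nu)^2 / (2 sigma^2))."""
    n = len(samples)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            d = w2_1d(samples[i], samples[j])
            K[i, j] = K[j, i] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return K

# toy input: each "point" of the index set is a distribution,
# represented here by 200 samples
rng = np.random.default_rng(0)
dists = [rng.normal(loc=m, scale=1.0, size=200) for m in np.linspace(0, 3, 5)]
K = wasserstein_rbf_gram(dists)
print(np.linalg.eigvalsh(K).min())  # >= 0 up to rounding: K is PSD
```

In one dimension W2 embeds isometrically into a Hilbert space via quantile functions, so the Gaussian of the squared distance is positive definite; the multidimensional case treated in the talk is more delicate.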
25.04.18 Nicolai Baldin (University of Cambridge, GB)
Optimal link prediction with matrix logistic regression
In this talk, we will consider the problem of link prediction, based on partial observation of a large network and on side information associated with its vertices. The generative model is formulated as a matrix logistic regression. The performance of the model is analysed in a high-dimensional regime under a structural assumption. The minimax rate for the Frobenius-norm risk is established and a combinatorial estimator based on the penalised maximum likelihood approach is shown to achieve it. Furthermore, it is shown that this rate cannot be attained by any (randomised) algorithm computable in polynomial time under a computational complexity assumption. (joint work with Q. Berthet)
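As a toy version of the generative model (a plain numpy sketch with a Frobenius penalty standing in for the structural penalty of the talk, and gradient descent rather than the combinatorial estimator):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_matrix_logistic(A, X, lam=0.1, lr=0.5, iters=2000):
    """Gradient descent on the penalised negative log-likelihood of
    P(A_ij = 1) = sigmoid(x_i' Theta x_j), normalised by the n^2 entries."""
    n, d = X.shape
    Theta = np.zeros((d, d))
    for _ in range(iters):
        P = sigmoid(X @ Theta @ X.T)
        grad = (X.T @ (P - A) @ X + 2 * lam * Theta) / n ** 2
        Theta -= lr * grad
    return Theta

rng = np.random.default_rng(1)
n, d = 60, 3
X = rng.normal(size=(n, d))                         # side information per vertex
Theta_true = np.diag([1.5, -1.0, 0.5])
A = rng.binomial(1, sigmoid(X @ Theta_true @ X.T))  # observed adjacency matrix
print(np.round(fit_matrix_logistic(A, X), 2))       # roughly recovers Theta_true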
02.05.18 No Seminar

09.05.18 Prof. Gitta Kutyniok (TU Berlin)
Optimal approximation with sparsely connected deep neural networks
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called α-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that the standard backpropagation algorithm already generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin).
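The shearlet-based construction itself is beyond a code snippet, but the phenomenon of approximation rates tied to connectivity can be illustrated with a classical example (Yarotsky's ReLU approximation of x^2, not the result of the talk): the error decays exponentially in the number of layers, i.e. in the number of nonzero connections.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """Tent map on [0, 1], realised as a 3-neuron ReLU layer."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def square_approx(x, m):
    """Yarotsky's depth-m ReLU approximation of x^2 on [0, 1]:
    error <= 4^-(m+1) with only O(m) neurons and connections."""
    out, g = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = hat(g)            # s-fold composition of the tent map
        out = out - g / 4.0 ** s
    return out

x = np.linspace(0, 1, 1001)
for m in (2, 4, 6):
    print(m, np.max(np.abs(square_approx(x, m) - x ** 2)))
```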
16.05.18 Prof. Moritz Jirak (TU Braunschweig), Martin Wahl (HU Berlin)
Relative perturbation bounds with applications to empirical covariance operators
A problem of fundamental importance in quantitative science is to estimate how a perturbation of a covariance operator affects the corresponding eigenvalues and eigenvectors. Due to its importance, this problem has been heavily investigated and discussed in the literature. In this talk, we present general perturbation expansions for a class of symmetric, compact operators. Applied to empirical covariance operators, these expansions allow us to describe how perturbations carry over to eigenvalues and eigenvectors in terms of necessary and sufficient conditions, characterising the perturbation transition. We demonstrate the usefulness of these expansions by discussing PCA and FPCA in various setups, including more exotic cases where the data is assumed to have high persistence in the dependence structure or exhibits (very) heavy tails. This talk is jointly given by Moritz Jirak and Martin Wahl, and divided into two parts.
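A small numerical illustration of the objects under study (my own toy setup: Gaussian data with a polynomially decaying spectrum) compares a population eigenpair with its empirical counterpart through a relative eigenvalue error and the sine of the angle between the eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 20, 500
eigvals = 1.0 / np.arange(1, d + 1) ** 2        # polynomially decaying spectrum
Q = np.linalg.qr(rng.normal(size=(d, d)))[0]    # random orthonormal eigenbasis
Sigma = Q @ np.diag(eigvals) @ Q.T              # population covariance

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n                         # empirical covariance

w, V = np.linalg.eigh(Sigma)
w_hat, V_hat = np.linalg.eigh(Sigma_hat)        # ascending order; -1 = leading
rel_err = abs(w_hat[-1] - w[-1]) / w[-1]        # relative eigenvalue error
cos = abs(V[:, -1] @ V_hat[:, -1])
sin_angle = np.sqrt(max(0.0, 1.0 - cos ** 2))   # sin of angle between eigenvectors
print("relative eigenvalue error:", rel_err, "sin(angle):", sin_angle)
```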
23.05.18 Prof. Markus Reiss, Randolf Altmeyer (Humboldt-Universität zu Berlin)
A nonparametric estimation problem for linear SPDEs
30.05.18 Florian Schäfer (California Institute of Technology, USA)
Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity
Many popular methods in machine learning, statistics, and uncertainty quantification rely on priors given by smooth Gaussian processes, like those obtained from the Matérn covariance functions. Furthermore, many physical systems are described in terms of elliptic partial differential equations. Therefore, implicitly or explicitly, numerical simulation of these systems requires an efficient numerical representation of the corresponding Green's operator. The resulting kernel matrices are typically dense, leading to (often prohibitive) O(N^2) or O(N^3) computational complexity. In this work, we prove rigorously that the dense N x N kernel matrices obtained from elliptic boundary value problems and measurement points distributed approximately uniformly in a d-dimensional domain can be Cholesky factorised to accuracy ε in computational complexity O(N log^2(N) log^(2d)(N/ε)) in time and O(N log(N) log^d(N/ε)) in space. For the closely related Matérn covariances we observe very good results in practice, even for parameters corresponding to non-integer order equations. As a byproduct, we obtain a sparse PCA with near-optimal low-rank approximation property and a fast solver for elliptic PDEs. We emphasise that our algorithm requires no analytic expression for the covariance function. Our work is inspired by the probabilistic interpretation of the Cholesky factorisation, the screening effect in spatial statistics, and recent results in numerical homogenisation.
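The following numpy experiment is a heavily simplified sketch of the idea (greedy farthest-point selection as a stand-in for the maximin ordering, and naive thresholding of a dense O(N^3) factorisation instead of the paper's near-linear algorithm); it only illustrates that, in such an ordering, most entries of the Cholesky factor are negligibly small:

```python
import numpy as np

def matern32(X, rho=0.5):
    """Dense Matérn-3/2 covariance matrix: the O(N^2)-storage baseline."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    a = np.sqrt(3.0) * d / rho
    return (1.0 + a) * np.exp(-a)

def maximin_order(X):
    """Greedy farthest-point ordering: coarse scales first, fine scales last."""
    order, dist = [0], np.linalg.norm(X - X[0], axis=1)
    for _ in range(len(X) - 1):
        i = int(np.argmax(dist))
        order.append(i)
        dist = np.minimum(dist, np.linalg.norm(X - X[i], axis=1))
    return np.array(order)

rng = np.random.default_rng(3)
N = 400
X = rng.uniform(size=(N, 2))                   # points in the unit square
K = matern32(X[maximin_order(X)]) + 1e-8 * np.eye(N)
L = np.linalg.cholesky(K)                      # dense O(N^3) reference
Ls = np.where(np.abs(L) > 1e-3, L, 0.0)        # naive thresholding
print("fraction of entries kept:", np.count_nonzero(Ls) / (N * (N + 1) / 2))
print("relative error:", np.linalg.norm(Ls @ Ls.T - K) / np.linalg.norm(K))
```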
06.06.18 Igor Cialenco (Illinois Institute of Technology)
Parameter estimation problems for parabolic SPDEs
In the first part of the talk we will discuss the parameter estimation problem, using the Bayesian approach, for the drift coefficient of some linear (parabolic) SPDEs driven by a multiplicative noise of special structure. We assume that one path of the first N Fourier modes of the solution is continuously observed over a finite time interval, and we derive Bayesian-type estimators for the drift coefficient. As is customary in Bayesian statistics, we prove a Bernstein-von Mises theorem for the posterior density, and consequently we derive some asymptotic properties of the proposed estimators, as N goes to infinity. In the second part of the talk we will study parameter estimation problems for discretely sampled SPDEs. We will discuss some general results on the derivation of consistent and asymptotically normal estimators based on computation of the p-variations of stochastic processes and their smooth perturbations, which are then conveniently applied to SPDEs. Both the drift and the volatility coefficients are estimated using two sampling schemes: observing the solution at a fixed time and on a discrete spatial grid, and at a fixed space point and at discrete time instances of a finite interval. The theoretical results will be illustrated via numerical examples.
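The p-variation idea in its simplest form can be sketched in a few lines (p = 2 and a scalar diffusion rather than an SPDE; a toy of mine, not the estimators of the talk): the realised quadratic variation of the discrete samples ignores the smooth drift and recovers the volatility.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_true, T, n = 0.7, 1.0, 10_000
dt = T / n

# one discretely sampled path of dX_t = -X_t dt + sigma dW_t
X = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] - X[k] * dt + sigma_true * np.sqrt(dt) * rng.normal()

# realised quadratic variation (p = 2): sum of squared increments / T
# converges to sigma^2 as dt -> 0, regardless of the smooth drift
sigma_hat = np.sqrt(np.sum(np.diff(X) ** 2) / T)
print(sigma_true, sigma_hat)
```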
13.06.18 Prof. Alain Celisse (Université des Sciences et Technologies de Lille, France)
Early stopping rule and discrepancy principle in reproducing kernel Hilbert spaces
The main focus of this work is on the nonparametric estimation of a regression function by means of reproducing kernels and several iterative learning algorithms such as gradient descent, spectral cut-off, and Tikhonov regularization. First, we exploit the general framework of filter estimators to provide a unified analysis of these different algorithms. In the case of Tikhonov regularization, we will discuss the influence of the parametrization on the interaction between the condition number of the Gram matrix and the number of iterations. More generally, we also discuss existing links between the qualification assumption and the filter estimators used. Second, we introduce an early stopping rule derived from the so-called discrepancy principle. Its behavior is compared with that of other existing stopping rules and analyzed through the dependence of the empirical risk on influential parameters (Gram matrix eigenvalues, cumulative step size, initialization). An oracle-type inequality is derived to quantify the finite-sample performance of the proposed stopping rule. The practical performance of the procedure is also assessed empirically in several simulation experiments.
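A hedged sketch of the stopping rule (generic kernel gradient descent with a Gaussian kernel; the bandwidth, the factor tau, and the assumption of a known noise level are illustration choices, not part of the talk):

```python
import numpy as np

rng = np.random.default_rng(5)
n, noise = 200, 0.3
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(np.pi * x) + noise * rng.normal(size=n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)  # Gaussian Gram matrix
alpha = np.zeros(n)
step = 1.0 / np.linalg.eigvalsh(K)[-1]             # safe step size
tau = 1.0                                          # discrepancy threshold factor
for t in range(1, 10_000):
    resid = y - K @ alpha
    # discrepancy principle: stop once the empirical residual
    # reaches the (assumed known) noise level
    if np.sqrt(np.mean(resid ** 2)) <= tau * noise:
        break
    alpha += step * resid                          # Landweber-type update
print("stopped at iteration", t)
```

Stopping earlier acts as regularization: iterating past this point mostly fits the noise, which is exactly the trade-off the oracle-type inequality quantifies.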
20.06.18 Prof. Zuoqiang Shi (Tsinghua University, Beijing, China)
Low dimensional manifold model for image processing
(Please note: this seminar takes place in room 406, 4th floor!)
In this talk, I will introduce a novel low dimensional manifold model for image processing problems. This model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure. We then use the dimension of the patch manifold as a regularization to recover the original image. Using formulas from differential geometry, this problem is reduced to solving a Laplace-Beltrami equation on the manifold. The Laplace-Beltrami equation is solved by the point integral method. Numerical tests show that this method gives very good results in image inpainting, denoising, and super-resolution problems. This is joint work with Stanley Osher and Wei Zhu.
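A back-of-the-envelope check of the low-dimension observation (a global PCA proxy of mine, far cruder than the manifold-dimension regularization of the talk): the 64-dimensional patches of a smooth synthetic image concentrate near a subspace of much lower dimension.

```python
import numpy as np

def patches(img, p=8, stride=4):
    """Collect all p x p patches of a 2-d image as flat vectors."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

# synthetic "natural" image: smooth oriented stripes
u, v = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
img = np.sin(12 * (u + 0.5 * v))

P = patches(img)
P -= P.mean(axis=0)
s = np.linalg.svd(P, compute_uv=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
print("PCA dimensions for 99% energy:",
      int(np.searchsorted(energy, 0.99)) + 1, "of", P.shape[1])
```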
27.06.18

04.07.18



last reviewed: May 14, 2018 by Christine Schneider