Publications

Monographs

  • N. Tupitsa, P. Dvurechensky, D. Dvinskikh, A. Gasnikov, Section: Computational Optimal Transport, P.M. Pardalos, O.A. Prokopyev, eds., Encyclopedia of Optimization, Springer International Publishing, Cham, published online on 11.07.2023, (Chapter Published), DOI 10.1007/978-3-030-54621-2_861-1 .

  • J. Polzehl, K. Tabelow, Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R, 2nd Revised Edition, Series: Use R!, Springer International Publishing, Cham, 2023, 258 pages, (Monograph Published), DOI 10.1007/978-3-031-38949-8 .
    Abstract
    This book discusses the modeling and analysis of magnetic resonance imaging (MRI) data acquired from the human brain. The data processing pipelines described rely on R. The book is intended for readers from two communities: statisticians who are interested in neuroimaging and looking for an introduction to the acquired data and typical scientific problems in the field, and neuroimaging students wanting to learn about the statistical modeling and analysis of MRI data. Offering a practical introduction to the field, the book focuses on those problems in data analysis for which implementations within R are available. It also includes fully worked examples and as such serves as a tutorial on MRI analysis with R, from which readers can derive their own data processing scripts. The book starts with a short introduction to MRI and then examines the process of reading and writing common neuroimaging data formats to and from the R session. The main chapters cover three common MR imaging modalities and their data modeling and analysis problems: functional MRI, diffusion MRI, and Multi-Parameter Mapping. The book concludes with extended appendices providing details of the non-parametric statistics used and the resources for R and MRI data. The book also addresses the issues of reproducibility and topics like data organization and description, as well as open data and open science. It relies solely on dynamic report generation with knitr and uses neuroimaging data publicly available in data repositories. The PDF was created by executing the R code in the chunks and then running LaTeX, which means that almost all figures, numbers, and results were generated while producing the PDF from the sources.

  • M. Danilova, P. Dvurechensky, A. Gasnikov, E. Gorbunov, S. Guminov, D. Kamzolov, I. Shibaev, Chapter: Recent Theoretical Advances in Non-convex Optimization, A. Nikeghbali, P.M. Pardalos, A.M. Raigorodskii, M.Th. Rassias, eds., 191 of Springer Optimization and Its Applications, Springer, Cham, 2022, pp. 79--163, (Chapter Published), DOI 10.1007/978-3-031-00832-0_3 .

Articles in Refereed Journals

  • O. Butkovsky, K. Dareiotis, M. Gerencsér, Optimal rate of convergence for approximations of SPDEs with non-regular drift, SIAM Journal on Numerical Analysis, 61 (2023), pp. 1103--1137, DOI 10.1137/21M1454213 .

  • O. Butkovsky, V. Margarint, Y. Yuan, Law of the SLE tip, Electronic Journal of Probability, 28 (2023), pp. 1-25, DOI 10.1214/23-EJP1015 .

  • F. Galarce Marín, K. Tabelow, J. Polzehl, Ch.P. Papanikas, V. Vavourakis, L. Lilaj, I. Sack, A. Caiazzo, Displacement and pressure reconstruction from magnetic resonance elastography images: Application to an in silico brain model, SIAM Journal on Imaging Sciences, 16 (2023), pp. 996--1027, DOI 10.1137/22M149363X .
    Abstract
    This paper investigates a data assimilation approach for non-invasive quantification of intracranial pressure from partial displacement data, acquired through magnetic resonance elastography. Data assimilation is based on a parametrized-background data-weak methodology, in which the state of the physical system (tissue displacements and pressure fields) is reconstructed from partially available data assuming an underlying poroelastic biomechanics model. For this purpose, a physics-informed manifold is built by sampling the space of parameters describing the tissue model close to their physiological ranges, simulating the corresponding poroelastic problem, and computing a reduced basis. Displacement and pressure reconstruction is sought in a reduced space after solving a minimization problem that encompasses both the structure of the reduced-order model and the available measurements. The proposed pipeline is validated using synthetic data obtained after simulating the poroelastic mechanics on a physiological brain. The numerical experiments demonstrate that the framework can exhibit accurate joint reconstructions of both displacement and pressure fields. The methodology can be formulated for an arbitrary resolution of available displacement data from pertinent images. It can also inherently handle uncertainty on the physical parameters of the mechanical model by enlarging the physics-informed manifold accordingly. Moreover, the framework can be used to characterize, in silico, biomarkers for pathological conditions, by appropriately training the reduced-order model. A first application to the estimation of ventricular pressure as an indicator of abnormal intracranial pressure is shown in this contribution.

  • D. Belomestny, J.G.M. Schoenmakers, From optimal martingales to randomized dual optimal stopping, Quantitative Finance, 23 (2023), pp. 1099--1113, DOI 10.1080/14697688.2023.2223242 .
    Abstract
    In this article we study and classify optimal martingales in the dual formulation of optimal stopping problems. In this respect we distinguish between weakly optimal and surely optimal martingales. It is shown that the family of weakly optimal and surely optimal martingales may be quite large. On the other hand, it is shown that the Doob martingale, that is, the martingale part of the Snell envelope, is in a certain sense the most robust surely optimal martingale under random perturbations. This new insight leads to a novel randomized dual martingale minimization algorithm that does not require nested simulation. As a main feature, in a possibly large family of optimal martingales the algorithm efficiently selects a martingale that is as close as possible to the Doob martingale. As a result, one obtains the dual upper bound for the optimal stopping problem with low variance.
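
    The distinction between a generic optimal martingale and the Doob martingale of the Snell envelope can be seen in a two-period toy problem: observe X1, either stop (payoff X1) or continue (payoff X2), with X1, X2 independent standard normals. The sketch below is purely illustrative and is not the paper's randomized algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1, x2 = rng.standard_normal((2, n))

# Snell envelope: Y1 = max(X1, E[X2]) = max(X1, 0), so V0 = E[X1^+] = 1/sqrt(2*pi).
v0 = np.sqrt(1.0 / (2.0 * np.pi))

# Dual upper bound E[max_t (Z_t - M_t)] with the trivial martingale M = 0:
# biased high and with large variance.
crude = np.maximum(x1, x2)

# Doob martingale of the Snell envelope: M1 = Y1 - V0, M2 = M1 + (X2 - E[X2]).
m1 = np.maximum(x1, 0.0) - v0
m2 = m1 + x2
doob = np.maximum(x1 - m1, x2 - m2)

print(crude.mean(), crude.std())  # biased high, large spread
print(doob.mean(), doob.std())    # equals V0 on every path: zero variance
```

With the Doob martingale the pathwise maximum equals V0 on every path, which is the "surely optimal" behaviour the abstract refers to.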

  • J. Diehl, K. Ebrahimi-Fard, N. Tapia, Generalized iterated-sums signatures, Journal of Algebra, 632 (2023), pp. 801--824, DOI 10.1016/j.jalgebra.2023.06.007 .
    Abstract
    We explore the algebraic properties of a generalized version of the iterated-sums signature, inspired by previous work of F. Király and H. Oberhauser. In particular, we show how to recover the character property of the associated linear map over the tensor algebra by considering a deformed quasi-shuffle product of words on the latter. We introduce three non-linear transformations on iterated-sums signatures, close in spirit to Machine Learning applications, and show some of their properties.

  • K. Ebrahimi-Fard, F. Patras, N. Tapia, L. Zambotti, Shifted substitution in non-commutative multivariate power series with a view toward free probability, SIGMA. Symmetry, Integrability and Geometry. Methods and Applications, 19 (2023), pp. 038/1--038/17, DOI 10.3842/SIGMA.2023.038 .

  • A. Vasin, A. Gasnikov, P. Dvurechensky, V. Spokoiny, Accelerated gradient methods with absolute and relative noise in the gradient, Optimization Methods & Software, published online in June 2023, DOI 10.1080/10556788.2023.2212503 .

  • S. Athreya, O. Butkovsky, K. Lê, L. Mytnik, Well-posedness of stochastic heat equation with distributional drift and skew stochastic heat equation, Communications on Pure and Applied Mathematics, published online on 29.11.2023, DOI 10.1002/cpa.22157 .

  • CH. Bayer, P. Hager, S. Riedel, J.G.M. Schoenmakers, Optimal stopping with signatures, The Annals of Applied Probability, 33 (2023), pp. 238--273, DOI 10.1214/22-AAP1814 .
    Abstract
    We propose a new method for solving optimal stopping problems (such as American option pricing in finance) under minimal assumptions on the underlying stochastic process. We consider classic and randomized stopping times represented by linear functionals of the associated rough path signature, and prove that maximizing over the class of signature stopping times, in fact, solves the original optimal stopping problem. Using the algebraic properties of the signature, we can then recast the problem as a (deterministic) optimization problem depending only on the (truncated) expected signature. The only assumption on the process is that it is a continuous (geometric) random rough path. Hence, the theory encompasses processes such as fractional Brownian motion which fail to be either semi-martingales or Markov processes.

  • CH. Bayer, M. Eigel, L. Sallandt, P. Trunschke, Pricing high-dimensional Bermudan options with hierarchical tensor formats, SIAM Journal on Financial Mathematics, ISSN 1945-497X, 14 (2023), pp. 383--406, DOI 10.1137/21M1402170 .

  • CH. Bayer, P. Friz, N. Tapia, Stability of deep neural networks via discrete rough paths, SIAM Journal on Mathematics of Data Science, 5 (2023), pp. 50--76, DOI 10.1137/22M1472358 .
    Abstract
    Using rough path techniques, we provide a priori estimates for the output of Deep Residual Neural Networks. In particular we derive stability bounds in terms of the total p-variation of trained weights for any p ≥ 1.
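
    For a finite sequence of weights, the total p-variation appearing in such bounds (the supremum over all partitions of the sum of p-th powers of increments) can be computed exactly by quadratic-time dynamic programming; a minimal sketch, not code from the paper:

```python
def pvar(x, p):
    """Exact p-variation of the finite sequence x: the maximum over all
    partitions 0 = t_0 < t_1 < ... < t_k = n-1 of sum |x[t_{j+1}] - x[t_j]|^p,
    found by dynamic programming over the right endpoint."""
    best = [0] * len(x)  # best[i]: p-variation of x[0..i] with a partition ending at i
    for i in range(1, len(x)):
        best[i] = max(best[j] + abs(x[i] - x[j]) ** p for j in range(i))
    return best[-1]

print(pvar([0, 2, 1], p=1))  # 3: every point contributes
print(pvar([0, 2, 1], p=2))  # 5: 2**2 + 1**2
print(pvar([0, 1, 3], p=2))  # 9: the single increment 0 -> 3 dominates
```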

  • CH. Bayer, Ch. Ben Hammouda, R.F. Tempone, Numerical smoothing with hierarchical adaptive sparse grids and quasi-Monte Carlo methods for efficient option pricing, Quantitative Finance, 23 (2023), pp. 209--227, DOI 10.1080/14697688.2022.2135455 .
    Abstract
    When approximating the expectation of a functional of a stochastic process, the efficiency and performance of deterministic quadrature methods, such as sparse grid quadrature and quasi-Monte Carlo (QMC) methods, may critically depend on the regularity of the integrand. To overcome this issue and reveal the available regularity, we consider cases in which analytic smoothing cannot be performed, and introduce a novel numerical smoothing approach by combining a root finding algorithm with one-dimensional integration with respect to a single well-selected variable. We prove that under appropriate conditions, the resulting function of the remaining variables is a highly smooth function, potentially affording the improved efficiency of adaptive sparse grid quadrature (ASGQ) and QMC methods, particularly when combined with hierarchical transformations (i.e., Brownian bridge and Richardson extrapolation on the weak error). This approach facilitates the effective treatment of high dimensionality. Our study is motivated by option pricing problems, and our focus is on dynamics where the discretization of the asset price is necessary. Based on our analysis and numerical experiments, we show the advantages of combining numerical smoothing with the ASGQ and QMC methods over ASGQ and QMC methods without smoothing and the Monte Carlo approach.

  • P.K. Friz, Th. Wagenhofer, Reconstructing volatility: Pricing of index options under rough volatility, Mathematical Finance. An International Journal of Mathematics, Statistics and Financial Economics, 33 (2023), pp. 19--40, DOI 10.1111/mafi.12374 .

  • P.K. Friz, P. Zorin-Kranich, Rough semimartingales and $p$-variation estimates for martingale transforms, The Annals of Probability, 51 (2023), pp. 397--441, DOI 10.1214/22-AOP1598 .

  • V. Spokoiny, Dimension free non-asymptotic bounds on the accuracy of high dimensional Laplace approximation, SIAM/ASA Journal on Uncertainty Quantification, 11 (2023), pp. 1044--1068, DOI 10.1137/22M1495688 .
    Abstract
    This note revisits the classical results on Laplace approximation in a modern non-asymptotic and dimension-free form. Such an extension is motivated by applications to high dimensional statistical and optimization problems. The established results provide explicit non-asymptotic bounds on the quality of a Gaussian approximation of the posterior distribution in total variation distance in terms of the so-called effective dimension. This value is defined by the interplay between the information contained in the data and in the prior distribution. In contrast to prominent Bernstein--von Mises results, the impact of the prior is not negligible, and it makes it possible to keep the effective dimension small or moderate even if the true parameter dimension is huge or infinite. We also address the issue of using a Gaussian approximation with inexact parameters, with a focus on replacing the maximum a posteriori (MAP) value by the posterior mean, and design an algorithm of Bayesian optimization based on Laplace iterations. The results are specified to the case of nonlinear regression.
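
    The Laplace approximation studied here replaces a posterior proportional to exp(-U(θ)) by a Gaussian centred at the maximizer (MAP) with covariance equal to the inverse Hessian of U. A one-dimensional sketch with an assumed toy potential (nothing here is taken from the paper):

```python
import numpy as np

# Assumed toy negative log-posterior U (symmetric around 1, so the true
# posterior mean equals the MAP) and its first two derivatives.
U = lambda t: np.cosh(t - 1.0)
dU = lambda t: np.sinh(t - 1.0)
d2U = lambda t: np.cosh(t - 1.0)

# Newton iterations for the MAP estimate.
theta = 0.0
for _ in range(50):
    theta -= dU(theta) / d2U(theta)

# Laplace approximation: posterior ~ N(theta_MAP, 1 / U''(theta_MAP)).
var = 1.0 / d2U(theta)
print(theta, var)  # MAP ≈ 1.0, variance ≈ 1.0

# Sanity check: quadrature of the exact (unnormalized) posterior.
g = np.linspace(-20.0, 22.0, 200_001)
w = np.exp(-U(g)); w /= w.sum()
print((g * w).sum())  # true posterior mean, matching the MAP by symmetry
```

For asymmetric potentials the posterior mean and the MAP differ, which is precisely the inexact-parameter issue the abstract discusses.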

  • A. Kroshnin, E. Stepanov, D. Trevisan, Infinite multidimensional scaling for metric measure spaces, ESAIM. Control, Optimisation and Calculus of Variations, 28 (2022), pp. 2022053/1--2022053/27, DOI 10.1051/cocv/2022053 .

  • S.A. Alves, J. Polzehl, N.M. Brisson, A. Bender, A.N. Agres, P. Damm, G.N. Duda, Ground reaction forces and external hip joint moments predict in vivo hip contact forces during gait, Journal of Biomechanics, 135 (2022), pp. 111037/1--111037/6, DOI 10.1016/j.jbiomech.2022.111037 .

  • E. Borodich, V. Tominin, Y. Tominin, D. Kovalev, A. Gasnikov, P. Dvurechensky, Accelerated variance-reduced methods for saddle-point problems, EURO Journal on Computational Optimization, 10 (2022), pp. 100048/1--100048/32, DOI 10.1016/j.ejco.2022.100048 .

  • M. Coghi, W. Dreyer, P.K. Friz, P. Gajewski, C. Guhlke, M. Maurelli, A McKean--Vlasov SDE and particle system with interaction from reflecting boundaries, SIAM Journal on Mathematical Analysis, 54 (2022), pp. 2251--2294, DOI 10.1137/21M1409421 .

  • A. Ivanova, P. Dvurechensky, E. Vorontsova, D. Pasechnyuk, A. Gasnikov, D. Dvinskikh, A. Tyurin, Oracle complexity separation in convex optimization, Journal of Optimization Theory and Applications, 193 (2022), pp. 462--490, DOI 10.1007/s10957-022-02038-7 .
    Abstract
    Regularized empirical risk minimization problems, ubiquitous in machine learning, are often composed of several blocks which can be treated using different types of oracles, e.g., full gradient, stochastic gradient, or coordinate derivative. Optimal oracle complexity is known and achievable separately for the full gradient case, the stochastic gradient case, etc. We propose a generic framework to combine optimal algorithms for different types of oracles in order to achieve separate optimal oracle complexity for each block, i.e., for each block the corresponding oracle is called the optimal number of times for a given accuracy. As a particular example, we demonstrate that for a combination of a full gradient oracle and either a stochastic gradient oracle or a coordinate descent oracle, our approach leads to the optimal number of oracle calls separately for the full gradient part and the stochastic/coordinate descent part.

  • C. Bellingeri, P.K. Friz, S. Paycha, R. Preiss, Smooth rough paths, their geometry and algebraic renormalization, Vietnam Journal of Mathematics, 50 (2022), pp. 719--761, DOI 10.1007/s10013-022-00570-7 .
    Abstract
    We introduce the class of smooth rough paths and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer--Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of sum of rough paths. We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.

  • D. Belomestny, Ch. Bender, J.G.M. Schoenmakers, Solving optimal stopping problems via randomization and empirical dual optimization, Mathematics of Operations Research, published online on 14.09.2022, DOI 10.1287/moor.2022.1306 .
    Abstract
    In this paper we consider optimal stopping problems in their dual form. In this way we reformulate the optimal stopping problem as a problem of sample average approximation (SAA) which can be solved via linear programming. By randomizing the initial value of the underlying process, we enforce solutions with zero variance while preserving the linear programming structure of the problem. A careful analysis of the randomized SAA algorithm shows that it enjoys favorable properties such as faster convergence rates and reduced complexity as compared to the non-randomized procedure. We illustrate the performance of our algorithm on several benchmark examples.

  • I. Chevyrev, P.K. Friz, A. Korepanov, I. Melbourne, H. Zhang, Deterministic homogenization under optimal moment assumptions for fast-slow systems. Part 2, Annales de l'Institut Henri Poincare. Probabilites et Statistiques, 58 (2022), pp. 1328--1350, DOI 10.1214/21-AIHP1203 .

  • J. Diehl, R. Preiss, M. Ruddy, N. Tapia, The moving-frame method for iterated-integrals: Orthogonal invariants, Foundations of Computational Mathematics. The Journal of the Society for the Foundations of Computational Mathematics, published online on 01.06.2022, DOI 10.1007/s10208-022-09569-5 .

  • J. Diehl, K. Ebrahimi-Fard, N. Tapia, Tropical time series, iterated-sums signatures and quasisymmetric functions, SIAM Journal on Applied Algebra and Geometry, 6 (2022), pp. 563--599, DOI 10.1137/20M1380041 .
    Abstract
    Aiming for a systematic feature extraction from time series, we introduce the iterated-sums signature over arbitrary commutative semirings. The case of the tropical semiring is a central, and our motivating, example. It leads to features of (real-valued) time series that are not easily available using existing signature-type objects. We demonstrate how the signature extracts chronological aspects of a time series and that its calculation is possible in linear time. We identify quasisymmetric expressions over semirings as the appropriate framework for iterated-sums signatures over semiring-valued time series.
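
    The linear-time evaluation over a commutative semiring can be sketched for a single level-two entry; the helper below is an illustrative reimplementation (names are ours, not the authors'):

```python
def iterated_sum_2(x, plus, times, zero):
    """Level-2 iterated-sums entry over a commutative semiring
    (plus, times, zero): the semiring sum of times(x_i, x_j) over all i < j,
    computed in a single linear-time pass with a running prefix sum."""
    acc, prefix = zero, zero
    for xi in x:
        acc = plus(acc, times(prefix, xi))  # pair xi with everything before it
        prefix = plus(prefix, xi)
    return acc

x = [1.0, -2.0, 3.0]
# Ordinary semiring (+, *): sum over i < j of x_i * x_j.
print(iterated_sum_2(x, lambda a, b: a + b, lambda a, b: a * b, 0.0))  # -5.0
# Tropical (max-plus) semiring: max over i < j of (x_i + x_j).
print(iterated_sum_2(x, max, lambda a, b: a + b, float("-inf")))       # 4.0
```

The tropical entry picks out the largest sum over a time-ordered pair of observations, the kind of chronological feature the abstract describes.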

  • E. Gorbunov, P. Dvurechensky, A. Gasnikov, An accelerated method for derivative-free smooth stochastic convex optimization, SIAM Journal on Optimization, 32 (2022), pp. 1210--1238, DOI 10.1137/19M1259225 .
    Abstract
    We consider an unconstrained problem of minimization of a smooth convex function which is only available through noisy observations of its values, the noise consisting of two parts. Similar to stochastic optimization problems, the first part is of a stochastic nature. In contrast, the second part is an additive noise of an unknown nature, but bounded in absolute value. In the two-point feedback setting, i.e., when pairs of function values are available, we propose an accelerated derivative-free algorithm together with its complexity analysis. The complexity bound of our derivative-free algorithm is only by a factor of √n larger than the bound for accelerated gradient-based algorithms, where n is the dimension of the decision variable. We also propose a non-accelerated derivative-free algorithm with a complexity bound similar to the one of the stochastic-gradient-based algorithm, that is, our bound does not have any dimension-dependent factor. Interestingly, if the solution of the problem is sparse, for both our algorithms we obtain a better complexity bound if the algorithm uses a 1-norm proximal setup rather than the Euclidean proximal setup, which is a standard choice for unconstrained problems.
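
    The two-point feedback oracle can be illustrated with a standard randomized finite-difference gradient estimator; the sketch below runs plain (non-accelerated) descent on a toy objective and is not the paper's algorithm:

```python
import numpy as np

def two_point_grad(f, x, delta, rng):
    """Gradient estimate from the two function values f(x + delta*u) and
    f(x - delta*u) along a random unit direction u (a standard
    smoothing-type estimator)."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

f = lambda x: 0.5 * np.sum(x ** 2)  # toy smooth convex objective, minimum value 0
rng = np.random.default_rng(1)
x = np.ones(5)
for _ in range(2000):
    x -= 0.1 * two_point_grad(f, x, 1e-4, rng)
print(f(x))  # driven close to the minimum value 0
```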

  • S. Mohammadi, T. Streubel, L. Klock, A. Lutti, K. Pine, S. Weber, L. Edwards, P. Scheibe, G. Ziegler, J. Gallinat, S. Kuhn, M. Callaghan, N. Weiskopf, K. Tabelow, Error quantification in multi-parameter mapping facilitates robust estimation and enhanced group level sensitivity, NeuroImage, 262 (2022), pp. 119529/1--119529/14, DOI 10.1016/j.neuroimage.2022.119529 .
    Abstract
    Multi-Parameter Mapping (MPM) is a comprehensive quantitative neuroimaging protocol that enables estimation of four physical parameters (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD, and magnetization transfer saturation MTsat) that are sensitive to microstructural tissue properties such as iron and myelin content. Their capability to reveal microstructural brain differences, however, is tightly bound to controlling random noise and artefacts (e.g., caused by head motion) in the signal. Here, we introduced a method to estimate the local error of PD, R1, and MTsat maps that captures both noise and artefacts on a routine basis without requiring additional data. To investigate the method's sensitivity to random noise, we calculated the model-based signal-to-noise ratio (mSNR) and showed in measurements and simulations that it correlated linearly with an experimental raw-image-based SNR map. We found that the mSNR varied with MPM protocols, magnetic field strength (3T vs. 7T), and MPM parameters: it halved from PD to R1 and decreased from PD to MTsat by a factor of 3--4. Exploring the artefact-sensitivity of the error maps, we generated robust MPM parameters using two successive acquisitions of each contrast and the acquisition-specific errors to down-weight erroneous regions. The resulting robust MPM parameters showed reduced variability at the group level as compared to their single-repeat or averaged counterparts. The error and mSNR maps may better inform power-calculations by accounting for local data quality variations across measurements. Code to compute the mSNR maps and robustly combined MPM maps is available in the open-source hMRI toolbox.

  • J.M. Oeschger, K. Tabelow, S. Mohammadi, Axisymmetric diffusion kurtosis imaging with Rician bias correction: A simulation study, Magnetic Resonance in Medicine, 89 (2023), pp. 787--799 (published online on 05.10.2022), DOI 10.1002/mrm.29474 .

  • N. Puchkin, V. Spokoiny, Structure-adaptive manifold estimation, Journal of Machine Learning Research (JMLR), 23 (2022), pp. 1--62.
    Abstract
    We consider a problem of manifold estimation from noisy observations. Many manifold learning procedures locally approximate a manifold by a weighted average over a small neighborhood. However, in the presence of large noise, the assigned weights become so corrupted that the averaged estimate shows very poor performance. We suggest a novel computationally efficient structure-adaptive procedure, which simultaneously reconstructs a smooth manifold and estimates projections of the point cloud onto this manifold. The proposed approach iteratively refines the weights on each step, using the structural information obtained at previous steps. After several iterations, we obtain nearly "oracle" weights, so that the final estimates are nearly efficient even in the presence of relatively large noise. In our theoretical study we establish tight lower and upper bounds proving asymptotic optimality of the method for manifold estimation under the Hausdorff loss. Our finite sample study confirms a very reasonable performance of the procedure in comparison with the other methods of manifold estimation.
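
    The local weighted averaging that the abstract takes as its starting point can be sketched on synthetic data; this is the naive baseline, with its curvature bias, and not the structure-adaptive procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Noisy samples of a circle, standing in for an unknown smooth manifold.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((500, 2))

# One step of plain Gaussian-kernel local averaging.
h = 0.2
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2 / (2.0 * h ** 2))
smoothed = w @ pts / w.sum(axis=1, keepdims=True)

# Averaging suppresses the noise but also shrinks the circle slightly inward,
# the kind of bias an adaptive choice of weights is meant to control.
orig_err = np.abs(np.linalg.norm(pts, axis=1) - 1.0).mean()
smoothed_err = np.abs(np.linalg.norm(smoothed, axis=1) - 1.0).mean()
print(orig_err, smoothed_err)
```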

  • F. Stonyakin, A. Gasnikov, P. Dvurechensky, A. Titov, M. Alkousa, Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle, Journal of Optimization Theory and Applications, 194 (2022), pp. 988--1013, DOI 10.1007/s10957-022-02062-7 .

  • D. Tiapkin, A. Gasnikov, P. Dvurechensky, Stochastic saddle-point optimization for the Wasserstein barycenter problem, Optimization Letters, 16 (2022), pp. 2145--2175, DOI 10.1007/s11590-021-01834-w .

  • CH. Bayer, S. Breneis, Markovian approximations of stochastic Volterra equations with the fractional kernel, Quantitative Finance, 23 (2023), pp. 53--70 (published online on 24.11.2022), DOI 10.1080/14697688.2022.2139193 .
    Abstract
    We consider rough stochastic volatility models where the variance process satisfies a stochastic Volterra equation with the fractional kernel, as in the rough Bergomi and the rough Heston model. In particular, the variance process is therefore not a Markov process or semimartingale, and has quite low Hölder regularity. In practice, simulating such rough processes thus often results in high computational cost. To remedy this, we study approximations of stochastic Volterra equations using an N-dimensional diffusion process defined as the solution to a system of ordinary stochastic differential equations. If the coefficients of the stochastic Volterra equation are Lipschitz continuous, we show that these approximations converge strongly with superpolynomial rate in N. Finally, we apply this approximation to compute the implied volatility smile of a European call option under the rough Bergomi and the rough Heston model.

  • CH. Bayer, D. Belomestny, P. Hager, P. Pigato, J.G.M. Schoenmakers, V. Spokoiny, Reinforced optimal control, Communications in Mathematical Sciences, 20 (2022), pp. 1951--1978, DOI 10.4310/CMS.2022.v20.n7.a7 .
    Abstract
    Least squares Monte Carlo methods are a popular numerical approximation method for solving stochastic control problems. Based on dynamic programming, their key feature is the approximation of the conditional expectation of future rewards by linear least squares regression. Hence, the choice of basis functions is crucial for the accuracy of the method. Earlier work by some of us [Belomestny, Schoenmakers, Spokoiny, Zharkynbay, Commun. Math. Sci., 18(1):109--121, 2020] proposes to reinforce the basis functions in the case of optimal stopping problems by already computed value functions for later times, thereby considerably improving the accuracy with limited additional computational cost. We extend the reinforced regression method to a general class of stochastic control problems, while considerably improving the method's efficiency, as demonstrated by substantial numerical examples as well as theoretical analysis.

  • CH. Bayer, M. Fukasawa, S. Nakahara, Short communication: On the weak convergence rate in the discretization of rough volatility models, SIAM Journal on Financial Mathematics, ISSN 1945-497X, 13 (2022), pp. SC66--SC73, DOI 10.1137/22M1482871 .

  • CH. Bayer, E.J. Hall, R. Tempone, Weak error rates for option pricing under linear rough volatility, International Journal of Theoretical and Applied Finance, 25 (2022), pp. 2250029/1--2250029/47, DOI 10.1142/S0219024922500297 .

  • CH. Bayer, J. Qiu, Y. Yao, Pricing options under rough volatility with backward SPDEs, SIAM Journal on Financial Mathematics, ISSN 1945-497X, 13 (2022), pp. 179--212, DOI 10.1137/20M1357639 .
    Abstract
    In this paper, we study the option pricing problems for rough volatility models. As the framework is non-Markovian, the value function for a European option is not deterministic; rather, it is random and satisfies a backward stochastic partial differential equation (BSPDE). The existence and uniqueness of weak solutions is proved for general nonlinear BSPDEs with unbounded random leading coefficients whose connections with certain forward-backward stochastic differential equations are derived as well. These BSPDEs are then used to approximate American option prices. A deep learning-based method is also investigated for the numerical approximations to such BSPDEs and associated non-Markovian pricing problems. Finally, the examples of rough Bergomi type are numerically computed for both European and American options.

  • P. Dvurechensky, D. Kamzolov, A. Lukashevich, S. Lee, E. Ordentlich, C.A. Uribe, A. Gasnikov, Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization, EURO Journal on Computational Optimization, 10 (2022), pp. 100045/1--100045/35, DOI 10.1016/j.ejco.2022.100045 .

  • P. Dvurechensky, K. Safin, S. Shtern, M. Staudigl, Generalized self-concordant analysis of Frank--Wolfe algorithms, Mathematical Programming. A Publication of the Mathematical Programming Society, 198 (2023), pp. 255--323 (published online on 29.01.2022), DOI 10.1007/s10107-022-01771-1 .
    Abstract
    Projection-free optimization via different variants of the Frank--Wolfe method has become one of the cornerstones of large scale optimization for machine learning and computational statistics. Numerous applications within these fields involve the minimization of functions with self-concordance-like properties. Such generalized self-concordant functions do not necessarily feature a Lipschitz continuous gradient, nor are they strongly convex, making them a challenging class of functions for first-order methods. Indeed, in a number of applications, such as inverse covariance estimation or distance-weighted discrimination problems in binary classification, the loss is given by a generalized self-concordant function having potentially unbounded curvature. For such problems projection-free minimization methods have no theoretical convergence guarantee. This paper closes this apparent gap in the literature by developing provably convergent Frank--Wolfe algorithms with standard O(1/k) convergence rate guarantees. Based on these new insights, we show how these sublinearly convergent methods can be accelerated to yield linearly convergent projection-free methods, by either relying on the availability of a local linear minimization oracle, or a suitable modification of the away-step Frank--Wolfe method.

  • P.K. Friz, J. Gatheral, R. Radoičić, Forests, cumulants, martingales, The Annals of Probability, 50 (2022), pp. 1418--1445, DOI 10.1214/21-AOP1560 .
    Abstract
    This work is concerned with forest and cumulant type expansions of general random variables on a filtered probability space. We establish a "broken exponential martingale" expansion that generalizes and unifies the exponentiation result of Alòs, Gatheral, and Radoičić and the cumulant recursion formula of Lacoin, Rhodes, and Vargas. Specifically, we exhibit the two previous results as lower dimensional projections of the same generalized forest expansion, subsequently related by forest reordering. Our approach also leads to sharp integrability conditions for validity of the cumulant formula, as required by many of our examples, including iterated stochastic integrals, Lévy area, Bessel processes, KPZ with smooth noise, Wiener-Itô chaos and "rough" stochastic (forward) variance models.

  • P.K. Friz, P. Hager, N. Tapia, Unified signature cumulants and generalized Magnus expansions, Forum of Mathematics. Sigma, 10 (2022), pp. e42/1--e42/60, DOI 10.1017/fms.2022.20 .
    Abstract
    The signature of a path can be described as its full non-commutative exponential. Following T. Lyons we regard its expectation, the expected signature, as path space analogue of the classical moment generating function. The logarithm thereof, taken in the tensor algebra, defines the signature cumulant. We establish a universal functional relation in a general semimartingale context. Our work exhibits the importance of Magnus expansions in the algorithmic problem of computing expected signature cumulants, and further offers a far-reaching generalization of recent results on characteristic exponents dubbed diamond and cumulant expansions; with motivation ranging from financial mathematics to statistical physics. From an affine process perspective, the functional relation may be interpreted as infinite-dimensional, non-commutative ("Hausdorff") variation of Riccati's equation. Many examples are given.

  • P.K. Friz, T. Klose, Precise Laplace asymptotics for singular stochastic PDEs: The case of 2D gPAM, Journal of Functional Analysis, 283 (2022), pp. 109446/1--109446/86, DOI 10.1016/j.jfa.2022.109446 .

  • P.K. Friz, B. Seeger, P. Zorin-Kranich, Besov rough path analysis, Journal of Differential Equations, 339 (2022), pp. 152--231, DOI 10.1016/j.jde.2022.08.008 .
    Abstract
    Rough path analysis is developed in the full Besov scale. This extends, and essentially concludes, an investigation started by Prömel and Trabs (2016) [49], further studied in a series of papers by Liu, Prömel and Teichmann. A new Besov sewing lemma, a real-analysis result of interest in its own right, plays a key role, and the flexibility in the choice of Besov parameters allows for the treatment of equations not available in the Hölder or variation settings. Important classes of stochastic processes fit in the present framework.

Beiträge zu Sammelwerken

  • R. Danabalan, M. Hintermüller, Th. Koprucki, K. Tabelow, MaRDI: Building research data infrastructures for mathematics and the mathematical sciences, in: Vol. 1 (2023): 1st Conference on Research Data Infrastructure (CoRDI) - Connecting Communities, Y. Sure-Vetter, C. Goble, eds., Proceedings of the Conference on Research Data Infrastructure, TIB Open Publishing, Hannover, published online on 07.09.2023, pp. 69/1--69/4, DOI 10.52825/cordi.v1i.397 .
    Abstract
    MaRDI is building a research data infrastructure for mathematics and beyond based on semantic technologies (metadata, ontologies, knowledge graphs) and data repositories. Focusing on algorithms, models and workflows, the MaRDI infrastructure will connect with other disciplines and NFDI consortia on data processing methods, solve real-world problems, and support mathematicians in research data management.

  • T. Boege, R. Fritze, Ch. Görgen, J. Hanselman, D. Iglezakis, L. Kastner, Th. Koprucki, T. Krause, Ch. Lehrenfeld, S. Polla, M. Reidelbach, Ch. Riedel, J. Saak, B. Schembera, K. Tabelow, M. Weber, Research-data management planning in the German mathematical community, Eur. Math. Soc. Mag., European Mathematical Society, published online on 21.09.2023, DOI 10.4171/MAG/152 .

  • F. Galarce Marín, K. Tabelow, J. Polzehl, Ch. Panagiotis, V. Vavourakis, I. Sack, A. Caiazzo, Assimilation of magnetic resonance elastography displacement data in brain tissues, in: 7th International Conference on Computational & Mathematical Biomedical Engineering (CMBE22), 27th -- 29th June, 2022, Milan, Italy, P. Nithiarasu, C. Vergara, eds., 2, CMBE, Cardiff, UK, 2022, pp. 648--651.

  • A. Beznosikov, P. Dvurechensky, A. Koloskova, V. Samokhin, S.U. Stich, A. Gasnikov, Decentralized local stochastic extra-gradient for variational inequalities, in: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, eds., 2022, pp. 38116--38133.

  • S. Chezhegov, A. Novitskii, A. Rogozin, S. Parsegov, P. Dvurechensky, A. Gasnikov, A general framework for distributed partitioned optimization, 9th IFAC Conference on Networked Systems NECSYS 2022, Zürich, Switzerland, July 5 - 7, 2022, 55 of IFAC-PapersOnLine, Elsevier, 2022, pp. 139--144, DOI 10.1016/j.ifacol.2022.07.249 .
    Abstract
    Distributed optimization is widely used in large-scale and privacy-preserving machine learning and in various distributed control and sensing systems. It is assumed that every agent in the network possesses a local objective function, and the nodes interact via a communication network. In the standard scenario, which is mostly studied in the literature, the local functions depend on a common set of variables, and the nodes therefore have to send the whole variable set at each communication round. In this work, we study a different problem statement, where each of the local functions held by the nodes depends only on some subset of the variables. Given a network, we build a general algorithm-independent framework for distributed partitioned optimization that allows one to construct algorithms with reduced communication load using a generalization of the Laplacian matrix. Moreover, our framework allows one to obtain algorithms with non-asymptotic convergence rates with explicit dependence on the parameters of the network, including accelerated and optimal first-order methods. We illustrate the efficacy of our approach on a synthetic example.
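    The role the graph Laplacian plays in such decentralized methods can be illustrated with a toy gossip iteration: stepping each node's value against the Laplacian drives the network to consensus at a rate set by the spectral gap. This is only a generic sketch of the standard Laplacian mechanism; the paper's partitioned generalization is not reproduced here, and the ring topology and step size below are ad-hoc choices.

    ```python
    import numpy as np

    # Ring of 4 agents: adjacency matrix and graph Laplacian L = D - A
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A

    x = np.array([4.0, 0.0, -2.0, 6.0])   # local values held by the agents
    avg = x.mean()                        # consensus target
    eta = 0.25                            # step size, below 2 / lambda_max(L)
    for _ in range(100):
        x = x - eta * L @ x               # each agent mixes with its neighbors
    # x is now (numerically) the constant consensus vector of value avg
    ```

    Because the all-ones vector spans the kernel of L, the average is preserved at every step, and all other eigenmodes contract geometrically.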

  • A. Gasnikov, A. Novitskii, V. Novitskii, F. Abdukhakimov, D. Kamzolov, A. Beznosikov, M. Takáč, P. Dvurechensky, B. Gu, The power of first-order smooth optimization for black-box non-smooth problems, in: Proceedings of the 39th International Conference on Machine Learning, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato, eds., 162 of Proceedings of Machine Learning Research, 2022, pp. 7241--7265.

  • E. Gorbunov, M. Danilova, D. Dobre, P. Dvurechensky, A. Gasnikov, G. Gidel, Clipped stochastic methods for variational inequalities with heavy-tailed noise, in: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, eds., 2022, pp. 31319--31332.

  • D. Yarmoshik, A. Rogozin, O.O. Khamisov, P. Dvurechensky, A. Gasnikov, Decentralized convex optimization under affine constraints for power systems control, in: Mathematical Optimization Theory and Operations Research. MOTOR 2022, P. Pardalos, M. Khachay, V. Mazalov, eds., 13367 of Lecture Notes in Computer Science, Springer, Cham, 2022, pp. 62--75, DOI 10.1007/978-3-031-09607-5_5 .

Preprints, Reports, Technical Reports

  • J.A. Dekker, R.J.A. Laeven, J.G.M. Schoenmakers, M.H. Vellekoop, Optimal stopping with randomly arriving opportunities to stop, Preprint no. 3056, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3056 .
    Abstract, PDF (701 kByte)
    We develop methods to solve general optimal stopping problems with opportunities to stop that arrive randomly. Such problems occur naturally in applications with market frictions. Pivotal to our approach is that our methods operate on random rather than deterministic time scales. This enables us to convert the original problem into an equivalent discrete-time optimal stopping problem with natural number valued stopping times and a possibly infinite horizon. To numerically solve this problem, we design a random times least squares Monte Carlo method. We also analyze an iterative policy improvement procedure in this setting. We illustrate the efficiency of our methods and the relevance of randomly arriving opportunities in a few examples.
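    A minimal sketch of the "random times" least squares Monte Carlo idea is given below, under assumed toy dynamics that are not taken from the preprint: a geometric Brownian motion with a put payoff, where stopping is only allowed at Poisson arrival times. Indexing the arrivals converts the problem into a discrete-time one with natural-number-valued stopping times, solved by backward regression; the preprint's actual scheme and its policy improvement step are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, r, sigma, S0, strike = 1.0, 0.05, 0.2, 100.0, 100.0
    lam = 10.0                    # arrival intensity of stopping opportunities
    n_paths, n_max = 5000, 40     # simulated paths / cap on number of arrivals

    # Poisson arrival times as cumulative exponential gaps; "alive" marks
    # opportunities that arrive before maturity
    gaps = rng.exponential(1 / lam, size=(n_paths, n_max))
    times = np.cumsum(gaps, axis=1)
    alive = times < T

    # Sample the GBM exactly at the (random) arrival times
    Z = rng.standard_normal((n_paths, n_max))
    S = S0 * np.exp(np.cumsum((r - sigma**2 / 2) * gaps
                              + sigma * np.sqrt(gaps) * Z, axis=1))
    payoff = np.maximum(strike - S, 0.0)

    # Backward induction over the opportunity index: regress the discounted
    # value-to-go on a polynomial basis, stop when the payoff beats it
    cash = np.where(alive[:, -1], payoff[:, -1], 0.0)
    t_stop = np.where(alive[:, -1], times[:, -1], T)
    for k in range(n_max - 2, -1, -1):
        m = alive[:, k]
        if m.sum() < 10:          # too few paths reach this opportunity
            continue
        disc_cont = np.exp(-r * (t_stop - times[:, k])) * cash
        basis = np.vander(S[:, k] / S0, 4)       # cubic polynomial basis
        coef, *_ = np.linalg.lstsq(basis[m], disc_cont[m], rcond=None)
        stop = m & (payoff[:, k] > basis @ coef)
        cash = np.where(stop, payoff[:, k], cash)
        t_stop = np.where(stop, times[:, k], t_stop)

    price = np.mean(np.exp(-r * t_stop) * cash)  # discounted value at time 0
    ```

    The resulting estimate lies between the European and the American put value, reflecting the friction that stopping is only possible when an opportunity arrives.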

  • CH. Bayer, S. Breneis, Efficient option pricing in the rough Heston model using weak simulation schemes, Preprint no. 3045, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3045 .
    Abstract, PDF (569 kByte)
    We provide an efficient and accurate simulation scheme for the rough Heston model in the standard ($H>0$) as well as the hyper-rough ($H > -1/2$) regime. The scheme is based on low-dimensional Markovian approximations of the rough Heston process derived in [Bayer and Breneis, arXiv:2309.07023], and provides a weak approximation to the rough Heston process. Numerical experiments show that the new scheme exhibits second-order weak convergence, while the computational cost increases linearly with respect to the number of time steps. In comparison, existing schemes based on discretization of the underlying stochastic Volterra integrals, such as Gatheral's HQE scheme, show a quadratic dependence of the computational cost. Extensive numerical tests for standard and path-dependent European options and Bermudan options show the method's accuracy and efficiency.

  • CH. Bayer, S. Breneis, Weak Markovian approximations of rough Heston, Preprint no. 3044, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3044 .
    Abstract, PDF (834 kByte)
    The rough Heston model is a very popular recent model in mathematical finance; however, the lack of Markov and semimartingale properties poses significant challenges in both theory and practice. A way to resolve this problem is to use Markovian approximations of the model. Several previous works have shown that these approximations can be very accurate even when the number of additional factors is very low. Existing error analysis is largely based on the strong error, corresponding to the L2 distance between the kernels. Extending earlier results by [Abi Jaber and El Euch, SIAM Journal on Financial Mathematics 10(2):309--349, 2019], we show that the weak error of the Markovian approximations can be bounded using the L1-error in the kernel approximation for general classes of payoff functions for European style options. Moreover, we give specific Markovian approximations which converge super-polynomially in the number of dimensions, and illustrate their numerical superiority in option pricing compared to previously existing approximations. The new approximations also work for the hyper-rough case H > -1/2.
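    The kernel approximation behind such Markovian schemes can be sketched numerically: the fractional kernel K(t) = t^(H-1/2)/Γ(H+1/2) is replaced by a sum of exponentials, and the quality is measured in the L1 norm that controls the weak error. The geometric node grid and midpoint weights below are an ad-hoc illustrative choice, not the preprint's super-polynomially convergent quadrature.

    ```python
    import numpy as np
    from math import gamma

    H, T = 0.1, 1.0   # Hurst parameter and time horizon (assumed values)

    def kernel(t):
        # fractional kernel of the rough Heston model
        return t ** (H - 0.5) / gamma(H + 0.5)

    # Laplace representation K(t) = int_0^inf e^{-x t} c x^{-H-1/2} dx with
    # c = 1 / (Gamma(H+1/2) Gamma(1/2-H)); discretize the measure cell-wise
    # on a geometric grid (midpoint rule), giving K(t) ~ sum_i w_i e^{-x_i t).
    c = 1.0 / (gamma(H + 0.5) * gamma(0.5 - H))
    edges = np.geomspace(1e-3, 1e3, 41)        # 40 geometric cells
    nodes = np.sqrt(edges[:-1] * edges[1:])    # cell midpoints x_i
    weights = c * nodes ** (-H - 0.5) * np.diff(edges)

    # L1 distance between kernel and approximation on [0, T] (Riemann sum)
    ts = np.linspace(1e-4, T, 2000)
    approx = (weights * np.exp(-np.outer(ts, nodes))).sum(axis=1)
    err_L1 = np.sum(np.abs(kernel(ts) - approx)) * (ts[1] - ts[0])
    ```

    Even this crude 40-node rule tracks the singular kernel closely away from zero; refining the grid or optimizing nodes and weights shrinks the L1 error, and with it the weak error bound.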

  • P. Bank, Ch. Bayer, P. Friz, L. Pelizzari, Rough PDEs for local stochastic volatility models, Preprint no. 3034, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3034 .
    Abstract, PDF (575 kByte)
    In this work, we introduce a novel pricing methodology in general, possibly non-Markovian local stochastic volatility (LSV) models. We observe that by conditioning the LSV dynamics on the Brownian motion that drives the volatility, one obtains a time-inhomogeneous Markov process. Using tools from rough path theory, we describe how to precisely understand the conditional LSV dynamics and reveal their Markovian nature. The latter allows us to connect the conditional dynamics to so-called rough partial differential equations (RPDEs), through a Feynman-Kac type of formula. In terms of European pricing, conditional on realizations of one Brownian motion, we can compute conditional option prices by solving the corresponding linear RPDEs, and then average over all samples to find unconditional prices. Our approach depends only minimally on the specification of the volatility, making it applicable for a wide range of classical and rough LSV models, and it establishes a PDE pricing method for non-Markovian models. Finally, we present a first glimpse at numerical methods for RPDEs and apply them to price European options in several rough LSV models.

  • P. Dvurechensky, J.-J. Zhu, Kernel mirror prox and RKHS gradient flow for mixed functional Nash equilibrium, Preprint no. 3032, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3032 .
    Abstract, PDF (436 kByte)
    The theoretical analysis of machine learning algorithms, such as deep generative modeling, motivates multiple recent works on the Mixed Nash Equilibrium (MNE) problem. Different from MNE, this paper formulates the Mixed Functional Nash Equilibrium (MFNE), which replaces one of the measure optimization problems with optimization over a class of dual functions, e.g., the reproducing kernel Hilbert space (RKHS) in the case of Mixed Kernel Nash Equilibrium (MKNE). We show that our MFNE and MKNE framework form the backbones that govern several existing machine learning algorithms, such as implicit generative models, distributionally robust optimization (DRO), and Wasserstein barycenters. To model the infinite-dimensional continuous-limit optimization dynamics, we propose the Interacting Wasserstein-Kernel Gradient Flow, which includes the RKHS flow that is much less common than the Wasserstein gradient flow but enjoys a much simpler convexity structure. Time-discretizing this gradient flow, we propose a primal-dual kernel mirror prox algorithm, which alternates between a dual step in the RKHS and a primal step in the space of probability measures. We then provide the first unified convergence analysis of our algorithm for this class of MKNE problems, which establishes a convergence rate of O(1/N) in the deterministic case and O(1/√N) in the stochastic case. As a case study, we apply our analysis to DRO, providing the first primal-dual convergence analysis for DRO with probability-metric constraints.

  • CH. Bayer, S. Breneis, T. Lyons, An adaptive algorithm for rough differential equations, Preprint no. 3013, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3013 .
    Abstract, PDF (838 kByte)
    We present an adaptive algorithm for effectively solving rough differential equations (RDEs) using the log-ODE method. The algorithm is based on an error representation formula that accurately describes the contribution of local errors to the global error. By incorporating a cost model, our algorithm efficiently determines whether to refine the time grid or increase the order of the log-ODE method. In addition, we provide several examples that demonstrate the effectiveness of our adaptive algorithm in solving RDEs.

  • J. Geuter, V. Laschos, Generative adversarial learning of Sinkhorn algorithm initializations, Preprint no. 2978, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2978 .
    Abstract, PDF (790 kByte)
    The Sinkhorn algorithm [Cut13] is the state-of-the-art method for computing approximations of optimal transport distances between discrete probability distributions, making use of an entropically regularized formulation of the problem. The algorithm is guaranteed to converge, no matter its initialization. This has led to little attention being paid to initializing it, and simple starting vectors like the n-dimensional one-vector are common choices. We train a neural network to compute initializations for the algorithm, which significantly outperform standard initializations. The network predicts a potential of the optimal transport dual problem, where training is conducted in an adversarial fashion using a second, generating network. The network is universal in the sense that it is able to generalize to any pair of distributions of fixed dimension. Furthermore, we show that for certain applications the network can be used independently.
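    The standard Sinkhorn iteration, and the place where a learned initialization would enter, can be sketched in a few lines. This is only the textbook algorithm with an optional starting vector u0 (the all-ones default is the common choice mentioned in the abstract); the preprint's network-predicted initialization is not reproduced.

    ```python
    import numpy as np

    def sinkhorn(a, b, C, eps, u0=None, n_iter=500):
        """Entropically regularized OT between discrete distributions a, b.

        C is the cost matrix, eps the regularization strength, u0 an optional
        initialization of the first scaling vector (default: all ones).
        """
        K = np.exp(-C / eps)                 # Gibbs kernel
        u = np.ones_like(a) if u0 is None else u0
        for _ in range(n_iter):
            v = b / (K.T @ u)                # alternate marginal projections
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]      # transport plan
        return P, np.sum(P * C)              # plan and regularized OT cost

    # Uniform marginals on 3 points with squared-distance cost
    a = np.full(3, 1 / 3)
    b = np.full(3, 1 / 3)
    x = np.arange(3.0)
    C = (x[:, None] - x[None, :]) ** 2
    P, cost = sinkhorn(a, b, C, eps=0.1)
    ```

    A good u0 shortens the iteration without changing the fixed point, which is why learning initializations pays off even though convergence is guaranteed from any start.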

  • A. Afsardeir, A. Kapetanis, V. Laschos, K. Obermayer, Risk-sensitive partially observable Markov decision processes as fully observable multivariate utility optimization problems, Preprint no. 2977, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2977 .
    Abstract, PDF (521 kByte)
    We provide a new algorithm for solving Risk-Sensitive Partially Observable Markov Decision Processes, when the risk is modeled by a utility function, and both the state space and the space of observations are finite. This algorithm is based on the observation that the change of measure and the subsequent introduction of the information space, which is used for exponential utility functions, can actually be extended to sums of exponentials if one introduces an extra vector parameter that tracks the expected accumulated cost that corresponds to each exponential. Since every increasing function can be approximated by sums of exponentials on finite intervals, the method can essentially be applied for any utility function, with its complexity depending on the number

  • V. Laschos, A. Mielke, Evolutionary variational inequalities on the Hellinger--Kantorovich and spherical Hellinger--Kantorovich spaces, Preprint no. 2973, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2973 .
    Abstract, PDF (491 kByte)
    We study the minimizing movement scheme for families of geodesically semiconvex functionals defined on either the Hellinger--Kantorovich or the Spherical Hellinger--Kantorovich space. By exploiting some of the finer geometric properties of those spaces, we prove that the sequence of curves, which are produced by geodesically interpolating the points generated by the minimizing movement scheme, converges to curves that satisfy the Evolutionary Variational Inequality (EVI), when the time step goes to 0.

  • CH. Bayer, Ch. Ben Hammouda, A. Papapantoleon, M. Samet, R. Tempone, Optimal damping with hierarchical adaptive quadrature for efficient Fourier pricing of multi-asset options in Lévy models, Preprint no. 2968, WIAS, Berlin, 2022.
    Abstract, PDF (1686 kByte)
    Efficient pricing of multi-asset options is a challenging problem in quantitative finance. When the characteristic function is available, Fourier-based methods become competitive compared to alternative techniques because the integrand in the frequency space often has higher regularity than in the physical space. However, when designing a numerical quadrature method for most of these Fourier pricing approaches, two key aspects affecting the numerical complexity should be carefully considered: (i) the choice of the damping parameters that ensure integrability and control the regularity class of the integrand and (ii) the effective treatment of the high dimensionality. To address these challenges, we propose an efficient numerical method for pricing European multi-asset options based on two complementary ideas. First, we smooth the Fourier integrand via an optimized choice of damping parameters based on a proposed heuristic optimization rule. Second, we use sparsification and dimension-adaptivity techniques to accelerate the convergence of the quadrature in high dimensions. Our extensive numerical study on basket and rainbow options under the multivariate geometric Brownian motion and some Lévy models demonstrates the advantages of adaptivity and our damping rule on the numerical complexity of the quadrature methods. Moreover, our approach achieves substantial computational gains compared to the Monte Carlo method.
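    The role of the damping parameter is easiest to see in one dimension. The sketch below prices a European call under Black--Scholes via the classical damped Fourier (Carr--Madan) representation, where alpha makes the Fourier integrand integrable; the multi-asset setting and the preprint's optimized damping rule are not reproduced, and alpha = 1.5 is an ad-hoc choice.

    ```python
    import numpy as np
    from math import log, exp, sqrt, erf, pi

    S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0   # assumed market parameters
    alpha = 1.5                                # damping parameter

    def phi(u):
        # characteristic function of log S_T under geometric Brownian motion
        mu = log(S0) + (r - sigma**2 / 2) * T
        return np.exp(1j * u * mu - sigma**2 * u**2 * T / 2)

    def call_fourier(K, n=4096, vmax=200.0):
        # Carr--Madan: C(k) = e^{-alpha k}/pi * int_0^inf Re[e^{-ivk} psi(v)] dv
        k = log(K)
        v = np.linspace(0.0, vmax, n)
        psi = exp(-r * T) * phi(v - (alpha + 1) * 1j) \
            / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
        f = np.real(np.exp(-1j * v * k) * psi)
        integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(v))  # trapezoid rule
        return exp(-alpha * k) / pi * integral

    def call_bs(K):
        # closed-form Black--Scholes reference price
        d1 = (log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        N = lambda y: 0.5 * (1 + erf(y / sqrt(2)))
        return S0 * N(d1) - K * exp(-r * T) * N(d2)
    ```

    Without the damping factor the integrand is not integrable at v = 0; the choice of alpha shifts the contour and thereby trades off singularity removal against the decay and oscillation of the integrand, which is precisely what the preprint's optimization rule tunes.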

  • D. Belomestny, J.G.M. Schoenmakers, Primal-dual regression approach for Markov decision processes with general state and action space, Preprint no. 2957, WIAS, Berlin, 2022.
    Abstract, PDF (454 kByte)
    We develop a regression-based primal-dual martingale approach for solving finite time horizon MDPs with general state and action space. As a result, our method allows for the construction of tight upper and lower biased approximations of the value functions and provides tight approximations to the optimal policy. In particular, we prove tight error bounds for the estimated duality gap featuring polynomial dependence on the time horizon and sublinear dependence on the cardinality/dimension of the possibly infinite state and action space. From a computational point of view the proposed method is efficient since, in contrast to usual duality-based methods for optimal control problems in the literature, the Monte Carlo procedures here involved do not require nested simulations.

  • F. Besold, V. Spokoiny, Adaptive weights community detection, Preprint no. 2951, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2951 .
    Abstract, PDF (628 kByte)
    Due to the technological progress of the last decades, Community Detection has become a major topic in machine learning. However, there is still a huge gap between practical and theoretical results, as theoretically optimal procedures often lack a feasible implementation and vice versa. This paper aims to close this gap and presents a novel algorithm that is both numerically and statistically efficient. Our procedure uses a test of homogeneity to compute adaptive weights describing local communities. The approach was inspired by the Adaptive Weights Community Detection (AWCD) algorithm by [2]. This algorithm delivered some promising results on artificial and real-life data, but our theoretical analysis reveals its performance to be suboptimal on a stochastic block model. In particular, the involved estimators are biased and the procedure does not work for sparse graphs. We propose significant modifications, addressing both shortcomings and achieving a nearly optimal rate of strong consistency on the stochastic block model. Our theoretical results are illustrated and validated by numerical experiments.

  • M.G. Varzaneh, S. Riedel, A. Schmeding, N. Tapia, The geometry of controlled rough paths, Preprint no. 2926, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2926 .
    Abstract, PDF (472 kByte)
    We prove that the spaces of controlled (branched) rough paths of arbitrary order form a continuous field of Banach spaces. This structure has many similarities to an (infinite-dimensional) vector bundle and allows to define a topology on the total space, the collection of all controlled path spaces, which turns out to be Polish in the geometric case. The construction is intrinsic and based on a new approximation result for controlled rough paths. This framework turns well-known maps such as the rough integration map and the Itô-Lyons map into continuous (structure preserving) mappings. Moreover, it is compatible with previous constructions of interest in the stability theory for rough integration.

  • CH. Bayer, D. Belomestny, O. Butkovsky, J.G.M. Schoenmakers, RKHS regularization of singular local stochastic volatility McKean--Vlasov models, Preprint no. 2921, WIAS, Berlin, 2022, DOI 10.20347/WIAS.PREPRINT.2921 .
    Abstract, PDF (504 kByte)
    Motivated by the challenges related to the calibration of financial models, we consider the problem of solving numerically a singular McKean-Vlasov equation, which represents a singular local stochastic volatility model. Whilst such models are quite popular among practitioners, unfortunately, their well-posedness has not yet been fully understood and, in general, is possibly not guaranteed at all. We develop a novel regularization approach based on the reproducing kernel Hilbert space (RKHS) technique and show that the regularized model is well-posed. Furthermore, we prove propagation of chaos. We demonstrate numerically that a thus regularized model is able to perfectly replicate option prices produced by typical local volatility models. Our results are also applicable to more general McKean--Vlasov equations.

Vorträge, Poster

  • A. Shehu, MaRDI: The mathematical research data initiative, 2023 NFDI4DS Conference and Consortium Meeting, Fraunhofer FOKUS, November 10, 2023.

  • S. Breneis, American options under rough Heston, 11th General AMaMeF Conference, June 28 - 30, 2023, Universität Bielefeld, Fakultät für Mathematik, June 30, 2023.

  • S. Breneis, American options under rough Heston, 11th General AMaMeF Conference, June 26 - 30, 2023, Universität Bielefeld, Center for Mathematical Economics, June 26, 2023.

  • S. Breneis, Path-dependent options under rough Heston, 4th Workshop on Stochastic Methods in Finance and Physics, Heraklion, Kreta, Greece, July 17 - 21, 2023.

  • S. Breneis, Pricing American Options under rough Heston, Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2023, Thuwal, Saudi Arabia, May 21 - June 1, 2023.

  • S. Breneis, Pricing path-dependent options under rough Heston, CDT-IRTG Summer School 2023, September 3 - 8, 2023, Templin, September 3, 2023.

  • S. Breneis, Pricing path-dependent options under rough Heston, 18. Doktorand:innentreffen der Stochastik 2023, August 21 - 23, 2023, Universität Heidelberg, Fachbereich Mathematik, August 23, 2023.

  • S. Breneis, Weak Markovian approximations of rough Heston, 17th Oxford-Berlin Young Researcher's Meeting on Applied Stochastic Analysis, April 27 - 29, 2023, WIAS & TU Berlin, April 27, 2023.

  • O. Butkovsky, Regularization by noise for SDEs and SPDEs beyond the Brownian case, Probability Seminar, Université Paris-Saclay, CentraleSupélec, France, May 25, 2023.

  • O. Butkovsky, Stochastic equations with singular drift driven by fractional Brownian motion, 17th Oxford-Berlin Young Researcher's Meeting on Applied Stochastic Analysis, April 27 - 29, 2023, WIAS & TU Berlin, April 28, 2023.

  • O. Butkovsky, Stochastic equations with singular drift driven by fractional Brownian motion, 43rd Conference on Stochastic Processes and their Applications, July 24 - 28, 2023, Bernoulli Society, Portugal, July 25, 2023.

  • O. Butkovsky, Stochastic equations with singular drift driven by fractional Brownian motion (online talk), Non-local operators, probability and singularities, researchseminars.org (online event), April 4, 2023.

  • O. Butkovsky, Stochastic sewing, John-Nirenberg inequality, and taming singularities for regularization by noise: A very practical guide, SDEs with low regularity coefficients: Theory and numerics, September 20 - 22, 2023, University of Torino, Department of Mathematics, Italy, September 22, 2023.

  • O. Butkovsky, Strong rate of convergence of the Euler scheme for SDEs with irregular drift driven by Lévy noise, 14th Conference on Monte Carlo Methods and Applications, June 26 - 30, 2023, Sorbonne University, Paris, France, June 29, 2023.

  • L. Pelizzari, Primal-dual optimal stopping with signatures, Stochastic Numerics and Statistical Learning: Theory and Applications 2023 Workshop, Thuwal, Saudi Arabia, May 26 - June 1, 2023.

  • L. Pelizzari, Rough PDEs and local stochastic volatility, Volatility is rough, Isle of Skye Workshop, May 21 - 25, 2023, Sabhal Mòr Ostaig, Sleat, Isle of Skye, UK, May 25, 2023.

  • L. Pelizzari, Rough PDEs for local stochastic volatility models, 17th Oxford-Berlin Young Researcher's Meeting on Applied Stochastic Analysis, April 27 - 29, 2023, WIAS & TU Berlin, April 27, 2023.

  • N. Tapia, Branched Itô formula, SFI: Structural Aspects of Signatures and Rough Paths, August 28 - September 1, 2023, The Norwegian Academy of Science, Centre for Advanced Study (CAS), Oslo, Norway, August 31, 2023.

  • N. Tapia, Stability of deep neural networks via discrete rough paths, Oxford Stochastic Analysis and Mathematical Finance Seminar, University of Oxford, Mathematical Institute, UK, February 13, 2023.

  • A. Kroshnin, Sobolev space of measure-valued functions, Variational and Information Flows in Machine Learning and Optimal Transport, November 19 - 24, 2023, Mathematisches Forschungsinstitut Oberwolfach, November 20, 2023.

  • CH. Bayer, D. Kreher, M. Landstorfer, W. Kenmoe Nzali, Volatile electricity markets and battery storage: A model-based approach for optimal control, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023.

  • CH. Bayer, P. Friz, J.G.M. Schoenmakers, V. Spokoiny, N. Tapia, L. Pelizzari, Optimal control in energy markets using rough analysis and deep networks, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023.

  • CH. Bayer, Markovian approximations to rough volatility models, Volatility is rough, Isle of Skye Workshop, May 21 - 25, 2023, Sabhal Mòr Ostaig, Sleat, Isle of Skye, UK, May 25, 2023.

  • CH. Bayer, Markovian approximations to rough volatility models, Stochastics around Finance, August 28 - 30, 2023, Kanazawa University, Natural Science and Technology, Kanazawa, Japan, August 28, 2023.

  • CH. Bayer, Markovian approximations to rough volatility models, Heriot-Watt University, Mathematical Institute, Edinburgh, UK, November 15, 2023.

  • CH. Bayer, Optimal stopping with signatures, Probabilistic methods, Signatures, Cubature and Geometry, January 9 - 11, 2023, University of York, Department of Mathematics, UK, January 9, 2023.

  • CH. Bayer, Optimal stopping with signatures, Quantitative Finance Conference, April 12 - 15, 2023, University of Cambridge, Centre for Financial Research, UK, April 13, 2023.

  • CH. Bayer, Optimal stopping with signatures, 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023), Minisymposium 00322 ``Methodological Advancement in Rough Paths and Data Science'', August 20 - 25, 2023, Waseda University, Tokyo, Japan, August 24, 2023.

  • CH. Bayer, Optimal stopping with signatures, Workshop on Stochastic Control Theory, October 25 - 26, 2023, KTH Royal Institute of Technology, Department of Mathematics, Stockholm, Sweden, October 26, 2023.

  • CH. Bayer, Optimal stopping with signatures, University of Dundee, School of Science and Engineering, UK, November 13, 2023.

  • CH. Bayer, Optimal stopping with signatures (online talk), North British Probability Seminar, University of Edinburgh, UK, March 29, 2023.

  • CH. Bayer, Rough PDEs for local stochastic volatility models, Rough Volatility Workshop, November 21 - 22, 2023, Sorbonne Université, Institut Henri Poincaré, Paris, France.

  • CH. Bayer, Signatures and applications, 4th Workshop on Stochastic Methods in Finance and Physics, July 17 - 21, 2023, Institute of Applied and Computational Mathematics (IACM), Heraklion, Kreta, Greece.

  • CH. Bayer, Non-Markovian models in finance, Stochastic Numerics and Statistical Learning: Theory and Applications 2023 Workshop, May 26 - June 1, 2023, King Abdullah University of Science and Technology, Computer, Electrical and Mathematical Sciences, Thuwal, Saudi Arabia, May 27, 2023.

  • C. Cárcamo Sanchez, F. Galarce Marín, A. Caiazzo, I. Sack, K. Tabelow, Quantitative tissue pressure imaging via PDE-informed assimilation of MR-data, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023.

  • P. Dvurechensky, C. Geiersbach, M. Hintermüller, A. Kannan, S. Kater, Equilibria for distributed multi-modal energy systems under uncertainty, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023.

  • P. Dvurechensky, Decentralized local stochastic extra-gradient for variational inequalities, TES Conference on Mathematical Optimization for Machine Learning, September 13 - 15, 2023, Mathematics Research Cluster MATH+, Berlin, September 14, 2023.

  • P. Dvurechensky, Decentralized local stochastic extra-gradient for variational inequalities, European Conference on Computational Optimization (EUCCO), September 25 - 27, 2023, Heidelberg University, Scientific Computing and Optimization, September 25, 2023.

  • P. Dvurechensky, Hessian barrier algorithms for non-convex conic optimization, 20th Workshop on Advances in Continuous Optimization, August 22 - 25, 2023, Corvinus University, Institute of Mathematical Statistics and Modelling, Budapest, August 25, 2023.

  • P.K. Friz, On rough stochastic differential equations, SPDEs, optimal control and mean field games - analysis, numerics and applications, July 10 - 14, 2023, Universität Bielefeld, Center for Interdisciplinary Research (ZiF), July 11, 2023.

  • A. Kroshnin, Robust k-means clustering in metric spaces, Workshop on Statistics in Metric Spaces, October 11 - 13, 2023, Center for Research in Economics and Statistics (CREST), UMR 9194, Palaiseau, France, October 12, 2023.

  • A. Kroshnin, Entropic Wasserstein barycenters, Interpolation of Measures, January 24 - 25, 2023, Lagrange Mathematics and Computation Research Center, Huawei, Paris, France.

  • J.G.M. Schoenmakers, Optimal stopping with randomly arriving opportunities, Stochastische Analysis und Stochastik der Finanzmärkte, Humboldt-Universität zu Berlin, Institut für Mathematik, November 23, 2023.

  • J.G.M. Schoenmakers, Primal-dual regression approach for Markov decision processes with general state and action spaces, SPDEs, optimal control and mean field games - analysis, numerics and applications, July 11 - 14, 2023, Universität Bielefeld, Center for Interdisciplinary Research (ZiF), July 12, 2023.

  • V. Spokoiny, Bayesian inference for complex models, MIA 2023 Mathematics and Image Analysis, February 1 - 3, 2023, Humboldt Universität zu Berlin, TU Berlin und WIAS Berlin, February 3, 2023.

  • V. Spokoiny, Bayesian inference using mixed Laplace approximation with applications to error-in-operator models, New York University, Courant Institute of Mathematical Sciences and Center for Data Science, USA, October 3, 2023.

  • V. Spokoiny, Estimation and inference for error-in-operator model, Mathematics in Armenia: Advances and Perspectives, July 2 - 8, 2023, Yerevan State University and National Academy of Sciences, Institute of Mathematics, Yerevan, Armenia, July 3, 2023.

  • V. Spokoiny, Estimation and inference for error-in-operator model, Lecture Series Trends in Statistics, National University of Singapore, Department of Mathematics, Singapore, August 25, 2023.

  • V. Spokoiny, Estimation and inference for error-in-operator model, Massachusetts Institute of Technology, Department of Mathematics, Cambridge, USA, September 29, 2023.

  • V. Spokoiny, Inference in error-in-operator model, Tel Aviv University, Department of Statistics, Israel, March 30, 2023.

  • V. Spokoiny, Marginal Laplace approximation and Gaussian mixtures, Optimization and Statistical Learning, OSL2023, January 15 - 20, 2023, Les Houches School of Physics, France, January 17, 2023.

  • K. Tabelow, MaRDI: Building research data infrastructures for mathematics and the mathematical sciences, 1st Conference on Research Data Infrastructure (CoRDI), September 12 - 14, 2023, Karlsruhe Institute of Technology (KIT), September 12, 2023.

  • K. Tabelow, Mathematical research data management in interdisciplinary research, Workshop on Biophysics-based modeling and data assimilation in medical imaging (hybrid event), WIAS Berlin, August 31, 2023.

  • J.-J. Zhu, From gradient flow force-balance to robust machine learning, Basque Center for Applied Mathematics, Bilbao, Spain, October 31, 2023.

  • S. Breneis, An error representation formula for the log-ode method, 15th Berlin-Oxford Young Researcher's Meeting on Applied Stochastic Analysis, May 12 - 14, 2022, WIAS & TU Berlin, May 14, 2022.

  • S. Breneis, An error representation formula for the log-ode method, 16th Oxford-Berlin Young Researcher's Meeting on Applied Stochastic Analysis, December 8 - 10, 2022, University of Oxford, UK, December 9, 2022.

  • S. Breneis, Markovian approximations for rough volatility models, Seminar Stochastic Numerics Research Group, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia, January 26, 2022.

  • S. Breneis, Markovian approximations of rough volatility models, Mathematics of Random Systems Summer School 2022, September 25 - 30, 2022, University of Oxford, St Hugh's College, UK, September 29, 2022.

  • O. Butkovsky, Regularisation by noise for SDEs: State of the art & open problems, Mini-Workshop ``Regularization by Noise: Theoretical Foundations, Numerical Methods and Applications driven by Lévy Noise'', February 13 - 19, 2022, Mathematisches Forschungsinstitut Oberwolfach, February 16, 2022.

  • O. Butkovsky, Regularization by noise for $L_p$ drifts: The case for Burkholder--Rosenthal stochastic sewing, Stochastic & Rough Analysis, August 22 - 26, 2022, Harnack House, August 23, 2022.

  • O. Butkovsky, Regularization by noise for SDEs and SPDEs beyond the Brownian case, Open Japanese-German Conference on Stochastic Analysis and Applications, September 19 - 23, 2022, Westfälische Wilhelms-Universität Münster, September 19, 2022.

  • O. Butkovsky, Regularization by noise for SDEs and SPDEs beyond the Brownian case, Oberseminar Analysis - Probability, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Fakultät für Mathematik und Informatik, Leipzig, November 1, 2022.

  • O. Butkovsky, Regularization by noise for SDEs and SPDEs beyond the Brownian case (online talk), Webinar on Stochastic Analysis 2022 (Online Event), Beijing Institute of Technology, School of Mathematics and Statistics, China, November 8, 2022.

  • O. Butkovsky, Strong rate of convergence of the Euler scheme for SDEs with irregular drift driven by Lévy noise, 15th Berlin-Oxford Young Researcher's Meeting on Applied Stochastic Analysis, May 12 - 14, 2022, WIAS & TU Berlin, May 12, 2022.

  • O. Butkovsky, Strong rate of convergence of the Euler scheme for SDEs with irregular drift driven by Lévy noise, Numerical Analysis and Applications of SDEA, September 25 - October 1, 2022, Banach Center, Bedlewo, Poland, September 28, 2022.

  • O. Butkovsky, Weak and mild solutions of SPDEs with distributional drift (online talk), 42nd Conference on Stochastic Processes and their Applications (Online Event), June 27 - July 1, 2022, Wuhan University, School of Mathematics and Statistics, Chinese Society of Probability and Statistics, China, June 28, 2022.

  • A. Kroshnin, Robust k-means clustering in Hilbert and metric spaces, Rencontres de Statistique Mathématique, December 12 - 16, 2022, Centre International de Rencontres Mathématiques (CIRM), Marseille, France, December 13, 2022.

  • L. Pelizzari, Polynomial Volterra processes, 16th Oxford-Berlin Young Researcher's Meeting on Applied Stochastic Analysis, December 8 - 10, 2022, University of Oxford, UK, December 9, 2022.

  • W. Salkeld, Lions calculus and regularity structures, Probability and Mathematical Physics 2022, Helsinki, Finland, June 28 - July 4, 2022.

  • W. Salkeld, Lions calculus and rough mean-field equation, Journées TRAG 2022, May 30 - June 1, 2022, Université Paris Nanterre, GdR TRAG (TRAjectoires ruGueuses).

  • W. Salkeld, Random controlled rough paths, 15th Berlin-Oxford Young Researcher's Meeting on Applied Stochastic Analysis, May 12 - 14, 2022, WIAS & TU Berlin, May 12, 2022.

  • W. Salkeld, Random controlled rough paths (online talk), Thursdays Seminar (Online Event), Technische Universität Berlin, Institut für Mathematik, March 10, 2022.

  • N. Tapia, Generalized iterated-sums signatures, New Interfaces of Stochastic Analysis and Rough Paths, September 4 - 9, 2022, Banff International Research Station, Canada, September 9, 2022.

  • N. Tapia, Signature methods in numerical analysis, International Conference on Scientific Computation and Differential Equations (SciCADE 2022), July 25 - 29, 2022, University of Iceland, Faculty of Physical Sciences, Reykjavík, Iceland, July 25, 2022.

  • N. Tapia, Stability of deep neural networks via discrete rough paths, 15th Berlin-Oxford Young Researcher's Meeting on Applied Stochastic Analysis, May 12 - 14, 2022, WIAS & TU Berlin, May 13, 2022.

  • N. Tapia, Stability of deep neural networks via discrete rough paths (online talk), Rough Analysis and Data Science Workshop 2022, July 26 - 27, 2022, Imperial College London, Department of Mathematics, UK, July 27, 2022.

  • N. Tapia, The moving frame method for iterated-integrals signatures: Orthogonal invariants (online talk), Arbeitsgruppenseminar Analysis (Online Event), Universität Potsdam, Institut für Mathematik, January 28, 2022.

  • N. Tapia, Transport and continuity equations with (very) rough noise, Mini-Workshop ``Regularization by Noise: Theoretical Foundations, Numerical Methods and Applications driven by Lévy Noise'', February 13 - 19, 2022, Mathematisches Forschungsinstitut Oberwolfach, February 18, 2022.

  • Y. Vargas, Algebraic combinatorics of moment-to-cumulant relations, Summer School in Algebraic Combinatorics, Kraków, Poland, July 11 - 15, 2022.

  • Y. Vargas, Combinatorial moment-to-cumulant formulas in free probability, Technische Universität Graz, Institut für Diskrete Mathematik, Austria, June 2, 2022.

  • Y. Vargas, Cumulant-to-moment relations from Hopf algebras, 15th Berlin-Oxford Young Researcher's Meeting on Applied Stochastic Analysis, May 12 - 14, 2022, WIAS & TU Berlin, May 12, 2022.

  • Y. Vargas, Primitive basis for the Hopf algebra of permutations, Séminaire de Combinatoire, Université Gustave Eiffel, Laboratoire d'Informatique Gaspard Monge, Marne-la-Vallée, France, July 1, 2022.

  • CH. Bayer, Efficient Markovian approximation of rough stochastic volatility models (online talk), Aarhus/SMU Volatility Workshop (Online Event), Aarhus University, Department of Economics and Business, Denmark, May 31, 2022.

  • CH. Bayer, Efficient Markovian approximation to rough volatility models, Rough Volatility Meeting, Imperial College London, UK, March 16, 2022.

  • CH. Bayer, Machine learning techniques in computational finance, Stochastic Numerics and Statistical Learning: Theory and Applications Workshop, May 15 - 28, 2022, King Abdullah University of Science and Technology, Computer, Electrical and Mathematical Sciences and Engineering Division, Thuwal, Saudi Arabia, May 22, 2022.

  • CH. Bayer, Optimal stopping with signatures, ISOR Colloquium, June 13 - 14, 2022, Universität Wien, Department of Statistics and Operations Research, Austria, June 13, 2022.

  • CH. Bayer, Optimal stopping with signatures, Advances in Mathematical Finance and Optimal Transport, June 27 - 30, 2022, Scuola Normale Superiore di Pisa, Centro di Ricerca Matematica Ennio De Giorgi, Italy, June 28, 2022.

  • CH. Bayer, Optimal stopping with signatures, Rough Analysis and Data Science Workshop 2022, July 26 - 27, 2022, Imperial College London, Department of Mathematics, UK, July 27, 2022.

  • CH. Bayer, Optimal stopping with signatures, Oberseminar, Martin-Luther-Universität Halle-Wittenberg, Institut für Mathematik, June 14, 2022.

  • CH. Bayer, Optimal stopping with signatures, Séminaire Bachelier, Institut Henri Poincaré, Paris, France, December 16, 2022.

  • CH. Bayer, Optimal stopping, machine learning, and signatures, Seminar Stochastic Numerics Research Group, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia, January 31, 2022.

  • CH. Bayer, RKHS regularization of singular local stochastic volatility McKean--Vlasov models (online talk), Mini-Workshop ``Regularization by Noise: Theoretical Foundations, Numerical Methods and Applications driven by Lévy Noise'', February 13 - 19, 2022, Mathematisches Forschungsinstitut Oberwolfach, February 14, 2022.

  • CH. Bayer, Simulating rough volatility models (online talk), MathFinance 2022 Conference (Online Event), March 21 - 22, 2022, March 22, 2022.

  • CH. Bayer, Stability of deep neural networks via discrete rough paths, New Interfaces of Stochastic Analysis and Rough Paths, September 4 - 9, 2022, Banff International Research Station, Canada, September 8, 2022.

  • P. Dvurechensky, Accelerated alternating minimization methods, 20th French-German-Portuguese Conference on Optimization, May 3 - 6, 2022, University of Porto, School of Economics and Management, Portugal, May 5, 2022.

  • P. Dvurechensky, Generalized self-concordant analysis of Frank--Wolfe algorithms, 19th Workshop on Advances in Continuous Optimization (EUROPT 2022), July 29 - 30, 2022, NOVA University Lisbon, School of Science and Technology, Portugal, July 30, 2022.

  • P. Dvurechensky, Multimarginal optimal transport by accelerated alternating minimization (online talk), SIAM Conference on Imaging Science (IS22) (Online Event), Minisymposium MS94: ``Multi-Marginal Optimal Transport'', March 21 - 25, 2022, March 24, 2022.

  • P. Friz, A theory of rough differential equations (online talk), Webinar on Stochastic Analysis (Online Event), Beijing Institute of Technology, School of Mathematics and Statistics, China, March 31, 2022.

  • P. Friz, Itô and Lyons in tandem, Open Japanese-German Conference on Stochastic Analysis and Applications, September 19 - 23, 2022, Westfälische Wilhelms-Universität Münster, Institut für Mathematische Stochastik, September 20, 2022.

  • P. Friz, Local vol under rough vol, Rough Volatility Workshop, March 15 - 16, 2022, Imperial College London, UK, March 16, 2022.

  • P. Friz, Rough SDEs, rough semimartingales, Advances in Mathematical Finance and Optimal Transport, June 27 - July 1, 2022, Scuola Normale Superiore di Pisa, Centro di Ricerca Matematica Ennio De Giorgi, Italy, June 28, 2022.

  • P. Friz, Rough stochastic analysis, Stochastic Analysis and Stochastic Partial Differential Equations: A Celebration of Marta Sanz-Solé's Mathematical Legacy, May 30 - June 3, 2022, Centre de Recerca Matemàtica (CRM), Barcelona, Spain, June 2, 2022.

  • P. Friz, Rough stochastic analysis, Conference in Honor of S. R. S. Varadhan's 80th Birthday, June 13 - 17, 2022, Jeju Shinhwa World Marriott Resort, Jeju, Korea (Republic of), June 13, 2022.

  • P. Friz, Weak rates for rough vol (online talk), New Interfaces of Stochastic Analysis and Rough Paths, September 4 - 9, 2022, Banff International Research Station, Canada, September 6, 2022.

  • TH. Koprucki, K. Tabelow, HackMD (online talk), E-Coffee-Lecture (Online Event), WIAS Berlin, March 25, 2022.

  • V. Spokoiny, Bayesian inference for nonlinear inverse problems, SFB 1294 Annual Meeting 2022, September 13 - 14, 2022, Universität Potsdam, Institut für Mathematik, September 13, 2022.

  • V. Spokoiny, Bayesian optimization by Laplace iterations, Workshop on Statistical Inference and Convex Optimization, June 13 - 15, 2022, Université Grenoble Alpes, Laboratoire Jean Kuntzmann, France, June 13, 2022.

  • V. Spokoiny, Laplace approximation in high dimension, Workshop ``Re-thinking High-dimensional Mathematical Statistics'', May 16 - 20, 2022, Mathematisches Forschungsinstitut Oberwolfach, May 17, 2022.

  • K. Tabelow, Neural MRI, Tandem tutorial ``Mathematics of Imaging'', Berlin Mathematics Research Center MATH+, February 18, 2022.

Preprints im Fremdverlag

  • O. Butkovsky, K. Lê, L. Mytnik, Stochastic equations with singular drift driven by fractional Brownian motion, Preprint no. arXiv:2302.11937, Cornell University, 2023, DOI 10.48550/arXiv.2302.11937 .

  • K. Ebrahimi-Fard, F. Patras, N. Tapia, L. Zambotti, Shifted substitution in non-commutative multivariate power series with a view toward free probability, Preprint no. arXiv:2204.01445, Cornell University, 2023, DOI 10.48550/arXiv.2204.01445 .

  • D. Gergely, B. Fricke, J.M. Oeschger, L. Ruthotto, P. Freund, K. Tabelow, S. Mohammadi, ACID: A comprehensive toolbox for image processing and modeling of brain, spinal cord, and post-mortem diffusion MRI data, Preprint no. bioRxiv:2023.10.13.562027, Cold Spring Harbor Laboratory, 2023, DOI 10.1101/2023.10.13.562027 .

  • E. Gladin, A. Gasnikov, P. Dvurechensky, Accuracy certificates for convex minimization with inexact Oracle, Preprint no. arXiv:2310.00523, Cornell University, 2023, DOI 10.48550/arXiv.2310.00523 .
    Abstract
    Accuracy certificates for convex minimization problems allow for online verification of the accuracy of approximate solutions and provide a theoretically valid online stopping criterion. When solving the Lagrange dual problem, accuracy certificates produce a simple way to recover an approximate primal solution and estimate its accuracy. In this paper, we generalize accuracy certificates for the setting of inexact first-order oracle, including the setting of primal and Lagrange dual pair of problems. We further propose an explicit way to construct accuracy certificates for a large class of cutting plane methods based on polytopes. As a by-product, we show that the considered cutting plane methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we illustrate the work of the proposed certificates in numerical experiments, highlighting that our certificates provide a tight upper bound on the objective residual.

  • E. Gorbunov, A. Sadiev, M. Danilova, S. Horváth, G. Gidel, P. Dvurechensky, A. Gasnikov, P. Richtárik, High-probability convergence for composite and distributed stochastic minimization and variational inequalities with heavy-tailed noise, Preprint no. arXiv:2310.01860, Cornell University, 2023, DOI 10.48550/arXiv.2310.01860 .
    Abstract
    High-probability analysis of stochastic first-order optimization methods under mild assumptions on the noise has been gaining a lot of attention in recent years. Typically, gradient clipping is one of the key algorithmic ingredients to derive good high-probability guarantees when the noise is heavy-tailed. However, if implemented naïvely, clipping can spoil the convergence of the popular methods for composite and distributed optimization (Prox-SGD/Parallel SGD) even in the absence of any noise. Due to this reason, many works on high-probability analysis consider only unconstrained non-distributed problems, and the existing results for composite/distributed problems do not include some important special cases (like strongly convex problems) and are not optimal. To address this issue, we propose new stochastic methods for composite and distributed optimization based on the clipping of stochastic gradient differences and prove tight high-probability convergence results (including nearly optimal ones) for the new methods. Using similar ideas, we also develop new methods for composite and distributed variational inequalities and analyze the high-probability convergence of these methods.

  • N. Kornilov, A. Gasnikov, P. Dvurechensky, D. Dvinskikh, Gradient free methods for non-smooth convex optimization with heavy tails on convex compact, Preprint no. arXiv:2304.02442, Cornell University, 2023, DOI 10.48550/arXiv.2304.02442 .

  • N. Kornilov, E. Gorbunov, M. Alkousa, F. Stonyakin, P. Dvurechensky, A. Gasnikov, Intermediate gradient methods with relative inexactness, Preprint no. arXiv:2310.00506, Cornell University, 2023.
    Abstract
    This paper is devoted to first-order algorithms for smooth convex optimization with inexact gradients. Unlike the majority of the literature on this topic, we consider the setting of relative rather than absolute inexactness. More precisely, we assume that an additive error in the gradient is proportional to the gradient norm, rather than being globally bounded by some small quantity. We propose a novel analysis of the accelerated gradient method under relative inexactness and strong convexity and improve the bound on the maximum admissible error that preserves the linear convergence of the algorithm. In other words, we analyze how robust the accelerated gradient method is to the relative inexactness of the gradient information. Moreover, based on the Performance Estimation Problem (PEP) technique, we show that the obtained result is optimal for the family of accelerated algorithms we consider. Motivated by the existing intermediate methods with absolute error, i.e., methods with convergence rates that interpolate between slower but more robust non-accelerated algorithms and faster but less robust accelerated algorithms, we propose an adaptive variant of the intermediate gradient method with relative error in the gradient.

  • D.A. Pasechnyuk, M. Persiianov, P. Dvurechensky, A. Gasnikov, Algorithms for Euclidean-regularised optimal transport, Preprint no. arXiv:2307.00321, Cornell University, 2023, DOI 10.48550/arXiv.2307.00321 .

  • A. Sadiev, E. Gorbunov, S. Horváth, G. Gidel, P. Dvurechensky, A. Gasnikov, P. Richtárik, High-probability bounds for stochastic optimization and variational inequalities: The case of unbounded variance, Preprint no. arXiv:2302.00999, Cornell University, 2023, DOI 10.48550/arXiv.2302.00999 .

  • B. Schembera, F. Wübbeling, H. Kleikamp, Ch. Biedinger, J. Fiedler, M. Reidelbach, A. Shehu, B. Schmidt, Th. Koprucki, D. Iglezakis, D. Göddeke, Ontologies for models and algorithms in applied mathematics and related disciplines, Preprint no. arXiv:2310.20443, Cornell University, 2023, DOI 10.48550/arXiv.2310.20443 .
    Abstract
    In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.

  • V. Spokoiny, Mixed Laplace approximation for marginal posterior and Bayesian inference in error-in-operator model, Preprint no. arXiv:2305.09336, Cornell University, 2023, DOI 10.48550/arXiv.2305.09336 .

  • V. Spokoiny, Nonlinear regression: Finite sample guarantees, Preprint no. arXiv:2305.08193, Cornell University, 2023, DOI 10.48550/arXiv.2305.08193 .

  • V. Spokoiny, Sharp deviation bounds and concentration phenomenon for the squared norm of a sub-Gaussian vector, Preprint no. arXiv:2305.07885, Cornell University, 2023, DOI 10.48550/arXiv.2305.07885 .

  • O. Butkovsky, K. Dareiotis, M. Gerencsér, Strong rate of convergence of the Euler scheme for SDEs with irregular drift driven by Lévy noise, Preprint no. arXiv:2204.12926, Cornell University, 2022, DOI 10.48550/arXiv.2204.12926 .

  • A. Kroshnin, E. Stepanov, D. Trevisan, Infinite multidimensional scaling for metric measure spaces, Preprint no. arXiv:2201.05885, Cornell University, 2022, DOI 10.48550/arXiv.2201.05885 .

  • Y.-W. Sun, K. Papagiannouli, V. Spokoiny, High dimensional change-point detection: A complete graph approach, Preprint no. arXiv:2203.08709, Cornell University, 2022, DOI 10.48550/arXiv.2203.08709 .
    Abstract
    The aim of online change-point detection is the accurate, timely discovery of structural breaks. As the data dimension outgrows the number of observations, online detection becomes challenging. Existing methods typically test only a change of mean, omitting the practically relevant change of variance. We propose a complete-graph-based change-point detection algorithm to detect changes of mean and variance in low- to high-dimensional online data with a variable scanning window. Inspired by the complete graph structure, we introduce graph-spanning ratios to map high-dimensional data into metrics, and then test statistically whether a change of mean or variance occurs. Theoretical study shows that our approach has the desirable pivotal property and is powerful with prescribed error probabilities. We demonstrate that this framework outperforms other methods in terms of detection power. Our approach has high detection power with small and multiple scanning windows, which allows timely detection of change-points in the online setting. Finally, we apply the method to financial data to detect change-points in S&P 500 stocks.

  • M. Alkousa, A. Gasnikov, P. Dvurechensky, A. Sadiev, L. Razouk, An approach for non-convex uniformly concave structured saddle point problem, Preprint no. arXiv:2202.06376, Cornell University, 2022, DOI 10.48550/arXiv.2202.06376 .
    Abstract
    Recently, saddle point problems have received much attention due to their powerful modeling capability for many problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, distributed optimization, game theory, and many applications in machine learning such as empirical risk minimization and generative adversarial networks training. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the non-convex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems separately for each group of variables: an outer minimization problem w.r.t. the primal variables, and an inner maximization problem w.r.t. the dual variables. For solving the outer minimization problem, we use the Adaptive Gradient Method, which is applicable for non-convex problems and also works with an inexact oracle that is generated by approximately solving the inner problem. For solving the inner maximization problem, we use the Restarted Unified Acceleration Framework, which is a framework that unifies the high-order acceleration methods for minimizing a convex function that has Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracles for the outer minimization problem and higher-order oracles for the inner maximization problem. Moreover, the complexity of the whole proposed approach is then estimated.

  • A. Vasin, A. Gasnikov, P. Dvurechensky, V. Spokoiny, Accelerated gradient methods with absolute and relative noise in the gradient, Preprint no. arXiv:2102.02921, Cornell University, 2022, DOI 10.48550/arXiv.2102.02921 .

  • C. Bellingeri, P.K. Friz, S. Paycha, R. Preiss, Smooth rough paths, their geometry and algebraic renormalization, Preprint no. arXiv:2111.15539, Cornell University, 2022, DOI 10.48550/arXiv.2111.15539 .

  • T. Boege, R. Fritze, Ch. Görgen, J. Hanselman, D. Iglezakis, L. Kastner, Th. Koprucki, T. Krause, Ch. Lehrenfeld, S. Polla, M. Reidelbach, Ch. Riedel, J. Saak, B. Schembera, K. Tabelow, M. Weber, Research-data management planning in the German mathematical community, Preprint no. arXiv:2211.12071, Cornell University, 2022, DOI 10.48550/arXiv.2211.12071 .
    Abstract
    In this paper we discuss the notion of research data for the field of mathematics and report on the status quo of research-data management and planning. A number of decentralized approaches are presented and compared to needs and challenges faced in three use cases from different mathematical subdisciplines. We highlight the importance of tailoring research-data management plans to mathematicians' research processes and discuss their usage all along the data life cycle.

  • F. Delarue, W. Salkeld, Probabilistic rough paths II: Lions--Taylor expansions and random controlled rough paths, Preprint no. arXiv:2203.01185, Cornell University, 2022, DOI 10.48550/arXiv.2203.01185 .

  • K. Ebrahimi-Fard, F. Patras, N. Tapia, L. Zambotti, Shifted substitution in non-commutative multivariate power series with a view toward free probability, Preprint no. arXiv:2204.01445, Cornell University, 2022, DOI 10.48550/arXiv.2204.01445 .

  • A. Gasnikov, A. Novitskii, V. Novitskii, F. Abdukhakimov, D. Kamzolov, A. Beznosikov, M. Takáč, P. Dvurechensky, B. Gu, The power of first-order smooth optimization for black-box non-smooth problems, Preprint no. arXiv:2201.12289, Cornell University, 2022, DOI 10.48550/arXiv.2201.12289 .
    Abstract
    Gradient-free/zeroth-order methods for black-box convex optimization have been extensively studied in the last decade with the main focus on oracle call complexity. In this paper, besides the oracle complexity, we focus also on iteration complexity, and propose a generic approach that, based on optimal first-order methods, allows us to obtain in a black-box fashion new zeroth-order algorithms for non-smooth convex optimization problems. Our approach not only leads to optimal oracle complexity, but also allows us to obtain iteration complexity similar to first-order methods, which, in turn, allows us to exploit parallel computations to accelerate the convergence of our algorithms. We also elaborate on extensions for stochastic optimization problems, saddle-point problems, and distributed optimization.

  • S. Mohammadi, T. Streubel, L. Klock, A. Lutti, K. Pine, S. Weber, L. Edwards, P. Scheibe, G. Ziegler, J. Gallinat, S. Kuhn, M. Callaghan, N. Weiskopf, K. Tabelow, Error quantification in multi-parameter mapping facilitates robust estimation and enhanced group level sensitivity, Preprint no. bioRxiv:2022.01.11.475846, Cold Spring Harbor Laboratory, 2022, DOI 10.1101/2022.01.11.475846 .
    Abstract
    Multi-Parameter Mapping (MPM) is a comprehensive quantitative neuroimaging protocol that enables estimation of four physical parameters (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD, and magnetization transfer saturation MTsat) that are sensitive to microstructural tissue properties such as iron and myelin content. Their capability to reveal microstructural brain differences, however, is tightly bound to controlling random noise and artefacts (e.g. caused by head motion) in the signal. Here, we introduced a method to estimate the local error of PD, R1, and MTsat maps that captures both noise and artefacts on a routine basis without requiring additional data. To investigate the method's sensitivity to random noise, we calculated the model-based signal-to-noise ratio (mSNR) and showed in measurements and simulations that it correlated linearly with an experimental raw-image-based SNR map. We found that the mSNR varied with MPM protocols, magnetic field strength (3T vs. 7T) and MPM parameters: it halved from PD to R1 and decreased from PD to MTsat by a factor of 3-4. Exploring the artefact-sensitivity of the error maps, we generated robust MPM parameters using two successive acquisitions of each contrast and the acquisition-specific errors to down-weight erroneous regions. The resulting robust MPM parameters showed reduced variability at the group level as compared to their single-repeat or averaged counterparts. The error and mSNR maps may better inform power calculations by accounting for local data quality variations across measurements. Code to compute the mSNR maps and robustly combined MPM maps is available in the open-source hMRI toolbox.

  • J.M. Oeschger, K. Tabelow, S. Mohammadi, Axisymmetric diffusion kurtosis imaging with Rician bias correction: A simulation study, Preprint no. bioRxiv:2022.03.15.484442, Cold Spring Harbor Laboratory, 2022, DOI 10.1101/2022.03.15.484442 .

  • M. Ghani Varzaneh, S. Riedel, A. Schmeding, N. Tapia, The geometry of controlled rough paths, Preprint no. arXiv:2203.05946, Cornell University, 2022, DOI 10.48550/arXiv.2203.05946 .

  • CH. Bayer, M. Fukasawa, S. Nakahara, On the weak convergence rate in the discretization of rough volatility models, Preprint no. arXiv:2203.02943, Cornell University, 2022, DOI 10.48550/arXiv.2203.02943 .

  • CH. Bayer, P.K. Friz, N. Tapia, Stability of deep neural networks via discrete rough paths, Preprint no. arXiv:2201.07566, Cornell University, 2022, DOI 10.48550/arXiv.2201.07566 .

  • P. Dvurechensky, S. Shtern, M. Staudigl, A conditional gradient homotopy method with applications to semidefinite programming, Preprint no. arXiv:2207.03101, Cornell University, 2022, DOI 10.48550/arXiv.2207.03101 .

  • P.K. Friz, A. Hocquet, K. Lê, Rough stochastic differential equations, Preprint no. arXiv:2106.10340, Cornell University, 2022, DOI 10.48550/arXiv.2106.10340 .

  • V. Spokoiny, Dimension free non-asymptotic bounds on the accuracy of high dimensional Laplace approximation, Preprint no. arXiv:2204.11038, Cornell University, 2022, DOI 10.48550/arXiv.2204.11038 .
    Abstract
    This note attempts to revisit the classical results on Laplace approximation in a modern non-asymptotic and dimension-free form. Such an extension is motivated by applications to high-dimensional statistical and optimization problems. The established results provide explicit non-asymptotic bounds on the quality of a Gaussian approximation of the posterior distribution in total variation distance in terms of the so-called effective dimension. This value is defined as an interplay between the information contained in the data and in the prior distribution. In contrast to the prominent Bernstein--von Mises results, the impact of the prior is not negligible, which allows one to keep the effective dimension small or moderate even if the true parameter dimension is huge or infinite. We also address the issue of using a Gaussian approximation with inexact parameters, with a focus on replacing the Maximum a Posteriori (MAP) value by the posterior mean, and design an algorithm for Bayesian optimization based on Laplace iterations. The results are specified to the case of nonlinear regression.

  • V. Spokoiny, Finite samples inference and critical dimension for stochastically linear models, Preprint no. arXiv:2201.06327, Cornell University, 2022, DOI 10.48550/arXiv.2201.06327 .
    Abstract
    The aim of this note is to state a couple of general results about the properties of the penalized maximum likelihood estimators (pMLE) and of the posterior distribution for parametric models in a non-asymptotic setup and for possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying two major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth function. The main results simplify considerably if the expected log-likelihood is concave. For the pMLE, we establish a number of finite sample bounds about its concentration and large deviations as well as the Fisher and Wilks expansion. The latter results extend the classical asymptotic Fisher and Wilks Theorems about the MLE to the non-asymptotic setup with large parameter dimension, which can depend on the sample size. For the posterior distribution, our main result states a Gaussian approximation of the posterior which can be viewed as a finite sample analog of the prominent Bernstein--von Mises Theorem. In all bounds, the remainder is given explicitly and can be evaluated in terms of the effective sample size and effective parameter dimension. The results are dimension and coordinate free. In spite of their generality, all the presented bounds are nearly sharp and the classical asymptotic results can be obtained as simple corollaries. An interesting case of logit regression with smooth or truncation priors is used to specify the results and to explain the main notions.