Biomedical imaging

This research topic is concerned with biomedical images, often with applications to Magnetic Resonance Imaging (MRI), ranging from functional, diffusion-weighted, and quantitative MRI to image reconstruction in Magnetic Resonance Fingerprinting. The methods developed in this project arise from different areas of mathematics, specifically from non-parametric statistics, machine learning, and non-smooth variational methods. It is a joint effort of Research Groups 6 and 8, in strong collaboration with Research Group 3. Image analysis often involves solving a large-scale optimization problem, which in turn makes it necessary to develop scalable optimization algorithms using distributed computations, randomization techniques, stochastic gradient schemes, etc.


Non-smooth variational models for imaging

Non-smooth variational (energy minimizing) models form a powerful tool for addressing inverse problems in medical imaging. The idea is to reconstruct an image by inverting the operator which models a given medical imaging modality in a stable and robust way. Examples of such operators are the Radon transform for Positron Emission Tomography (PET) and a subsampling operator of a signal's Fourier coefficients for Magnetic Resonance Imaging (MRI). This inversion is typically an ill-posed problem and is further complicated by the presence of random noise. Therefore, a Tikhonov regularization approach is employed, where some a priori information is encoded in the regularization term. For a stable inversion, non-smooth regularizers, e.g. of Total Variation (TV) type, are popular due to their ability to preserve prominent image features, such as edges, in the reconstruction.
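As a minimal illustration of how such a non-smooth model is minimized (a sketch only, not the reconstruction pipeline developed in this project; the 1D denoising setting, signal, and parameters are hypothetical), the following Python snippet solves the TV-regularized denoising problem min_u 0.5*||u - f||^2 + lam*||Du||_1 by projected gradient descent on its dual:

```python
import numpy as np

def tv_denoise_1d(f, lam, n_iter=500):
    """Solve min_u 0.5*||u - f||^2 + lam*sum_i |u[i+1]-u[i]| via
    projected gradient descent on the dual problem, where the
    non-smooth TV term turns into a simple box constraint."""
    D = np.diff                                   # forward differences
    Dt = lambda p: np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
    q = np.zeros(f.size - 1)                      # dual variable
    for _ in range(n_iter):
        q = np.clip(q - 0.25 * D(Dt(q) - f), -lam, lam)
    return f - Dt(q)                              # recover the primal solution

# piecewise-constant ground truth plus Gaussian noise
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, -0.5], 50)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy, lam=0.3)
```

The reconstruction flattens the noise within homogeneous regions while the jumps survive, which is exactly the feature-preservation property that motivates TV-type regularizers.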

Spatially adapted regularization via bilevel minimization

In a series of papers (Hintermüller, Rautenberg 2017; Hintermüller, Rautenberg, Wu, Langer 2017; Hintermüller, Papafitsoros 2019; Hintermüller, Papafitsoros, Rautenberg, Sun 2022), new generalized formulations of the renowned total variation (TV) regularization and of higher order extensions, e.g. total generalized variation (TGV), were introduced, analyzed, and applied to image reconstruction problems. These regularization functionals incorporate distributed regularization weight functions which allow for a varying regularization effect depending on local image features (details vs. homogeneous regions). The regularization weight is determined automatically through a novel bilevel optimization framework introduced in Hintermüller, Rautenberg 2017 and Hintermüller, Rautenberg, Wu, Langer 2017. In this context, the parameterized image reconstruction problem represents the lower level problem, while the upper level problem aims at choosing the weight function appropriately. Inspired by an unsupervised learning approach and the statistics of extremes, a localized variance corridor for the image residual is considered, and its violation provides a suitable choice for the upper level objective. A detailed study of the analytical properties of the total variation generalization with spatially varying regularization parameter was presented in Hintermüller, Papafitsoros, Rautenberg 2017.

In a follow-up study, this bilevel optimization framework was extended in Hintermüller, Papafitsoros, Rautenberg, Sun 2022 to the automatic selection of spatially dependent regularization parameters for total generalized variation (TGV), a higher order extension of TV. The computed regularization parameters not only preserve fine-scale image details but also eliminate the staircasing effect, a well-known artifact of total variation regularization.

This general methodology for designing automated (monolithic) image reconstruction workflows using bilevel optimization was summarized in an extensive invited review article in the Handbook of Numerical Analysis (Hintermüller, Papafitsoros 2019).

Bilevel Optimization
Figure 1. - Solutions to a problem of Fourier inpainting arising in parallel magnetic resonance imaging of a chest, where the data are restored by known methods utilizing a) backprojection or b) a scalar weight, as well as c) the bilevel approach (from left to right). The spatial distribution of the weight makes it possible to better handle local properties, i.e. homogeneous and detailed regions (subfigure d)).

Structural TV priors in function space

In recent years, total variation-type functionals which exploit structural similarity of the reconstruction to some a priori known information have become increasingly popular. They typically incorporate gradient information in a pointwise fashion. These techniques are particularly relevant in multimodal medical imaging, where information from one modality, e.g. MRI, can be exploited in the reconstruction process of another modality, e.g. PET. In (Hintermüller, Holler, Papafitsoros 2018), a function space framework for a large class of such structural total variation functionals was introduced and analyzed. This is particularly important, since in function space there is a thorough mathematical description of prominent image features: edges, for example, are modelled as discontinuities of functions that typically belong to the space of functions of bounded variation. The structural TV functionals were defined in function space as appropriate relaxations (lower semicontinuous envelopes). It was shown that these relaxations admit a precise integral representation only in certain restrictive cases. However, through a general duality result, it was proved that the Tikhonov regularization problem in function space can still be understood via its equivalence to a corresponding saddle-point formulation, in which no knowledge of the precise form of the relaxation is needed. Thus, this work allows the function space formulation of a wide class of multimodal medical imaging problems, for instance MR-guided PET reconstruction.

Structural Priors
Figure 2. - Example where the structural TV functional is appropriately tuned to promote edge alignment of a PET reconstruction with an already reconstructed MRI image. As a result, the edges in the PET reconstruction are enhanced.
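A toy version of this mechanism (hedged: one-dimensional, with a scalar edge-indicator weight and a quadratic penalty instead of the directional structural TV functionals analyzed in the paper; all signals and parameters are hypothetical) shows how edge information from one modality can steer the reconstruction of another:

```python
import numpy as np

def structurally_weighted_smooth(f, v, lam=50.0, gamma=20.0):
    """Smooth f while allowing jumps where the reference v has edges:
    min_u ||u - f||^2 + lam * sum_i w_i * (u[i+1] - u[i])^2,
    with weights w_i small wherever |v[i+1] - v[i]| is large."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)                  # forward-difference matrix
    w = 1.0 / (1.0 + gamma * np.abs(np.diff(v)))    # edge indicator from v
    return np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), f)

rng = np.random.default_rng(2)
edge = np.repeat([0.0, 1.0], 100)                   # shared edge location
reference = 2.0 * edge                              # clean "MRI" image
data = edge + 0.2 * rng.standard_normal(edge.size)  # noisy "PET" data
recon = structurally_weighted_smooth(data, reference)
```

Inside homogeneous regions the full regularization applies, while across the reference edge the penalty is nearly switched off, so the jump in the reconstruction aligns with the edge of the guiding modality.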

Mathematical framework for quantitative imaging

Integrated physics-based methods in quantitative MRI

Magnetic Resonance Fingerprinting (MRF) was recently introduced in the context of quantitative MRI as a highly promising scheme which allows for the simultaneous quantification of tissue parameters, e.g. the magnetization relaxation times T1 and T2, using a single acquisition process. It requires the pre-computation of a dictionary that reflects the underlying physical laws of MRI (Bloch equations), whose elements (fingerprints) are matched to magnetization maps inferred from the data. In Dong, Hintermüller, Papafitsoros 2019, an analysis of MRF and its extensions in an inverse problems setting was performed. Additionally, a novel physically oriented method for qMRI was proposed, analyzed, and implemented: a single-step, dictionary-free model for estimating the values of the tissue parameters. The proposed model relies on a nonlinear operator equation instead of conventional two-step models (comprised of (i) reconstruction of the magnetization and (ii) matching to a dictionary). Stability under noise and subsampling was shown and verified via numerical examples. In contrast to state-of-the-art MRF-type algorithms, the performance of the proposed method is not restricted by the fineness of a dictionary, and it is superior in terms of accuracy, memory, and computational time.
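The conventional dictionary-matching step that the single-step model avoids can be sketched as follows (a toy sketch: the fingerprint model is a hypothetical two-exponential curve rather than a Bloch simulation, and all grids and parameters are made up):

```python
import numpy as np

def fingerprint(T1, T2, t):
    """Toy signal evolution standing in for a Bloch-simulated fingerprint."""
    return (1 - np.exp(-t / T1)) * np.exp(-t / T2)

t = np.linspace(0.05, 4.0, 200)                    # acquisition time points
T1_grid = np.linspace(0.5, 3.0, 11)                # dictionary grids
T2_grid = np.linspace(0.05, 0.5, 10)
pairs = [(a, b) for a in T1_grid for b in T2_grid]
dic = np.array([fingerprint(a, b, t) for a, b in pairs])
dic /= np.linalg.norm(dic, axis=1, keepdims=True)  # normalized fingerprints

rng = np.random.default_rng(3)
true_T1, true_T2 = 1.5, 0.2
measured = fingerprint(true_T1, true_T2, t) + 1e-4 * rng.standard_normal(t.size)

best = int(np.argmax(dic @ measured))              # max inner-product matching
est_T1, est_T2 = pairs[best]
```

The estimate can only be as fine as the (T1, T2) grid, which is precisely the resolution limitation that the dictionary-free single-step model removes.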

MRF
Figure 3. - Preliminary results on improving the quality of MRF reconstruction. a) Ground truth, b) MRF reconstruction, c) regularized MRF. Left: T1 map, Right: T2 map.

Learning enriched differential equations models in optimal control and inverse problems

In Dong, Hintermüller, Papafitsoros 2022, a general optimization framework constrained by physical processes whose constituents are only accessible through data was introduced, analyzed, and implemented. This work introduced the concept of learning-informed differential equations. There, artificial neural networks are employed to learn unknown components of PDEs or ODEs, as well as their parameter-to-solution maps, which are subsequently embedded in optimal control schemes. Existence and neural-network-related approximation results for general families of learning-informed differential equations, as well as continuity and differentiability properties of the corresponding parameter-to-solution maps, were proven. The versatility and applicability of the developed framework was highlighted in quantitative MRI and in the optimal control of partially unknown semilinear PDEs.
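A minimal numerical caricature of a learning-informed differential equation (hedged: a polynomial least-squares fit stands in for the artificial neural network, and the scalar equation u' = -f(u) with f(u) = u^3 is a made-up example, not one of the applications in the paper):

```python
import numpy as np

def solve_ode(f, u0, dt=0.01, n=500):
    """Forward Euler for the scalar ODE u' = -f(u), u(0) = u0."""
    u = np.empty(n + 1)
    u[0] = u0
    for k in range(n):
        u[k + 1] = u[k] - dt * f(u[k])
    return u

f_true = lambda u: u ** 3                       # "unknown" component of the ODE

# training data: noisy point evaluations of the unknown component
rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0, 40)
y = f_true(x) + 0.01 * rng.standard_normal(x.size)

coef = np.polyfit(x, y, deg=5)                  # surrogate for a neural network
f_learned = lambda u: np.polyval(coef, u)

# the learned component is embedded in the differential equation solver
u_ref = solve_ode(f_true, u0=1.5)
u_learned = solve_ode(f_learned, u0=1.5)
```

The point of the analysis is that the approximation error of the learned component translates into a controlled error of the parameter-to-solution map, which is what the continuity and differentiability results quantify.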

Dictionary learning in quantitative MRI

A spatial regularization approach based on dictionary learning, which has already shown excellent results in the linear inverse problem of classical MRI, is currently being investigated for the quantitative MRI problem. From a mathematical viewpoint, this leads to a variety of nonconvex and nonsmooth optimization problems. Iterative schemes to solve these problems are being devised, and their convergence to equilibrium points is studied.
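As a generic sketch of the kind of nonconvex, nonsmooth alternating scheme that arises (hedged: this 1-sparse coding / atom update loop is only an illustration, not the algorithm under investigation; data and sizes are synthetic):

```python
import numpy as np

def code_1sparse(D, Y):
    """Sparse coding step: approximate each column of Y by its
    best-correlated atom (an extreme, 1-sparse special case)."""
    corr = D.T @ Y
    idx = np.argmax(np.abs(corr), axis=0)
    cols = np.arange(Y.shape[1])
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[idx, cols] = corr[idx, cols]
    return X, idx

def dictionary_learning(Y, n_atoms=6, n_iter=30, seed=0):
    """Alternate sparse coding and per-atom least-squares updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    errs = []
    for _ in range(n_iter):
        X, idx = code_1sparse(D, Y)
        errs.append(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
        for a in range(n_atoms):                  # dictionary update
            mask = idx == a
            if mask.any():
                atom = Y[:, mask] @ X[a, mask]    # power-iteration-type step
                if np.linalg.norm(atom) > 0:
                    D[:, a] = atom / np.linalg.norm(atom)
    X, _ = code_1sparse(D, Y)
    errs.append(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
    return D, X, errs

# synthetic "patches": noisy scaled copies of three prototype signals
rng = np.random.default_rng(5)
protos = rng.standard_normal((16, 3))
protos /= np.linalg.norm(protos, axis=0)
labels = rng.integers(0, 3, 200)
Y = protos[:, labels] * (1 + rng.random(200)) + 0.01 * rng.standard_normal((16, 200))
D, X, errs = dictionary_learning(Y)
```

The objective is nonconvex (atoms multiply codes) and nonsmooth (the sparsity constraint), yet the alternation is monotone: each coding step and each atom update can only decrease the residual, which is the kind of descent-to-equilibrium behavior studied for such schemes.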

Structural adaptive smoothing methods for imaging problems

Images are often characterized by qualitative properties of their spatial structure, e.g. spatially extended regions of homogeneity that are separated by discontinuities. Images or image data with such a property are the target of the methods considered in this research. The methods summarized under the term structural adaptive smoothing try to employ a qualitative assumption on the spatial structure of the data. This assumption is used to simultaneously describe the structure and efficiently estimate parameters like image intensities. Structural adaptive smoothing generalizes several concepts in non-parametric regression. The methods are designed to provide intrinsic balance between variability and bias of the reconstruction results.
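The core of the idea fits in a few lines (hedged: a 1D toy version with hypothetical kernels and parameters; the actual methods operate on images and on one-parameter exponential family models):

```python
import numpy as np

def adaptive_weights_smoothing(y, sigma2, lam=4.0, n_steps=8):
    """Iteratively grow the smoothing neighborhood (propagation), but
    down-weight neighbors whose current estimates differ significantly
    from the central one (separation)."""
    theta = y.copy()                                # initial estimates
    x = np.arange(y.size)
    dist2 = (x[:, None] - x[None, :]) ** 2
    for step in range(1, n_steps + 1):
        h = 1.25 ** step                            # growing bandwidth
        k_loc = np.maximum(1 - dist2 / h ** 2, 0)   # location kernel
        pen = (theta[:, None] - theta[None, :]) ** 2 / (lam * sigma2)
        k_st = np.maximum(1 - pen, 0)               # statistical penalty
        w = k_loc * k_st
        theta = (w @ y) / w.sum(axis=1)             # weighted local means
    return theta

rng = np.random.default_rng(6)
truth = np.repeat([0.0, 2.0, 1.0], 60)              # piecewise-constant profile
sigma = 0.3
y = truth + sigma * rng.standard_normal(truth.size)
smoothed = adaptive_weights_smoothing(y, sigma ** 2)
```

Within homogeneous regions the weights approach those of a plain kernel smoother (variance reduction), while across discontinuities the statistical penalty drives the weights to zero, so edges are not blurred; this is the intrinsic balance between variability and bias mentioned above.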

A first attempt to use the idea of structural adaptive smoothing for image processing was proposed in Polzehl and Spokoiny (2000) under the name adaptive weights smoothing. This was generalized and refined especially in Polzehl and Spokoiny (2006), providing a theory for the case of one-parameter exponential families. This has become known under the name propagation-separation approach. Several extensions have been made to cover locally smooth images, color images (Polzehl and Tabelow, 2007), and applications from the neurosciences like functional Magnetic Resonance Imaging (fMRI) and diffusion-weighted Magnetic Resonance Imaging (dMRI).

The structural adaptive denoising methods have been tailored to a number of different imaging modalities in MRI and are accompanied by respective software. A comprehensive summary has been published in the first edition and in the fully revised and extended second edition of the Springer monograph "Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R" (Polzehl and Tabelow, 2019; Polzehl and Tabelow, 2023).

Adaptive noise reduction and signal detection in fMRI

In a series of publications we developed dedicated methods for noise reduction and signal inference in functional MRI based on the propagation-separation approach: In Tabelow et al. 2006 we proposed a new adaptive method for noise reduction of the statistical parametric map in a single-subject fMRI dataset. The properties of this map after smoothing allow for the application of Random Field theory for signal detection. We demonstrated that the method is able to recover the signal-to-noise loss incurred when increasing the spatial resolution of the MRI acquisition (Tabelow et al. 2009) and showed its applicability for pre-surgical planning (Tabelow et al. 2008). Later, we were able to incorporate signal detection into a coherent statistical framework for adaptive fMRI analysis in a structural adaptive segmentation method (Polzehl et al. 2010), see Figure 4. In collaboration with the Max Delbrück Center Berlin we applied and validated the method for emerging ultra-high-field fMRI and presented the results in a series of conference papers at ISMRM 2022, 2023, and 2024.

All adaptive methods for fMRI are implemented in the R software environment for statistical computing and graphics as a free contributed package fmri. It can be downloaded from the CRAN server. It is also listed at NITRC and part of the WIAS R packages for neuroimaging. The structural adaptive segmentation algorithm is available as Adaptive Smoothing Plugin for the neuroimaging software BrainVoyager QX.

Signal detection in fMRI
Figure 4. - Signal detection in a single-subject finger-tapping experiment using (from left to right) a) a standard Gaussian filter, b) structural adaptive smoothing and Random Field theory (RFT) (Tabelow et al. 2006), and c) structural adaptive segmentation (Polzehl et al. 2010).

Analysis of diffusion-weighted MRI data

Figure 5. - A slice of the fractional anisotropy (FA) map obtained in DTI, comparing the original noisy map with the result after denoising with msPOAS (Becker et al. 2012).

The signal attenuation caused by the diffusion weighting in dMRI makes this imaging modality vulnerable to noise. We developed a structural adaptive smoothing method for Diffusion Tensor Imaging data (Tabelow et al. 2008; Polzehl and Tabelow 2009). The method uses local comparisons of the estimated diffusion tensor to define the local homogeneity regions for the propagation-separation approach. We also developed a position-orientation adaptive smoothing algorithm for denoising of diffusion-weighted MR data (POAS). This algorithm works in the orientation space of the measurement and does not refer to a model for the spherical distribution of the data, like the diffusion tensor (Becker et al. 2012). The method was later extended to multi-shell dMRI data as multi-shell POAS (msPOAS) (Becker et al. 2014), see Figure 5.

All adaptive methods for dMRI are implemented in the R software environment for statistical computing and graphics as a free contributed package dti. It can be downloaded from the CRAN server. It is also listed at NITRC and is part of the WIAS R packages for neuroimaging. The msPOAS method is also implemented in Matlab as part of the ACID toolbox for SPM (Tabelow et al. 2015). The method has been shown to be an essential part of an improved processing pipeline for Diffusion Kurtosis Imaging (DKI) in Mohammadi et al. 2015.

The application of many processing methods to neuroimaging data, like the denoising method msPOAS, requires knowledge of the (local) noise level in the data. In Tabelow et al. 2015 we provided a new method, LANE, for the corresponding estimation problem. Knowledge of the local noise level then also allows for the characterization of the estimation bias in local diffusion models (Polzehl and Tabelow 2016), which is particularly important at low SNR.

The R package dti is capable of performing a full analysis of dMRI data and implements a large number of diffusion models for the data, e.g. the DTI model, the diffusion kurtosis model (DKI), and the orientation distribution function. We proposed a computationally feasible and interpretable tensor mixture model for the modelling of dMRI data (Tabelow et al. 2012).

The importance of adequate processing of neuroimaging data for diagnostic sensitivity became evident in two publications together with colleagues from Universitätsklinikum Münster, concerning multiple sclerosis (Deppe et al. 2016) and EHEC (Krämer et al. 2015).

Quantitative MRI - revisited from a postprocessing view

The magnetization relaxation times T1 and T2 in MRI can also be obtained from a model of the observed magnitude data under variation of several acquisition parameters. Among them are the repetition time, the echo time, and the flip angle. The data acquisition can be made very efficient using a multi-echo multi-parameter mapping (MPM) sequence. The data model is then given by the Ernst equation, from which the quantitative parameters T1, T2, and the proton density can be inferred. In contrast to the technique mentioned above in the context of Magnetic Resonance Fingerprinting (MRF), this does not rely on the Bloch equations but on a model for the magnitude data. However, the data are corrupted by noise. We applied the structural adaptive smoothing method (Mohammadi et al. 2017) to these data to improve the signal-to-noise ratio without blurring the fine structural details of the image, see Figure 6.
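The inversion of the Ernst equation can be sketched with the standard variable-flip-angle linearization (hedged: hypothetical sequence parameters and noise level; the actual MPM analysis estimates further parameters and corrects for additional effects):

```python
import numpy as np

def ernst_signal(alpha, M0, T1, TR):
    """Ernst equation: steady-state spoiled gradient-echo magnitude signal."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

# Rearranged, S/sin(a) = E1 * S/tan(a) + M0*(1 - E1): a straight line
# with slope E1, so T1 and M0 follow from a linear fit over flip angles.
TR = 0.025                                        # repetition time [s]
alphas = np.deg2rad([5.0, 10.0, 15.0, 20.0, 25.0])
M0_true, T1_true = 1.0, 1.2

rng = np.random.default_rng(7)
S = ernst_signal(alphas, M0_true, T1_true, TR)
S = S + 5e-5 * rng.standard_normal(S.size)        # measurement noise

slope, intercept = np.polyfit(S / np.tan(alphas), S / np.sin(alphas), 1)
T1_est = -TR / np.log(slope)
M0_est = intercept / (1 - slope)
```

Because this fit is performed voxel by voxel on noisy magnitude data, the noise propagates directly into the parameter maps; this is where adaptive smoothing of the data pays off.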

Adaptive
Figure 6. - Adaptive vs. non-adaptive smoothing of noisy MPM data. The upper row shows R1=1/T1 from a single acquisition, the lower row the same quantity based on data acquired three times (NEX=3) with apparently higher SNR.

In a second related project we examined the influence of adaptive denoising on the inference of T1 and the porosity of the tissue from inversion recovery MRI (IR-MRI). Again, estimation is based on a signal model for the magnitude data depending on the inversion time. The model is a mixture model and thus very sensitive to the noise component in the data. We could demonstrate how an adaptive smoothing method improves the parameter estimation, which can then be used in a poroelastic tissue model to infer the in-vivo pressure. This has been done in the MATH+ projects EF3-9 and EF3-11 together with Research Group 3.


Highlights

The research activity of many international groups with respect to R and medical imaging was summarized in a Special Volume of the Journal of Statistical Software, "Magnetic Resonance Imaging in R", vol. 44 (2011), edited by K. Tabelow and B. Whitcher, see also Tabelow et al. 2011. Furthermore, we published a Springer monograph, "Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R" (Polzehl and Tabelow, 2019), in the UseR! series; the second edition (Polzehl and Tabelow, 2023) has been largely revised and extended.

Software has been developed within the framework of the R Environment for Statistical Computing:

  • adimpro - Adaptive Smoothing of Digital Images
  • dti - DTI/DWI Analysis
  • fmri - Analysis of fMRI Experiments
  • qmri - Analysis of quantitative MRI Experiments

Further software packages and plugins are

Publications

  Monographs

  • N. Tupitsa, P. Dvurechensky, D. Dvinskikh, A. Gasnikov, Section: Computational Optimal Transport, P.M. Pardalos, O.A. Prokopyev, eds., Encyclopedia of Optimization, Springer International Publishing, Cham, published online on 11.07.2023, (Chapter Published), DOI 10.1007/978-3-030-54621-2_861-1 .

  • D. Kamzolov, A. Gasnikov, P. Dvurechensky, A. Agafonov, M. Takac, Exploiting Higher-order Derivatives in Convex Optimization Methods, Encyclopedia of Optimization, Springer, Cham, 2023, (Chapter Published), DOI 10.1007/978-3-030-54621-2_858-1 .

  • J. Polzehl, K. Tabelow, Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R, 2nd Revised Edition, Series: Use R!, Springer International Publishing, Cham, 2023, 258 pages, (Monograph Published), DOI 10.1007/978-3-031-38949-8 .
    Abstract
    This book discusses the modeling and analysis of magnetic resonance imaging (MRI) data acquired from the human brain. The data processing pipelines described rely on R. The book is intended for readers from two communities: Statisticians who are interested in neuroimaging and looking for an introduction to the acquired data and typical scientific problems in the field; and neuroimaging students wanting to learn about the statistical modeling and analysis of MRI data. Offering a practical introduction to the field, the book focuses on those problems in data analysis for which implementations within R are available. It also includes fully worked examples and as such serves as a tutorial on MRI analysis with R, from which the readers can derive their own data processing scripts. The book starts with a short introduction to MRI and then examines the process of reading and writing common neuroimaging data formats to and from the R session. The main chapters cover three common MR imaging modalities and their data modeling and analysis problems: functional MRI, diffusion MRI, and Multi-Parameter Mapping. The book concludes with extended appendices providing details of the non-parametric statistics used and the resources for R and MRI data. The book also addresses the issues of reproducibility and topics like data organization and description, as well as open data and open science. It relies solely on a dynamic report generation with knitr and uses neuroimaging data publicly available in data repositories. The PDF was created by executing the R code in the chunks and then running LaTeX, which means that almost all figures, numbers, and results were generated while producing the PDF from the sources.

  • M. Danilova, P. Dvurechensky, A. Gasnikov, E. Gorbunov, S. Guminov, D. Kamzolov, I. Shibaev, Chapter: Recent Theoretical Advances in Non-convex Optimization, A. Nikeghbali, P.M. Pardalos, A.M. Raigorodskii, M.Th. Rassias, eds., 191 of Springer Optimization and Its Applications, Springer, Cham, 2022, pp. 79--163, (Chapter Published), DOI 10.1007/978-3-031-00832-0_3 .

  • L. Starke, K. Tabelow, Th. Niendorf, A. Pohlmann, Chapter 34: Denoising for Improved Parametric MRI of the Kidney: Protocol for Nonlocal Means Filtering, in: Preclinical MRI of the Kidney: Methods and Protocols, A. Pohlmann, Th. Niendorf, eds., 2216 of Methods in Molecular Biology, Springer Nature Switzerland AG, Cham, 2021, pp. 565--576, (Chapter Published), DOI 10.1007/978-1-0716-0978-1_34 .

  • M. Hintermüller, K. Papafitsoros, Chapter 11: Generating Structured Nonsmooth Priors and Associated Primal-dual Methods, in: Processing, Analyzing and Learning of Images, Shapes, and Forms: Part 2, R. Kimmel, X.-Ch. Tai, eds., 20 of Handbook of Numerical Analysis, Elsevier, 2019, pp. 437--502, (Chapter Published), DOI 10.1016/bs.hna.2019.08.001 .

  • J. Polzehl, K. Tabelow, Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R, Series: Use R!, Springer International Publishing, Cham, 2019, 231 pages, (Monograph Published), DOI 10.1007/978-3-030-29184-6 .

  • J. Polzehl, K. Tabelow, Chapter 4: Structural Adaptive Smoothing: Principles and Applications in Imaging, in: Mathematical Methods for Signal and Image Analysis and Representation, L. Florack, R. Duits, G. Jongbloed, M.-C. van Lieshout, L. Davies, eds., 41 of Computational Imaging and Vision, Springer, London et al., 2012, pp. 65--81, (Chapter Published).

  • K. Tabelow, B. Whitcher, eds., Magnetic Resonance Imaging in R, 44 of Journal of Statistical Software, American Statistical Association, 2011, 320 pages, (Monograph Published).

  Articles in Refereed Journals

  • A. Rogozin, A. Beznosikov, D. Dvinskikh, D. Kovalev, P. Dvurechensky, A. Gasnikov, Decentralized saddle point problems via non-Euclidean mirror prox, Optimization Methods & Software, published online in Jan. 2024, DOI 10.1080/10556788.2023.2280062 .

  • P. Dvurechensky, P. Ostroukhov, A. Gasnikov, C.A. Uribe, A. Ivanova, Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal-dual tensor methods, Optimization Methods & Software, published online on 05.02.2024, DOI 10.1080/10556788.2023.2296443 .

  • P. Dvurechensky, M. Staudigl, Hessian barrier algorithms for non-convex conic optimization, Mathematical Programming. A Publication of the Mathematical Programming Society, published online on 04.03.2024, DOI 10.1007/s10107-024-02062-7 .
    Abstract
    We consider the minimization of a continuous function over the intersection of a regular cone with an affine set via a new class of adaptive first- and second-order optimization methods, building on the Hessian-barrier techniques introduced in [Bomze, Mertikopoulos, Schachinger, and Staudigl, Hessian barrier algorithms for linearly constrained optimization problems, SIAM Journal on Optimization, 2019]. Our approach is based on a potential-reduction mechanism and attains a suitably defined class of approximate first- or second-order KKT points with the optimal worst-case iteration complexity O(ε^(-2)) (first-order) and O(ε^(-3/2)) (second-order), respectively. A key feature of our methodology is the use of self-concordant barrier functions to construct strictly feasible iterates via a disciplined decomposition approach and without sacrificing on the iteration complexity of the method. To the best of our knowledge, this work is the first which achieves these worst-case complexity bounds under such weak conditions for general conic constrained optimization problems.

  • F. Galarce Marín, K. Tabelow, J. Polzehl, Ch.P. Papanikas, V. Vavourakis, L. Lilaj, I. Sack, A. Caiazzo, Displacement and pressure reconstruction from magnetic resonance elastography images: Application to an in silico brain model, SIAM Journal on Imaging Sciences, 16 (2023), pp. 996--1027, DOI 10.1137/22M149363X .
    Abstract
    This paper investigates a data assimilation approach for non-invasive quantification of intracranial pressure from partial displacement data, acquired through magnetic resonance elastography. Data assimilation is based on a parametrized-background data weak methodology, in which the state of the physical system (tissue displacements and pressure fields) is reconstructed from partially available data, assuming an underlying poroelastic biomechanics model. For this purpose, a physics-informed manifold is built by sampling the space of parameters describing the tissue model close to their physiological ranges, to simulate the corresponding poroelastic problem, and compute a reduced basis. Displacement and pressure reconstruction is sought in a reduced space after solving a minimization problem that encompasses both the structure of the reduced-order model and the available measurements. The proposed pipeline is validated using synthetic data obtained after simulating the poroelastic mechanics on a physiological brain. The numerical experiments demonstrate that the framework can exhibit accurate joint reconstructions of both displacement and pressure fields. The methodology can be formulated for an arbitrary resolution of available displacement data from pertinent images. It can also inherently handle uncertainty on the physical parameters of the mechanical model by enlarging the physics-informed manifold accordingly. Moreover, the framework can be used to characterize, in silico, biomarkers for pathological conditions, by appropriately training the reduced-order model. A first application for the estimation of ventricular pressure as an indicator of abnormal intracranial pressure is shown in this contribution.

  • A. Agafonov, D. Kamzolov, P. Dvurechensky, A. Gasnikov, Inexact tensor methods and their application to stochastic convex optimization, Optimization Methods & Software, 39 (2024), pp. 42--83 (published online in Nov. 2023), DOI 10.1080/10556788.2023.2261604 .

  • A. Vasin, A. Gasnikov, P. Dvurechensky, V. Spokoiny, Accelerated gradient methods with absolute and relative noise in the gradient, Optimization Methods & Software, published online in June 2023, DOI 10.1080/10556788.2023.2212503 .

  • E. Borodich, V. Tominin, Y. Tominin, D. Kovalev, A. Gasnikov, P. Dvurechensky, Accelerated variance-reduced methods for saddle-point problems, EURO Journal on Computational Optimization, 10 (2022), pp. 100048/1--100048/32, DOI 10.1016/j.ejco.2022.100048 .

  • A. Ivanova, P. Dvurechensky, E. Vorontsova, D. Pasechnyuk, A. Gasnikov, D. Dvinskikh, A. Tyurin, Oracle complexity separation in convex optimization, Journal of Optimization Theory and Applications, 193 (2022), pp. 462--490, DOI 10.1007/s10957-022-02038-7 .
    Abstract
    Ubiquitous in machine learning, regularized empirical risk minimization problems are often composed of several blocks which can be treated using different types of oracles, e.g., full gradient, stochastic gradient or coordinate derivative. Optimal oracle complexity is known and achievable separately for the full gradient case, the stochastic gradient case, etc. We propose a generic framework to combine optimal algorithms for different types of oracles in order to achieve separate optimal oracle complexity for each block, i.e. for each block the corresponding oracle is called the optimal number of times for a given accuracy. As a particular example, we demonstrate that for a combination of a full gradient oracle and either a stochastic gradient oracle or a coordinate descent oracle our approach leads to the optimal number of oracle calls separately for the full gradient part and the stochastic/coordinate descent part.

  • G. Dong, M. Hintermüller, K. Papafitsoros, Optimization with learning-informed differential equation constraints and its applications, ESAIM. Control, Optimisation and Calculus of Variations, 28 (2022), pp. 3/1--3/44, DOI 10.1051/cocv/2021100 .
    Abstract
    Inspired by applications in optimal control of semilinear elliptic partial differential equations and physics-integrated imaging, differential equation constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis and on numerical methods for problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial neural network based approximations are addressed. Moreover, for each of the two inspiring applications analytical details are presented and numerical results are provided.

  • E. Gorbunov, P. Dvurechensky, A. Gasnikov, An accelerated method for derivative-free smooth stochastic convex optimization, SIAM Journal on Optimization, 32 (2022), pp. 1210--1238, DOI 10.1137/19M1259225 .
    Abstract
    We consider an unconstrained problem of minimization of a smooth convex function which is only available through noisy observations of its values, the noise consisting of two parts. Similar to stochastic optimization problems, the first part is of a stochastic nature. On the opposite, the second part is an additive noise of an unknown nature, but bounded in the absolute value. In the two-point feedback setting, i.e. when pairs of function values are available, we propose an accelerated derivative-free algorithm together with its complexity analysis. The complexity bound of our derivative-free algorithm is only by a factor of √n larger than the bound for accelerated gradient-based algorithms, where n is the dimension of the decision variable. We also propose a non-accelerated derivative-free algorithm with a complexity bound similar to the stochastic-gradient-based algorithm, that is, our bound does not have any dimension-dependent factor. Interestingly, if the solution of the problem is sparse, for both our algorithms, we obtain better complexity bound if the algorithm uses a 1-norm proximal setup, rather than the Euclidean proximal setup, which is a standard choice for unconstrained problems.

  • S. Mohammadi, T. Streubel, L. Klock, A. Lutti, K. Pine, S. Weber, L. Edwards, P. Scheibe, G. Ziegler, J. Gallinat, S. Kuhn, M. Callaghan, N. Weiskopf, K. Tabelow, Error quantification in multi-parameter mapping facilitates robust estimation and enhanced group level sensitivity, NeuroImage, 262 (2022), pp. 119529/1--119529/14, DOI 10.1016/j.neuroimage.2022.119529 .
    Abstract
    Multi-Parameter Mapping (MPM) is a comprehensive quantitative neuroimaging protocol that enables estimation of four physical parameters (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD, and magnetization transfer saturation MTsat) that are sensitive to microstructural tissue properties such as iron and myelin content. Their capability to reveal microstructural brain differences, however, is tightly bound to controlling random noise and artefacts (e.g. caused by head motion) in the signal. Here, we introduced a method to estimate the local error of PD, R1, and MTsat maps that captures both noise and artefacts on a routine basis without requiring additional data. To investigate the method's sensitivity to random noise, we calculated the model-based signal-to-noise ratio (mSNR) and showed in measurements and simulations that it correlated linearly with an experimental raw-image-based SNR map. We found that the mSNR varied with MPM protocols, magnetic field strength (3T vs. 7T) and MPM parameters: it halved from PD to R1 and decreased from PD to MTsat by a factor of 3--4. Exploring the artefact-sensitivity of the error maps, we generated robust MPM parameters using two successive acquisitions of each contrast and the acquisition-specific errors to down-weight erroneous regions. The resulting robust MPM parameters showed reduced variability at the group level as compared to their single-repeat or averaged counterparts. The error and mSNR maps may better inform power calculations by accounting for local data quality variations across measurements. Code to compute the mSNR maps and robustly combined MPM maps is available in the open-source hMRI toolbox.

  • J.M. Oeschger, K. Tabelow, S. Mohammadi, Axisymmetric diffusion kurtosis imaging with Rician bias correction: A simulation study, Magnetic Resonance in Medicine, 89 (2023), pp. 787--799 (published online on 05.10.2022), DOI 10.1002/mrm.29474 .

  • D. Tiapkin, A. Gasnikov, P. Dvurechensky, Stochastic saddle-point optimization for the Wasserstein barycenter problem, Optimization Letters, 16 (2022), pp. 2145--2175, DOI 10.1007/s11590-021-01834-w .

  • P. Dvurechensky, D. Kamzolov, A. Lukashevich, S. Lee, E. Ordentlich, C.A. Uribe, A. Gasnikov, Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization, EURO Journal on Computational Optimization, 10 (2022), pp. 100045/1--100045/35, DOI 10.1016/j.ejco.2022.100045 .

  • P. Dvurechensky, K. Safin, S. Shtern, M. Staudigl, Generalized self-concordant analysis of Frank--Wolfe algorithms, Mathematical Programming, 198 (2023), pp. 255--323 (published online on 29.01.2022), DOI 10.1007/s10107-022-01771-1 .
    Abstract
    Projection-free optimization via different variants of the Frank--Wolfe method has become one of the cornerstones of large scale optimization for machine learning and computational statistics. Numerous applications within these fields involve the minimization of functions with self-concordance-like properties. Such generalized self-concordant functions do not necessarily feature a Lipschitz continuous gradient, nor are they strongly convex, making them a challenging class of functions for first-order methods. Indeed, in a number of applications, such as inverse covariance estimation or distance-weighted discrimination problems in binary classification, the loss is given by a generalized self-concordant function having potentially unbounded curvature. For such problems projection-free minimization methods have no theoretical convergence guarantee. This paper closes this apparent gap in the literature by developing provably convergent Frank--Wolfe algorithms with standard O(1/k) convergence rate guarantees. Based on these new insights, we show how these sublinearly convergent methods can be accelerated to yield linearly convergent projection-free methods, by either relying on the availability of a local linear minimization oracle, or a suitable modification of the away-step Frank--Wolfe method.
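
As a reminder of the basic scheme the paper builds on, here is a minimal Frank--Wolfe iteration with the standard 2/(k+2) step size, applied to a toy quadratic over the probability simplex (the example problem is our own, not from the paper):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Projection-free Frank--Wolfe: each step calls a linear minimization
    oracle (LMO) over the feasible set and moves toward its output."""
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))              # argmin_{s in C} <grad(x), s>
        gamma = 2.0 / (k + 2.0)       # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: minimize ||x - b||^2 over the probability simplex,
# where the LMO just picks the best vertex.
b = np.array([0.1, 0.5, 0.2])
grad = lambda x: 2.0 * (x - b)

def lmo_simplex(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x0 = np.ones(3) / 3.0
x_fw = frank_wolfe(grad, lmo_simplex, x0)
```

Each iterate is a convex combination of vertices, so feasibility is maintained without ever projecting onto the simplex.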

  • M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, H. Sun, Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization, Numerical Functional Analysis and Optimization, 43 (2022), pp. 887--932, DOI 10.1080/01630563.2022.2069812 .
    Abstract
    Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal convolution type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization, while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment represents a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper level objective is proposed in order to automatically select these parameters. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established, and for the numerical solution, two Newton type methods for the solution of the lower level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced, respectively. Denoising tests confirm that automatically selected distributed regularization parameters lead in general to improved reconstructions when compared to results for scalar parameters.
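
The effect of a spatially varying regularization weight can be illustrated with a toy 1D weighted TV denoising sketch, solved by gradient descent on a Huber-smoothed surrogate. The weight profile is hand-picked here, whereas the papers above determine it automatically by bilevel optimization:

```python
import numpy as np

def weighted_tv_denoise(f, alpha, iters=2000, step=0.02, eps=0.05):
    """Toy 1D spatially weighted TV denoising,
        min_u  0.5*||u - f||^2 + sum_i alpha_i * |u_{i+1} - u_i|,
    via gradient descent on a Huber-smoothed surrogate
    (|t| replaced by sqrt(t^2 + eps^2))."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)
        g = alpha * d / np.sqrt(d ** 2 + eps ** 2)  # derivative of smoothed |d|
        grad = u - f
        grad[:-1] -= g
        grad[1:] += g
        u = u - step * grad
    return u

# Hand-picked weight map: strong smoothing in flat regions, weak
# regularization around the (known) jump at index 50.
rng = np.random.default_rng(3)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
alpha = np.full(99, 0.5)
alpha[45:55] = 0.05
u = weighted_tv_denoise(f, alpha)
```

Large weights flatten the noise in homogeneous regions while the locally reduced weight lets the edge survive, which is exactly the adaptation the bilevel framework automates.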

  • A. Gasnikov, D. Dvinskikh, P. Dvurechensky, D. Kamzolov, V. Matyukhin, D. Pasechnyuk, N. Tupitsa, A. Chernov, Accelerated meta-algorithm for convex optimization, Computational Mathematics and Mathematical Physics, 61 (2021), pp. 17--28, DOI 10.1134/S096554252101005X .

  • F. Stonyakin, A. Tyurin, A. Gasnikov, P. Dvurechensky, A. Agafonov, D. Dvinskikh, M. Alkousa, D. Pasechnyuk, S. Artamonov, V. Piskunova, Inexact model: A framework for optimization and variational inequalities, Optimization Methods & Software, published online in July 2021, DOI 10.1080/10556788.2021.1924714 .
    Abstract
    In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows to obtain many known methods as a special case, the list including accelerated gradient method, composite optimization methods, level-set methods, proximal methods. The idea of the framework is based on constructing an inexact model of the main problem component, i.e. objective function in optimization or operator in variational inequalities. Besides reproducing known results, our framework allows to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem smoothness. We also generalize our framework for strongly convex objectives and strongly monotone variational inequalities.

  • N. Tupitsa, P. Dvurechensky, A. Gasnikov, S. Guminov, Alternating minimization methods for strongly convex optimization, Journal of Inverse and Ill-Posed Problems, 29 (2021), pp. 721--739, DOI 10.1515/jiip-2020-0074 .
    Abstract
    We consider alternating minimization procedures for convex optimization problems with variables divided into many blocks, each block being amenable to minimization with respect to its variables while the other variable blocks are frozen. In the case of two blocks, we prove a linear convergence rate for the alternating minimization procedure under the Polyak-Łojasiewicz condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-block setting, we provide an accelerated alternating minimization procedure with a linear rate depending on the square root of the condition number, as opposed to the condition number itself for the non-accelerated method.
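
The two-block procedure analyzed in the paper can be sketched on a simple strongly convex quadratic, where each block subproblem has a closed-form minimizer (a toy example chosen for illustration, not from the paper):

```python
def alternating_minimization(x, y, iters=50):
    """Two-block alternating minimization for the strongly convex function
        f(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2,
    minimizing exactly over one block while the other is frozen."""
    for _ in range(iters):
        x = (1.0 + y) / 2.0   # argmin_x f(x, y), y frozen
        y = (2.0 + x) / 2.0   # argmin_y f(x, y), x frozen
    return x, y

x, y = alternating_minimization(0.0, 0.0)
# converges linearly to the joint minimizer (4/3, 5/3)
```

Here the error in x contracts by a factor of 1/4 per sweep, a concrete instance of the linear rate the paper establishes in much greater generality.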

  • P. Dvurechensky, M. Staudigl, S. Shtern, First-order methods for convex optimization, EURO Journal on Computational Optimization, 9 (2021), pp. 100015/1--100015/27, DOI 10.1016/j.ejco.2021.100015 .
    Abstract
    First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low accuracy solutions at low computational complexity which makes them an attractive set of tools in large-scale optimization problems. In this survey we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
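
A representative member of the surveyed class is the proximal gradient method. Below is a minimal sketch (ISTA with constant step 1/L) for the l1-regularized least-squares problem; the test problem is synthetic and chosen for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth part followed by the prox of the
    nonsmooth part, with constant step 1/L."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Synthetic sparse recovery problem (noiseless, for illustration).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 4]] = [2.0, -1.5]
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```

The accelerated and non-Euclidean variants covered in the survey modify the gradient step and the prox geometry but keep this splitting structure.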

  • Y.Y. Park, J. Polzehl, S. Chatterjee, A. Brechmann, M. Fiecas, Semiparametric modeling of time-varying activation and connectivity in task-based fMRI data, Computational Statistics & Data Analysis, 150 (2020), pp. 107006/1--107006/14, DOI 10.1016/j.csda.2020.107006 .
    Abstract
    In functional magnetic resonance imaging (fMRI), there is a rise in evidence that time-varying functional connectivity, or dynamic functional connectivity (dFC), which measures changes in the synchronization of brain activity, provides additional information on brain networks not captured by time-invariant (i.e., static) functional connectivity. While there have been many developments for statistical models of dFC in resting-state fMRI, there remains a gap in the literature on how to simultaneously model both dFC and time-varying activation when the study participants are undergoing experimental tasks designed to probe a cognitive process of interest. A method is proposed to estimate dFC between two regions of interest (ROIs) in task-based fMRI where the activation effects are also allowed to vary over time. The proposed method, called TVAAC (time-varying activation and connectivity), uses penalized splines to model both time-varying activation effects and time-varying functional connectivity and uses the bootstrap for statistical inference. Simulation studies show that TVAAC can estimate both static and time-varying activation and functional connectivity, while ignoring time-varying activation effects would lead to poor estimation of dFC. An empirical illustration is provided by applying TVAAC to analyze two subjects from an event-related fMRI learning experiment.

  • J. Polzehl, K. Papafitsoros, K. Tabelow, Patch-wise adaptive weights smoothing in R, Journal of Statistical Software, 95 (2020), pp. 1--27, DOI 10.18637/jss.v095.i06 .
    Abstract
    Image reconstruction from noisy data has a long history of methodological development and is based on a variety of ideas. In this paper we introduce a new method called patch-wise adaptive smoothing, which extends the Propagation-Separation approach by using comparisons of local patches of image intensities to define locally adaptive weighting schemes for an improved balance of reduced variability and bias in the reconstruction result. We present the implementation of the new method in the R package aws and demonstrate its properties on a number of examples in comparison with other state-of-the-art image reconstruction methods.
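
The patch-comparison idea can be illustrated with a toy 1D sketch in Python (the actual implementation is the R package aws; the bandwidths below are illustrative, and the statistical penalty of Propagation-Separation is simplified here to a plain patch-distance kernel):

```python
import numpy as np

def patch_adaptive_smooth(y, half_patch=2, search=10, h=0.5):
    """Toy 1D patch-based adaptive smoothing: each point is averaged over a
    search window, with weights that decay with the distance between local
    patches of intensities, so little weight crosses an edge."""
    n = len(y)
    p = half_patch
    ypad = np.pad(y, p, mode="edge")
    out = np.empty(n)
    for i in range(n):
        pi = ypad[i:i + 2 * p + 1]                    # patch around point i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w = np.empty(hi - lo)
        for k, j in enumerate(range(lo, hi)):
            pj = ypad[j:j + 2 * p + 1]
            w[k] = np.exp(-np.mean((pi - pj) ** 2) / h ** 2)
        out[i] = np.sum(w * y[lo:hi]) / np.sum(w)
    return out

# Noisy step function: noise is reduced without blurring the jump.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
y_hat = patch_adaptive_smooth(y)
```

Points on opposite sides of the jump have very dissimilar patches and therefore receive near-zero mutual weight, which is what preserves the discontinuity.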

  • E.A. Vorontsova, A. Gasnikov, E.A. Gorbunov, P. Dvurechensky, Accelerated gradient-free optimization methods with a non-Euclidean proximal operator, Automation and Remote Control, 80 (2019), pp. 1487--1501.

  • L. Calatroni, K. Papafitsoros, Analysis and automatic parameter selection of a variational model for mixed Gaussian and salt & pepper noise removal, Inverse Problems and Imaging, 35 (2019), pp. 114001/1--114001/37, DOI 10.1088/1361-6420/ab291a .
    Abstract
    We analyse a variational regularisation problem for mixed noise removal that was recently proposed in [14]. The data discrepancy term of the model combines L1 and L2 terms in an infimal convolution fashion and it is appropriate for the joint removal of Gaussian and Salt & Pepper noise. In this work we perform a finer analysis of the model which emphasises the balancing effect of the two parameters appearing in the discrepancy term. Namely, we study the asymptotic behaviour of the model for large and small values of these parameters and we compare it to the corresponding variational models with L1 and L2 data fidelity. Furthermore, we compute exact solutions for simple data functions taking the total variation as regulariser. Using these theoretical results, we then analytically study a bilevel optimisation strategy for automatically selecting the parameters of the model by means of a training set. Finally, we report some numerical results on the selection of the optimal noise model via such strategy which confirm the validity of our analysis and the use of popular data models in the case of "blind" model selection.
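
The L1-L2 infimal convolution in the data term acts pointwise like a Huber-type function: quadratic for small residuals (Gaussian regime) and linear for large ones (salt & pepper regime). A small numerical check of this identity, assuming the standard scaling α|·| and (β/2)|·|² for the two terms:

```python
import numpy as np

def infconv_numeric(t, alpha, beta, grid):
    """Numerically evaluate the pointwise infimal convolution
    min_s  alpha*|s| + (beta/2)*(t - s)^2 over a fine grid of s."""
    return np.min(alpha * np.abs(grid) + 0.5 * beta * (t - grid) ** 2)

def huber(t, alpha, beta):
    """Closed form of the same infimal convolution: quadratic for
    |t| <= alpha/beta, linear beyond that threshold."""
    return np.where(np.abs(t) <= alpha / beta,
                    0.5 * beta * t ** 2,
                    alpha * np.abs(t) - alpha ** 2 / (2 * beta))

alpha, beta = 1.0, 2.0
grid = np.linspace(-5, 5, 200001)
for t in [-3.0, -0.2, 0.0, 0.4, 2.5]:
    assert abs(infconv_numeric(t, alpha, beta, grid) - huber(t, alpha, beta)) < 1e-6
```

This makes the "balancing effect" of the two parameters concrete: α/β sets the residual magnitude at which the model switches between the Gaussian and the impulse-noise interpretation.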

  • A. Gasnikov, P. Dvurechensky, F. Stonyakin, A.A. Titov, An adaptive proximal method for variational inequalities, Computational Mathematics and Mathematical Physics, 59 (2019), pp. 836--841.

  • S. Guminov, Y. Nesterov, P. Dvurechensky, A. Gasnikov, Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems, Doklady Mathematics, 99 (2019), pp. 125--128.

  • K. Tabelow, E. Balteau, J. Ashburner, M.F. Callaghan, B. Draganski, G. Helms, F. Kherif, T. Leutritz, A. Lutti, Ch. Phillips, E. Reimer, L. Ruthotto, M. Seif, N. Weiskopf, G. Ziegler, S. Mohammadi, hMRI -- A toolbox for quantitative MRI in neuroscience and clinical research, NeuroImage, 194 (2019), pp. 191--210, DOI 10.1016/j.neuroimage.2019.01.029 .
    Abstract
    Quantitative magnetic resonance imaging (qMRI) finds increasing application in neuroscience and clinical research due to its sensitivity to micro-structural properties of brain tissue, e.g. axon, myelin, iron and water concentration. We introduce the hMRI--toolbox, an easy-to-use open-source tool for handling and processing of qMRI data presented together with an example dataset. This toolbox allows the estimation of high-quality multi-parameter qMRI maps (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD and magnetisation transfer MT) that can be used for calculation of standard and novel MRI biomarkers of tissue microstructure as well as improved delineation of subcortical brain structures. Embedded in the Statistical Parametric Mapping (SPM) framework, it can be readily combined with existing SPM tools for estimating diffusion MRI parameter maps and benefits from the extensive range of available tools for high-accuracy spatial registration and statistical inference. As such the hMRI--toolbox provides an efficient, robust and simple framework for using qMRI data in neuroscience and clinical research.

  • M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, Analytical aspects of spatially adapted total variation regularisation, Journal of Mathematical Analysis and Applications, 454 (2017), pp. 891--935, DOI 10.1016/j.jmaa.2017.05.025 .
    Abstract
    In this paper we study the structure of solutions of the one dimensional weighted total variation regularisation problem, motivated by its application in signal recovery tasks. We study in depth the relationship between the weight function and the creation of new discontinuities in the solution. A partial semigroup property relating the weight function and the solution is shown and analytic solutions for simple data functions are computed. We prove that the weighted total variation minimisation problem is well-posed even in the case of vanishing weight function, despite the lack of coercivity. This is based on the fact that the total variation of the solution is bounded by the total variation of the data, a result that is also shown here. Finally, the relationship to the corresponding weighted fidelity problem is explored, showing that the two problems can produce completely different solutions even for very simple data functions.

  • M. Hintermüller, C.N. Rautenberg, S. Rösel, Density of convex intersections and applications, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473 (2017), pp. 20160919/1--20160919/28, DOI 10.1098/rspa.2016.0919 .
    Abstract
    In this paper we address density properties of intersections of convex sets in several function spaces. Using the concept of Gamma-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite element discretizations of sets associated to convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems.

  • M. Hintermüller, C.N. Rautenberg, T. Wu, A. Langer, Optimal selection of the regularization function in a generalized total variation model. Part II: Algorithm, its analysis and numerical tests, Journal of Mathematical Imaging and Vision, 59 (2017), pp. 515--533.
    Abstract
    Based on the generalized total variation model and its analysis pursued in part I (WIAS Preprint no. 2235), in this paper a continuous, i.e., infinite dimensional, projected gradient algorithm and its convergence analysis are presented. The method computes a stationary point of a regularized bilevel optimization problem for simultaneously recovering the image as well as determining a spatially distributed regularization weight. Further, its numerical realization is discussed and results obtained for image denoising and deblurring as well as Fourier and wavelet inpainting are reported on.

  • M. Hintermüller, C.N. Rautenberg, Optimal selection of the regularization function in a weighted total variation model. Part I: Modeling and theory, Journal of Mathematical Imaging and Vision, 59 (2017), pp. 498--514.

  • M. Deppe, K. Tabelow, J. Krämer, J.-G. Tenberge, P. Schiffler, S. Bittner, W. Schwindt, F. Zipp, H. Wiendl, S.G. Meuth, Evidence for early, non-lesional cerebellar damage in patients with multiple sclerosis: DTI measures correlate with disability, atrophy, and disease duration, Multiple Sclerosis Journal, 22 (2016), pp. 73--84, DOI 10.1177/1352458515579439 .

  • K. Schildknecht, K. Tabelow, Th. Dickhaus, More specific signal detection in functional magnetic resonance imaging by false discovery rate control for hierarchically structured systems of hypotheses, PLOS ONE, 11 (2016), pp. e0149016/1--e0149016/21, DOI 10.1371/journal.pone.0149016 .

  • H.U. Voss, J.P. Dyke, K. Tabelow, N. Schiff, D. Ballon, Magnetic resonance advection imaging of cerebrovascular pulse dynamics, Journal of Cerebral Blood Flow and Metabolism, 37 (2017), pp. 1223--1235 (published online on 24.05.2016), DOI 10.1177/0271678x16651449 .

  • M. Deliano, K. Tabelow, R. König, J. Polzehl, Improving accuracy and temporal resolution of learning curve estimation for within- and across-session analysis, PLOS ONE, 11 (2016), pp. e0157355/1--e0157355/23, DOI 10.1371/journal.pone.0157355 .
    Abstract
    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. In this approach, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors for single subjects as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from a shuttle-box avoidance experiment with Mongolian gerbils, our approach revealed performance changes occurring at multiple temporal scales within and across training sessions which were otherwise obscured in the conventional analysis. The proper assessment of the behavioral dynamics of learning at a high temporal resolution clarified and extended current descriptions of the process of avoidance learning. It further disambiguated the interpretation of neurophysiological signal changes recorded during training in relation to learning.

  • J. Polzehl, K. Tabelow, Low SNR in diffusion MRI models, Journal of the American Statistical Association, 111 (2016), pp. 1480--1490, DOI 10.1080/01621459.2016.1222284 .
    Abstract
    Noise is a common issue for all magnetic resonance imaging (MRI) techniques such as diffusion MRI and obviously leads to variability of the estimates in any model describing the data. Increasing the spatial resolution in MR experiments further diminishes the signal-to-noise ratio (SNR). However, at low SNR the expected signal deviates from the true value. Common modeling approaches therefore lead to a bias in the estimated model parameters. Adjustments require an analysis of the data generating process and a characterization of the resulting distribution of the imaging data. We provide an adequate quasi-likelihood approach that employs these characteristics. We elaborate on the effects of typical data preprocessing and analyze the bias effects related to low SNR for the example of the diffusion tensor model in diffusion MRI. We then demonstrate the relevance of the problem using data from the Human Connectome Project.
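
The low-SNR bias discussed above stems from the Rician distribution of magnitude MR data: the expected magnitude exceeds the true signal. A short check of the standard closed-form Rician mean against a Monte Carlo simulation (illustration only; the paper's quasi-likelihood machinery goes well beyond this formula):

```python
import numpy as np
from scipy.special import hyp1f1

def rician_mean(nu, sigma):
    """Expected magnitude signal under Rician statistics:
    E[S] = sigma * sqrt(pi/2) * 1F1(-1/2; 1; -nu^2 / (2 sigma^2)).
    At low SNR (nu comparable to sigma) this deviates substantially
    from the true signal nu, which is the source of the bias."""
    return sigma * np.sqrt(np.pi / 2) * hyp1f1(-0.5, 1.0, -nu**2 / (2 * sigma**2))

# Monte Carlo check: magnitude of a complex Gaussian with mean (nu, 0).
rng = np.random.default_rng(0)
nu, sigma = 1.0, 1.0                    # SNR = 1: strong bias regime
s = np.abs(nu + sigma * rng.standard_normal(10**6)
           + 1j * sigma * rng.standard_normal(10**6))
# the sample mean matches the analytic Rician mean, and both exceed nu
```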

  • K. Tabelow, S. Mohammadi, N. Weiskopf, J. Polzehl, POAS4SPM --- A toolbox for SPM to denoise diffusion MRI data, Neuroinformatics, 13 (2015), pp. 19--29.
    Abstract
    We present an implementation of a recently developed noise reduction algorithm for dMRI data, called multi-shell position orientation adaptive smoothing (msPOAS), as a toolbox for SPM. The method intrinsically adapts to the structures of different size and shape in dMRI and hence avoids blurring typically observed in non-adaptive smoothing. We give examples for the usage of the toolbox and explain the determination of experiment-dependent parameters for an optimal performance of msPOAS.

  • K. Tabelow, H.U. Voss, J. Polzehl, Local estimation of the noise level in MRI using structural adaptation, Medical Image Analysis, 20 (2015), pp. 76--86.
    Abstract
    We present a method for local estimation of the signal-dependent noise level in magnetic resonance images. The procedure uses a multi-scale approach to adaptively infer on local neighborhoods with similar data distribution. It exploits a maximum-likelihood estimator for the local noise level. The validity of the method was evaluated on repeated diffusion data of a phantom and simulated data using T1-data corrupted with artificial noise. Simulation results are compared with a recently proposed estimate. The method was applied to a high-resolution diffusion dataset to obtain improved diffusion model estimation results and to demonstrate its usefulness in methods for enhancing diffusion data.

  • J. Krämer, M. Deppe, K. Göbel, K. Tabelow, H. Wiendl, S.G. Meuth, Recovery of thalamic microstructural damage after Shiga toxin 2-associated hemolytic-uremic syndrome, Journal of the Neurological Sciences, 356 (2015), pp. 175--183.

  • S. Mohammadi, K. Tabelow, L. Ruthotto, Th. Feiweier, J. Polzehl, N. Weiskopf, High-resolution diffusion kurtosis imaging at 3T enabled by advanced post-processing, Frontiers in Neuroscience, 8 (2015), pp. 427/1--427/14.

  • S. Becker, K. Tabelow, S. Mohammadi, N. Weiskopf, J. Polzehl, Adaptive smoothing of multi-shell diffusion-weighted magnetic resonance data by msPOAS, NeuroImage, 95 (2014), pp. 90--105.
    Abstract
    In this article we present a noise reduction method (msPOAS) for multi-shell diffusion-weighted magnetic resonance data. To our knowledge, this is the first smoothing method which allows simultaneous smoothing of all q-shells. It is applied directly to the diffusion weighted data and consequently allows subsequent analysis by any model. Due to its adaptivity, the procedure avoids blurring of the inherent structures and preserves discontinuities. MsPOAS extends the recently developed position-orientation adaptive smoothing (POAS) procedure to multi-shell experiments. At the same time it considerably simplifies and accelerates the calculations. The behavior of the algorithm msPOAS is evaluated on diffusion-weighted data measured on a single shell and on multiple shells.

  • M. Welvaert, K. Tabelow, R. Seurinck, Y. Rosseel, Adaptive smoothing as inference strategy: More specificity for unequally sized or neighboring regions, Neuroinformatics, 11 (2013), pp. 435--445.
    Abstract
    Although spatial smoothing of fMRI data can serve multiple purposes, increasing the sensitivity of activation detection is probably its greatest benefit. However, this increased detection power comes with a loss of specificity when non-adaptive smoothing (i.e. the standard in most software packages) is used. Simulation studies and analyses of experimental data were performed using the R packages neuRosim and fmri. In these studies, we systematically investigated the effect of spatial smoothing on the power and number of false positives in two particular cases that are often encountered in fMRI research: (1) single condition activation detection for regions that differ in size, and (2) multiple condition activation detection for neighbouring regions. Our results demonstrate that adaptive smoothing is superior in both cases because fewer false positives are introduced by the spatial smoothing process compared to standard Gaussian smoothing or FDR inference of unsmoothed data.

  • S. Becker, K. Tabelow, H.U. Voss, A. Anwander, R.M. Heidemann, J. Polzehl, Position-orientation adaptive smoothing of diffusion weighted magnetic resonance data (POAS), Medical Image Analysis, 16 (2012), pp. 1142--1155.
    Abstract
    We introduce an algorithm for diffusion weighted magnetic resonance imaging data enhancement based on structural adaptive smoothing in both space and diffusion direction. The method, called POAS, does not refer to a specific model for the data, like the diffusion tensor or higher order models. It works by embedding the measurement space into a space with defined metric and group operations, in this case the Lie group of three-dimensional Euclidean motion SE(3). Subsequently, pairwise comparisons of the values of the diffusion weighted signal are used for adaptation. The position-orientation adaptive smoothing preserves the edges of the observed fine and anisotropic structures. The POAS-algorithm is designed to reduce noise directly in the diffusion weighted images and consequently also to reduce bias and variability of quantities derived from the data for specific models. We evaluate the algorithm on simulated and experimental data and demonstrate that it can be used to reduce the number of applied diffusion gradients and hence acquisition time while achieving similar quality of data, or to improve the quality of data acquired in a clinically feasible scan time setting.

  • K. Tabelow, H.U. Voss, J. Polzehl, Modeling the orientation distribution function by mixtures of angular central Gaussian distributions, Journal of Neuroscience Methods, 203 (2012), pp. 200--211.
    Abstract
    In this paper we develop a tensor mixture model for diffusion weighted imaging data using an automatic model selection criterion for the order of tensor components in a voxel. We show that the weighted orientation distribution function for this model can be expanded into a mixture of angular central Gaussian distributions. We show properties of this model in extensive simulations and in a high angular resolution experimental data set. The results suggest that the model may improve imaging of cerebral fiber tracts. We demonstrate how inference on canonical model parameters may give rise to new clinical applications.

  • K. Tabelow, J.D. Clayden, P. Lafaye DE Micheaux, J. Polzehl, V.J. Schmid, B. Whitcher, Image analysis and statistical inference in neuroimaging with R, NeuroImage, 55 (2011), pp. 1686--1693.
    Abstract
    R is a language and environment for statistical computing and graphics. It can be considered an alternative implementation of the S language developed in the 1970s and 1980s for data analysis and graphics (Becker and Chambers, 1984; Becker et al., 1988). The R language is part of the GNU project and offers versions that compile and run on almost every major operating system currently available. We highlight several R packages built specifically for the analysis of neuroimaging data in the context of functional MRI, diffusion tensor imaging, and dynamic contrast-enhanced MRI. We review their methodology and give an overview of their capabilities for neuroimaging. In addition we summarize some of the current activities in the area of neuroimaging software development in R.

  • K. Tabelow, J. Polzehl, Statistical parametric maps for functional MRI experiments in R: The package fmri, Journal of Statistical Software, 44 (2011), pp. 1--21.
    Abstract
    The package fmri is provided for the analysis of single-run functional Magnetic Resonance Imaging data. It implements structural adaptive smoothing methods with signal detection for adaptive noise reduction which avoids blurring of the edges of activation areas. The package covers fMRI analysis from time series modeling to signal detection and provides publication-ready images.
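
The basic analysis pipeline that such a package automates can be sketched as a minimal general linear model (GLM): build a design matrix by convolving the stimulus with a hemodynamic response function, fit by least squares, and test the activation regressor. The HRF and design below are toy choices, not the package's actual interface:

```python
import numpy as np

def glm_activation(y, stimulus, hrf):
    """Minimal fMRI GLM for one voxel time series: convolve the stimulus
    with the HRF to build the design matrix, fit by least squares, and
    return the t-statistic of the activation regressor."""
    x = np.convolve(stimulus, hrf)[: len(y)]
    X = np.column_stack([x, np.ones(len(y))])       # regressor + intercept
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = res[0] / dof
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

# Block design (alternating on/off) with a crude gamma-shaped toy HRF.
rng = np.random.default_rng(2)
stimulus = np.tile(np.concatenate([np.ones(10), np.zeros(10)]), 8)
t_ax = np.arange(0, 20)
hrf = t_ax ** 5 * np.exp(-t_ax)
hrf /= hrf.sum()
y = 2.0 * np.convolve(stimulus, hrf)[:160] + rng.standard_normal(160)
t_stat = glm_activation(y, stimulus, hrf)
```

The structural adaptive smoothing of the package then operates on maps of such statistics across voxels, rather than smoothing the raw data non-adaptively.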

  • J. Bardin, J. Fins, D. Katz, J. Hersh, L. Heier, K. Tabelow, J. Dyke, D. Ballon, N. Schiff, H. Voss, Dissociations between behavioral and fMRI-based evaluations of cognitive function after brain injury, Brain, 134 (2011), pp. 769--782.
    Abstract
    Functional neuroimaging methods hold promise for the identification of cognitive function and communication capacity in some severely brain-injured patients who may not retain sufficient motor function to demonstrate their abilities. We studied seven severely brain-injured patients and a control group of 14 subjects using a novel hierarchical functional magnetic resonance imaging assessment utilizing mental imagery responses. Whereas the control group showed consistent and accurate (for communication) blood-oxygen-level-dependent responses without exception, the brain-injured subjects showed a wide variation in the correlation of blood-oxygen-level-dependent responses and overt behavioural responses. Specifically, the brain-injured subjects dissociated bedside and functional magnetic resonance imaging-based command following and communication capabilities. These observations reveal significant challenges in developing validated functional magnetic resonance imaging-based methods for clinical use and raise interesting questions about underlying brain function assayed using these methods in brain-injured subjects.

  • J. Polzehl, K. Tabelow, Beyond the Gaussian model in diffusion-weighted imaging: The package dti, Journal of Statistical Software, 44 (2011), pp. 1--26.
    Abstract
    Diffusion-weighted imaging is a magnetic resonance based method to investigate tissue microstructure, especially in the human brain, via water diffusion. Since the standard diffusion tensor model for the acquired data fails in a large portion of the brain voxels, more sophisticated models have been developed. Here, we report on the package dti and how some of these models can be used with the package.

  • E. Diederichs, A. Juditsky, V. Spokoiny, Ch. Schütte, Sparse non-Gaussian component analysis, IEEE Transactions on Information Theory, 56 (2010), pp. 3033--3047.

  • J. Polzehl, H.U. Voss, K. Tabelow, Structural adaptive segmentation for statistical parametric mapping, NeuroImage, 52 (2010), pp. 515--523.
    Abstract
    Functional Magnetic Resonance Imaging inherently involves noisy measurements and a severe multiple test problem. Smoothing is usually used to reduce the effective number of multiple comparisons and to locally integrate the signal and hence increase the signal-to-noise ratio. Here, we provide a new structural adaptive segmentation algorithm (AS) that naturally combines the signal detection with noise reduction in one procedure. Moreover, the new method is closely related to a recently proposed structural adaptive smoothing algorithm and preserves shape and spatial extent of activation areas without blurring the borders.

  • K. Tabelow, V. Piëch, J. Polzehl, H.U. Voss, High-resolution fMRI: Overcoming the signal-to-noise problem, Journal of Neuroscience Methods, 178 (2009), pp. 357--365.
    Abstract
    Increasing the spatial resolution in functional Magnetic Resonance Imaging (fMRI) inherently lowers the signal-to-noise ratio (SNR). In order to still detect functionally significant activations in high-resolution images, spatial smoothing of the data is required. However, conventional non-adaptive smoothing comes with a reduced effective resolution, foiling the benefit of the higher acquisition resolution. We show how our recently proposed structural adaptive smoothing procedure for functional MRI data can improve signal detection of high-resolution fMRI experiments despite the lower SNR. The procedure is evaluated on human visual and sensory-motor mapping experiments. In these applications, the higher resolution could be fully utilized and high-resolution experiments outperformed normal-resolution experiments in terms of both statistical significance and information content.

  • J. Polzehl, K. Tabelow, Structural adaptive smoothing in diffusion tensor imaging: The R package dti, Journal of Statistical Software, 31 (2009), pp. 1--24.
    Abstract
    Diffusion Weighted Imaging has become and will certainly continue to be an important tool in medical research and diagnostics. Data obtained with Diffusion Weighted Imaging are characterized by a high noise level. Thus, estimation of quantities like anisotropy indices or the main diffusion direction may be significantly compromised by noise in clinical or neuroscience applications. Here, we present a new package dti for R, which provides functions for the analysis of diffusion weighted data within the diffusion tensor model. This includes smoothing by a recently proposed structural adaptive smoothing procedure based on the Propagation-Separation approach in the context of the widely used Diffusion Tensor Model. We extend the procedure and show how a correction for Rician bias can be incorporated. We use a heteroscedastic nonlinear regression model to estimate the diffusion tensor. The smoothing procedure naturally adapts to different structures of different size and thus avoids oversmoothing edges and fine structures. We illustrate the usage and capabilities of the package through some examples.
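The diffusion tensor model underlying these packages relates the measured signal to the tensor via S = S0 exp(-b gᵀDg). The simplest estimator, a log-linearized least-squares fit, can be sketched in plain NumPy (a hypothetical noise-free toy, not the package's heteroscedastic nonlinear regression; the tensor values and gradient scheme are assumptions):

```python
import numpy as np

def fit_tensor(S0, S, bval, bvecs):
    """Log-linearized least-squares estimate of the diffusion tensor D
    from the model S = S0 * exp(-bval * g^T D g)."""
    y = -np.log(S / S0) / bval
    g = np.asarray(bvecs, dtype=float)
    # design matrix for the 6 unique entries Dxx, Dyy, Dzz, Dxy, Dxz, Dyz
    A = np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                         2 * g[:, 0] * g[:, 1],
                         2 * g[:, 0] * g[:, 2],
                         2 * g[:, 1] * g[:, 2]])
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

def fractional_anisotropy(D):
    """FA from the eigenvalues (0 = isotropic, 1 = maximally anisotropic)."""
    ev = np.linalg.eigvalsh(D)
    return np.sqrt(1.5 * np.sum((ev - ev.mean()) ** 2) / np.sum(ev ** 2))

# noise-free synthetic voxel: anisotropic tensor, 6 gradient directions
D_true = np.diag([1.5e-3, 0.4e-3, 0.4e-3])
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
S = 1000.0 * np.exp(-1000.0 * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
D_hat = fit_tensor(1000.0, S, 1000.0, bvecs)
```

In the noise-free case the fit recovers the tensor exactly; with Rician noise, bias correction and adaptive smoothing as described above become essential.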

  • K. Tabelow, J. Polzehl, A.M. Uluğ, J.P. Dyke, R. Watts, L.A. Heier, H.U. Voss, Accurate localization of brain activity in presurgical fMRI by structure adaptive smoothing, IEEE Transactions on Medical Imaging, 27 (2008), pp. 531--537.
    Abstract
    An important problem in the analysis of fMRI experiments is to achieve some noise reduction of the data without blurring the shape of the activation areas. As a novel solution to this problem, the Propagation-Separation approach (PS), a structure adaptive smoothing method, has been proposed recently. PS adapts to different shapes of activation areas by generating a spatial structure corresponding to similarities and differences between time series in adjacent locations. In this paper we demonstrate how this method results in more accurate localization of brain activity. First, it is shown in numerical simulations that PS is superior to Gaussian smoothing with respect to the accurate description of the shape of activation clusters and results in fewer false detections. Second, in a study of 37 presurgical planning cases we found that PS and Gaussian smoothing often yield different results, and we present examples showing aspects of the superiority of PS as applied to presurgical planning.

  • K. Tabelow, J. Polzehl, V. Spokoiny, H.U. Voss, Diffusion tensor imaging: Structural adaptive smoothing, NeuroImage, 39 (2008), pp. 1763--1773.
    Abstract
    Diffusion Tensor Imaging (DTI) data is characterized by a high noise level. Thus, estimation errors of quantities like anisotropy indices or the main diffusion direction used for fiber tracking are relatively large and may significantly confound the accuracy of DTI in clinical or neuroscience applications. Besides pulse sequence optimization, noise reduction by smoothing the data can be pursued as a complementary approach to increase the accuracy of DTI. Here, we suggest an anisotropic structural adaptive smoothing procedure, which is based on the Propagation-Separation method and preserves the structures seen in DTI and their different sizes and shapes. It is applied to artificial phantom data and a brain scan. We show that this method significantly improves the quality of the estimate of the diffusion tensor and hence enables one either to reduce the number of scans or to enhance the input for subsequent analysis such as fiber tracking.

  • D. Divine, J. Polzehl, F. Godtliebsen, A propagation-separation approach to estimate the autocorrelation in a time-series, Nonlinear Processes in Geophysics, 15 (2008), pp. 591--599.

  • V. Katkovnik, V. Spokoiny, Spatially adaptive estimation via fitted local likelihood techniques, IEEE Transactions on Signal Processing, 56 (2008), pp. 873--886.
    Abstract
    This paper offers a new technique for spatially adaptive estimation. The local likelihood is exploited for nonparametric modelling of observations and estimated signals. The approach is based on the assumption of local homogeneity of the signal: for every point there exists a neighborhood in which the signal can be well approximated by a constant. The fitted local likelihood statistic is used for the selection of an adaptive size of this neighborhood. The algorithm is developed for quite a general class of observations subject to the exponential distribution. The estimated signal can be uni- or multivariate. We demonstrate the good performance of the new algorithm for Poissonian image denoising and compare the new method with the intersection of confidence intervals (ICI) technique that also exploits a selection of an adaptive neighborhood for estimation.

  • O. Minet, H. Gajewski, J.A. Griepentrog, J. Beuthan, The analysis of laser light scattering during rheumatoid arthritis by image segmentation, Laser Physics Letters, 4 (2007), pp. 604--610.

  • H.U. Voss, K. Tabelow, J. Polzehl, O. Tchernichovski, K. Maul, D. Salgado-Commissariat, D. Ballon, S.A. Helekar, Functional MRI of the zebra finch brain during song stimulation suggests a lateralized response topography, Proceedings of the National Academy of Sciences of the United States of America, 104 (2007), pp. 10667--10672.
    Abstract
    Electrophysiological and activity-dependent gene expression studies of birdsong have contributed to the understanding of the neural representation of natural sounds. However, we have limited knowledge about the overall spatial topography of song representation in the avian brain. Here, we adapt the noninvasive functional MRI method in mildly sedated zebra finches (Taeniopygia guttata) to localize and characterize song driven brain activation. Based on the blood oxygenation level-dependent signal, we observed a differential topographic responsiveness to playback of bird's own song, tutor song, conspecific song, and a pure tone as a nonsong stimulus. The bird's own song caused a stronger response than the tutor song or tone in higher auditory areas. This effect was more pronounced in the medial parts of the forebrain. We found left-right hemispheric asymmetry in sensory responses to songs, with significant discrimination between stimuli observed only in the right hemisphere. This finding suggests that perceptual responses might be lateralized in zebra finches. In addition to establishing the feasibility of functional MRI in sedated songbirds, our results demonstrate spatial coding of song in the zebra finch forebrain, based on developmental familiarity and experience.

  • J. Polzehl, K. Tabelow, Adaptive smoothing of digital images: The R package adimpro, Journal of Statistical Software, 19 (2007), pp. 1--17.
    Abstract
    Digital imaging has become omnipresent in the past years, with a bulk of applications ranging from medical imaging to photography. When pushing the limits of resolution and sensitivity, noise has always been a major issue. However, commonly used non-adaptive filters achieve noise reduction only at the cost of reduced effective spatial resolution. Here we present a new package adimpro for R, which implements the Propagation-Separation approach by Polzehl and Spokoiny (2006) for smoothing digital images. This method naturally adapts to structures of different size in the image and thus avoids oversmoothing edges and fine structures. We extend the method for imaging data with spatial correlation. Furthermore, we show how the estimation of the dependence between variance and mean value can be included. We illustrate the use of the package through some examples.

  • J. Polzehl, K. Tabelow, fmri: A package for analyzing fmri data, Newsletter of the R Project for Statistical Computing, 7 (2007), pp. 13--17.

  • K. Tabelow, J. Polzehl, H.U. Voss, V. Spokoiny, Analyzing fMRI experiments with structural adaptive smoothing procedures, NeuroImage, 33 (2006), pp. 55--62.
    Abstract
    Data from functional magnetic resonance imaging (fMRI) consist of time series of brain images which are characterized by a low signal-to-noise ratio. In order to reduce noise and to improve signal detection, the fMRI data are spatially smoothed. However, the common application of a Gaussian filter does this at the cost of loss of information on the spatial extent and shape of the activation area. We suggest using the propagation-separation procedures introduced by Polzehl and Spokoiny (2006) instead. We show that this significantly improves the information on the spatial extent and shape of the activation region with similar results for the noise reduction. To complete the statistical analysis, signal detection is based on thresholds defined by random field theory. Effects of adaptive and non-adaptive smoothing are illustrated by artificial examples and an analysis of experimental data.

  • G. Blanchard, M. Kawanabe, M. Sugiyama, V. Spokoiny, K.-R. Müller, In search of non-Gaussian components of a high-dimensional distribution, Journal of Machine Learning Research, 7 (2006), pp. 247--282.
    Abstract
    Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the “non-Gaussian subspace” within a very general semi-parametric framework. Our proposed method, called NGCA (Non-Gaussian Component Analysis), is essentially based on the fact that we can construct a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector which belongs to the low-dimensional non-Gaussian target subspace up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method.

  • H. Gajewski, J.A. Griepentrog, A descent method for the free energy of multicomponent systems, Discrete and Continuous Dynamical Systems, 15 (2006), pp. 505--528.

  • A. Goldenshluger, V. Spokoiny, Recovering convex edges of image from noisy tomographic data, IEEE Transactions on Information Theory, 52 (2006), pp. 1322--1334.

  • J. Polzehl, V. Spokoiny, Propagation-separation approach for local likelihood estimation, Probability Theory and Related Fields, 135 (2006), pp. 335--362.
    Abstract
    The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including, e.g., regression, density, Poisson and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced by the authors [Adaptive weights smoothing with applications to image restoration. J. R. Stat. Soc., Ser. B 62, 335--354 (2000)] in the context of image denoising. The main idea of the method is to describe the largest possible local neighborhood of every design point in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable theoretical non-asymptotic results on properties of the new algorithm. This includes the “propagation” property which particularly yields the root-$n$ consistency of the resulting estimate in the homogeneous case. We also state an “oracle” result which implies rate optimality of the estimate under usual smoothness conditions and a “separation” result which explains the sensitivity of the method to structural changes.

  • J. Griepentrog, On the unique solvability of a nonlocal phase separation problem for multicomponent systems, Banach Center Publications, 66 (2004), pp. 153--164.

  • A. Goldenshluger, V. Spokoiny, On the shape-from-moments problem and recovering edges from noisy Radon data, Probability Theory and Related Fields, 128 (2004), pp. 123--140.

  • J. Polzehl, V. Spokoiny, Image denoising: Pointwise adaptive approach, The Annals of Statistics, 31 (2003), pp. 30--57.
    Abstract
    A new method of pointwise adaptation has been proposed and studied in Spokoiny (1998) in the context of estimation of piecewise smooth univariate functions. The present paper extends that method to estimation of bivariate grey-scale images composed of large homogeneous regions with smooth edges and observed with noise on a gridded design. The proposed estimator $\hat f(x)$ at a point $x$ is simply the average of observations over a window $\hat U(x)$ selected in a data-driven way. The theoretical properties of the procedure are studied for the case of piecewise constant images. We present a nonasymptotic bound for the accuracy of estimation at a specific grid point $x$ as a function of the number of pixels $n$, of the distance from the point of estimation to the closest boundary, and of smoothness properties and orientation of this boundary. It is also shown that the proposed method provides a near-optimal rate of estimation near edges and inside homogeneous regions. We briefly discuss algorithmic aspects and the complexity of the procedure. The numerical examples demonstrate a reasonable performance of the method and are in agreement with the theoretical results. An example from satellite (SAR) imaging illustrates the applicability of the method.

  • J. Polzehl, V. Spokoiny, Functional and dynamic Magnetic Resonance Imaging using vector adaptive weights smoothing, Journal of the Royal Statistical Society. Series C. Applied Statistics, 50 (2001), pp. 485--501.
    Abstract
    We consider the problem of statistical inference for functional and dynamic Magnetic Resonance Imaging (MRI). A new approach is proposed which extends the adaptive weights smoothing (AWS) procedure from Polzehl and Spokoiny (2000) originally designed for image denoising. We demonstrate how the AWS method can be applied for time series of images, which typically occur in functional and dynamic MRI. It is shown how signal detection in functional MRI and analysis of dynamic MRI can benefit from spatially adaptive smoothing. The performance of the procedure is illustrated using real and simulated data.

  • J. Polzehl, V. Spokoiny, Adaptive Weights Smoothing with applications to image restoration, Journal of the Royal Statistical Society. Series B. Statistical Methodology, 62 (2000), pp. 335--354.
    Abstract
    We propose a new method of nonparametric estimation which is based on locally constant smoothing with an adaptive choice of weights for every pair of data-points. Some theoretical properties of the procedure are investigated. Then we demonstrate the performance of the method on some simulated univariate and bivariate examples and compare it with other nonparametric methods. Finally we discuss applications of this procedure to magnetic resonance and satellite imaging.
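The adaptive-weights idea in this last entry—pairwise weights that combine spatial closeness with a penalty on the difference of current estimates—can be caricatured in a few lines. The following is a hypothetical 1-D Python toy under simplifying assumptions (Gaussian kernels, fixed bandwidth schedule), not the authors' procedure:

```python
import numpy as np

def aws_1d(y, n_iter=8, h0=1.0, lam=4.0, sigma2=1.0):
    """Toy adaptive-weights smoothing in 1-D: each point is replaced by a
    weighted average whose weights decay both with distance and with the
    squared difference of the current estimates, so averaging does not
    cross sharp jumps. The location bandwidth h grows over iterations."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    theta = y.copy()
    h = h0
    for _ in range(n_iter):
        d_loc = (x[:, None] - x[None, :]) ** 2 / h ** 2        # location penalty
        d_stat = (theta[:, None] - theta[None, :]) ** 2 / (lam * sigma2)
        w = np.exp(-d_loc - d_stat)
        theta = (w @ y) / w.sum(axis=1)                        # weighted average
        h *= 1.25                                              # extend neighborhood
    return theta

rng = np.random.default_rng(1)
signal = np.r_[np.zeros(30), 4.0 * np.ones(30)]   # piecewise-constant truth
est = aws_1d(signal + rng.normal(0.0, 1.0, 60))
```

On a noisy step function the estimate is smoothed within each plateau while the jump itself survives, which is exactly the edge-preserving behavior the abstract describes.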

  Contributions to Collected Editions

  • A.H. Erhardt, K. Tsaneva-Atanasova, G.T. Lines, E.A. Martens, Editorial: Dynamical systems, PDEs and networks for biomedical applications: Mathematical modeling, analysis and simulations, 10 of Front. Phys., Sec. Statistical and Computational Physics, Frontiers, Lausanne, Switzerland, 2023, pp. 01--03, DOI 10.3389/fphy.2022.1101756 .

  • A. Sadiev, M. Danilova, E. Gorbunov, S. Horvath, G. Gidel, P. Dvurechensky, P. Richtarik, High-probability bounds for stochastic optimization and variational inequalities: The case of unbounded variance, in: Proceedings of the 40th International Conference on Machine Learning, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., 202 of Proceedings of Machine Learning Research, 2023, pp. 29563--29648.

  • A. Beznosikov, P. Dvurechensky, A. Koloskova, V. Samokhin, S.U. Stich, A. Gasnikov, Decentralized local stochastic extra-gradient for variational inequalities, in: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, eds., 2022, pp. 38116--38133.

  • E. Gorbunov, M. Danilova, D. Dobre, P. Dvurechensky, A. Gasnikov, G. Gidel, Clipped stochastic methods for variational inequalities with heavy-tailed noise, in: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, eds., 2022, pp. 31319--31332.

  • D. Yarmoshik, A. Rogozin, O.O. Khamisov, P. Dvurechensky, A. Gasnikov, Decentralized convex optimization under affine constraints for power systems control, in: Mathematical Optimization Theory and Operations Research. MOTOR 2022, P. Pardalos, M. Khachay, V. Mazalov, eds., 13367 of Lecture Notes in Computer Science, Springer, Cham, 2022, pp. 62--75, DOI 10.1007/978-3-031-09607-5_5 .

  • A. Agafonov, P. Dvurechensky, G. Scutari, A. Gasnikov, D. Kamzolov, A. Lukashevich, A. Daneshmand, An accelerated second-order method for distributed stochastic optimization, in: 60th IEEE Conference on Decision and Control (CDC), IEEE, 2021, pp. 2407--2413, DOI 10.1109/CDC45484.2021.9683400 .

  • A. Daneshmand, G. Scutari, P. Dvurechensky, A. Gasnikov, Newton method over networks is fast up to the statistical precision, in: Proceedings of the 38th International Conference on Machine Learning, 139 of Proceedings of Machine Learning Research, 2021, pp. 2398--2409.

  • E. Gladin, A. Sadiev, A. Gasnikov, P. Dvurechensky, A. Beznosikov, M. Alkousa, Solving smooth min-min and min-max problems by mixed oracle algorithms, in: Mathematical Optimization Theory and Operations Research: Recent Trends, A. Strekalovsky, Y. Kochetov, T. Gruzdeva, A. Orlov, eds., 1476 of Communications in Computer and Information Science book series (CCIS), Springer International Publishing, Basel, 2021, pp. 19--40, DOI 10.1007/978-3-030-86433-0_2 .
    Abstract
    In this paper, we consider two types of problems that have some similarity in their structure, namely, min-min problems and min-max saddle-point problems. Our approach is based on considering the outer minimization problem as a minimization problem with an inexact oracle. This inexact oracle is calculated via an inexact solution of the inner problem, which is either a minimization or a maximization problem. Our main assumption is that the available oracle is mixed: it is only possible to evaluate the gradient w.r.t. the outer block of variables which corresponds to the outer minimization problem, whereas for the inner problem, only a zeroth-order oracle is available. To solve the inner problem, we use the accelerated gradient-free method with a zeroth-order oracle. To solve the outer problem, we use either an inexact variant of Vaidya's cutting-plane method or a variant of the accelerated gradient method. As a result, we propose a framework that leads to non-asymptotic complexity bounds for both min-min and min-max problems. Moreover, we estimate separately the number of first- and zeroth-order oracle calls, which are sufficient to reach any desired accuracy.

  • S. Guminov, P. Dvurechensky, N. Tupitsa, A. Gasnikov, On a combination of alternating minimization and Nesterov's momentum, in: Proceedings of the 38th International Conference on Machine Learning, 139 of Proceedings of Machine Learning Research, 2021, pp. 3886--3898.
    Abstract
    Alternating minimization (AM) optimization algorithms have been known for a long time and are of importance in machine learning problems, among which we are mostly motivated by approximating optimal transport distances. AM algorithms assume that the decision variable is divided into several blocks and minimization in each block can be done explicitly or cheaply with high accuracy. The ubiquitous Sinkhorn algorithm can be seen as an alternating minimization algorithm for the dual to the entropy-regularized optimal transport problem. We introduce an accelerated alternating minimization method with a $1/k^2$ convergence rate, where $k$ is the iteration counter. This improves over the known bound $1/k$ for general AM methods and for the Sinkhorn algorithm. Moreover, our algorithm converges faster than gradient-type methods in practice as it is free of the choice of the step size and is adaptive to the local smoothness of the problem. We show that the proposed method is primal-dual, meaning that if we apply it to a dual problem, we can reconstruct the solution of the primal problem with the same convergence rate. We apply our method to the entropy-regularized optimal transport problem and show experimentally that it outperforms Sinkhorn's algorithm.
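The alternating structure this abstract refers to is easiest to see in Sinkhorn's algorithm itself: each half-step minimizes the dual of entropy-regularized optimal transport exactly in one block of scaling variables. A minimal NumPy sketch of standard Sinkhorn (not the accelerated method of the paper; problem sizes and the regularization value are assumptions):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=1000):
    """Sinkhorn iterations for entropy-regularized optimal transport:
    alternating exact minimization over the two blocks of dual scaling
    variables u and v. Returns the transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # exact block minimization in v
        u = a / (K @ v)     # exact block minimization in u
    return u[:, None] * K * v[None, :]

x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2          # squared-distance cost
a = np.full(5, 0.2)                         # source marginal
b = np.array([0.1, 0.1, 0.2, 0.3, 0.3])    # target marginal
P = sinkhorn(a, b, C)
```

At convergence the plan matches both prescribed marginals; each block update is closed-form, which is precisely why AM-type acceleration applies.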

  • A. Rogozin, M. Bochko, P. Dvurechensky, A. Gasnikov, V. Lukoshkin, An accelerated method for decentralized distributed stochastic optimization over time-varying graphs, in: 2021 IEEE 60th Annual Conference on Decision and Control (CDC), IEEE, 2021, pp. 3367--3373, DOI 10.1109/CDC45484.2021.9683400 .

  • A. Sadiev, A. Beznosikov, P. Dvurechensky, A. Gasnikov, Zeroth-order algorithms for smooth saddle-point problems, in: Mathematical Optimization Theory and Operations Research: Recent Trends, A. Strekalovsky, Y. Kochetov, T. Gruzdeva, A. Orlov, eds., 1476 of Communications in Computer and Information Science book series (CCIS), Springer International Publishing, Basel, 2021, pp. 71--85, DOI 10.1007/978-3-030-86433-0_5 .
    Abstract
    Saddle-point problems have recently gained increased attention from the machine learning community, mainly due to applications in training Generative Adversarial Networks using stochastic gradients. At the same time, in some applications only a zeroth-order oracle is available. In this paper, we propose several algorithms to solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles, and estimate their convergence rate and its dependence on the dimension n of the variable. In particular, our analysis shows that in the case when the feasible set is a direct product of two simplices, our convergence rate for the stochastic term is only by a factor worse than that of first-order methods. Finally, we demonstrate the practical performance of our zeroth-order methods on practical problems.

  • D. Kamzolov, A. Gasnikov, P. Dvurechensky, Optimal combination of tensor optimization methods, in: Optimization and Applications. OPTIMA 2020, N. Olenev, Y. Evtushenko, M. Khachay, V. Malkova, eds., 12422 of Lecture Notes in Computer Science, Springer International Publishing, Cham, 2020, pp. 166--183, DOI 10.1007/978-3-030-62867-3_13 .
    Abstract
    We consider the minimization problem of a sum of a number of functions having Lipschitz p-th order derivatives with different Lipschitz constants. In this case, to accelerate optimization, we propose a general framework allowing to obtain near-optimal oracle complexity for each function in the sum separately, meaning, in particular, that the oracle for a function with a lower Lipschitz constant is called a smaller number of times. As a building block, we extend the current theory of tensor methods and show how to generalize near-optimal tensor methods to work with an inexact tensor step. Further, we investigate the situation when the functions in the sum have Lipschitz derivatives of a different order. For this situation, we propose a generic way to separate the oracle complexity between the parts of the sum. Our method is not optimal, which leads to an open problem of the optimal combination of oracles of different orders.

  • N. Tupitsa, A. Gasnikov, P. Dvurechensky, S. Guminov, Strongly convex optimization for the dual formulation of optimal transport, in: Mathematical Optimization Theory and Operations Research, A. Kononov, M. Khachay, V.A. Kalyagin, P. Pardalos, eds., 1275 of Theoretical Computer Science and General Issues, Springer International Publishing AG, Cham, 2020, pp. 192--204, DOI 10.1007/978-3-030-58657-7_17 .
    Abstract
    In this paper we experimentally check the hypothesis that the dual problem to the discrete entropy-regularized optimal transport problem possesses strong convexity on a certain compact set. We present a numerical technique for estimating the strong convexity parameter and show that such an estimate increases the performance of an accelerated alternating minimization algorithm for strongly convex functions applied to the considered problem.

  • N. Tupitsa, P. Dvurechensky, A. Gasnikov, C.A. Uribe, Multimarginal optimal transport by accelerated alternating minimization, in: 2020 59th IEEE Conference on Decision and Control (CDC), IEEE, 2020, pp. 6132--6137, DOI 10.1109/CDC42340.2020.9304010 .

  • D. Dvinskikh, A. Ogaltsov, A. Gasnikov, P. Dvurechensky, V. Spokoiny, On the line-search gradient methods for stochastic optimization, in: Proceedings of the 21st IFAC World Congress, R. Findeisen, S. Hirche, K. Janschek, M. Mönnigmann, eds., 53 of IFAC PapersOnLine, Elsevier, 2020, pp. 1715--1720, DOI 10.1016/j.ifacol.2020.12.2284 .
    Abstract
    In this paper we propose several adaptive gradient methods for stochastic optimization. Our methods are based on Armijo-type line search and they simultaneously adapt to the unknown Lipschitz constant of the gradient and to the variance of the stochastic approximation of the gradient. We consider an accelerated gradient descent for convex problems and gradient descent for non-convex problems. In the experiments we demonstrate the superiority of our methods over existing adaptive methods, e.g. AdaGrad and Adam.
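The Armijo-type adaptation mentioned in this abstract can be sketched for the deterministic case (a hypothetical Python illustration of the backtracking idea, not the paper's stochastic method): the trial smoothness constant is halved after every accepted step and doubled until a sufficient-decrease test passes, so no Lipschitz constant needs to be known in advance.

```python
import numpy as np

def gd_armijo(f, grad_f, x0, n_iter=100, L0=1.0):
    """Gradient descent with Armijo-type backtracking: step size 1/L,
    where L is halved after each accepted step (optimistic restart) and
    doubled until f(x - g/L) <= f(x) - |g|^2 / (2L) holds."""
    x = np.asarray(x0, dtype=float)
    L = L0
    for _ in range(n_iter):
        g = grad_f(x)
        L = max(L / 2.0, 1e-12)          # try a larger step first
        while True:
            x_new = x - g / L
            if f(x_new) <= f(x) - g @ g / (2.0 * L):
                break                    # sufficient decrease achieved
            L *= 2.0                     # backtrack: shrink the step
        x = x_new
    return x

# ill-conditioned quadratic whose smoothness constant (10) is "unknown"
A = np.diag([1.0, 10.0])
x_star = gd_armijo(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, [1.0, 1.0])
```

Because L tracks the local curvature along the trajectory, the method takes larger steps than a worst-case fixed step 1/L_max would allow.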

  • P. Dvurechensky, A. Gasnikov, S. Omelchenko, A. Tiurin, A stable alternative to Sinkhorn's algorithm for regularized optimal transport, in: Mathematical Optimization Theory and Operations Research. MOTOR 2020, A. Kononov, M. Khachay, V.A. Kalyagin, P. Pardalos, eds., Lecture Notes in Computer Science, Springer, Cham, 2020, pp. 406--423, DOI 10.1007/978-3-030-49988-4_28 .

  • P. Dvurechensky, P. Ostroukhov, K. Safin, S. Shtern, M. Staudigl, Self-concordant analysis of Frank--Wolfe algorithms, in: Proceedings of the 37th International Conference on Machine Learning, H. Daumé III, A. Singh, eds., 119 of Proceedings of Machine Learning Research (online), 2020, pp. 2814--2824.
    Abstract
    Projection-free optimization via variants of the Frank-Wolfe (FW) method, a.k.a. the Conditional Gradient method, has become one of the cornerstones of optimization for machine learning, since in many cases the linear minimization oracle is much cheaper to implement than projections and some sparsity needs to be preserved. In a number of applications, e.g. Poisson inverse problems or quantum state tomography, the loss is given by a self-concordant (SC) function having unbounded curvature, implying absence of theoretical guarantees for the existing FW methods. We use the theory of SC functions to provide a new adaptive step size for FW methods and prove a global convergence rate of O(1/k) after k iterations. If the problem admits a stronger local linear minimization oracle, we construct a novel FW method with a linear convergence rate for SC functions.
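The projection-free mechanism described here is easiest to see over the probability simplex, where the linear minimization oracle reduces to picking a single vertex. The following is a minimal Python sketch with the standard 2/(k+2) step (the paper's actual contribution, an adaptive step for self-concordant objectives, is not reproduced; the quadratic objective is an assumption for illustration):

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iter=2000):
    """Frank-Wolfe over the probability simplex: the linear minimization
    oracle returns the vertex with the smallest gradient entry; iterates
    stay feasible as convex combinations, so no projection is needed."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # LMO: best vertex of the simplex
        x = x + 2.0 / (k + 2.0) * (s - x)  # standard diminishing step
    return x

# minimize ||x - c||^2 over the simplex; c lies inside, so the optimum is c
c = np.array([0.2, 0.3, 0.5])
x_hat = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Every iterate is a convex combination of simplex vertices, which is also why FW iterates are sparse early on: after k steps at most k+1 coordinates are nonzero.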

  • F. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C.A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, in: Proceedings of the 18th International Conference on Mathematical Optimization Theory and Operations Research (MOTOR 2019), M. Khachay, Y. Kochetov, P. Pardalos, eds., 11548 of Lecture Notes in Computer Science, Springer Nature Switzerland AG, Cham, Switzerland, 2019, pp. 97--114, DOI 10.1007/978-3-030-22629-9_8 .
    Abstract
    We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which as particular cases includes the inexact oracle [16] and the relative smoothness condition [36]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first one is clustering by the electoral model introduced in [41]. The second one is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third one is devoted to approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.

  • D. Dvinskikh, E. Gorbunov, A. Gasnikov, P. Dvurechensky, C.A. Uribe, On primal and dual approaches for distributed stochastic convex optimization over networks, in: 2019 IEEE 58th Conference on Decision and Control (CDC), IEEE Xplore, 2019, pp. 7435--7440, DOI 10.1109/CDC40024.2019.9029798 .
    Abstract
    We introduce a primal-dual stochastic gradient oracle method for distributed convex optimization problems over networks. We show that the proposed method is optimal in terms of communication steps. Additionally, we propose a new analysis method for the rate of convergence in terms of duality gap and probability of large deviations. This analysis is based on a new technique that allows bounding the distance between the iteration sequence and the optimal point. By a proper choice of batch size, we can guarantee that this distance equals (up to a constant) the distance between the starting point and the solution.

  • M. Hintermüller, A. Langer, C.N. Rautenberg, T. Wu, Adaptive regularization for image reconstruction from subsampled data, in: Imaging, Vision and Learning Based on Optimization and PDEs IVLOPDE, Bergen, Norway, August 29 -- September 2, 2016, X.-Ch. Tai, E. Bae, M. Lysaker, eds., Mathematics and Visualization, Springer International Publishing, Berlin, 2018, pp. 3--26, DOI 10.1007/978-3-319-91274-5 .
    Abstract
    Choices of regularization parameters are central to variational methods for image restoration. In this paper, a spatially adaptive (or distributed) regularization scheme is developed based on localized residuals, which properly balances the regularization weight between regions containing image details and homogeneous regions. Surrogate iterative methods are employed to handle given subsampled data in transformed domains, such as Fourier or wavelet data. In this respect, this work extends the spatially variant regularization technique previously established in [15], which relies on the given data being degraded images only. Numerical experiments for the reconstruction from partial Fourier data and for wavelet inpainting demonstrate the efficiency of the newly proposed approach.

  • N. Buzun, A. Suvorikova, V. Spokoiny, Multiscale parametric approach for change point detection, in: Proceedings of Information Technology and Systems 2016 -- The 40th Interdisciplinary Conference & School, Institute for Information Transmission Problems (Kharkevich Institute), Moscow, pp. 979--996.

  • K. Tabelow, J. Polzehl, SHOWCASE 21 -- Towards in-vivo histology, in: MATHEON -- Mathematics for Key Technologies, M. Grötschel, D. Hömberg, J. Sprekels, V. Mehrmann et al., eds., 1 of EMS Series in Industrial and Applied Mathematics, European Mathematical Society Publishing House, Zurich, 2014, pp. 378--379.

  • H. Lamecker, H.-Ch. Hege, K. Tabelow, J. Polzehl, F2 -- Image processing, in: MATHEON -- Mathematics for Key Technologies, M. Grötschel, D. Hömberg, J. Sprekels, V. Mehrmann et al., eds., 1 of EMS Series in Industrial and Applied Mathematics, European Mathematical Society Publishing House, Zurich, 2014, pp. 359--376.

  • K. Tabelow, Viele Tests --- viele Fehler, in: Besser als Mathe --- Moderne angewandte Mathematik aus dem MATHEON zum Mitmachen, K. Biermann, M. Grötschel, B. Lutz-Westphal, eds., Reihe: Populär, Vieweg+Teubner, Wiesbaden, 2010, pp. 117--120.

  • H. Gajewski, J.A. Griepentrog, A. Mielke, J. Beuthan, U. Zabarylo, O. Minet, Image segmentation for the investigation of scattered-light images when laser-optically diagnosing rheumatoid arthritis, in: Mathematics -- Key Technology for the Future, W. Jäger, H.-J. Krebs, eds., Springer, Heidelberg, 2008, pp. 149--161.

  Preprints, Reports, Technical Reports

  • G. Dong, M. Flaschel, M. Hintermüller, K. Papafitsoros, C. Sirotenko, K. Tabelow, Data-driven methods for quantitative imaging, Preprint no. 3105, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3105 .
    Abstract, PDF (7590 kByte)
    In the field of quantitative imaging, the image information at a pixel or voxel in an underlying domain entails crucial information about the imaged matter. This is particularly important in medical imaging applications, such as quantitative Magnetic Resonance Imaging (qMRI), where quantitative maps of biophysical parameters can characterize the imaged tissue and thus lead to more accurate diagnoses. Such quantitative values can also be useful in subsequent, automated classification tasks, for instance in order to discriminate normal from abnormal tissue. The accurate reconstruction of these quantitative maps is typically achieved by solving two coupled inverse problems which involve a (forward) measurement operator, typically ill-posed, and a physical process that links the sought quantitative parameters to the reconstructed qualitative image, given some underlying measurement data. In this review, considering qMRI as a prototypical application, we provide a mathematically oriented overview of how data-driven approaches can be employed in these inverse problems, ultimately improving the reconstruction of the associated quantitative maps.

  • L. Schmitz, N. Tapia, Free generators and Hoffman's isomorphism for the two-parameter shuffle algebra, Preprint no. 3087, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3087 .
    Abstract, PDF (239 kByte)
    Signature transforms have recently been extended to data indexed by two and more parameters. With free Lyndon generators, ideas from B-algebras and a novel two-parameter Hoffman exponential, we provide three classes of isomorphisms between the underlying two-parameter shuffle and quasi-shuffle algebras. In particular, we provide a Hopf algebraic connection to the (classical, one-parameter) shuffle algebra over the extended alphabet of connected matrix compositions.

  • P. Ostroukhov, R. Kamalov, P. Dvurechensky, A. Gasnikov, Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities, Preprint no. 2820, WIAS, Berlin, 2021, DOI 10.20347/WIAS.PREPRINT.2820 .
    Abstract, PDF (302 kByte)
    In this paper we propose three tensor methods for strongly-convex-strongly-concave saddle point problems (SPP). The first method is based on the assumption of higher-order smoothness (the derivative of order higher than 2 is Lipschitz-continuous) and achieves a linear convergence rate. Under additional assumptions of first- and second-order smoothness of the objective, we connect the first method with a locally superlinearly converging algorithm from the literature and develop a second method with global convergence and local superlinear convergence. The third method is a modified version of the second method, but with the focus on making the gradient of the objective small. Since we treat SPP as a particular case of variational inequalities, we also propose two methods for strongly monotone variational inequalities with the same complexity as those described above.

  • A. Neumann, N. Peitek, A. Brechmann, K. Tabelow, Th. Dickhaus, Utilizing anatomical information for signal detection in functional magnetic resonance imaging, Preprint no. 2806, WIAS, Berlin, 2021, DOI 10.20347/WIAS.PREPRINT.2806 .
    Abstract, PDF (2995 kByte)
    We consider the statistical analysis of functional magnetic resonance imaging (fMRI) data. As demonstrated in previous work, grouping voxels into regions (of interest) and carrying out a multiple test for signal detection on the basis of these regions typically leads to a higher sensitivity when compared with voxel-wise multiple testing approaches. In the case of a multi-subject study, we propose to define the regions for each subject separately based on their individual brain anatomy, represented, e.g., by so-called Aparc labels. The aggregation of the subject-specific evidence for the presence of signals in the different regions is then performed by means of a combination function for p-values. We apply the proposed methodology to real fMRI data and demonstrate that our approach can perform comparably to a two-stage approach, for which two independent experiments are needed, one for defining the regions and one for the actual signal detection.

  • F. Stonyakin, A. Gasnikov, A. Tyurin, D. Pasechnyuk, A. Agafonov, P. Dvurechensky, D. Dvinskikh, S. Artamonov, V. Piskunova, Inexact relative smoothness and strong convexity for optimization and variational inequalities by inexact model, Preprint no. 2709, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2709 .
    Abstract, PDF (463 kByte)
    In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows one to obtain many known methods as special cases, the list including the accelerated gradient method, composite optimization methods, level-set methods, and Bregman proximal methods. The idea of the framework is based on constructing an inexact model of the main problem component, i.e. the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal conditional gradient method and a universal method for variational inequalities with composite structure. These methods work for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem's smoothness. As a particular case of our general framework, we introduce relative smoothness for operators and propose an algorithm for VIs with such an operator. We also generalize our framework to relatively strongly convex objectives and strongly monotone variational inequalities.

  • L. Lücken, S. Yanchuk, Detection and storage of multivariate temporal sequences by spiking pattern reverberators, Preprint no. 2122, WIAS, Berlin, 2015, DOI 10.20347/WIAS.PREPRINT.2122 .
    Abstract, PDF (876 kByte)
    We consider networks of spiking coincidence detectors in continuous time. A single detector is a finite state machine that emits a pulsatile signal whenever the number of incoming inputs exceeds a threshold within a time window of some tolerance width. Such finite state models are well suited for hardware implementations of neural networks, as on integrated circuits (ICs) or field programmable gate arrays (FPGAs), but they also reflect the natural capability of many neurons to act as coincidence detectors. We pay special attention to a recurrent coupling structure, where the delays are tuned to a specific pattern. Applying this pattern as an external input leads to a self-sustained reverberation of the encoded pattern if the tuning is chosen correctly. In terms of the coupling structure, the tolerance and the refractory time of the individual coincidence detectors, we determine conditions for the uniqueness of the sustained activity, i.e., for the functionality of the network as an unambiguous pattern detector. We also present numerical experiments in which the functionality of the proposed pattern detector is demonstrated by replacing the simplistic finite state models with more realistic Hodgkin-Huxley neurons, and we consider the possibility of implementing several pattern detectors using a set of shared coincidence detectors. We propose that inhibitory connections may help to increase the precision of the pattern discrimination.

  • D. Hoffmann, K. Tabelow, Structural adaptive smoothing for single-subject analysis in SPM: The aws4SPM-toolbox, Technical Report no. 11, WIAS, Berlin, 2008, DOI 10.20347/WIAS.TECHREPORT.11 .
    Abstract
    There exists a variety of software tools for analyzing functional Magnetic Resonance Imaging data. A very popular one is the freely available SPM package by the Functional Imaging Laboratory at the Wellcome Department of Imaging Neuroscience. In order to enhance the signal-to-noise ratio, it provides the possibility to smooth the data in a pre-processing step with a Gaussian filter. However, this comes at the cost of reducing the effective resolution. In a series of recent papers it has been shown that using a structural adaptive smoothing algorithm based on the Propagation-Separation method allows for enhanced signal detection while preserving the shape and spatial extent of the activation areas. Here, we describe our implementation of this algorithm as a toolbox for SPM.

  Talks, Poster

  • J. Polzehl, Smoothing techniques for quantitative MR, colloquium, Marquette University, Department of Mathematical and Statistical Sciences, Milwaukee, USA, November 3, 2023.

  • P. Dvurechensky, Decentralized local stochastic extra-gradient for variational inequalities, Thematic Einstein Semester Conference on Mathematical Optimization for Machine Learning, September 13 - 15, 2023, Mathematics Research Cluster MATH+, Berlin, September 14, 2023.

  • M. Hintermüller, Learning-informed and PINN-based multi scale PDE models in optimization, Conference on Deep Learning for Computational Physics, July 4 - 6, 2023, UCL -- London's Global University, UK, July 6, 2023.

  • C. Sirotenko, Dictionary learning for an inverse problem in quantitative MRI, 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023), Minisymposium 00687 ``Recent advances in deep learning-based inverse and imaging problems'', August 20 - 25, 2023, Waseda University, Tokyo, Japan, August 22, 2023.

  • K. Tabelow, Mathematical research data management in interdisciplinary research, Workshop on Biophysics-based Modeling and Data Assimilation in Medical Imaging (Hybrid Event), WIAS Berlin, August 31, 2023.

  • M. Hintermüller, A descent algorithm for the optimal control of ReLU neural network informed PDEs based on approximate directional derivatives (online talk), Workshop 2: Structured Optimization Models in High-Dimensional Data Analysis, December 12 - 16, 2022, National University of Singapore, Institute for Mathematical Sciences, December 15, 2022.

  • M. Hintermüller, Optimization subject to learning informed PDEs, International Conference on Continuous Optimization -- ICCOPT/MOPTA 2022, Cluster ``PDE-Constrained Optimization'', July 23 - 28, 2022, Lehigh University, Bethlehem, Pennsylvania, USA, July 27, 2022.

  • M. Hintermüller, Optimization with learning-informed differential equations, Robustness and Resilience in Stochastic Optimization and Statistical Learning: Mathematical Foundations, May 20 - 24, 2022, Ettore Majorana Foundation and Centre for Scientific Culture, Erice, Italy, May 24, 2022.

  • K. Papafitsoros, Automatic distributed parameter selection of regularization functionals via bilevel optimization (online talk), SIAM Conference on Imaging Science (IS22) (Online Event), Minisymposium ``Statistics and Structure for Parameter and Image Restoration'', March 21 - 25, 2022, March 22, 2022.

  • K. Papafitsoros, Total variation methods in image reconstruction, Institute Colloquium, Foundation for Research and Technology Hellas (IACM-FORTH), Institute of Applied and Computational Mathematics, Heraklion, Greece, May 3, 2022.

  • K. Papafitsoros, Optimization with learning-informed nonsmooth differential equation constraints, Second Congress of Greek Mathematicians SCGM-2022, Session Numerical Analysis & Scientific Computing, July 4 - 8, 2022, National Technical University of Athens, July 6, 2022.

  • K. Tabelow, Neural MRI, Tandem tutorial ``Mathematics of Imaging'', Berlin Mathematics Research Center MATH+, February 18, 2022.

  • G. Dong, M. Hintermüller, K. Papafitsoros, Learning-informed model meets integrated physics-based method in quantitative MRI (online talk), 91st Annual Meeting of the International Association of Applied Mathematics and Mechanics, S21: ``Mathematical Signal and Image Processing'' (Online Event), March 15 - 19, 2021, Universität Kassel, March 18, 2021.

  • P. Dvurechensky, Accelerated gradient methods and their applications to Wasserstein barycenter problem (online talk), The XIII International Scientific Conference and Young Scientist School ``Theory and Numerics of Inverse and Ill-posed Problems'' (Online Event), April 12 - 22, 2021, Mathematical Center in Akademgorodok, Novosibirsk, Russian Federation, April 14, 2021.

  • P. Dvurechensky, Newton method over networks is fast up to the statistical precision (online talk), Thirty-eighth International Conference on Machine Learning (Online Event), July 18 - 24, 2021, Carnegie Mellon University, Pittsburgh, USA, July 20, 2021.

  • P. Dvurechensky, On a combination of alternating minimization and Nesterov's momentum (online talk), Thirty-eighth International Conference on Machine Learning (Online Event), July 18 - 24, 2021, Carnegie Mellon University, Pittsburgh, USA, July 20, 2021.

  • P. Dvurechensky, Primal-dual accelerated gradient methods with alternating minimization (online talk), Conference Optimization without Borders, July 12 - 18, 2021, Sirius University of Science and Technology, Sochi, Russian Federation, July 15, 2021.

  • P. Dvurechensky, Wasserstein barycenters from the computational perspective (online talk), Moscow Conference on Combinatorics and Applications (Online Event), May 31 - June 4, 2021, Moscow Institute of Physics and Technology, School of Applied Mathematics and Computer Science, Moscow, Russian Federation, June 2, 2021.

  • M. Hintermüller, Mathematics of quantitative MRI (online talk), The 5th International Symposium on Image Computing and Digital Medicine (ISICDM 2021), December 17 - 20, 2021, Guilin, China, December 18, 2021.

  • M. Hintermüller, Mathematics of quantitative imaging (online talk), MATH+ Thematic Einstein Semester on Mathematics of Imaging in Real-World Challenges, Berlin, November 12, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications, Online Conference ``Industrial and Applied Mathematics'', January 11 - 15, 2021, The Hong Kong University of Science and Technology, Institute for Advanced Study, January 13, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications (online talk), INdAM Workshop 2021: ``Analysis and Numerics of Design, Control and Inverse Problems'' (Online Event), July 1 - 7, 2021, Istituto Nazionale di Alta Matematica, Rome, Italy, July 5, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications (online talk), Deep Learning and Inverse Problems (MDLW02), September 27 - October 1, 2021, Isaac Newton Institute for Mathematical Sciences (Hybrid Event), Oxford, UK, October 1, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications (online talk), Seminar CMAI, George Mason University, Center for Mathematics and Artificial Intelligence, Fairfax, USA, March 19, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications (online talk), One World Optimization Seminar, Universität Wien, Fakultät für Mathematik, Austria, May 10, 2021.

  • M. Hintermüller, Optimization with learning-informed differential equation constraints and its applications (online talk), Oberseminar Numerical Optimization, Universität Konstanz, Fachbereich Mathematik und Statistik, December 14, 2021.

  • M. Hintermüller, Quantitative imaging: Physics integrated and machine learning based models in MRI (online talk), MATH-IMS Joint Applied Mathematics Colloquium Series, The Chinese University of Hong Kong, Center for Mathematical Artificial Intelligence, China, December 3, 2021.

  • M. Hintermüller, Semi-smooth Newton methods: Theory, numerical algorithms and applications II (online talk), International Forum on Frontiers of Intelligent Medical Image Analysis and Computing 2021 (Online Forum), Xidian University, Southeastern University, and Hong Kong Baptist University, China, July 26, 2021.

  • K. Papafitsoros, Optimization with learning-informed differential equation constraints and its applications (online talk), University of Graz, Institute of Mathematics and Scientific Computing, Austria, January 21, 2021.

  • K. Papafitsoros, Optimization with learning-informed differential equation constraints and its applications (online talk), Seminar Modern Methods in Applied Stochastics and Nonparametric Statistics, WIAS Berlin, March 16, 2021.

  • K. Papafitsoros, Total variation methods in image reconstruction, Departmental Seminar, National Technical University of Athens, Department of Mathematics, Greece, December 21, 2021.

  • M. Hintermüller, Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction, The Fifth International Conference on Numerical Analysis and Optimization, January 6 - 9, 2020, Sultan Qaboos University, Oman, January 6, 2020.

  • M. Hintermüller, Magnetic resonance fingerprinting of integrated physics models, Efficient Algorithms in Data Science, Learning and Computational Physics, January 12 - 16, 2020, Sanya, China, January 15, 2020.

  • K. Papafitsoros, Automatic distributed regularization parameter selection in Total Generalized Variation image reconstruction via bilevel optimization, Seminar, Southern University of Science and Technology, Shenzhen, China, January 17, 2020.

  • K. Papafitsoros, Automatic distributed regularization parameter selection in Total Generalized Variation image reconstruction via bilevel optimization, Seminar, Shenzhen MSU-BIT University, Department of Mathematics, Shenzhen, China, January 16, 2020.

  • K. Papafitsoros, Automatic distributed regularization parameter selection in imaging via bilevel optimization, Workshop on PDE Constrained Optimization under Uncertainty and Mean Field Games, January 28 - 30, 2020, WIAS, Berlin, January 30, 2020.

  • K. Papafitsoros, Spatially dependent parameter selection in TGV based image restoration via bilevel optimization, Efficient Algorithms in Data Science, Learning and Computational Physics, Sanya, China, January 12 - 16, 2020.

  • A. Gasnikov, P. Dvurechensky, E. Gorbunov, E. Vorontsova, D. Selikhanovych, C.A. Uribe, Optimal tensor methods in smooth convex and uniformly convex optimization, Conference on Learning Theory, COLT 2019, Phoenix, Arizona, USA, June 24 - 28, 2019.

  • A. Kroshnin, N. Tupitsa, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, C.A. Uribe, On the complexity of approximating Wasserstein barycenters, Thirty-sixth International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA, June 9 - 15, 2019.

  • D. Dvinskikh, Distributed decentralized (stochastic) optimization for dual friendly functions, Optimization and Statistical Learning, Les Houches, France, March 24 - 29, 2019.

  • P. Dvurechensky, On the complexity of optimal transport problems, Computational and Mathematical Methods in Data Science, Berlin, October 24 - 25, 2019.

  • K. Papafitsoros, A function space framework for structural total variation regularization with applications in inverse problems, Applied Inverse Problems Conference, Minisymposium ``Multi-Modality/Multi-Spectral Imaging and Structural Priors'', July 8 - 12, 2019, Grenoble, France, August 8, 2019.

  • K. Papafitsoros, Quantitative MRI: From fingerprinting to integrated physics-based models, Synergistic Reconstruction Symposium, November 3 - 6, 2019, Chester, UK, November 4, 2019.

  • J. Polzehl, K. Tabelow, Analyzing neuroimaging experiments within R, 2019 OHBM Annual Meeting, Organization for Human Brain Mapping, Rome, Italy, June 9 - 13, 2019.

  • K. Tabelow, Adaptive smoothing data from multi-parameter mapping, 7th Nordic-Baltic Biometric Conference, June 3 - 5, 2019, Vilnius University, Faculty of Medicine, Lithuania, June 5, 2019.

  • K. Tabelow, Model-based imaging for quantitative MRI, KoMSO Challenge-Workshop Mathematical Modeling of Biomedical Problems, December 12 - 13, 2019, Friedrich-Alexander-Universität Erlangen-Nürnberg, December 12, 2019.

  • K. Tabelow, Quantitative MRI for in-vivo histology, Doktorandenseminar, Berlin School of Mind and Brain, April 1, 2019.

  • K. Tabelow, Speaker of Neuroimaging Workshop, Workshop in Advanced Statistics: Good Scientific Practice for Neuroscientists, February 13 - 14, 2019, University of Zurich, Center for Reproducible Science, Switzerland.

  • M. Hintermüller, M. Holler, K. Papafitsoros, A function space framework for structural total variation regularization in inverse problems, MIA 2018 -- Mathematics and Image Analysis, Humboldt-Universität zu Berlin, January 15 - 17, 2018.

  • J. Polzehl, High resolution magnetic resonance imaging experiments -- Lessons in nonlinear statistical modeling, 3rd Leibniz MMS Days, February 28 - March 2, 2018, Wissenschaftszentrum Leipzig, March 1, 2018.

  • M. Hintermüller, (Pre)Dualization, dense embeddings of convex sets, and applications in image processing, HCM Workshop: Nonsmooth Optimization and its Applications, May 15 - 19, 2017, Hausdorff Center for Mathematics, Bonn, May 15, 2017.

  • M. Hintermüller, Bilevel optimization and applications in imaging, Workshop ``Emerging Developments in Interfaces and Free Boundaries'', January 22 - 28, 2017, Mathematisches Forschungsinstitut Oberwolfach.

  • M. Hintermüller, Bilevel optimization and applications in imaging, Mathematisches Kolloquium, Universität Wien, Austria, January 18, 2017.

  • M. Hintermüller, Bilevel optimization and some ``parameter learning'' applications in image processing, LMS Workshop ``Variational Methods Meet Machine Learning'', September 18, 2017, University of Cambridge, Centre for Mathematical Sciences, UK, September 18, 2017.

  • M. Hintermüller, On (pre)dualization, dense embeddings of convex sets, and applications in image processing, Seminar, Isaac Newton Institute, Programme ``Variational Methods and Effective Algorithms for Imaging and Vision'', Cambridge, UK, August 30, 2017.

  • M. Hintermüller, On (pre)dualization, dense embeddings of convex sets, and applications in image processing, University College London, Centre for Inverse Problems, UK, October 27, 2017.

  • J. Polzehl, Connectivity networks in neuroscience -- Construction and analysis, Summer School 2017: Probabilistic and Statistical Methods for Networks, August 21 - September 1, 2017, Technische Universität Berlin, Berlin Mathematical School.

  • J. Polzehl, Structural adaptation -- A statistical concept for image denoising, Seminar, Isaac Newton Institute, Programme ``Variational Methods and Effective Algorithms for Imaging and Vision'', Cambridge, UK, December 5, 2017.

  • J. Polzehl, Toward in-vivo histology of the brain, Neuro-Statistics: The Interface between Statistics and Neuroscience, University of Minnesota, School of Statistics (IRSA), Minneapolis, USA, May 5, 2017.

  • J. Polzehl, Towards in-vivo histology of the brain, Berlin Symposium 2017: Modern Statistical Methods From Data to Knowledge, December 14 - 15, 2017, organized by Indiana Laboratory of Biostatistical Analysis of Large Data with Structure (IL-BALDS), Berlin, December 14, 2017.

  • K. Tabelow, Ch. D'alonzo, L. Ruthotto, M.F. Callaghan, N. Weiskopf, J. Polzehl, S. Mohammadi, Removing the estimation bias due to the noise floor in multi-parameter maps, The International Society for Magnetic Resonance in Medicine (ISMRM) 25th Annual Meeting & Exhibition, Honolulu, USA, April 22 - 27, 2017.

  • K. Tabelow, Adaptive smoothing of multi-parameter maps, Berlin Symposium 2017: Modern Statistical Methods From Data to Knowledge, December 14 - 15, 2017, organized by Indiana Laboratory of Biostatistical Analysis of Large Data with Structure (IL-BALDS), Berlin, December 14, 2017.

  • K. Tabelow, High resolution MRI by variance and bias reduction, Channel Network Conference 2017 of the International Biometric Society (IBS), April 24 - 26, 2017, Hasselt University, Diepenbeek, Belgium, April 25, 2017.

  • K. Tabelow, To smooth or not to smooth in fMRI, Cognitive Neuroscience Seminar, Universitätsklinikum Hamburg-Eppendorf, Institut für Computational Neuroscience, April 4, 2017.

  • T. Wu, Bilevel optimization and applications in imaging sciences, August 24 - 25, 2016, Shanghai Jiao Tong University, Institute of Natural Sciences, China.

  • M. Hintermüller, K. Papafitsoros, C. Rautenberg, A fine scale analysis of spatially adapted total variation regularisation, Imaging, Vision and Learning based on Optimization and PDEs, Bergen, Norway, August 29 - September 1, 2016.

  • M. Hintermüller, Bilevel optimization and applications in imaging, Imaging, Vision and Learning based on Optimization and PDEs, August 29 - September 1, 2016, Bergen, Norway, August 30, 2016.

  • M. Hintermüller, Shape and topological sensitivities in mathematical image processing, BMS Summer School ``Mathematical and Numerical Methods in Image Processing'', July 25 - August 5, 2016, Berlin Mathematical School, Technische Universität Berlin, Humboldt-Universität zu Berlin, Berlin, August 4, 2016.

  • J. Polzehl, Assessing dynamics in learning experiments, Novel Statistical Methods in Neuroscience, June 22 - 24, 2016, Otto-von-Guericke-Universität Magdeburg, Institut für Mathematische Stochastik, June 22, 2016.

  • J. Polzehl, Modeling high resolution MRI: Statistical issues, Mathematical and Statistical Challenges in Neuroimaging Data Analysis, January 31 - February 5, 2016, Banff International Research Station (BIRS), Banff, Canada, February 1, 2016.

  • K. Tabelow, V. Avanesov, M. Deliano, R. König, A. Brechmann, J. Polzehl, Assessing dynamics in learning experiments, Challenges in Computational Neuroscience: Transition Workshop, Research Triangle Park, North Carolina, USA, May 4 - 6, 2016.

  • K. Tabelow, Ch. D'alonzo, J. Polzehl, M.F. Callaghan, L. Ruthotto, N. Weiskopf, S. Mohammadi, How to achieve very high resolution quantitative MRI at 3T?, 22nd Annual Meeting of the Organization for Human Brain Mapping (OHBM 2016), Geneva, Switzerland, June 26 - 30, 2016.

  • K. Tabelow, Adaptive smoothing in quantitative imaging, In-vivo histology/VBQ meeting, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, April 13, 2016.

  • N. Buzun, Multiscale parametric approach for change point detection, Information Technologies and Systems 2015, September 6 - 11, 2015, Russian Academy of Sciences, Institute for Information Transmission Problems, Sochi, Russian Federation, September 9, 2015.

  • J. Krämer, M. Deppe, K. Göbel, K. Tabelow, H. Wiendl, S. Meuth, Recovery of thalamic microstructural damage after Shiga toxin 2-associated hemolytic-uremic syndrome, 21st Annual Meeting of the Organization for Human Brain Mapping, Honolulu, USA, June 14 - 18, 2015.

  • H.U. Voss, J. Dyke, D. Ballon, N. Schiff, K. Tabelow, Magnetic resonance advection imaging (MRAI) depicts vascular anatomy, 21st Annual Meeting of the Organization for Human Brain Mapping, Honolulu, USA, June 14 - 18, 2015.

  • J. Polzehl, Analysing dMRI data: Consequences of low SNR, SAMSI Working group ``Structural Connectivity'', Statistical and Applied Mathematical Sciences Institute (SAMSI), Research Triangle Park, USA, December 8, 2015.

  • J. Polzehl, K. Tabelow, H.U. Voss, Towards higher spatial resolution in DTI using smoothing, 21st Annual Meeting of the Organization for Human Brain Mapping, Honolulu, USA, June 14 - 18, 2015.

  • J. Polzehl, K. Tabelow, Bias in low SNR diffusion MRI experiments: Problems and solution, 21st Annual Meeting of the Organization for Human Brain Mapping, Honolulu, USA, June 14 - 18, 2015.

  • J. Polzehl, Statistical problems in diffusion weighted MR, University of Minnesota, Biostatistics-Statistics Working Group in Imaging, Minneapolis, USA, January 30, 2015.

  • K. Tabelow, M. Deliano, M. Jörn, R. König, A. Brechmann, J. Polzehl, Towards a population analysis of behavioral and neural state transitions during associative learning, 21st Annual Meeting of the Organization for Human Brain Mapping, Honolulu, USA, June 14 - 18, 2015.

  • K. Tabelow, To smooth or not to smooth in fMRI, Seminar ``Bildgebende Verfahren in den Neurowissenschaften: Grundlagen und aktuelle Ergebnisse'', Universitätsklinikum Jena, IDIR, Medical Physics Group, April 17, 2015.

  • K. Tabelow, msPOAS -- An adaptive denoising procedure for dMRI data, Riemannian Geometry in Shape Analysis and Computational Anatomy, February 23 - 27, 2015, Universität Wien, Erwin Schrödinger International Institute for Mathematical Physics, Austria, February 25, 2015.

  • S. Mohammadi, L. Ruthotto, K. Tabelow, T. Feiweier, J. Polzehl, N. Weiskopf, ACID -- A post-processing toolbox for advanced diffusion MRI, 20th Annual Meeting of the Organization for Human Brain Mapping, Hamburg, June 8 - 12, 2014.

  • N. Angenstein, J. Polzehl, K. Tabelow, A. Brechmann, Categorical versus sequential processing of sound duration, 20th Annual Meeting of the Organization for Human Brain Mapping, Hamburg, June 8 - 12, 2014.

  • J. Polzehl, Estimation of sparse precision matrices, MMS-Workshop ``large p small n'', WIAS-Berlin, April 15, 2014.

  • J. Polzehl, Quantification of noise in MR experiments, Statistical Challenges in Neuroscience, September 3 - 5, 2014, University of Warwick, Centre for Research in Statistical Methodology, UK, September 4, 2014.

  • J. Polzehl, Quantification of noise in MR experiments, International Workshop ``Advances in Optimization and Statistics'', May 15 - 16, 2014, Russian Academy of Sciences, Institute of Information Transmission Problems (Kharkevich Institute), Moscow, May 16, 2014.

  • J. Polzehl, Statistical problems in diffusion weighted MR, CoSy Seminar, University of Uppsala, Department of Mathematics, Sweden, November 11, 2014.

  • K. Tabelow, S. Mohammadi, N. Weiskopf, J. Polzehl, Adaptive noise reduction in multi-shell dMRI data with SPM by POAS4SPM, 20th Annual Meeting of the Organization for Human Brain Mapping, Hamburg, June 8 - 12, 2014.

  • K. Tabelow, H.U. Voss, J. Polzehl, Local estimation of noise standard deviation in MRI images using propagation separation, 20th Annual Meeting of the Organization for Human Brain Mapping, Hamburg, June 8 - 12, 2014.

  • K. Tabelow, H.U. Voss, J. Polzehl, Local estimation of the noise level in MRI images using structural adaptation, 5th Ultra-Highfield MRI Scientific Symposium, Max Delbrück Center, Berlin, June 20, 2014.

  • K. Tabelow, High-resolution diffusion MRI by msPOAS, Statistical Challenges in Neuroscience, September 3 - 5, 2014, University of Warwick, Centre for Research in Statistical Methodology, UK, September 4, 2014.

  • K. Tabelow, S. Becker, S. Mohammadi, N. Weiskopf, J. Polzehl, Multi-shell position-orientation adaptive smoothing (msPOAS), 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, USA, June 16 - 20, 2013.

  • K. Tabelow, H.U. Voss, J. Polzehl, Analyzing fMRI and dMRI experiments with R, 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, USA, June 16 - 20, 2013.

  • K. Tabelow, Assessing the structure of the brain, WIAS-Day, WIAS Berlin, February 18, 2013.

  • K. Tabelow, Noise in diffusion MRI -- Impact and treatment, Strukturelle MR-Bildgebung in der neuropsychiatrischen Forschung, September 13 - 14, 2013, Philipps Universität Marburg, September 13, 2013.

  • M. Welvaert, K. Tabelow, R. Seurinck, Y. Rosseel, Defining ROIs based on localizer studies: More specific localization using adaptive smoothing, 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, USA, June 16 - 20, 2013.

  • S. Mohammadi, K. Tabelow, Th. Feiweier, J. Polzehl, N. Weiskopf, High-resolution diffusion kurtosis imaging (DKI) improves detection of gray-white matter boundaries, 19th Annual Meeting of the Organization for Human Brain Mapping, Seattle, USA, June 16 - 20, 2013.

  • J. Polzehl, Diffusion weighted magnetic resonance imaging -- Data, models and problems, Statistics Seminar, University of Minnesota, School of Statistics, USA, June 6, 2013.

  • J. Polzehl, Position-orientation adaptive smoothing (POAS) in diffusion weighted imaging, Neuroimaging Data Analysis, June 9 - 14, 2013, Statistical and Applied Mathematical Sciences Institute (SAMSI), Durham (NC), USA, June 9, 2013.

  • J. Polzehl, Position-orientation adaptive smoothing -- Noise reduction in dMRI, Strukturelle MR-Bildgebung in der Neuropsychiatrischen Forschung, September 13 - 14, 2013, Philipps-Universität Marburg, Klinik für Psychiatrie und Psychotherapie, Zentrum für Psychische Gesundheit, September 14, 2013.

  • J. Polzehl, dMRI modeling: An intermediate step to fiber tracking and connectivity, Neuroimaging Data Analysis, June 9 - 14, 2013, Statistical and Applied Mathematical Sciences Institute (SAMSI), Durham (NC), USA, June 9, 2013.

  • S. Becker, K. Tabelow, H.U. Voss, A. Anwander, R.M. Heidemann, J. Polzehl, Position-orientation adaptive smoothing (POAS) at 7T dMRI, Ultra-Highfield MRI Scientific Symposium, Max Delbrück Communication Center, Berlin, June 8, 2012.

  • S. Becker, Diffusion weighted imaging: Modeling and analysis beyond the diffusion tensor, Methodological Workshop: Structural Brain Connectivity: Diffusion Imaging---State of the Art and Beyond, October 30 - November 2, 2012, Humboldt-Universität zu Berlin, November 2, 2012.

  • S. Becker, Image processing via orientation scores, Workshop ``Computational Inverse Problems'', October 23 - 26, 2012, Mathematisches Forschungsinstitut Oberwolfach, October 25, 2012.

  • S. Becker, Revisiting: Propagation-separation approach for local likelihood estimation, PreMoLab: Moscow-Berlin-Stochastic and Predictive Modeling, May 29 - June 1, 2012, Russian Academy of Sciences, Institute for Information Transmission Problems (Kharkevich Institute), Moscow, May 31, 2012.

  • K. Tabelow, Adaptive methods for noise reduction in diffusion weighted MRI -- Position orientation adaptive smoothing (POAS), University College London, Wellcome Trust Centre for Neuroimaging, UK, November 1, 2012.

  • K. Tabelow, Functional magnetic resonance imaging: Estimation and signal detection, PreMoLab: Moscow-Berlin Stochastic and Predictive Modeling, May 31 - June 1, 2012, Russian Academy of Sciences, Institute for Information Transmission Problems (Kharkevich Institute), Moscow, May 31, 2012.

  • K. Tabelow, Position-orientation adaptive smoothing (POAS) diffusion weighted imaging data, Workshop on Neurogeometry, November 15 - 17, 2012, Masaryk University, Department of Mathematics and Statistics, Brno, Czech Republic, November 16, 2012.

  • J. Polzehl, Adaptive methods for noise reduction in diffusion weighted MR, BRIC Seminar Series, University of North Carolina, School of Medicine, Chapel Hill, NC, USA, July 10, 2012.

  • J. Polzehl, Medical image analysis in R (tutorial), The 8th International R User Conference (Use R!2012), June 11 - 15, 2012, Vanderbilt University, Department of Biostatics, Nashville, TN, USA, June 12, 2012.

  • J. Polzehl, Modeling dMRI data: An introduction from a statistical viewpoint, Workshop on Neurogeometry, November 15 - 17, 2012, Masaryk University, Department of Mathematics and Statistics, Brno, Czech Republic, November 16, 2012.

  • J. Polzehl, Statistical issues in diffusion weighted MR (dMRI), PreMoLab: Moscow-Berlin Stochastic and Predictive Modeling, May 31 - June 1, 2012, Russian Academy of Sciences, Institute for Information Transmission Problems (Kharkevich Institute), Moscow, May 31, 2012.

  • K. Tabelow, S. Keller, S. Mohammadi, H. Kugel, J.-S. Gerdes, J. Polzehl, M. Deppe, Structural adaptive smoothing increases sensitivity of DTI to detect microstructure alterations, 17th Annual Meeting of the Organization for Human Brain Mapping (HBM 2011), Quebec City, Canada, June 26 - 30, 2011.

  • K. Tabelow, H. Voss, J. Polzehl, Package dti: A framework for HARDI modeling in R, 17th Annual Meeting of the Organization for Human Brain Mapping (HBM 2011), Quebec City, Canada, June 26 - 30, 2011.

  • K. Tabelow, H. Voss, J. Polzehl, Structural adaptive smoothing methods for fMRI and its implementation in R, 17th Annual Meeting of the Organization for Human Brain Mapping (HBM 2011), Quebec City, Canada, June 26 - 30, 2011.

  • K. Tabelow, B. Whitcher, J. Polzehl, Performing tasks in medical imaging with R, 17th Annual Meeting of the Organization for Human Brain Mapping (HBM 2011), Quebec City, Canada, June 26 - 30, 2011.

  • K. Tabelow, Diffusion weighted imaging (DTI and beyond) using dti, The R User Conference 2011, August 15 - 18, 2011, University of Warwick, Department of Statistics, Coventry, UK, August 15, 2011.

  • K. Tabelow, Functional MRI using fmri, The R User Conference 2011, August 15 - 18, 2011, University of Warwick, Department of Statistics, Coventry, UK, August 15, 2011.

  • K. Tabelow, Modeling the orientation distribution function by mixtures of angular central Gaussian distributions, Cornell University, New York, Weill Medical College, USA, June 23, 2011.

  • K. Tabelow, Statistical parametric maps for functional MRI experiments in R: The package fmri, The R User Conference 2011, August 15 - 18, 2011, University of Warwick, Department of Statistics, Coventry, UK, August 18, 2011.

  • K. Tabelow, Structural adaptive smoothing fMRI and DTI data, SFB Research Center ``Mathematical Optimization and Applications in Biomedical Sciences'', Karl-Franzens-Universität Graz, Institut für Mathematik und Wissenschaftliches Rechnen, Austria, June 8, 2011.

  • K. Tabelow, Structural adaptive smoothing fMRI and DTI data, Maastricht University, Faculty of Psychology and Neuroscience, Netherlands, September 28, 2011.

  • J. Polzehl, Statistical issues in modeling diffusion weighted magnetic resonance data, 3rd International Conference on Statistics and Probability 2011 (IMS-China), July 8 - 11, 2011, Institute of Mathematical Statistics, Xian, China, July 10, 2011.

  • J. Polzehl, Modeling the orientation distribution function by mixtures of angular central Gaussian distributions, Workshop on Statistics and Neuroimaging 2011, November 23 - 25, 2011, WIAS, November 24, 2011.

  • K. Tabelow, J.D. Clayden, P. Lafaye de Micheaux, J. Polzehl, V.J. Schmid, B. Whitcher, Image analysis and statistical inference in NeuroImaging with R, Human Brain Mapping 2010, Barcelona, Spain, June 6 - 10, 2010.

  • K. Tabelow, J. Polzehl, S. Mohammadi, M. Deppe, Impact of smoothing on the interpretation of FA maps, Human Brain Mapping 2010, Barcelona, Spain, June 6 - 10, 2010.

  • K. Tabelow, Structural adaptive smoothing fMRI and DTI data, Workshop on Novel Reconstruction Strategies in NMR and MRI 2010, September 9 - 11, 2010, Georg-August-Universität Göttingen, Fakultät für Mathematik und Informatik, September 11, 2010.

  • J. Polzehl, K. Tabelow, Image and signal processing in the biomedical sciences: Diffusion-weighted imaging modeling and beyond, 1st Annual Scientific Symposium ``Ultrahigh Field Magnetic Resonance'', Max Delbrück Center, Berlin, April 16, 2010.

  • J. Polzehl, Medical image analysis for structural and functional MRI, The R User Conference 2010, July 20 - 23, 2010, National Institute of Standards and Technology (NIST), Gaithersburg, USA, July 20, 2010.

  • J. Polzehl, Statistical issues in accessing brain functionality and anatomy, The R User Conference 2010, July 20 - 23, 2010, National Institute of Standards and Technology (NIST), Gaithersburg, USA, July 22, 2010.

  • J. Polzehl, Statistical problems in functional and diffusion weighted magnetic resonance, Uppsala University, Dept. of Mathematics, Graduate School in Mathematics and Computing, Sweden, May 27, 2010.

  • J. Polzehl, Structural adaptive smoothing in neuroscience applications, Statistische Woche Nürnberg 2010, September 14 - 17, 2010, Friedrich-Alexander-Universität Erlangen-Nürnberg, Naturwissenschaftliche Fakultät, September 16, 2010.

  • V. Spokoiny, Local parametric estimation, October 18 - 22, 2010, École Nationale de la Statistique et de l'Analyse de l'Information (ENSAI), Rennes, France.

  • V. Spokoiny, Semidefinite non-Gaussian component analysis, Bivariate Penalty Choice in Model Selection, Deutsches Diabetes Zentrum Düsseldorf, June 17, 2010.

  • K. Tabelow, J. Polzehl, H.U. Voss, Structural adaptive smoothing methods for high-resolution fMRI, 15th Annual Meeting of the Organization for Human Brain Mapping (HBM 2009), San Francisco, USA, June 18 - 22, 2009.

  • K. Tabelow, A3 - Image and signal processing in the biomedical sciences: diffusion weighted imaging - modeling and beyond, Center Days 2009 (DFG Research Center Matheon), March 30 - April 1, 2009, Technische Universität Berlin, March 30, 2009.

  • K. Tabelow, Structural adaptive methods in fMRI and DTI, Biomedical Imaging Research Seminar Series, Weill Cornell Medical College, Department of Radiology & Citigroup Biomedical Imaging Center, New York, USA, June 25, 2009.

  • K. Tabelow, Structural adaptive methods in fMRI and DTI, Memorial Sloan-Kettering Cancer Center, New York, USA, June 25, 2009.

  • K. Tabelow, Structural adaptive smoothing in fMRI and DTI, Workshop on Recent Developments in fMRI Analysis Methods, Bernstein Center for Computational Neuroscience Berlin, January 23, 2009.

  • J. Polzehl, K. Tabelow, Structural adaptive smoothing diffusion tensor imaging data: The R-package dti, 15th Annual Meeting of the Organization for Human Brain Mapping (HBM 2009), San Francisco, USA, June 18 - 22, 2009.

  • N. Serdyukova, Local parametric estimation under noise misspecification in regression problem, Workshop on structure adapting methods, November 6 - 8, 2009, WIAS, November 7, 2009.

  • V. Spokoiny, Adaptive local parametric estimation, Université Joseph Fourier Grenoble I, Équipe de Statistique et Modélisation Stochastique, Laboratoire Jean Kuntzmann, France, February 26, 2009.

  • V. Spokoiny, Adaptive local parametric methods in imaging, Technische Universität Kaiserslautern, Fachbereich Mathematik, January 23, 2009.

  • V. Spokoiny, Modern nonparametric statistics (block lecture), October 2 - 13, 2009, École Nationale de la Statistique et de l'Analyse de l'Information (ENSAI), Rennes, France.

  • V. Spokoiny, Modern nonparametric statistics (block lecture), October 18 - 29, 2009, Yale University, New Haven, USA.

  • V. Spokoiny, Modern nonparametric statistics (block lecture), January 13 - 16, 2009, École Nationale de la Statistique et de l'Analyse de l'Information (ENSAI), Rennes, France.

  • V. Spokoiny, Parameter tuning in statistical inverse problem, European Meeting of Statisticians (EMS2009), July 20 - 22, 2009, Université Paul Sabatier, Toulouse, France, July 21, 2009.

  • V. Spokoiny, Saddle point model selection, Université Toulouse 1 Capitole, Toulouse School of Economics, France, November 24, 2009.

  • V. Spokoiny, Saddle point model selection, Workshop on structure adapting methods, November 6 - 8, 2009, WIAS, November 7, 2009.

  • V. Spokoiny, Sparse non-Gaussian component analysis, Workshop ``Sparse Recovery Problems in High Dimensions: Statistical Inference and Learning Theory'', March 15 - 21, 2009, Mathematisches Forschungsinstitut Oberwolfach, March 16, 2009.

  • K. Tabelow, A3 - Image and signal processing in medicine and biosciences, Center Days 2008 (DFG Research Center Matheon), April 7 - 9, 2008, Technische Universität Berlin, April 7, 2008.

  • K. Tabelow, Structure adaptive smoothing medical images, 22. Treffpunkt Medizintechnik: Fortschritte in der medizinischen Bildgebung, Charité, Campus Virchow Klinikum Berlin, May 22, 2008.

  • K. Tabelow, Strukturadaptive Bild- und Signalverarbeitung, Workshop of Matheon with Siemens AG (Health Care Sector) in cooperation with Center of Knowledge Interchange (CKI) of Technische Universität (TU) Berlin and Siemens AG, TU Berlin, July 8, 2008.

  • J. Polzehl, New developments in structural adaptive smoothing: Images, fMRI and DWI, University of Tromsø, Norway, May 27, 2008.

  • J. Polzehl, Smoothing fMRI and DWI data using the propagation-separation approach, University of Utah, Computing and Scientific Imaging Institute, Salt Lake City, USA, September 11, 2008.

  • J. Polzehl, Structural adaptive smoothing in diffusion tensor imaging, Workshop on ``Locally Adaptive Filters in Signal and Image Processing'', November 24 - 26, 2008, EURANDOM, Eindhoven, Netherlands, November 25, 2008.

  • J. Polzehl, Structural adaptive smoothing using the propagation-separation approach, University of Chicago, Department of Statistics, USA, September 3, 2008.

  • K. Tabelow, J. Polzehl, H.U. Voss, Increasing SNR in high resolution fMRI by spatially adaptive smoothing, Human Brain Mapping Conference 2007, Chicago, USA, June 10 - 14, 2007.

  • K. Tabelow, J. Polzehl, H.U. Voss, Reducing the number of necessary diffusion gradients by adaptive smoothing, Human Brain Mapping Conference 2007, Chicago, USA, June 10 - 14, 2007.

  • K. Tabelow, A3: Image and signal processing in medicine and biosciences, A-Day des Matheon, Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), December 5, 2007.

  • K. Tabelow, Improving data quality in fMRI and DTI by structural adaptive smoothing, Cornell University, Weill Medical College, New York, USA, June 18, 2007.

  • K. Tabelow, Structural adaptive signal detection in fMRI and structure enhancement in DTI, International Workshop on Image Analysis in the Life Sciences, Theory and Applications, February 28 - March 2, 2007, Johannes Kepler Universität Linz, Austria, March 2, 2007.

  • K. Tabelow, Structural adaptive smoothing in medical imaging, Seminar ``Visualisierung und Datenanalyse'', Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), January 30, 2007.

  • J. Polzehl, Propagation-separation procedures for image processing, International Workshop on Image Analysis in the Life Sciences, Theory and Applications, February 28 - March 2, 2007, Johannes Kepler Universität Linz, Austria, March 2, 2007.

  • J. Polzehl, Structural adaptive smoothing in imaging problems, Spring Seminar Series, University of Minnesota, School of Statistics, College of Liberal Arts, USA, May 24, 2007.

  • J. Polzehl, Structural adaptive smoothing procedures by propagation-separation methods, Final meeting of the DFG Priority Program 1114, November 7 - 9, 2007, Freiburg, November 7, 2007.

  • K. Tabelow, J. Polzehl, H.U. Voss, V. Spokoiny, Analyzing fMRI experiments with structural adaptive smoothing methods, Human Brain Mapping Conference, Florence, Italy, June 12 - 15, 2006.

  • K. Tabelow, J. Polzehl, V. Spokoiny, J.P. Dyke, L.A. Heier, H.U. Voss, Accurate localization of functional brain activity using structure adaptive smoothing, ISMRM 14th Scientific Meeting & Exhibition, Seattle, USA, May 10 - 14, 2006.

  • K. Tabelow, Analyzing fMRI experiments with structural adaptive smoothing methods, BCCN PhD Symposium 2006, June 7 - 8, 2006, Bernstein Center for Computational Neuroscience Berlin, Bad Liebenwalde, June 8, 2006.

  • K. Tabelow, Image and signal processing in medicine and biosciences, Evaluation Colloquium of the DFG Research Center Matheon, Berlin, January 24 - 25, 2006.

  • J. Polzehl, Structural adaptive smoothing by propagation-separation, 69th Annual Meeting of the IMS and 5th International Symposium on Probability and its Applications, July 30 - August 4, 2006, Rio de Janeiro, Brazil, July 30, 2006.

  • K. Tabelow, J. Polzehl, Structure adaptive smoothing procedures in medical imaging, 19. Treffpunkt Medizintechnik ``Imaging und optische Technologien für die Medizin'', Berlin, June 1, 2005.

  • K. Tabelow, Adaptive weights smoothing in the analysis of fMRI data, Ludwig-Maximilians-Universität München, SFB 386, December 8, 2005.

  • K. Tabelow, Detecting shape and borders of activation areas in fMRI data, Forschungsseminar ``Mathematische Statistik'', WIAS, Berlin, November 23, 2005.

  • K. Tabelow, Spatially adaptive smoothing in fMRI analysis, Neuroimaging Center, Charité, Berlin, November 10, 2005.

  • J. Polzehl, Adaptive smoothing by propagation-separation, Australian National University, Center of Mathematics and its Applications, Canberra, March 31, 2005.

  • J. Polzehl, Image reconstruction and edge enhancement by structural adaptive smoothing, 55th Session of the International Statistical Institute (ISI), April 5 - 12, 2005, Sydney, Australia, April 8, 2005.

  • J. Polzehl, Propagation-separation at work: Main ideas and applications, National University of Singapore, Department of Probability Theory and Statistics, March 24, 2005.

  • J. Polzehl, Spatially adaptive smoothing: A propagation-separation approach for imaging problems, Joint Statistical Meetings, August 7 - 11, 2005, Minneapolis, USA, August 11, 2005.

  • J. Polzehl, Structural adaptive smoothing by propagation-separation methods, Ludwig-Maximilians-Universität München, SFB 386, December 7, 2005.

  • J. Polzehl, Local likelihood modeling by structural adaptive smoothing, University of Minnesota, School of Statistics, Minneapolis, USA, September 9, 2004.

  • J. Polzehl, Smoothing by adaptive weights: An overview, Chalmers University of Technology, Department of Mathematical Statistics, Gothenburg, Sweden, May 11, 2004.

  • J. Polzehl, Structural adaptive smoothing methods, Georg-August-Universität Göttingen, Institut für Mathematische Stochastik, January 14, 2004.

  • J. Polzehl, Structural adaptive smoothing methods, Tandem-Workshop on Non-linear Optimization at the Crossover of Discrete Geometry and Numerical Analysis, July 15 - 16, 2004, Technische Universität Berlin, Institut für Mathematik, July 15, 2004.

  • J. Polzehl, Structural adaptive smoothing methods and possible applications in imaging, Charité Berlin, NeuroImaging Center, Berlin, July 1, 2004.

  • J. Polzehl, Structural adaptive smoothing methods for imaging problems, Annual Conference of Deutsche Mathematiker-Vereinigung (DMV), September 13 - 17, 2004, Heidelberg, September 14, 2004.

  • J. Polzehl, Structural adaptive smoothing methods for imaging problems, German-Israeli Binational Workshop, October 20 - 22, 2004, Ollendorff Minerva Center for Vision and Image Sciences, Technion, Haifa, Israel, October 21, 2004.

  • A. Hutt, J. Polzehl, Spatial adaptive signal detection in fMRI, Human Brain Mapping Conference, New York, USA, June 17 - 22, 2003.

  • J. Polzehl, Adaptive smoothing procedures for image processing, Workshop on Nonlinear Analysis of Multidimensional Signals, February 25 - 28, 2003, Teistungenburg, February 25, 2003.

  • J. Polzehl, Image processing using Adaptive Weights Smoothing, Uppsala University, Department of Mathematics, Sweden, May 7, 2003.

  • J. Polzehl, Local likelihood modeling by Adaptive Weights Smoothing, Joint Statistical Meetings, August 3 - 7, 2003, San Francisco, USA, August 6, 2003.

  • J. Polzehl, Local modeling by structural adaptation, The Art of Semiparametrics, October 19 - 21, 2003, Berlin, October 20, 2003.

  • J. Polzehl, Structural adaptive smoothing methods and applications in imaging, Magnetic Resonance Seminar, Physikalisch-Technische Bundesanstalt, March 13, 2003.

  • J. Polzehl, Structural adaptation I: Pointwise adaptive smoothing and imaging, University of Tromsø, Department of Mathematics, Norway, April 11, 2002.

  • J. Polzehl, Structural adaptation I: Varying coefficient regression modeling by adaptive weights smoothing, Workshop on Nonparametric Smoothing in Complex Statistical Models, April 27 - May 4, 2002, Ascona, Switzerland, April 30, 2002.

  • J. Polzehl, Structural adaptation methods in imaging, Joint Statistical Meetings 2002, August 11 - 15, 2002, New York, USA, August 12, 2002.

  • J. Polzehl, Structural adaptive smoothing and its applications in imaging and time series, Uppsala University, Department of Mathematics, Sweden, May 2, 2002.

  • J. Polzehl, Structural adaptive estimation, Bayer AG, Leverkusen, November 29, 2001.

  • J. Polzehl, Adaptive weights smoothing with applications in imaging, Universität Essen, Fachbereich Mathematik, Sfb 475, November 6, 2000.

  • J. Polzehl, Adaptive weights smoothing with applications to image denoising and signal detection, Université Catholique de Louvain-la-Neuve, Institut de Statistique, Belgium, September 29, 2000.

  • J. Polzehl, Functional and dynamic Magnetic Resonance Imaging using adaptive weights smoothing, Workshop ``Mathematical Methods in Brain Mapping'', Université de Montréal, Centre de Recherches Mathématiques, Canada, December 11, 2000.

  • J. Polzehl, Spatially adaptive procedures for signal detection in fMRI, Tagung ``Controlling Complexity for Strong Stochastic Dependencies'', September 10 - 16, 2000, Mathematisches Forschungsinstitut Oberwolfach, September 11, 2000.

  • J. Polzehl, Spatially adaptive smoothing techniques for signal detection in functional and dynamic Magnetic Resonance Imaging, Human Brain Mapping 2000, San Antonio, Texas, USA, June 12 - 16, 2000.

  • J. Polzehl, Spatially adaptive smoothing techniques for signal detection in functional and dynamic Magnetic Resonance Imaging, MEDICA 2000, Düsseldorf, November 22 - 25, 2000.

  External Preprints

  • J.M. Oeschger, K. Tabelow, S. Mohammadi, Investigating apparent differences between standard DKI and axisymmetric DKI and its consequences for biophysical parameter estimates, Preprint no. bioRxiv:2023.06.21.545891, Cold Spring Harbor Laboratory, bioRxiv, 2024, DOI 10.1101/2023.06.21.545891 .

  • P. Dvurechensky, Y. Nesterov, Improved global performance guarantees of second-order methods in convex minimization, Preprint no. arXiv:2408.11022, Cornell University, 2024.

  • P. Dvurechensky, M. Staudigl, Barrier algorithms for constrained non-convex optimization, Preprint no. arXiv:2404.18724, Cornell University, 2024, DOI 10.48550/arXiv.2404.18724 .

  • O. Yufereva, M. Persiianov, P. Dvurechensky, A. Gasnikov, D. Kovalev, Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters, Preprint no. arXiv:2205.15669, Cornell University, 2023, DOI 10.48550/arXiv.2205.15669 .

  • D. Gergely, B. Fricke, J.M. Oeschger, L. Ruthotto, P. Freund, K. Tabelow, S. Mohammadi, ACID: A comprehensive toolbox for image processing and modeling of brain, spinal cord, and ex vivo diffusion MRI data, Preprint no. bioRxiv:2023.10.13.562027, Cold Spring Harbor Laboratory, 2023, DOI 10.1101/2023.10.13.562027 .

  • E. Gladin, A. Gasnikov, P. Dvurechensky, Accuracy certificates for convex minimization with inexact Oracle, Preprint no. arXiv:2310.00523, Cornell University, 2023, DOI 10.48550/arXiv.2310.00523 .
    Abstract
    Accuracy certificates for convex minimization problems allow for online verification of the accuracy of approximate solutions and provide a theoretically valid online stopping criterion. When solving the Lagrange dual problem, accuracy certificates produce a simple way to recover an approximate primal solution and estimate its accuracy. In this paper, we generalize accuracy certificates for the setting of inexact first-order oracle, including the setting of primal and Lagrange dual pair of problems. We further propose an explicit way to construct accuracy certificates for a large class of cutting plane methods based on polytopes. As a by-product, we show that the considered cutting plane methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we illustrate the work of the proposed certificates in the numerical experiments, highlighting that our certificates provide a tight upper bound on the objective residual.

  • E. Gorbunov, A. Sadiev, D. Dolinova, S. Horvát, G. Gidel, P. Dvurechensky, A. Gasnikov, P. Richtárik, High-probability convergence for composite and distributed stochastic minimization and variational inequalities with heavy-tailed noise, Preprint no. arXiv:2310.01860, Cornell University, 2023, DOI 10.48550/arXiv.2310.01860 .
    Abstract
    High-probability analysis of stochastic first-order optimization methods under mild assumptions on the noise has been gaining a lot of attention in recent years. Typically, gradient clipping is one of the key algorithmic ingredients to derive good high-probability guarantees when the noise is heavy-tailed. However, if implemented naïvely, clipping can spoil the convergence of the popular methods for composite and distributed optimization (Prox-SGD/Parallel SGD) even in the absence of any noise. Due to this reason, many works on high-probability analysis consider only unconstrained non-distributed problems, and the existing results for composite/distributed problems do not include some important special cases (like strongly convex problems) and are not optimal. To address this issue, we propose new stochastic methods for composite and distributed optimization based on the clipping of stochastic gradient differences and prove tight high-probability convergence results (including nearly optimal ones) for the new methods. Using similar ideas, we also develop new methods for composite and distributed variational inequalities and analyze the high-probability convergence of these methods.
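    The gradient-clipping operator at the core of the methods summarized above can be sketched as follows. This is a minimal illustration of the standard clipping operation, not the authors' code; the function name and the step below are our own (note that the paper clips stochastic gradient *differences* rather than raw gradients):

    ```python
    import numpy as np

    def clip_gradient(g, lam):
        """Return g rescaled so that its Euclidean norm is at most lam.

        Implements the clipping operator g * min(1, lam / ||g||), commonly
        used to tame heavy-tailed noise in stochastic gradient methods.
        """
        norm = np.linalg.norm(g)
        if norm <= lam:
            return g
        return g * (lam / norm)

    # One clipped-SGD step (illustrative): move against the clipped gradient.
    def clipped_sgd_step(x, g, step, lam):
        return x - step * clip_gradient(g, lam)
    ```

    In the composite/distributed setting of the paper, the same operator would be applied to the difference between a fresh stochastic gradient and a reference gradient, which is what preserves convergence in the noiseless case.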

  • A. Kofler, F. Altekrüger, F.A. Ba, Ch. Kolbitsch, E. Papoutsellis, D. Schote, C. Sirotenko, F.F. Zimmermann, K. Papafitsoros, Learning regularization parameter-maps for variational image reconstruction using deep neural networks and algorithm unrolling, Preprint no. arXiv:2301.05888, Cornell University, 2023, DOI 10.48550/arXiv.2301.05888 .

  • A. Kofler, F. Altekrüger, F.A. Ba, Ch. Kolbitsch, E. Papoutsellis, D. Schote, C. Sirotenko, F.F. Zimmermann, K. Papafitsoros, Unrolled three-operator splitting for parameter-map learning in low dose X-ray CT reconstruction, Preprint no. arXiv:2304.08350, Cornell University, 2023, DOI 10.48550/arXiv.2304.08350 .

  • N. Kornilov, E. Gorbunov, M. Alkousa, F. Stonyakin, P. Dvurechensky, A. Gasnikov, Intermediate gradient methods with relative inexactness, Preprint no. arXiv:2310.00506, Cornell University, 2023, DOI 10.48550/arXiv.2310.00506 .
    Abstract
    This paper is devoted to first-order algorithms for smooth convex optimization with inexact gradients. Unlike the majority of the literature on this topic, we consider the setting of relative rather than absolute inexactness. More precisely, we assume that an additive error in the gradient is proportional to the gradient norm, rather than being globally bounded by some small quantity. We propose a novel analysis of the accelerated gradient method under relative inexactness and strong convexity and improve the bound on the maximum admissible error that preserves the linear convergence of the algorithm. In other words, we analyze how robust the accelerated gradient method is to the relative inexactness of the gradient information. Moreover, based on the Performance Estimation Problem (PEP) technique, we show that the obtained result is optimal for the family of accelerated algorithms we consider. Motivated by the existing intermediate methods with absolute error, i.e., the methods with convergence rates that interpolate between slower but more robust non-accelerated algorithms and faster, but less robust accelerated algorithms, we propose an adaptive variant of the intermediate gradient method with relative error in the gradient.
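    The relative-inexactness model described above requires the oracle to return some g with ||g - ∇f(x)|| ≤ α ||∇f(x)||. A minimal way to simulate such an oracle, useful for experimenting with the robustness of accelerated methods (an illustrative sketch under our own naming, not code from the paper):

    ```python
    import numpy as np

    def relative_inexact_gradient(grad, x, alpha, rng):
        """Simulate a gradient oracle with relative error level alpha.

        Returns g such that ||g - grad(x)|| <= alpha * ||grad(x)||,
        by adding a random perturbation scaled to that bound.
        """
        g = grad(x)
        e = rng.standard_normal(g.size)
        e_norm = np.linalg.norm(e)
        if e_norm > 0.0:
            e *= alpha * np.linalg.norm(g) / e_norm
        return g + e
    ```

    Plugging such an oracle into a standard accelerated gradient loop lets one observe empirically how large alpha can be before linear convergence degrades, which is the regime the paper quantifies.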

  • M. Alkousa, A. Gasnikov, P. Dvurechensky, A. Sadiev, L. Razouk, An approach for non-convex uniformly concave structured saddle point problem, Preprint no. arXiv:2202.06376, Cornell University, 2022, DOI 10.48550/arXiv.2202.06376 .
    Abstract
    Recently, saddle point problems have received much attention due to their powerful modeling capability for a lot of problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, distributed optimization, game theory, and many applications in machine learning such as empirical risk minimization and generative adversarial networks training. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the non-convex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems separately for each group of variables, an outer minimization problem w.r.t. the primal variables, and an inner maximization problem w.r.t. the dual variables. For solving the outer minimization problem, we use the Adaptive Gradient Method, which is applicable for non-convex problems and also works with an inexact oracle that is generated by approximately solving the inner problem. For solving the inner maximization problem, we use the Restarted Unified Acceleration Framework, which is a framework that unifies the high-order acceleration methods for minimizing a convex function that has Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracles for the outer minimization problem and higher-order oracles for the inner maximization problem. Moreover, the complexity of the whole proposed approach is then estimated.

  • A. Gasnikov, A. Novitskii, V. Novitskii, F. Abdukhakimov, D. Kamzolov, A. Beznosikov, M. Takáč, P. Dvurechensky, B. Gu, The power of first-order smooth optimization for black-box non-smooth problems, Preprint no. arXiv:2201.12289, Cornell University, 2022, DOI 10.48550/arXiv.2201.12289 .
    Abstract
    Gradient-free/zeroth-order methods for black-box convex optimization have been extensively studied in the last decade, with the main focus on oracle call complexity. In this paper, besides the oracle complexity, we also focus on iteration complexity and propose a generic approach that, based on optimal first-order methods, allows one to obtain, in a black-box fashion, new zeroth-order algorithms for non-smooth convex optimization problems. Our approach not only leads to optimal oracle complexity, but also yields iteration complexity similar to that of first-order methods, which, in turn, allows one to exploit parallel computations to accelerate the convergence of our algorithms. We also elaborate on extensions for stochastic optimization problems, saddle-point problems, and distributed optimization.
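    The core building block behind such gradient-free methods is a randomized finite-difference gradient estimator. A minimal sketch (the standard two-point estimator with Gaussian smoothing; the test function, step size, and smoothing radius are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def two_point_grad(f, x, tau, rng):
    """Randomized two-point gradient estimator for a black-box f:
    two function values per call; the estimate is unbiased for the
    Gaussian smoothing of f (bias O(tau) for f itself)."""
    e = rng.standard_normal(x.shape)
    return (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

# Toy run: minimize f(x) = ||x||^2 using function values only.
f = lambda z: float(z @ z)
rng = np.random.default_rng(0)
x = np.ones(5)
for _ in range(2000):
    x -= 0.05 * two_point_grad(f, x, tau=1e-4, rng=rng)
```

    Plugging such an estimator into an optimal first-order scheme instead of plain gradient descent is the black-box reduction the paper refers to.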

  • S. Mohammadi, T. Streubel, L. Klock, A. Lutti, K. Pine, S. Weber, L. Edwards, P. Scheibe, G. Ziegler, J. Gallinat, S. Kuhn, M. Callaghan, N. Weiskopf, K. Tabelow, Error quantification in multi-parameter mapping facilitates robust estimation and enhanced group level sensitivity, Preprint no. bioRxiv: 2022.01.11.475846, Cold Spring Harbor Laboratory, 2022, DOI 10.1101/2022.01.11.475846 .
    Abstract
    Multi-Parameter Mapping (MPM) is a comprehensive quantitative neuroimaging protocol that enables estimation of four physical parameters (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD, and magnetization transfer saturation MTsat) that are sensitive to microstructural tissue properties such as iron and myelin content. Their capability to reveal microstructural brain differences, however, is tightly bound to controlling random noise and artefacts (e.g. caused by head motion) in the signal. Here, we introduced a method to estimate the local error of PD, R1, and MTsat maps that captures both noise and artefacts on a routine basis without requiring additional data. To investigate the method's sensitivity to random noise, we calculated the model-based signal-to-noise ratio (mSNR) and showed in measurements and simulations that it correlated linearly with an experimental raw-image-based SNR map. We found that the mSNR varied with MPM protocols, magnetic field strength (3T vs. 7T) and MPM parameters: it halved from PD to R1 and decreased from PD to MTsat by a factor of 3-4. Exploring the artefact-sensitivity of the error maps, we generated robust MPM parameters using two successive acquisitions of each contrast and the acquisition-specific errors to down-weight erroneous regions. The resulting robust MPM parameters showed reduced variability at the group level as compared to their single-repeat or averaged counterparts. The error and mSNR maps may better inform power calculations by accounting for local data quality variations across measurements. Code to compute the mSNR maps and robustly combined MPM maps is available in the open-source hMRI toolbox.

  • J.M. Oeschger, K. Tabelow, S. Mohammadi, Axisymmetric diffusion kurtosis imaging with Rician bias correction: A simulation study, Preprint no. bioRxiv: 2022.03.15.484442, Cold Spring Harbor Laboratory, 2022, DOI 10.1101/2022.03.15.484442 .

  • A. Agafonov, P. Dvurechensky, G. Scutari, A. Gasnikov, D. Kamzolov, A. Lukashevich, A. Daneshmand, An accelerated second-order method for distributed stochastic optimization, Preprint no. arXiv:2103.14392, Cornell University Library, arXiv.org, 2021.
    Abstract
    We consider distributed stochastic optimization problems that are solved with a master/workers computation architecture. Statistical arguments allow us to exploit statistical similarity and approximate this problem by a finite-sum problem, for which we propose an inexact accelerated cubic-regularized Newton's method that achieves the lower communication complexity bound for this setting and improves upon the existing upper bound. We further exploit this algorithm to obtain convergence rate bounds for the original stochastic optimization problem and compare our bounds with the existing bounds in several regimes when the goal is to minimize the number of communication rounds and increase the parallelization by increasing the number of workers.

  • A. Beznosikov, P. Dvurechensky, A. Koloskova, V. Samokhin, S.U. Stich, A. Gasnikov, Decentralized local stochastic extra-gradient for variational inequalities, Preprint no. arXiv:2106.08315, Cornell University Library, arXiv.org, 2021.
    Abstract
    We consider decentralized stochastic variational inequalities where the problem data is distributed across many participating devices (heterogeneous, or non-IID, data setting). We propose a novel method, based on stochastic extra-gradient, in which participating devices can communicate over arbitrary, possibly time-varying network topologies. This covers both the fully decentralized optimization setting and the centralized topologies commonly used in Federated Learning. Our method further supports multiple local updates on the workers for reducing the communication frequency between workers. We theoretically analyze the proposed scheme in the strongly monotone, monotone, and non-monotone settings. As a special case, our method and analysis apply in particular to decentralized stochastic min-max problems which are being studied with increased interest in Deep Learning. For example, the training objectives of Generative Adversarial Networks (GANs) are typically saddle point problems, and the decentralized training of GANs has been reported to be extremely challenging. While SOTA techniques rely on either repeated gossip rounds or proximal updates, we alleviate both of these requirements. Experimental results for decentralized GANs demonstrate the effectiveness of our proposed algorithm.
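    The extra-gradient step that the method builds on can be illustrated on a single machine (no decentralization, stochasticity, or local updates; the bilinear toy problem and step size are illustrative assumptions):

```python
import numpy as np

# Extra-gradient for the bilinear toy saddle point min_x max_y x @ y.
# Plain gradient descent-ascent spirals away from the solution (0, 0) here,
# while the extrapolation ("look-ahead") step makes the iterates contract.
x, y = np.ones(3), np.ones(3)
eta = 0.1
for _ in range(2000):
    x_half = x - eta * y                          # look-ahead step
    y_half = y + eta * x
    x, y = x - eta * y_half, y + eta * x_half     # update with operator at look-ahead
```

    The paper's method replaces the exact operator evaluations by stochastic ones and interleaves the updates with gossip-style communication over the network.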

  • A. Daneshmand, G. Scutari, P. Dvurechensky, A. Gasnikov, Newton method over networks is fast up to the statistical precision, Preprint no. arXiv:2102.06780, Cornell University Library, arXiv.org, 2021.

  • E. Gladin, A. Sadiev, A. Gasnikov, P. Dvurechensky, A. Beznosikov, M. Alkousa, Solving smooth min-min and min-max problems by mixed oracle algorithms, Preprint no. arXiv:2103.00434, Cornell University Library, arXiv.org, 2021.
    Abstract
    In this paper, we consider two types of problems that have some similarity in their structure, namely, min-min problems and min-max saddle-point problems. Our approach is based on considering the outer minimization problem as a minimization problem with an inexact oracle. This inexact oracle is calculated via an inexact solution of the inner problem, which is either a minimization or a maximization problem. Our main assumptions are that the problem is smooth and the available oracle is mixed: it is only possible to evaluate the gradient w.r.t. the outer block of variables which corresponds to the outer minimization problem, whereas for the inner problem only a zeroth-order oracle is available. To solve the inner problem, we use an accelerated gradient-free method with a zeroth-order oracle. To solve the outer problem, we use either an inexact variant of Vaidya's cutting-plane method or a variant of the accelerated gradient method. As a result, we propose a framework that leads to non-asymptotic complexity bounds for both min-min and min-max problems. Moreover, we estimate separately the number of first- and zeroth-order oracle calls which are sufficient to reach any desired accuracy.

  • E. Gorbunov, M. Danilova, I. Shibaev, P. Dvurechensky, A. Gasnikov, Near-optimal high probability complexity bounds for non-smooth stochastic optimization with heavy-tailed noise, Preprint no. arXiv:2106.05958, Cornell University Library, arXiv.org, 2021.
    Abstract
    Thanks to their practical efficiency and the random nature of the data, stochastic first-order methods are standard for training large-scale machine learning models. Random behavior may cause a particular run of an algorithm to result in a highly suboptimal objective value, whereas theoretical guarantees are usually proved for the expectation of the objective value. Thus, it is essential to theoretically guarantee that algorithms provide a small objective residual with high probability. Existing methods for non-smooth stochastic convex optimization have complexity bounds with a dependence on the confidence level that is either negative-power or logarithmic, but the latter only under an additional assumption of sub-Gaussian (light-tailed) noise distribution that may not hold in practice, e.g., in several NLP tasks. In our paper, we resolve this issue and derive the first high-probability convergence results with logarithmic dependence on the confidence level for non-smooth convex stochastic optimization problems with non-sub-Gaussian (heavy-tailed) noise. To derive our results, we propose novel stepsize rules for two stochastic methods with gradient clipping. Moreover, our analysis works for generalized smooth objectives with Hölder-continuous gradients, and for both methods we provide an extension for strongly convex problems. Finally, our results imply that the first (accelerated) method we consider also has optimal iteration and oracle complexity in all the regimes, and the second one is optimal in the non-smooth setting.
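    Gradient clipping, the key ingredient of the proposed methods, can be sketched as follows (plain clipped SGD with a constant step size on a toy problem with Student-t gradient noise; this is illustrative, not the paper's stepsize rule):

```python
import numpy as np

def clip(g, lam):
    """Rescale g to have Euclidean norm at most lam."""
    n = np.linalg.norm(g)
    return g if n <= lam else (lam / n) * g

# Clipped SGD on f(x) = 0.5*||x||^2 with heavy-tailed (Student-t) gradient
# noise: clipping prevents rare huge noise realizations from throwing the
# iterate far away, which is the failure mode of plain SGD in this regime.
rng = np.random.default_rng(0)
x = 10.0 * np.ones(4)
for _ in range(5000):
    g = x + rng.standard_t(df=2.5, size=4)   # unbiased but heavy-tailed
    x -= 0.01 * clip(g, lam=5.0)
```

    Because the clipped update has a bounded norm, high-probability bounds can avoid the sub-Gaussian assumption on the raw noise.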

  • A. Rogozin, A. Beznosikov, D. Dvinskikh, D. Kovalev, P. Dvurechensky, A. Gasnikov, Decentralized distributed optimization for saddle point problems, Preprint no. arXiv:2102.07758, Cornell University Library, arXiv.org, 2021.

  • A. Rogozin, M. Bochko, P. Dvurechensky, A. Gasnikov, V. Lukoshkin, An accelerated method for decentralized distributed stochastic optimization over time-varying graphs, Preprint no. arXiv:2103.15598, Cornell University Library, arXiv.org, 2021.
    Abstract
    We consider a distributed stochastic optimization problem that is solved by a decentralized network of agents with only local communication between neighboring agents. The goal of the whole system is to minimize a global objective function given as a sum of local objectives held by each agent. Each local objective is defined as an expectation of a convex smooth random function, and the agent is allowed to sample stochastic gradients for this function. For this setting we propose the first accelerated (in the sense of Nesterov's acceleration) method that simultaneously attains communication and oracle complexity bounds that are optimal up to a logarithmic factor for smooth strongly convex distributed stochastic optimization. We also consider the case when the communication graph is allowed to vary with time and obtain complexity bounds for our algorithm, which are the first upper complexity bounds for this setting in the literature.

  • V. Tominin, Y. Tominin, E. Borodich, D. Kovalev, A. Gasnikov, P. Dvurechensky, On accelerated methods for saddle-point problems with composite structure, Preprint no. arXiv:2103.09344, Cornell University Library, arXiv.org, 2021.
    Abstract
    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and the dual variables. First, we consider such problems with smooth composite terms, one of which has finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of recently proposed nearly-optimal algorithms that do not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides, in the composite setting, complexity bounds similar to those of the nearly-optimal algorithm designed for the non-composite setting. Besides that, our algorithms allow us to separate the complexity bounds, i.e. estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.

  • A. Vasin, A. Gasnikov, P. Dvurechensky, V. Spokoiny, Accelerated gradient methods with absolute and relative noise in the gradient, Preprint no. arXiv:2102.02921, Cornell University, 2021, DOI 10.48550/arXiv.2102.02921 .

  • P. Dvurechensky, D. Kamzolov, A. Lukashevich, S. Lee, E. Ordentlich, C.A. Uribe, A. Gasnikov, Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization, Preprint no. arXiv:2102.08246, Cornell University Library, arXiv.org, 2021.

  • P. Dvurechensky, M. Staudigl, S. Shtern, First-order methods for convex optimization, Preprint no. arXiv:2101.00935, Cornell University Library, arXiv.org, 2021.
    Abstract
    First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low accuracy solutions at low computational complexity which makes them an attractive set of tools in large-scale optimization problems. In this survey we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
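    As one concrete example of the surveyed class, the proximal gradient method alternates a gradient step on the smooth part with the proximal operator of the non-smooth part. A minimal sketch for the lasso (the problem instance, step size 1/L, and iteration count are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal gradient (ISTA) for the lasso min_x 0.5*||Ax-b||^2 + lam*||x||_1:
# a gradient step on the smooth least-squares part, followed by the prox
# of the non-smooth l1 part, with constant step size 1/L.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true                       # noiseless observations
lam = 0.1
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
x = np.zeros(5)
for _ in range(500):
    x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
```

    The accelerated and non-Euclidean variants covered in the survey refine exactly this template.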

  • P. Dvurechensky, M. Staudigl, Hessian barrier algorithms for non-convex conic optimization, Preprint no. arXiv:2111.00100, Cornell University Library, arXiv.org, 2021.
    Abstract
    We consider the minimization of a continuous function over the intersection of a regular cone with an affine set via a new class of adaptive first- and second-order optimization methods, building on the Hessian-barrier techniques introduced in [Bomze, Mertikopoulos, Schachinger, and Staudigl, Hessian barrier algorithms for linearly constrained optimization problems, SIAM Journal on Optimization, 2019]. Our approach is based on a potential-reduction mechanism and attains a suitably defined class of approximate first- or second-order KKT points with the optimal worst-case iteration complexity O(ε^-2) (first-order) and O(ε^-3/2) (second-order), respectively. A key feature of our methodology is the use of self-concordant barrier functions to construct strictly feasible iterates via a disciplined decomposition approach and without sacrificing the iteration complexity of the method. To the best of our knowledge, this work is the first which achieves these worst-case complexity bounds under such weak conditions for general conic constrained optimization problems.

  • M. Danilova, P. Dvurechensky, A. Gasnikov, E. Gorbunov, S. Guminov, D. Kamzolov, I. Shibaev, Recent theoretical advances in non-convex optimization, Preprint no. arXiv:2012.06188, Cornell University, 2020.

  • A. Sadiev, A. Beznosikov, P. Dvurechensky, A. Gasnikov, Zeroth-order algorithms for smooth saddle-point problems, Preprint no. arXiv:2009.09908, Cornell University, 2020.
    Abstract
    In recent years, the importance of saddle-point problems in machine learning has increased. This is due to the popularity of GANs. In this paper, we solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles. Theoretical analysis shows that in the case when the optimization set is a simplex, we lose only a factor of log n in the stochastic convergence term. The paper also provides an approach to solving saddle-point problems when the oracle for one of the variables is zeroth-order and for the second is first-order. Subsequently, we implement zeroth-order and 1/2th-order methods to solve practical problems.

  • D. Tiapkin, A. Gasnikov, P. Dvurechensky, Stochastic saddle-point optimization for Wasserstein barycenters, Preprint no. arXiv:2006.06763, Cornell University, 2020.
    Abstract
    We study the computation of non-regularized Wasserstein barycenters of probability measures supported on a finite set. The first result gives a stochastic optimization algorithm for the discrete distribution over the probability measures, which is comparable with the current best algorithms. The second result extends the previous one to an arbitrary distribution using kernel methods. Moreover, this new algorithm has a total complexity better than the Stochastic Averaging approach via the Sinkhorn algorithm in many cases.

  • N. Tupitsa, P. Dvurechensky, A. Gasnikov, C.A. Uribe, Multimarginal optimal transport by accelerated alternating minimization, Preprint no. arXiv:2004.02294, Cornell University Library, arXiv.org, 2020.
    Abstract
    We consider the multimarginal optimal transport problem, which includes as a particular case the Wasserstein barycenter problem. In this problem one has to find an optimal coupling between m probability measures, which amounts to finding a tensor of order m. We propose an accelerated method based on accelerated alternating minimization and estimate its complexity to find an approximate solution to the problem. We use entropic regularization with a sufficiently small regularization parameter and apply accelerated alternating minimization to the dual problem. A novel primal-dual analysis is used to reconstruct the approximately optimal coupling tensor. Our algorithm exhibits a better computational complexity than the state-of-the-art methods for some regimes of the problem parameters.
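    The entropic-regularization-plus-alternating-minimization idea can be illustrated in the simplest two-marginal case, where alternating maximization of the dual is the classical Sinkhorn algorithm (the multimarginal version alternates over m scaling vectors; the cost matrix and regularization parameter below are illustrative assumptions):

```python
import numpy as np

def sinkhorn(C, a, b, gamma=0.5, iters=2000):
    """Sinkhorn iterations for entropically regularized OT between
    discrete marginals a and b with cost matrix C."""
    K = np.exp(-C / gamma)               # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)                # match the column marginal
        u = a / (K @ v)                  # match the row marginal
    return u[:, None] * K * v[None, :]   # approximate transport plan

# Toy check on a 3-point space with cost |i - j|.
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
P = sinkhorn(C, a, b)
```

    The accelerated method of the paper replaces these plain alternating updates by an accelerated alternating minimization scheme on the dual problem.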

  • P. Dvurechensky, K. Safin, S. Shtern, M. Staudigl, Generalized self-concordant analysis of Frank--Wolfe algorithms, Preprint no. arXiv:2010.01009, Cornell University, 2020.
    Abstract
    Projection-free optimization via different variants of the Frank-Wolfe (FW) method has become one of the cornerstones of large-scale optimization for machine learning and computational statistics. Numerous applications within these fields involve the minimization of functions with self-concordance-like properties. Such generalized self-concordant (GSC) functions do not necessarily feature a Lipschitz continuous gradient, nor are they strongly convex. Indeed, in a number of applications, e.g. inverse covariance estimation or distance-weighted discrimination problems in support vector machines, the loss is given by a GSC function having unbounded curvature, implying the absence of theoretical guarantees for the existing FW methods. This paper closes this apparent gap in the literature by developing provably convergent FW algorithms with standard O(1/k) convergence rate guarantees. If the problem formulation allows the efficient construction of a local linear minimization oracle, we develop a FW method with a linear convergence rate.

  • P. Dvurechensky, S. Shtern, M. Staudigl, P. Ostroukhov, K. Safin, Self-concordant analysis of Frank--Wolfe algorithms, Preprint no. arXiv:2002.04320, Cornell University, 2020.
    Abstract
    Projection-free optimization via different variants of the Frank-Wolfe (FW) method has become one of the cornerstones of optimization for machine learning, since in many cases the linear minimization oracle is much cheaper to implement than projections and some sparsity needs to be preserved. In a number of applications, e.g. Poisson inverse problems or quantum state tomography, the loss is given by a self-concordant (SC) function having unbounded curvature, implying the absence of theoretical guarantees for the existing FW methods. We use the theory of SC functions to provide a new adaptive step size for FW methods and prove a global convergence rate of O(1/k), k being the iteration counter. If the problem can be represented by a local linear minimization oracle, we are the first to propose a FW method with a linear convergence rate without assuming either strong convexity or a Lipschitz continuous gradient.
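    For reference, the basic Frank-Wolfe template that these papers refine can be sketched as follows; this sketch uses the classic 2/(k+2) step size rather than the paper's adaptive self-concordance-based step, and the quadratic objective and simplex domain are illustrative assumptions:

```python
import numpy as np

# Frank-Wolfe on the probability simplex for f(x) = 0.5*||x - c||^2.
# The linear minimization oracle (LMO) over the simplex returns a vertex:
# the coordinate with the smallest partial derivative. No projection needed,
# and iterates stay feasible as convex combinations of vertices.
c = np.array([0.1, 0.2, 0.7])        # target; the optimum is c itself
x = np.array([1.0, 0.0, 0.0])        # start at a vertex of the simplex
for k in range(2000):
    grad = x - c
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0         # LMO step: best simplex vertex
    x += (2.0 / (k + 2)) * (s - x)   # classic O(1/k) step-size schedule
```

    The papers' contribution is replacing the fixed 2/(k+2) schedule by a step size derived from self-concordance, which restores guarantees when the curvature is unbounded.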

  • F. Stonyakin, A. Gasnikov, A. Tyurin, D. Pasechnyuk, A. Agafonov, P. Dvurechensky, D. Dvinskikh, A. Kroshnin, V. Piskunova, Inexact model: A framework for optimization and variational inequalities, Preprint no. arXiv:1902.00990, Cornell University Library, arXiv.org, 2019.

  • F. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C.A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, Preprint no. arXiv:1902.09001, Cornell University Library, arXiv.org, 2019.

  • D. Dvinskikh, E. Gorbunov, A. Gasnikov, P. Dvurechensky, C.A. Uribe, On dual approach for distributed stochastic convex optimization over networks, Preprint no. arXiv:1903.09844, Cornell University Library, arXiv.org, 2019.
    Abstract
    We introduce dual stochastic gradient oracle methods for distributed stochastic convex optimization problems over networks. We estimate the complexity of the proposed method in terms of the probability of large deviations. This analysis is based on a new technique that allows us to bound the distance between the iteration sequence and the solution point. By a proper choice of batch size, we can guarantee that this distance equals (up to a constant) the distance between the starting point and the solution.