
 
The detailed time schedule will be announced soon. Below are the titles and abstracts of the talks:


 


Pierre-Antoine Absil, Curve fitting on Riemannian manifolds
In this talk I will discuss curve fitting problems on manifolds. Manifolds of interest include the rotation group SO(3) (generation of rigid body motions from sample points), the set of 3x3 symmetric positive-definite matrices (interpolation of diffusion tensors) and the shape manifold (morphing). Ideally, we would like to find the curve that minimizes an energy function E defined as a weighted sum of (i) a sum-of-squares term penalizing the lack of fitting to the data points and (ii) a regularity term defined as the mean squared acceleration of the curve. The Euler-Lagrange necessary conditions for this problem are known to take the form of a fourth-order ordinary differential equation involving the curvature tensor of the manifold, which is in general hard to solve. Instead, we simplify the problem by restricting the set of admissible curves and by resorting to suboptimal Riemannian generalizations of Euclidean techniques.
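In schematic form (notation mine, not necessarily the speaker's), such a fitting energy for a curve $\gamma$ through data points $p_i$ at parameters $t_i$ reads
\[E(\gamma)=\sum_{i=0}^{N} w_i\, d^2\bigl(\gamma(t_i), p_i\bigr) + \lambda \int_{0}^{1} \Bigl\|\frac{D^{2}\gamma}{dt^{2}}(t)\Bigr\|^{2}_{\gamma(t)}\,\mathrm{d}t,\]
where $d$ is the Riemannian distance, $\frac{D^{2}\gamma}{dt^{2}}$ is the covariant (intrinsic) acceleration of $\gamma$, and the weights $w_i$ and $\lambda$ balance data fidelity against regularity.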


Björn Andres, Title: TBA
Abstract: TBA


Joan Bruna, Statistics, Computation and Learning with Graph Neural Networks
Deep Learning, thanks mostly to Convolutional architectures, has recently transformed computer vision and speech recognition. The ability of these architectures to encode geometric stability priors, while offering enough expressive power, is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced thanks to localized convolutional operators that separate the estimation into different scales.
Many problems across the applied sciences, from particle physics to recommender systems, are formulated in terms of signals defined over non-Euclidean geometries, and also come with strong geometric stability priors. In this talk, I will present techniques that exploit geometric stability in general geometries with appropriate graph neural network architectures. We will show that these techniques can all be framed in terms of local graph generators such as the graph Laplacian. We will present some stability certificates, as well as applications to computer graphics, particle physics and graph estimation problems. In particular, we will describe how graph neural networks can be used to reach statistical detection thresholds in community detection, and to attack hard combinatorial optimization problems, such as the Quadratic Assignment Problem.
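As a concrete illustration of a local graph generator at work, here is a minimal sketch (my own toy construction, not the speaker's architecture) of a single graph neural network layer whose filters are polynomials in the normalized graph Laplacian:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]

def gnn_layer(X, L, weights):
    """One layer: a polynomial filter in L applied to node features X, then ReLU.
    weights[k] has shape (d_in, d_out), one matrix per power of L."""
    Z = sum(np.linalg.matrix_power(L, k) @ X @ Wk for k, Wk in enumerate(weights))
    return np.maximum(Z, 0.0)

# toy example: 4-node cycle graph, 3 input features, 2 output features
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = normalized_laplacian(A)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = [0.1 * rng.standard_normal((3, 2)) for _ in range(3)]  # filter order 2
H = gnn_layer(X, L, W)
print(H.shape)  # (4, 2)
```

Because the layer only applies powers of $L$, each output feature depends on a small graph neighborhood of its node, which is one way to encode the locality prior discussed above.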


Julianne Chung, Efficient generalized Golub-Kahan based methods for dynamic imaging
In many applications, the problem is assumed to be static, in the sense that the underlying parameters do not change during the measurement process. However, in many realistic scenarios such as in passive seismic tomography or dynamic photoacoustic tomography, the underlying parameters of interest may change during the measurement procedure. Incorporating prior information regarding temporal smoothness in reconstruction algorithms can lead to better reconstructions. However, this can become computationally intensive, in part due to the large number of unknown parameters. In a Bayesian framework, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible. In this talk, we describe efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples demonstrate the range of applicability and effectiveness of the described approaches.
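For readers unfamiliar with the building block, here is a minimal sketch of the standard (non-generalized) Golub-Kahan bidiagonalization; the methods in the talk build on a generalized variant that works with covariance-weighted inner products:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k,
    with U, V having orthonormal columns and B lower bidiagonal."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v)          # alpha_j
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)      # beta_{j+1}
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4)); b = rng.standard_normal(6)
U, B, V = golub_kahan(A, b, 3)
print(np.allclose(A @ V, U @ B))  # True: the fundamental relation holds
```

Only matrix-vector products with $A$ and $A^T$ are needed, which is what makes such methods matrix-free and attractive for large dynamic problems.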


Marco Cuturi, Generative Modeling with Optimal Transport
We present in this talk recent advances on the topic of parameter estimation using optimal transport, and discuss possible implementations for "Minimum Kantorovich Estimators". We show why these estimators are currently of great interest in the deep learning community, in which researchers have tried to formulate generative models for images. We will present a few algorithmic solutions to this problem.


Yiqiu Dong, Directional Regularization for Image Reconstruction
In this talk, I will introduce a new high-order anisotropic regularization based on the total generalized variation (TGV), which is very useful for applications with strong directional information. I will show that it has the same essential properties as TGV. With automatic direction estimators, we demonstrate the improvement of using directional TGV compared to standard TGV. Numerical simulations are carried out for image restoration and tomography reconstruction.


Remco Duits, Optimal Paths for Variants of the 2D and 3D Reeds-Shepp Car with Applications in Image Analysis
We consider a PDE-based approach for finding minimal paths for the Reeds-Shepp car. In our model we minimize a (data-driven) functional involving both curvature and length penalization, with several generalizations. Our approach encompasses the two- and three-dimensional variants of this model, state-dependent costs, and moreover, the possibility of removing the reverse gear of the vehicle. We prove both global and local controllability results for the models.
Via eikonal equations on the manifold $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$ we compute distance maps w.r.t. highly anisotropic Finsler functions, which approximate the singular (pseudo-)metrics underlying the model. This is achieved using a Fast-Marching (FM) method, building on Mirebeau [5,4]. The FM method is based on specific discretization stencils which are adapted to the preferred directions of the metric and obey a generalized acuteness property. The shortest paths can be found with a gradient descent method on the distance map, which we formalize in a theorem. We justify the use of our approximating metrics by proving convergence results.
Our curve optimization model in $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$ with data-driven cost allows us to extract complex tubular structures from medical images, e.g. crossings, and incomplete data due to occlusions or low contrast. Our work extends the results of Sanguinetti et al. [7] on numerical sub-Riemannian eikonal equations and the Reeds-Shepp car to 3D, with comparisons to exact solutions by Duits et al. [2]. Numerical experiments show the high potential of our method in two applications: vessel tracking in retinal images for the case $d = 2$, and brain connectivity measures from diffusion-weighted MRI data for the case $d = 3$, extending the work of Bekkers et al. [3] and Duits et al. [6] to 3D. We demonstrate how the new model without reverse gear better handles bifurcations.
Keywords: Finsler geometry on $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$, sub-Riemannian geometry, fast marching, Reeds-Shepp car, keypoints, tracking
Acknowledgements: The research leading to the results of this article has received funding from the European Research Council under the European Community's 7th Framework Programme (FP7/2007-2014)/ERC grant agreement No. 335555 (Lie Analysis). This work was partly funded by ANR grant NS-LBR, ANR-13-JS01-0003-01.
References
[1] R. Duits, S. Meesters, J.-M. Mirebeau, and J. Portegies. Optimal Paths for Variants of the 2D and 3D Reeds-Shepp Car with Applications in Image Analysis. Submitted to JMIV; preprint available at https://arxiv.org/pdf/1612.06137.pdf, 2017.
[2] R. Duits, A. Ghosh, T. C. J. Dela Haije, and A. Mashtakov. On Sub-Riemannian Geodesics in SE(3) Whose Spatial Projections do not Have Cusps. J Dyn Control Syst, 22(4):771–805, October 2016.
[3] E. Bekkers, R. Duits, A. Mashtakov, and G. Sanguinetti. A PDE Approach to Data-Driven Sub-Riemannian Geodesics in SE(2). SIAM J. Imaging Sci., 8(4):2740–2770, 2015.
[4] J.-M. Mirebeau. Efficient fast marching with Finsler metrics. Numer. Math., 126(3):515–557, July 2013.
[5] J.-M. Mirebeau. Anisotropic Fast-Marching on Cartesian Grids Using Lattice Basis Reduction. SIAM J. Numer. Anal., 52(4):1573–1599, January 2014.
[6] R. Duits, U. Boscain, F. Rossi, and Y. Sachkov. Association Fields via Cuspless Sub-Riemannian Geodesics in SE(2). J Math Imaging Vis, 49(2):384–417, December 2013.
[7] G.R. Sanguinetti, E.J. Bekkers, R. Duits, M.H.J. Janssen, A. Mashtakov, and J.-M. Mirebeau. Sub-Riemannian Fast Marching in SE(2). In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, number 9423 in Lecture Notes in Computer Science, pages 366–374. Springer International Publishing, 2015. DOI: 10.1007/978-3-319-25751-8_44.


Selim Esedoglu, Auction dynamics: A volume constrained MBO scheme
I will discuss a variant of Merriman, Bence, and Osher's threshold dynamics scheme for minimizing multiphase interfacial energies that allows various volume constraints on the individual phases. The upshot is an interesting and new connection between volume constrained multiphase mean curvature motion and auction algorithms that are inspired by market mechanisms. Applications to minimal partition problems in Euclidean space, and to semisupervised machine learning on graphs for computer vision, will be emphasized. Joint work with Matt Jacobs and Ekaterina Merkurjev.
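A minimal sketch of the underlying threshold-dynamics idea (a toy two-phase version of my own; the talk's auction algorithm handles the multiphase assignment, which the simple sorting-based threshold below only mimics):

```python
import numpy as np

def mbo_step(chi, sigma, volume=None):
    """One threshold-dynamics step on a periodic grid: diffuse the
    indicator function chi, then threshold. If `volume` is given, choose
    the threshold so that (up to ties) that many pixels stay set -- a
    crude sorting-based stand-in for the auction-based assignment."""
    n = chi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    heat = np.exp(-(kx**2 + ky**2) * sigma**2 / 2)   # Gaussian kernel in Fourier space
    u = np.real(np.fft.ifft2(np.fft.fft2(chi) * heat))
    if volume is None:
        return (u >= 0.5).astype(float)
    thresh = np.sort(u.ravel())[-volume]             # keep the top `volume` pixels
    return (u >= thresh).astype(float)

# toy example: a square relaxing under mean curvature flow while
# (approximately) keeping its area fixed
n = 64
chi = np.zeros((n, n)); chi[16:48, 16:48] = 1.0
vol = int(chi.sum())                                 # 1024 pixels
for _ in range(10):
    chi = mbo_step(chi, sigma=2.0, volume=vol)
print(int(chi.sum()))  # close to 1024 (exact up to ties in the diffused values)
```

Without the `volume` argument this is the plain Merriman-Bence-Osher scheme; the auction algorithms of the talk replace the naive sorting step when several phases compete for the same pixels.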


 

Andrew Fitzgibbon, Discrete images, continuous world: A basis for discussion?
Work done with Fabio Viola, John MacCormick, Vitaliy Kurlin and others.
An image provides a finite set of measurements of the world, and our goal in image analysis is often to reconstruct a representation of the world from those measurements. I will argue that this representation of the world, at any scale compatible with these images, is best thought of as continuous. Thus we propose that image reconstruction should minimize a functional of the form
\[\mathcal{E}(f)=\sum_{i=1}^{n}\psi\left (z_{i}-\int_{\Omega} f(u)\cdot \kappa_{i}(u)\,\mathrm{d}u \right ) +\mathcal{R}(f),\]
where:
- The world is modelled as a function $f:\Omega\to \mathbb{R}^{d}$. For a 2D camera $\Omega$ might be a sphere surrounding the imaging device.
- The image comprises $n$ measurements $\{z_{i} : 1\le i\le n\}\subset \mathbb{R}$. Note that the measurements live in $\mathbb{R}$, not in chromaticity space, in $k$-space, or pretty much anywhere else.
- Each measurement is equipped with a point-spread function $\kappa_{i}:\Omega\to \mathbb{R}^{d}$, which may be given, or may be inferred as part of the optimization.
- The kernel $\psi(t)$ might be e.g. $t^{2}$, $t^{p}$, or $\min (t^{2},\tau)$.
- The regularizer $\mathcal{R}(f)$ can be anything we already use for continuous variational image reconstruction: TV, elastica, etc.
Numerous advantages accrue from this formulation, such as refinement invariance, no loss of rotational symmetry, and, perhaps not least, a better match to reality. I will talk about experiments we've done using a moving-mesh finite element approach, showing the feasibility of optimizing this functional. Many existing approaches can be expressed as special cases of this approach, sometimes requiring amusing or revealing choices of finite element.
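To make the formulation concrete, here is a small numerical sketch; all choices (1-D world, Gaussian point-spread functions, $\psi(t)=t^{2}$, squared-gradient regularizer) are mine, for illustration only:

```python
import numpy as np

# 1-D world f on [0, 1], sampled on a fine grid; n pixel measurements z_i,
# each obtained by integrating f against a Gaussian point-spread function.
m = 200
u = np.linspace(0.0, 1.0, m)
du = u[1] - u[0]

def psf(center, width=0.05):
    k = np.exp(-0.5 * ((u - center) / width) ** 2)
    return k / (k.sum() * du)            # normalize: integral of kappa_i is 1

centers = np.linspace(0.1, 0.9, 9)       # n = 9 measurements
K = np.stack([psf(c) for c in centers])  # rows are the kappa_i

def energy(f, z, lam=1e-3):
    resid = z - (K @ f) * du             # z_i - \int f(u) kappa_i(u) du
    R = lam * np.sum(np.diff(f) ** 2) / du   # squared-gradient regularizer
    return np.sum(resid ** 2) + R        # psi(t) = t^2

f_true = 1.0 + np.sin(2 * np.pi * u)
z = (K @ f_true) * du                    # noiseless measurements
print(energy(f_true, z) < energy(np.zeros(m), z))  # True: truth beats the zero world
```

The grid here is just a quadrature device for the continuous integral; refining `m` changes the discretization of $f$, not the measurement model, which is the refinement invariance mentioned above.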


Bastian Goldlücke, Variational Inverse Problems in Light Field Analysis
In contrast to a traditional 2D image, a light field is defined on a 4D space of rays instead of a 2D image domain. In particular, different rays can intersect at the same point of a surface, and thus, for a Lambertian scene, the light field should have similar radiance along these rays. This constraint generates a rich structure on the 4D domain, which is strongly linked to scene geometry. Commonly, this structure is exploited to infer depth from a single light field, but it also needs to be respected if one intends to solve inverse problems on ray space.
In this talk, I will first give an introduction to the light field structure and the various transformations used to recover depth and perform occlusion analysis. From this, I will derive a rich set of example applications, such as convex ray space priors which preserve its structure, sparse coding techniques to infer depth for multilayered scenes, and variational models for intrinsic light field decomposition.


Bernadette Hahn, Time-dependent inverse problems in imaging
The reconstruction process in tomography represents a well-known application of the theory of inverse problems and is well understood if the specimen is stationary during the data acquisition process.
However, tomographic modalities are also utilized to visualize structural and functional changes of a specimen, e.g. in medical diagnosis, radiotherapy treatment planning or imaging objects at working stage. In general, this dynamic behavior leads to temporal undersampling and inconsistent measurements. Thus, if the dynamics are not taken into account, the reconstructed images can suffer from motion artefacts like blurring, ghosting, etc., which can significantly impede a reliable diagnosis.
Consequently, the imaging process has to be modeled and solved as a time-dependent inverse problem. In particular, efficient numerical methods are required which can extract the searched-for information from motion-corrupted tomographic data. This talk addresses these challenges, and examples from computerized tomography and magnetic resonance tomography illustrate the advantages of our approach. In addition, we also analyze peculiarities compared to the classic static case.


Martin Holler, Total Generalized Variation for Manifold-valued Data
Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector space data.
On the other hand, variational models for manifold-valued data have become popular recently, and many approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Despite the fact that TGV regularization is generally considered to be preferable to such approaches, an appropriate extension for manifold-valued data was still missing.
In this talk we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and show its potential for applications with manifold-valued data.
This is joint work with K. Bredies, M. Storath and A. Weinmann.


Jan Lellmann, Measure-Valued Variational Models
We investigate a class of variational problems where the unknown function assumes values in the space of probability measures on some metric space. We consider mathematical questions regarding existence and suitable regularization, and discuss applications from diffusion-weighted imaging and non-convex lifting.


Dirk Lorenz, Using the Douglas-Rachford method in imaging
The Douglas-Rachford method has been around for quite some time, but its usage in imaging has increased significantly only recently. The method usually compares favorably to other proximal methods, and its convergence has been analyzed in several settings. It has been observed that the choice of the stepsize has a great impact on the performance of the method, and we will present a way to tune the stepsize adaptively. To show convergence of the method with this adaptive choice, we prove a new convergence result on the non-stationary iteration. Due to the connection between the Douglas-Rachford method and the alternating direction method of multipliers (ADMM), our results also apply to ADMM. Numerical experiments illustrate that the adaptive stepsize choice usually improves the performance of the method significantly.
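For orientation, here is a minimal sketch of the fixed-stepsize Douglas-Rachford iteration on a toy convex feasibility problem (my own example; the adaptive stepsize rule of the talk is not reproduced):

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto the Euclidean ball of radius r at the origin."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else r * x / nrm

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {y : <a, y> <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

# Douglas-Rachford for the feasibility problem:
# find a point in (unit ball) ∩ {x1 + x2 <= -0.5}
a, b = np.array([1.0, 1.0]), -0.5
x = np.array([3.0, 2.0])                 # governing sequence
lam = 1.0                                # fixed relaxation/stepsize parameter
for _ in range(200):
    pA = proj_ball(x)
    pB = proj_halfspace(2 * pA - x, a, b)
    x = x + lam * (pB - pA)
sol = proj_ball(x)                       # the "shadow" point converges to a solution
print(sol)
```

With general proximal operators in place of the two projections, the same iteration minimizes a sum of two convex functions, which is the setting of the talk.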


Michael Möller, Edge Aligning Image Regularizations
This talk will give an overview of different strategies for image regularizations that encourage the edges in multiple channels to coincide. The position and direction of such edges can either be predetermined by a guidance image or estimated during the reconstruction of multiple channels. For the latter, I will present an iterative strategy based on Bregman distances as well as collaborative total variation and total generalized variation type penalties. Finally, I will discuss the idea of replacing the proximal operator arising from the regularization in common convex optimization frameworks by a neural network trained for image denoising.


Peter Ochs, Nonsmooth Nonconvex Bregman Minimization: Unification and new Algorithms
We propose a unifying algorithm for nonsmooth nonconvex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications, including many linear and nonlinear inverse problems in image processing and machine learning.


 

Xavier Pennec, Barycentric Subspace Analysis: an extension of PCA to Manifolds
In this talk I address the generalization of Principal Component Analysis (PCA) to Riemannian manifolds and potentially more general stratified spaces. Tangent PCA is often sufficient for analyzing data which are sufficiently centered around a central value (unimodal or Gaussian-like data), but fails for multimodal or large-support distributions (e.g. uniform on close compact subspaces). Instead of a covariance matrix analysis, Principal Geodesic Analysis (PGA) and Geodesic PCA (GPCA) propose to minimize the distance to geodesic subspaces (GS), which are spanned by the geodesics going through a point whose tangent vectors lie in a restricted linear subspace of the tangent space. Other methods like Principal Nested Spheres (PNS) are restricted to simpler manifolds but emphasize the need for nestedness of the resulting principal subspaces.
In this work, we first propose a new and more general type of family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of $k+1$ reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces it naturally allows principal subspaces that span several strata, which is not the case with PGA. Barycentric subspaces locally define a submanifold of dimension $k$ which generalizes geodesic subspaces. Like in PGA, barycentric subspaces can naturally be nested, which allows the construction of inductive forward nested subspaces approximating the data points which contain the Fréchet mean. However, it also allows the construction of backward flags which may not contain the mean. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (hierarchies of properly embedded linear subspaces of increasing dimension). To that end, we propose an extension of the unexplained variance criterion that generalizes nicely to flags of barycentric subspaces in Riemannian manifolds. This results in a particularly appealing generalization of PCA on manifolds, which we call Barycentric Subspace Analysis (BSA). The method will be illustrated on spherical and hyperbolic spaces, and on diffeomorphisms encoding the deformation of the heart in cardiac image sequences.


Konrad Polthier, Branched Covering Surfaces
Multivalued functions and differential forms naturally lead to the concept of branched covering surfaces, and more generally of branched covering manifolds, in the spirit of Hermann Weyl's book "Die Idee der Riemannschen Fläche" from 1913. This talk will illustrate and discretize basic concepts of branched (simplicial) covering surfaces, starting from complex analysis and surface theory up to their recent appearance in geometry processing algorithms and artistic mathematical designs. Applications will touch on differential-based surface modeling, image and geometry retargeting, global surface and volume remeshing, and novel woven geometry representations with recent industrial applications.


Edoardo Provenzi, Retinex-like models in color enhancement
In this talk, I will present and compare many different interpretations of the famous Retinex model of color vision, developed by E. Land in the late sixties. In particular, I will underline the difference between physically-based and spatially-based Retinex interpretations through the use of variational techniques. Finally, I will briefly discuss the role of Retinex in the development of perceptually-based contrast measures.


Irène Waldspurger, Alternating projections for phase retrieval with random sensing vectors
Phase retrieval consists in reconstructing an unknown element of a complex vector space from the moduli of linear measurements. The first reconstruction algorithms for which theoretical guarantees could be proven relied on convexification techniques. It has only recently been realized that similar guarantees hold for non-convex local search methods, which are faster and simpler, provided that their starting point is carefully chosen. We will explain how to establish these guarantees for the most well-known local search method: alternating projections. We will also discuss the role of the initialization method.
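A minimal sketch of alternating projections with a spectral initialization for the random Gaussian measurement model (all parameter choices are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 200                            # heavily oversampled Gaussian model
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x_true)                    # phaseless measurements

# spectral initialization: top eigenvector of A^H diag(b^2) A
M = (A.conj().T * b**2) @ A / m
_, V = np.linalg.eigh(M)
x = V[:, -1]

# alternating projections: impose the measured moduli,
# then project back onto the range of A
Apinv = np.linalg.pinv(A)
for _ in range(200):
    y = A @ x
    x = Apinv @ (b * y / np.maximum(np.abs(y), 1e-12))

s = np.vdot(x_true, x)                    # align the global phase before comparing
err = np.linalg.norm(x - (s / abs(s)) * x_true) / np.linalg.norm(x_true)
print("relative error:", err)
```

The recovery is only defined up to a global phase, hence the alignment step before measuring the error; with random initialization instead of the spectral one, the same iteration can stall, which is the role of the initialization discussed in the talk.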


Clarice Poon, Multidimensional Sparse Super-resolution
This talk is concerned with the super-resolution problem for positive spikes in arbitrary dimensions. More precisely, I will discuss the issue of support recovery for the so-called BLASSO method. While super-resolution is of paramount importance in overcoming the limitations of many imaging devices, its theoretical analysis is still lacking beyond the one-dimensional case. The reason is that in two dimensions and beyond, the relative positions of the spikes enter the picture, and one needs to account for these different geometrical configurations. After presenting an algorithmic description of the limit of the associated dual problems as the spikes cluster around a given point, I will present a detailed analysis of the support stability and super-resolution effect in the case of a pair of spikes. This is joint work with Gabriel Peyré.


Jean-Michel Morel, A theory of anomaly detection in images
Anomaly detection cannot be formulated in a Bayesian framework, which would require simultaneously learning a model of the anomaly and a model of the background. (In the case where there are plenty of examples of the background and of the object to be detected, neural networks may provide a practical answer, but without explanatory power.) In the case of anomalies, we have only a single image as our sole source of information about the background, and no examples at all of the anomaly. Thus one is led to learn a background model from very few samples, and to detect an anomaly as a large deviation from this background model. I'll show how the anomaly detection problem can be reduced to the simpler problem of detecting outliers in noise. I'll develop the proposed solution as a logical deduction from the huge literature on anomaly detection. Work carried out in collaboration with Axel Davy, Mauricio Delbracio and Thibaud Ehret.


Zuowei Shen, Title: TBA
Abstract: TBA


Wotao Yin, Delays are not noise: Asynchronous Parallel Algorithms for Large-Scale Optimization
As early as ten years ago, single-threaded CPU performance stopped improving significantly, due to physical limitations; it is the number of cores in each machine that continues to rise. Today we can buy 8-core phones, 64-core workstations, and 2.5k-core GPUs at affordable prices. On the other hand, most of our algorithms are still single-threaded, and the algorithms that have been parallelized are still synchronous. Consequently, their running time is restricted by the performance of a single core or of the slowest core, respectively. To develop faster algorithms, especially for the large-scale problems that arise in image processing and machine learning, it is inevitable to consider asynchronous parallel computing.
Many numerical problems reduce to the fixed-point problem of solving $x=T(x)$. This talk discusses the structure in the operator $T$ that prevails in many optimization problems and enables highly efficient asynchronous algorithms. By "asynchronous", we mean that the algorithm runs on multiple threads, and each thread computes with possibly delayed information from the other threads. The threads do not coordinate their iterations. On modern computer architectures, asynchronous algorithms can be an order of magnitude faster than standard (synchronous) parallel algorithms! We present a fundamental theoretical result: as long as $T$ has a fixed point and is nonexpansive, the asynchronous coordinate-update algorithm converges to a fixed point under either a bounded delay or certain kinds of unbounded delays. We also present an analysis to explain the asynchronous linear speedup: for large problems, asynchronous algorithms complete more iterations per second with the same quality per iteration. Finally, we demonstrate how to solve some large-scale applications.
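A serial sketch of the coordinate-update iteration for $x = T(x)$ with a nonexpansive $T$ (my own toy instance; the asynchronous version would let many threads apply such updates concurrently with possibly stale reads, machinery not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
alpha = 1.0 / np.linalg.norm(A, 2)   # then T(x) = x - alpha*(A x - b) is nonexpansive

x = np.zeros(n)
for _ in range(20000):
    i = rng.integers(n)                # pick one coordinate, as one thread would,
    x[i] -= alpha * (A[i] @ x - b[i])  # and apply only coordinate i of T
res = np.linalg.norm(A @ x - b)
print("residual:", res)              # the fixed point of T solves A x = b
```

Each update touches a single coordinate, so in a shared-memory implementation many such updates can proceed in parallel; the theory cited above guarantees convergence even when the read vector `x` is stale by a bounded (or suitably controlled unbounded) delay.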

 