Lecture 1: Hierarchical Low Rank Tensor Formats
The first lecture on hierarchical low rank formats starts with a general introduction
to notions of rank in higher dimensions, namely the Canonical Polyadic (CP), Tucker, tensor
train (TT), and Hierarchical Tucker (HT) ranks, and the corresponding data-sparse tensor representations.
Each of these notions of rank gives rise to a different set or manifold of tensors of fixed
rank, and we compare the advantages and drawbacks of the different formats. The
concepts are introduced in the discrete setting, where a tensor is a mapping from a d-fold
Cartesian product of finite index sets into the real numbers, but we also point out the relation
to d-variate functions, i.e. the continuous setting. We summarize some interesting open
questions that lead to new and sometimes very difficult areas of research.
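As a small illustration of one of these formats, a tensor train representation can be computed from a full tensor by sequential truncated SVDs of its unfoldings (the TT-SVD procedure). The following NumPy sketch is illustrative only, not the implementation used in the lectures; the relative tolerance `eps` and the truncation rule are assumed choices.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into TT (tensor train) cores
    via sequential truncated SVDs of its unfoldings."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    # First unfolding: mode 0 (times the dummy rank 1) against all remaining modes.
    C = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # Keep singular values above a relative tolerance (at least one).
        r = max(1, int(np.sum(s > eps * s[0]))) if s[0] > 0 else 1
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Carry the remainder over to the next unfolding.
        C = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape([G.shape[1] for G in cores])
```

For a d-dimensional tensor with mode sizes n and TT ranks bounded by r, the cores store O(d n r^2) numbers instead of the n^d entries of the full tensor, which is the data-sparsity referred to above.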
Lecture 2: Low Rank Model Reduction and Uncertainty Quantification
The second lecture is devoted to the application of hierarchical low rank formats
for uncertainty quantification, more specifically for the data-sparse representation
of parameter-dependent quantities of interest. The data-sparse low rank formats are
closely related to reduced basis techniques for linear and nonlinear model reduction,
but each method has its own advantages and disadvantages. The most challenging
question in this area is whether the object of interest possesses a low rank
structure at all, and how it can be approximated reliably. From the data-sparse low rank
representation of the object of interest one can quickly, though not trivially, derive a
variety of interesting quantities such as the expectation, variance, and correlations.