Direct and Inverse Problems for PDEs with Random Coefficients - Abstract
Matthies, Herman G.
The inverse problem of determining data (coefficient fields) in an SPDE model -- or some other computational model -- from observations is intimately tied to the ability to compute a conditional expectation.
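To fix some notation for what follows (the symbols here are assumptions of this sketch, not taken from the talk itself): if $q$ denotes the uncertain coefficient field, $y(q)$ the -- possibly noisy -- observable predicted by the model, and $\hat{y}$ the actual measurement, then the Bayesian update of any quantity of interest $\varphi(q)$ rests on the conditional expectation
\[
  \mathbb{E}[\varphi(q) \mid \sigma(y)],
\]
i.e. the expectation of $\varphi(q)$ conditioned on the sigma-algebra generated by the observation operator $y$.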
The talk will quickly review the Bayesian setting in connection with the conditional expectation, and then classify the different algorithms and possibilities, ranging from sampling the posterior to filter-like algorithms. Here we will mainly concentrate on the latter class and look at the various approximations involved.
In its original conception the conditional expectation is just an orthogonal projection, but when factored through a possibly non-linear observation operator it becomes more complicated and can typically not be computed exactly. This requires a further approximation, in which the conditional expectation is approximated on finite-dimensional subspaces.
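Concretely -- again with the notation assumed above -- the projection view can be written as a minimization over measurable functions of the observation,
\[
  \mathbb{E}[q \mid \sigma(y)] = \phi^*(y(q)), \qquad
  \phi^* = \operatorname*{arg\,min}_{\phi\ \mathrm{measurable}} \mathbb{E}\big[\, \| q - \phi(y(q)) \|^2 \,\big],
\]
and the further approximation consists in restricting $\phi$ to a finite-dimensional ansatz space, e.g. polynomials in $y$ up to a fixed degree; the affine (degree-one) choice already leads to Kalman-filter-like update formulas.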
The next question to address is which characteristics of the posterior to compute and update.
The filter algorithms typically correct the conditional mean, leaving everything else unchanged. This gives a mapping from the prior to the posterior variable. Higher-order characteristics can typically no longer be formulated as a mapping from prior to posterior, and the task becomes one of constructing a new random variable which has certain desired properties and agrees with the posterior characteristics. A sketch of such a mean-correcting update follows below.
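As a minimal sketch of a mean-correcting, filter-like update (hypothetical Python; the function name, arguments and the ensemble-Kalman-style gain are assumptions of this illustration, not necessarily the exact algorithm of the talk):

    import numpy as np

    def linear_filter_update(q_prior, y_prior, y_obs, obs_noise_cov):
        # q_prior       : (n, n_q) samples of the prior variable q
        # y_prior       : (n, n_y) predicted observations y(q) for each sample
        # y_obs         : (n_y,)   the actual measurement
        # obs_noise_cov : (n_y, n_y) covariance of the observation noise
        n = q_prior.shape[0]
        dq = q_prior - q_prior.mean(axis=0)
        dy = y_prior - y_prior.mean(axis=0)
        C_qy = dq.T @ dy / (n - 1)                  # cross-covariance of q and y(q)
        C_yy = dy.T @ dy / (n - 1) + obs_noise_cov  # covariance of the predicted observation
        K = C_qy @ np.linalg.inv(C_yy)              # Kalman-type gain
        # The same affine map is applied to every sample: it is constructed to
        # correct the conditional mean (in the linear approximation), while all
        # other characteristics are simply transported from the prior samples.
        return q_prior + (y_obs - y_prior) @ K.T

The design choice mirrors the text: the update is a single affine map from the prior to the posterior variable, so it matches the (linearly approximated) conditional mean, whereas higher-order posterior characteristics are not explicitly imposed by the map itself.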