Dynamical Monte Carlo Methods (MCMC)

The usage of Markov chains in numerical simulation may serve different purposes. On the one hand, it may be desirable to observe a certain Markovian system evolving in time; on the other hand, the chain may serve merely as a tool for approximate numerical integration. Depending on the purpose, different error criteria are relevant.

The work within this area aims at setting up mathematical foundations for the use of Markov chains in numerical analysis, a topic which is relevant within the project field Applied Mathematical Finance, in particular for Monte Carlo methods in finance. However, this is also of general interest and thus fits the project field Numerical Methods.


The focus is on using Markov chains for numerical integration when direct simulation is not feasible. An appropriate error criterion is introduced as follows.

Suppose that a Markov chain with invariant distribution $\pi$ is given by some transition kernel $K$. The error made when a sample $X_1,\dots,X_N$ from this Markov chain is used to integrate a square integrable function $f$ is then measured by

\begin{displaymath}
\mathrm{err}(f,K):=\left(\mathbf E\left\vert\frac 1 N \sum_{j=1}^N f(X_j) - \int f\; d\pi\right\vert^2\right)^{1/2}.
\end{displaymath}

Typically we expect this error to behave like $C(f,K)\times N^{-1/2}$. For applications it is important to control the constant $C(f,K)$, in particular its dependence on $f$ and $K$.
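As a toy illustration of this error behaviour, the following sketch (a hypothetical example, not from the project) runs a Metropolis random walk whose invariant distribution is the standard normal, and estimates the root mean square error of the time average of $f(x)=x^2$ over independent replications; the error should decay roughly like $N^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy example: Metropolis random walk targeting the standard
# normal distribution pi; for f(x) = x^2 we know int f dpi = 1 exactly.
def metropolis_chain(N, step=1.0):
    x = 0.0
    samples = np.empty(N)
    for j in range(N):
        prop = x + step * rng.standard_normal()
        # accept with probability min(1, pi(prop)/pi(x))
        if rng.random() < np.exp(min(0.0, 0.5 * (x * x - prop * prop))):
            x = prop
        samples[j] = x
    return samples

def f(x):
    return x**2

def rmse(N, reps=50):
    """Estimate err(f, K) by averaging over independent replications."""
    errs = [np.mean(f(metropolis_chain(N))) - 1.0 for _ in range(reps)]
    return float(np.sqrt(np.mean(np.square(errs))))

# The error should decay roughly like C(f, K) * N**(-1/2),
# so quadrupling N should about halve the root mean square error.
print(rmse(500), rmse(2000))
```

Note that the constant hidden in this decay depends on both $f$ and the kernel (here, on the step size of the random walk), which is exactly the dependence $C(f,K)$ one wishes to control.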

Complexity of Ill-posed Problems

In this sub-project we study ill-posed problems, where we wish to recover an element x of a Hilbert space Y from indirectly observed data near y=Ax, where A is some injective compact linear operator mapping Y into a Hilbert space X. In practice the data cannot be observed exactly, but only in discretized and noisy form, such that we are given a vector $\varphi(y_\delta)=\left\{y_{\delta,i}\right\}_{i=1}^{n}\in \mathbb{R}^n$ defined by

\begin{displaymath}
y_{\delta,i}=\langle y_\delta,\varphi_i\rangle = \langle Ax,\varphi_i\rangle + 
\delta \xi_i,\quad i=1,\dots,n,
\end{displaymath}

where $\langle\cdot,\cdot\rangle$ denotes the inner product in X, $\varphi_i,\ i=1, \dots,n$, is some orthonormal system, usually called the design, and $\xi_i,\ i=1, \dots,n$, is the noise, which is assumed to be normalized. For deterministic noise this means $\Vert \xi \Vert \leq 1$. In the stochastic setting we assume Gaussian white noise for simplicity, i.e., the family $\xi_i,\ i=1, \dots,n$, consists of independent standard normal variables. The operator A determines the way in which the observations are indirect.
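A minimal simulation of this observation model, under assumed concrete choices (A the integration operator on $L^2(0,1)$, x(u)=cos(&pi;u), and the sine design $\varphi_i(t)=\sqrt 2\sin(i\pi t)$ — all hypothetical choices for illustration), might look as follows:

```python
import numpy as np

rng = np.random.default_rng(42)
n, delta = 10, 0.01
t = np.linspace(0.0, 1.0, 2001)   # fine grid for quadrature
dt = t[1] - t[0]

# Assumed example: A is the integration operator (Ax)(s) = int_0^s x(u) du
# on L2(0,1), applied to x(u) = cos(pi*u), so (Ax)(s) = sin(pi*s)/pi.
Ax = np.sin(np.pi * t) / np.pi

# Orthonormal design in L2(0,1): phi_i(t) = sqrt(2)*sin(i*pi*t)
def observation(i):
    phi = np.sqrt(2.0) * np.sin(i * np.pi * t)
    return np.sum(Ax * phi) * dt   # inner product <Ax, phi_i> by quadrature

xi = rng.standard_normal(n)        # Gaussian white noise
y_delta = np.array([observation(i) for i in range(1, n + 1)]) + delta * xi
print(y_delta)
```

Here the first coefficient carries essentially all the signal, while the remaining ones are noise at level $\delta$, which illustrates how the design and the smoothing by A together determine what the discrete data reveal about x.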

This mathematical problem is accompanied by a numerical one, and we have to specify the class of admissible numerical methods. Any such method is assumed to be based on some design, say $\Phi:=\left\{\varphi_1,\dots,\varphi_n\right\}$, which describes how the noisy observations are obtained. The resulting approximation based on such a design may be given by an arbitrary (measurable) mapping $S:\mathbb{R}^n\to Y$, hence the approximation is

\begin{displaymath}
u({y_\delta}):=S(\langle y_\delta,\varphi_1\rangle,\langle y_\delta,\varphi_2\rangle,\dots,\langle y_\delta,\varphi_n\rangle).
\end{displaymath}
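One concrete (hypothetical) choice of the mapping S is Tikhonov regularization. The sketch below works in the singular basis of an assumed operator with singular values $s_k = 1/k$, so that the data reduce to a sequence-space model, and compares the regularized reconstruction with naive inversion of the noisy data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 200, 1e-3

# Assumed sequence-space model: in the singular basis of A the operator acts
# diagonally with singular values s_k = 1/k; x_true holds the coefficients of x.
k = np.arange(1, n + 1)
s = 1.0 / k
x_true = k**-1.5
y_delta = s * x_true + delta * rng.standard_normal(n)   # noisy discrete data

# Tikhonov regularization as one concrete choice of the mapping S:
# minimize |s*x - y_delta|^2 + alpha*|x|^2  =>  x_alpha = s*y_delta/(s^2 + alpha)
alpha = delta                   # simple a priori parameter choice, not optimal
x_alpha = s * y_delta / (s**2 + alpha)

err = np.linalg.norm(x_alpha - x_true)
naive = np.linalg.norm(y_delta / s - x_true)   # unregularized inversion
print(err, naive)
```

The naive inversion amplifies the noise by the factors $1/s_k = k$, while the regularized map S trades a small bias against a controlled variance; this trade-off, as a function of $\delta$, $n$, and the design, is what the efficiency analysis quantifies.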

The aim of this project is to study efficiency issues in recovering the unknown element x from indirect, noisy, and discrete observations, as described above.

The studies within this sub-project are primarily carried out within the project field Statistical Data Analysis, specifically under the topic Numerics of statistical ill-posed problems.



A first discussion of these problems was given in the joint paper "Optimal discretization of inverse problems in Hilbert scales. Regularization and self-regularization of projection methods" with Sergei V. Pereverzev.

Since then the collaboration has continued, resulting in a series of papers in the same spirit.

Recently, the focus has been on equations with variable source conditions, which lead to problems in variable Hilbert scales; results in this direction can be found in recent joint publications. Several papers deal with the a posteriori choice of the regularization parameters. Finally, we mention the studies on the discretization of ill-posed problems.
Peter Mathé