Collaborator: D. Belomestny, S. Jaschke, A. Kolodko, D. Mercurio, G.N. Milstein, O. Reiß, J. Schoenmakers, V. Spokoiny, J.-H. Zacharias-Langhans
Cooperation with: C. Byrne (Springer, Heidelberg), R. Cont (Ecole Polytechnique, Palaiseau, France), B. Coffey (Merrill Lynch, London, UK), H. Föllmer, W. Härdle, U. Küchler, R. Stehle (Humboldt-Universität (HU) zu Berlin), H. Friebel (Finaris, Frankfurt am Main), P. Glasserman (Columbia University, New York, USA), H. Haaf, U. Wystup (Commerzbank AG, Frankfurt am Main), A.W. Heemink, E.v.d. Berg, D. Spivakovskaya (Technical University Delft, The Netherlands), F. Jamshidian (NIB Capital Den Haag, The Netherlands), J. Kampen (Ruprecht-Karls-Universität, Heidelberg), J. Kienitz (Postbank, Bonn), P. Kloeden (Johann Wolfgang Goethe-Universität Frankfurt am Main), C. März, D. Dunuschat, T. Sauder, S. Wernicke (Bankgesellschaft Berlin AG, Berlin), B. Matzack (Kreditanstalt für Wiederaufbau (KfW), Frankfurt am Main), S. Nair (Chapman & Hall, London, UK), M. Schweizer (Universität München / ETH Zürich), S. Schwalm (Reuters FS, Paris, France), G. Stahl (Bundesaufsichtsamt für das Kreditwesen (BAFin), Bonn), D. Tasche (Deutsche Bundesbank, Frankfurt am Main)
Supported by: 
BMBF: ``Effiziente Methoden zur Bestimmung von Risikomaßen''
(Efficient methods for valuation of risk measures),
DFG: DFG-Forschungszentrum ``Mathematik für Schlüsseltechnologien'' 
(Research Center ``Mathematics for Key Technologies''), project E5;
SFB 373 ``Quantifikation und Simulation ökonomischer Prozesse''
(Quantification and simulation of economic processes),
Reuters Financial Software, Paris
Description:
The central theme of the project Applied mathematical finance is the quantitative treatment of problems raised by the financial industry, based on innovative methods and algorithms developed in accordance with fundamental principles of mathematical finance. These problems include stochastic modeling of financial data, valuation of complex derivative instruments (options), and risk analysis. The methods and algorithms developed benefit strongly from the synergy with the projects Statistical data analysis and Numerical methods for stochastic models.
The valuation of financial derivatives based on arbitrage-free asset pricing involves non-trivial mathematical problems in martingale theory, stochastic differential equations, and partial differential equations. While its main principles are established (Harrison, Pliska, 1981), many numerical problems remain such as the numerical valuation of (multidimensional) American equity options and the valuation of Bermudan-style derivatives involving the term structure of interest rates (LIBOR models), [6]. The valuation and optimal exercise of American and Bermudan derivatives is one of the most important problems both in theory and practice, see, e.g., [1]. American options are options contingent on a set of underlyings which can be exercised at any time in some prespecified future time interval, whereas Bermudan options may be exercised at a prespecified discrete set of future exercise dates. In general, the fair price of an American- or Bermudan-style derivative can be represented as the solution of an optimal stopping problem.
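To make the optimal stopping formulation concrete, the following small Python sketch prices a Bermudan put by backward (dynamic programming) induction on a binomial lattice; the model, the exercise dates, and all parameter values are illustrative assumptions and not part of the project itself.

```python
import numpy as np

def bermudan_put_binomial(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                          n_steps=200, exercise_dates=(0.25, 0.5, 0.75, 1.0)):
    """Price a Bermudan put by backward induction on a CRR binomial lattice.

    The option value solves the optimal stopping problem
        V_0 = sup_tau E[ exp(-r*tau) * max(K - S_tau, 0) ],
    where tau ranges over stopping times taking values in the exercise dates.
    """
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)

    # lattice indices of the allowed exercise dates
    ex_steps = {int(round(t / dt)) for t in exercise_dates}

    # terminal stock prices and payoffs
    j = np.arange(n_steps + 1)
    S = S0 * u**j * d**(n_steps - j)
    V = np.maximum(K - S, 0.0)

    # backward induction: discount continuation value, compare with payoff at exercise dates
    for step in range(n_steps - 1, -1, -1):
        V = disc * (p * V[1:] + (1.0 - p) * V[:-1])
        S = S0 * u**np.arange(step + 1) * d**(step - np.arange(step + 1))
        if step in ex_steps:
            V = np.maximum(V, K - S)
    return V[0]

if __name__ == "__main__":
    print("Bermudan put value:", bermudan_put_binomial())
```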
 
Suppose the exercise boundary g of an American option is already known for $\bar t \le t \le T$. By using the free boundary conditions and taking boundary limits of the Black-Scholes PDE and its time derivative in the continuation region, we have shown that the derivative $g'(\bar t)$ can be expressed in $u_{xxx}(\bar t, g(\bar t)+)$ and the known quantities $u_x(\bar t, g(\bar t)+)$, $u_{xx}(\bar t, g(\bar t)+)$, which can be expressed in the PDE coefficients and the pay-off function f. As a result, we may compute the third derivative $u_{xxx}$ numerically via a Taylor expansion from an accurate enough computation of $u(\bar t, g(\bar t) + hq)$ at a neighborhood point in the continuation region. The latter can be done by standard Monte Carlo simulation using the known exercise curve for $\bar t \le t \le T$, see Figure 1. Having $g'(\bar t)$, the exercise curve can be extended one step, $g(\bar t - h) \approx g(\bar t) - g'(\bar t)\,h$, and then we proceed in the same way. In [22] this method is generalized to the multidimensional case.
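The Monte Carlo step mentioned above can be sketched as follows: given an exercise curve that is already known on $[\bar t, T]$, the value at a point of the continuation region is estimated by simulating paths and exercising at the first crossing of the boundary. The Black-Scholes dynamics, the linear boundary, and all parameters below are placeholder assumptions for illustration only.

```python
import numpy as np

def american_put_mc_with_boundary(x0, t_bar, T, g, K=100.0, r=0.05, sigma=0.2,
                                  n_paths=100_000, n_steps=200, seed=0):
    """Estimate u(t_bar, x0) for an American put whose exercise boundary g(t)
    is already known on [t_bar, T]: simulate GBM paths and exercise at the
    first time the path falls to or below the boundary."""
    rng = np.random.default_rng(seed)
    dt = (T - t_bar) / n_steps
    disc = np.exp(-r * dt)

    S = np.full(n_paths, x0)
    payoff = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)      # paths not yet exercised
    df = 1.0                                  # running discount factor

    for step in range(1, n_steps + 1):
        t = t_bar + step * dt
        z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        df *= disc
        hit = alive & (S <= g(t))             # boundary crossed: exercise now
        payoff[hit] = df * np.maximum(K - S[hit], 0.0)
        alive &= ~hit

    # paths that never crossed the boundary: exercise value at maturity
    payoff[alive] = df * np.maximum(K - S[alive], 0.0)
    return payoff.mean()

if __name__ == "__main__":
    # hypothetical boundary, increasing towards the strike at maturity
    g = lambda t: 80.0 + 20.0 * t
    print(american_put_mc_with_boundary(x0=100.0, t_bar=0.0, T=1.0, g=g))
```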
Another new method, which constructs an upper bound for the Bermudan/American price from a given lower bound, is currently under development. This approach is based on the fact that an American option is equivalent to a European option with a consumption process involved. The value of the upper bound V(t, x) at a position (t, x) is constructed by the Monte Carlo method. Our attention focuses on constructing new numerical procedures and their practical implementation. The results of numerical experiments confirm the efficiency of the proposed algorithms ([3]).
In a recently initiated research cooperation with J. Kampen at Heidelberg University, we aim to value Bermudan-style derivatives in the LIBOR market model based on higher-order approximations of Greenian kernels. The Greenian kernels are connected with the (high-dimensional) LIBOR process, and integration with respect to these kernels will be implemented on sparse grids.
Robust calibration of the LIBOR market model, a popular benchmark model for effective forward interest rates ([5], [17], [28]), to liquidly traded instruments such as caps and swaptions has been a challenging problem for several years. In particular, calibration methods which avoid the use of historical data are very desirable, both from a practical and a more fundamental point of view. The dynamics of the LIBOR model is given by
$$dL_i(t) = -L_i(t)\,\gamma_i(t)\sum_{j=i+1}^{n-1}\frac{\delta_j L_j(t)}{1+\delta_j L_j(t)}\,\rho_{ij}(t)\,\gamma_j(t)\,dt + L_i(t)\,\gamma_i(t)\,dW_i^{(n)}(t), \qquad 1 \le i \le n-1,$$
where the LIBOR/EURIBOR processes $L_i$ are defined on $[t_0, T_i]$, with $\delta_i = T_{i+1} - T_i$ being day count fractions and $\gamma_i$ being scalar deterministic volatility functions. Further, $(W_i^{(n)}(t) \mid t_0 \le t \le T_{n-1})$ are correlated Wiener processes under the so-called terminal measure $P_n$, with deterministic local covariance structure $dW_i^{(n)}\,dW_j^{(n)} = \rho_{ij}(t)\,dt$.
$$\gamma_i(t) := c_i\, g(T_i - t), \qquad m(t) := \min\{m : T_m \ge t\},$$
with
$$g(s) = g_\infty + (1 - g_\infty + a s)\,e^{-b s},$$
and, for example, one of the correlation structures developed by Schoenmakers & Coffey in [40], [41], based on some semi-parametric framework, [19], [39]:
$$\rho_{ij} = \exp\left[-\frac{|j-i|}{m-1}\left(-\ln\rho_\infty + \eta\,\frac{i^2+j^2+ij-3mi-3mj+3i+3j+2m^2-m-4}{(m-2)(m-3)}\right)\right],$$
$$\rho_\infty > 0, \qquad 0 < \eta < -\ln\rho_\infty.$$
The parametrization thus involves the scalar parameters $a$, $b$, $g_\infty$, $\rho_\infty$, $\eta$ and the coefficient vector $c$. Calibration of this volatility structure to a set of market cap and swaption volatilities comes down to fitting the model cap and swaption volatilities to a rather flat surface of market quotes, see the first picture in Figure 3.
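For illustration, the volatility functions $\gamma_i(t) = c_i\,g(T_i - t)$ and a Schoenmakers-Coffey-type correlation matrix can be set up as in the following sketch; the parameter values are arbitrary placeholders and the code is not the calibration routine itself.

```python
import numpy as np

def g(s, g_inf=0.5, a=1.0, b=2.0):
    """Parametric volatility shape g(s) = g_inf + (1 - g_inf + a*s) * exp(-b*s)."""
    return g_inf + (1.0 - g_inf + a * s) * np.exp(-b * s)

def gamma(i, t, T, c, **g_params):
    """Scalar LIBOR volatility gamma_i(t) = c_i * g(T_i - t)."""
    return c[i] * g(T[i] - t, **g_params)

def sc_correlation(m, rho_inf=0.3, eta=0.5):
    """Schoenmakers-Coffey two-parameter full-rank correlation matrix
    (indices i, j = 1..m), requiring rho_inf > 0 and 0 < eta < -log(rho_inf)."""
    assert rho_inf > 0.0 and 0.0 < eta < -np.log(rho_inf)
    i = np.arange(1, m + 1)[:, None]
    j = np.arange(1, m + 1)[None, :]
    poly = (i**2 + j**2 + i * j - 3 * m * i - 3 * m * j
            + 3 * i + 3 * j + 2 * m**2 - m - 4) / ((m - 2) * (m - 3))
    return np.exp(-np.abs(j - i) / (m - 1) * (-np.log(rho_inf) + eta * poly))

if __name__ == "__main__":
    n = 11                                   # tenor dates T_1 < ... < T_n
    T = 0.5 * np.arange(1, n + 1)            # semi-annual tenor structure
    c = np.full(n, 0.2)                      # caplet scaling coefficients
    rho = sc_correlation(m=n - 1)
    print("gamma_1(0) =", gamma(0, 0.0, T, c))
    print("rho[0, -1] =", rho[0, -1])        # correlation of first and last LIBOR
```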
The LIBOR model is in a sense designed to price cap(let)s in closed form. Indeed, for any function g and correlation structure $\rho$, caps can be matched perfectly by an appropriate choice of the coefficients $c_i$. However, since caps are in fact one-period swaptions, the model swaption volatility surface intersects the market volatility surface at the one-period swaption line, regardless of the choice of g and $\rho$. See the second picture in Figure 3 for a model swaption volatility surface for some rather arbitrary choice of g and $\rho$. This is in fact an intrinsic stability problem (see also [37]), since basically two explaining powers are available for determining one rotation angle. We have resolved this problem by introducing economically motivated regularizations of the least squares objective function and have implemented the resulting stable procedures for tests against market data. Meanwhile, our calibration methods are attracting interest, as is apparent from consulting requests (Reuters FS, Bankgesellschaft Berlin AG) and a book project currently under development ([38]).
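For orientation, a log-Euler simulation of the terminal-measure dynamics displayed above may be sketched as follows; the flat initial curve, the simple correlation matrix, and the constant volatilities are placeholder assumptions made only for this sketch.

```python
import numpy as np

def simulate_libor_terminal(L0, delta, gamma, rho, t_grid, seed=0):
    """Log-Euler simulation of the LIBOR SDE under the terminal measure P_n:
    dL_i = -L_i gamma_i sum_{j>i} delta_j L_j/(1+delta_j L_j) rho_ij gamma_j dt
           + L_i gamma_i dW_i,   with d<W_i, W_j> = rho_ij dt.
    gamma(t) must return the vector (gamma_1(t), ..., gamma_{n-1}(t))."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(rho)            # to generate correlated increments
    L = np.array(L0, dtype=float)
    n1 = len(L)                               # n-1 forward LIBORs
    path = [L.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        g = gamma(t0)
        dW = chol @ rng.standard_normal(n1) * np.sqrt(dt)
        # drift of log L_i under the terminal measure (coefficients frozen at t0)
        weights = delta * L / (1.0 + delta * L)
        drift = np.array([-g[i] * np.sum(weights[i+1:] * rho[i, i+1:] * g[i+1:])
                          for i in range(n1)])
        L = L * np.exp((drift - 0.5 * g**2) * dt + g * dW)
        path.append(L.copy())
    return np.array(path)

if __name__ == "__main__":
    n1 = 10                                    # number of forward LIBORs
    L0 = np.full(n1, 0.05)                     # flat initial curve at 5%
    delta = np.full(n1, 0.5)                   # semi-annual day count fractions
    idx = np.arange(n1)
    rho = 0.3 + 0.7 * np.exp(-0.1 * np.abs(np.subtract.outer(idx, idx)))
    gamma = lambda t: np.full(n1, 0.2)         # flat scalar volatilities
    path = simulate_libor_terminal(L0, delta, gamma, rho, np.linspace(0.0, 1.0, 101))
    print("L at t=1:", path[-1])
```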
In cooperation with the project Statistical data analysis, new techniques for volatility estimation in financial time series have been developed, [12], [21], [30].
One of these methods estimates the volatility of a price process from discrete observations $(X_{n\Delta})_{0 \le n \le N}$ for arbitrary $\Delta > 0$. We prove optimality of our nonparametric estimation method in a minimax sense for fixed $\Delta > 0$ and the asymptotics $N \to \infty$. First simulation results indicate that already for relatively small observation distances $\Delta$ our method outperforms classical procedures based on quadratic variation estimation.
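For reference, the classical baseline mentioned above, volatility estimation from the realized quadratic variation of equidistant observations, is sketched below on simulated placeholder data; the new nonparametric method itself is not reproduced here.

```python
import numpy as np

def realized_volatility(x, delta):
    """Classical volatility estimate from the quadratic variation of log prices
    observed at spacing delta: sigma_hat^2 = sum (log increments)^2 / (N * delta)."""
    log_increments = np.diff(np.log(x))
    return np.sqrt(np.sum(log_increments**2) / (len(log_increments) * delta))

if __name__ == "__main__":
    # placeholder data: geometric Brownian motion with true volatility 0.2
    rng = np.random.default_rng(0)
    N, delta, sigma, mu = 1_000, 1.0 / 250.0, 0.2, 0.05
    z = rng.standard_normal(N)
    log_x = np.cumsum((mu - 0.5 * sigma**2) * delta + sigma * np.sqrt(delta) * z)
    x = 100.0 * np.exp(np.concatenate(([0.0], log_x)))
    print("estimated volatility:", realized_volatility(x, delta))
```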
Since the Basel Committee's proposal for ``An internal model-based approach to market risk capital requirements'' (1995) was implemented in national laws, banks have been allowed to use internal models for estimating their market risk and have been able to compete in the innovation of risk management methodology. Since all banks are required to hold adequate capital reserves with regard to their outstanding risks, there has been a tremendous demand for risk management solutions. A similar ``internal ratings-based approach'' is planned for the control of credit risk in the ``Basel II'' process, which is due to be implemented in national laws by 2006. Meanwhile, credit derivatives play an important role as a vehicle for banks to transform credit risk into de jure market risk and to potentially lower the required reserves. Such problems of risk measurement and risk modeling are the subject of the research on ``Mathematical methods for risk management''. This research is supported by the BMBF project ``Efficient methods for valuation of risk measures'', which continued in 2003 in cooperation with the Bankgesellschaft Berlin AG. Problems of both market and credit risk from the viewpoint of supervisory authorities are being worked on in cooperation with the BAFin.
Although the basic principles of the evaluation of market risks are now more or less settled, e.g., [2], [8], [9], [29], in practice many thorny statistical and numerical issues remain to be solved. Specifically, the industry standard, the approximation of portfolio risk by the so-called ``delta-gamma normal'' approach, can be criticized because of the quadratic loss approximation and the Gaussian assumptions. Further, in the context of the ``Basel II'' consultations, fundamental questions arise in the area of credit risk modeling.
In the standard CreditRisk+ model, each obligor i enters with a loss exposure and an average loss probability $p_i$; the quantities $w_{k,i}$ are economic sector weights, with corresponding sector volatilities. In [32], [33], Fourier inversion techniques are applied successfully to CreditRisk+ and generalizations of it. Moreover, in [31], [34], a unified model is proposed which incorporates CreditRisk+ and Delta-normal as special cases. Further, in [13] we present an alternative numerical recursion scheme for computing the loss probabilities of the standard CreditRisk+ model, based on well-known expansions of the logarithm and the exponential of a power series. We show that it is advantageous compared to the Panjer recursion advocated in the original CreditRisk+ document, in that it is numerically stable, whereas the Panjer algorithm is known to suffer from stability problems. We also show that this stable recursion method can be extended to a model which incorporates stochastic exposures as proposed by Tasche ([42]).
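The core step of such a recursion, computing the coefficients of the exponential of a power series, can be sketched as follows. This is only the generic recurrence $n\,b_n = \sum_{k=1}^{n} k\,a_k\,b_{n-k}$ for $B(z) = \exp(A(z))$, which involves no cancellation when the $a_k$ ($k \ge 1$) are nonnegative; it is not the full scheme of [13], and the toy input is an arbitrary assumption.

```python
import math
import numpy as np

def exp_power_series(a, n_max):
    """Coefficients b_0..b_{n_max} of B(z) = exp(A(z)) where A(z) = sum_k a_k z^k.

    Uses the recurrence n*b_n = sum_{k=1}^{n} k*a_k*b_{n-k}, which only adds
    nonnegative terms whenever a_k >= 0 for k >= 1 and is therefore stable."""
    a = np.asarray(a, dtype=float)
    b = np.zeros(n_max + 1)
    b[0] = np.exp(a[0])
    for n in range(1, n_max + 1):
        k = np.arange(1, min(n, len(a) - 1) + 1)
        b[n] = np.dot(k * a[k], b[n - k]) / n
    return b

if __name__ == "__main__":
    # toy check: A(z) = 2*(z - 1), so B(z) = exp(2(z-1)) is the Poisson(2) pgf
    # and b_n should equal exp(-2) * 2^n / n!
    b = exp_power_series([-2.0, 2.0], 8)
    ref = [math.exp(-2) * 2**n / math.factorial(n) for n in range(9)]
    print(np.allclose(b, ref))
```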
Based on knowledge of the complete loss distribution, one can easily compute different risk measures such as Value at Risk and Expected Shortfall. These risk measures are also important in the context of stochastic optimization ([14]).
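For a discrete loss distribution, for instance one obtained from a recursion as above, both risk measures can be read off directly; the quantile and tail conventions in the following sketch are one common choice, not necessarily the one used in the project.

```python
import numpy as np

def var_es_from_pmf(losses, probs, alpha=0.99):
    """Value at Risk and Expected Shortfall at level alpha for a discrete
    loss distribution given by loss values and their probabilities."""
    order = np.argsort(losses)
    losses, probs = np.asarray(losses)[order], np.asarray(probs)[order]
    cdf = np.cumsum(probs)
    var = losses[np.searchsorted(cdf, alpha)]            # smallest loss with cdf >= alpha
    tail = losses >= var
    es = np.dot(losses[tail], probs[tail]) / probs[tail].sum()   # E[L | L >= VaR]
    return var, es

if __name__ == "__main__":
    losses = np.arange(0, 101, dtype=float)              # losses 0, 1, ..., 100
    probs = np.exp(-0.08 * losses)
    probs /= probs.sum()                                 # toy geometric-like pmf
    print(var_es_from_pmf(losses, probs, alpha=0.99))
```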
Monte Carlo methods are very important in the field of applied mathematical finance, and we present here some interesting applications.
In [25], a forward-reverse density estimator for the transition densities of a Markov process was developed; besides the usual forward process, it involves a so-called reverse process, which can be constructed via the (formal) adjoint of the generator of the original process. For estimating worst-case scenario probability densities in financial applications, it is desirable to have a variation of the method in [25] for discrete time models, which basically have more potential for modeling heavy tails. To this end, we have constructed in [24] the discrete adjoint process for a large class of discrete time Markov processes, such that the forward-reverse density estimator of [25] goes through for such processes as well. Several financial applications are currently being studied in cooperation with the project Statistical data analysis.
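For orientation only, the following sketch shows the plain forward Monte Carlo kernel estimator of a transition density for a discrete-time Markov process; the forward-reverse estimator of [24], [25] improves on this by additionally employing a reverse process, which is not reproduced here, and the dynamics and parameters below are placeholder assumptions.

```python
import numpy as np

def forward_density_estimate(step, x0, n_steps, y_grid, n_paths=100_000,
                             bandwidth=0.05, seed=0):
    """Plain forward kernel estimator of the n-step transition density p(x0, .)
    of a discrete-time Markov process X_{k+1} = step(X_k, xi_k): simulate
    endpoints and average Gaussian kernels centered at them."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = step(x, rng.standard_normal(n_paths))
    # Gaussian kernel smoothing of the simulated endpoints on y_grid
    u = (y_grid[:, None] - x[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel.mean(axis=1) / bandwidth

if __name__ == "__main__":
    # placeholder dynamics: a discrete-time AR(1)-type process
    step = lambda x, xi: 0.9 * x + 0.1 * xi
    y_grid = np.linspace(-1.0, 1.0, 21)
    print(forward_density_estimate(step, x0=0.5, n_steps=10, y_grid=y_grid))
```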
Prototypically, we exhibit the following situation, describing a portfolio of a set $\{I_t^1, I_t^2, \ldots, I_t^m\}$ of m underlyings with respective shares $w_1, w_2, \ldots, w_m$, determining the present value
$$V_t = \sum_{j=1}^m w_j I_t^j.$$
There are many numerical schemes for the valuation of such financial products. Often a reasonable approximation is obtained using the $\Delta$-$\Gamma$-normal method for the forecast of $V_{t+1}$ at a future time, knowing $V_t$, given by
$$V_{t+1} \approx V_t + \Theta + \Delta^\top y + \tfrac{1}{2}\, y^\top \Gamma\, y,$$
with $\Theta$, $\Delta$, and $\Gamma$ completely determined by the structure of the portfolio, and y Gaussian innovations, see [11]. In this case we obtain a quadratic functional in the underlyings. This remains quadratic when turning to the independent risk factors. Normally, $\Gamma$ is sparse.
In this situation, the ANOVA decomposition of the resulting integrand admits only up to bivariate contributions, such that it makes sense to apply Monte Carlo methods which intrinsically exploit this structure. For the situation at hand, we propose the use of randomized orthogonal arrays of strength 2 ([4]), which are known to show super-convergence, i.e., they converge faster than usual Monte Carlo. Test calculations based on real-world data provided by the Bankgesellschaft Berlin AG actually show superiority over conventional simulations, see Figure 6.
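A sketch of such a sampling scheme is given below, assuming the standard construction of an orthogonal array of strength 2 with $p^2$ runs and up to $p+1$ columns for a prime p, randomized by columnwise level permutations and uniform jitter; this is for illustration only and not the project's actual implementation.

```python
import numpy as np

def randomized_oa_strength2(p, d, seed=0):
    """Generate p^2 points in [0,1]^d from a randomized orthogonal array of
    strength 2 (p prime, d <= p + 1): every pair of coordinates is stratified
    on the p x p grid, which benefits integrands with only bivariate interactions."""
    assert d <= p + 1, "the construction yields at most p + 1 columns"
    rng = np.random.default_rng(seed)
    a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    a, b = a.ravel(), b.ravel()                        # p^2 runs
    cols = [(a + k * b) % p for k in range(p)] + [b]   # p + 1 columns, strength 2
    oa = np.stack(cols[:d], axis=1)
    # randomization: independent level permutations per column, plus jitter in cells
    for j in range(d):
        oa[:, j] = rng.permutation(p)[oa[:, j]]
    return (oa + rng.random(oa.shape)) / p

if __name__ == "__main__":
    # toy quadratic integrand with only bivariate interactions (cf. delta-gamma)
    f = lambda x: np.sum(x, axis=1) + 0.1 * np.sum(x[:, :-1] * x[:, 1:], axis=1)
    x_oa = randomized_oa_strength2(p=101, d=20)
    print("OA estimate :", f(x_oa).mean())
    x_mc = np.random.default_rng(1).random((101 * 101, 20))
    print("plain MC    :", f(x_mc).mean())
```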
References:
Δ-Γ-normal, working paper, 2003.