Nora Serdyukova (WIAS Berlin)
Local parametric estimation under noise misspecification in regression.
Abstract: The problem of pointwise estimation for local polynomial
regression with heteroscedastic additive Gaussian noise is considered.
The approach is adaptive estimation via Lepski's procedure, which
selects one estimate from a set of estimates obtained at different
degrees of localization, in combination with "propagation conditions"
on the choice of the critical values of the procedure under the simplest
null hypothesis, suggested recently by V. Spokoiny in joint work.
The Lepski-Spokoiny approach is developed here in three directions.
First, a rather general collection of localizing schemes is considered,
including popular kernel smoothing as a particular case. Second, a
general polynomial approximation to the mean function is allowed. The
third and, probably, main step forward is the extension of the
propagation approach to the model with unknown covariance structure.
This means that the covariance matrix may be specified incorrectly,
implying "noise misspecification". The model with unknown mean and
variance is approximated by one with a parametric assumption of local
linearity of the mean function and with a possibly wrong covariance
matrix. An analysis of the procedure shows that it tolerates a
misspecification of the covariance matrix with a relative error up to
o(1/log n), where n is the sample size.
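As a rough illustration of the selection step only (not the talk's full procedure, which uses propagation-calibrated critical values and handles a misspecified covariance), here is a minimal Lepski-type choice among kernel-smoothing estimates at a single point. The data, Gaussian kernel, plug-in variance, and threshold z are all hypothetical choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic regression data (illustration only).
n = 400
x = np.sort(rng.uniform(0, 1, n))
f = lambda t: np.sin(6 * t)
sigma = 0.2 + 0.3 * x                        # heteroscedastic noise level
y = f(x) + sigma * rng.normal(size=n)

def nw_estimate(x0, h):
    """Nadaraya-Watson estimate at x0 with Gaussian kernel, plus a plug-in std
    (the noise level is assumed known in this sketch)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w /= w.sum()
    return w @ y, np.sqrt((w ** 2) @ (sigma ** 2))

def lepski_select(x0, bandwidths, z=2.0):
    """Lepski-type rule: scan bandwidths from largest to smallest and keep
    shrinking as long as the new estimate stays within z standard deviations
    of every previously accepted (larger-bandwidth) estimate."""
    accepted = []
    for h in sorted(bandwidths, reverse=True):
        est, std = nw_estimate(x0, h)
        if all(abs(est - e) <= z * (std + s) for e, s in accepted):
            accepted.append((est, std))
        else:
            break
    return accepted[-1][0]

print(lepski_select(0.5, [0.4, 0.2, 0.1, 0.05, 0.025]))
```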
The Degrees of Freedom of Partial Least Squares Regression
Abstract: Partial Least Squares (PLS) is a regression framework that fits the data
onto an orthogonal set of latent components that have maximal covariance
with the response variable.
In this talk, I introduce an unbiased estimate of the Degrees of Freedom
for PLS. It is defined as the trace of the Jacobian matrix of the fitted
values, seen as a function of the response. I illustrate on several
benchmark data that the complexity depends on the collinearity of the
predictor variables. Under additional assumptions on the collinearity
structure of the data, I also provide a lower bound for the Degrees of
Freedom if one component is used. On benchmark data, I evaluate model
selection based on the Degrees of Freedom in terms of both prediction
accuracy and model complexity.
In the remainder of my talk, I give a short outlook on consistency
results for PLS. (Joint work with Mikio L. Braun and Gilles Blanchard.)
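The defining quantity can be probed numerically. The sketch below is purely illustrative (a hand-rolled one-component PLS fit on synthetic data, with the trace of the Jacobian estimated by finite differences); it is not the unbiased estimate from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (illustration only): n samples, p predictors with induced collinearity.
n, p = 60, 5
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)
X -= X.mean(axis=0)
y = X @ rng.normal(size=p) + rng.normal(size=n)
y -= y.mean()

def pls1_fit(y):
    """One-component PLS fit: project onto the direction of maximal covariance."""
    w = X.T @ y
    w /= np.linalg.norm(w)
    t = X @ w                        # latent component
    return t * (t @ y) / (t @ t)     # fitted values

def dof_numeric(fit, y, h=1e-6):
    """Degrees of Freedom = trace of the Jacobian of the fitted values with
    respect to the response, estimated by finite differences."""
    base, tr = fit(y), 0.0
    for i in range(len(y)):
        e = np.zeros_like(y)
        e[i] = h
        tr += (fit(y + e)[i] - base[i]) / h
    return tr

# Typically exceeds the naive count of one component, since the latent
# direction itself depends on the response.
print(dof_numeric(pls1_fit, y))
```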
Solving parabolic SPDEs via averaging over characteristics
Abstract: The results presented in the talk are due to G. N. Milstein and M. V. Tretyakov.
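The abstract gives no technical details, so as background only: in the deterministic special case, the backward heat equation has the probabilistic representation u(t, x) = E[phi(x + W_{T-t})], which can be evaluated by plain Monte Carlo averaging over characteristics; the SPDE methods of Milstein and Tretyakov build on averaging over characteristics conditional on the driving noise. A minimal sketch of the deterministic analogue, with hypothetical parameter choices:

```python
import math
import numpy as np

# Backward heat equation u_t + (1/2) u_xx = 0 with terminal data u(T, x) = phi(x)
# has the representation u(t, x) = E[phi(x + W_{T-t})]: average the terminal
# condition over the characteristics x + W_{T-t}.
def solve_heat(x, t, T, phi, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_paths)
    return phi(x + math.sqrt(T - t) * z).mean()

# For phi = cos the exact solution is u(t, x) = exp(-(T - t)/2) * cos(x).
u = solve_heat(0.3, 0.0, 1.0, np.cos)
print(u, math.exp(-0.5) * math.cos(0.3))
```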
On the errors committed by
sequences of estimator functionals
Abstract: Consider a sequence of estimators $\hat \theta_n$ which converges
almost surely to $\theta_0$ as the sample size $n$ tends to infinity.
Under weak smoothness conditions, we identify the asymptotic limit of
the last time $\hat \theta_n$ is further than $\varepsilon$ away from
$\theta_0$ as $\varepsilon \rightarrow 0^+$. These limits lead to the
construction of sequential fixed-width confidence regions for which
we find analytic approximations. The smoothness condition we impose
is that $\hat \theta_n$ be close to a Hadamard-differentiable
functional of the empirical distribution, an assumption valid for a
very large class of widely used statistical estimators. Similar
results were derived in Hjort and Fenstad (1992, Annals of Statistics)
for the case of Euclidean parameter spaces; part of the present
contribution is to lift these results to situations involving
parameter functionals. The apparatus we develop is also used to derive
appropriate limit distributions of other quantities related to the far
tail of an almost surely convergent sequence of estimators, like the
number of times the estimator is more than $\varepsilon$ away from its
target. We illustrate our results by giving a new sequential
simultaneous confidence set for the cumulative hazard function based
on the Nelson--Aalen estimator.
(This is joint work with Nils Lid Hjort.)
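As a toy illustration of the quantity studied (with the sample mean of iid standard normals standing in for a general Hadamard-differentiable functional, and a fixed epsilon rather than the limit epsilon -> 0), one can simulate the last time the estimator is farther than epsilon from its target:

```python
import numpy as np

rng = np.random.default_rng(2)

# For the sample mean of iid N(0, 1) data, record the last time n at which
# |theta_hat_n - theta_0| > eps along a simulated path (theta_0 = 0 here).
def last_exit_time(eps, n_max=20_000):
    x = rng.normal(size=n_max)
    means = np.cumsum(x) / np.arange(1, n_max + 1)
    out = np.nonzero(np.abs(means) > eps)[0]
    return out[-1] + 1 if out.size else 0

# Average over simulated paths; the scaling with eps is quadratic
# (eps^2 * last exit time has a nondegenerate limit).
times = np.array([last_exit_time(0.1) for _ in range(200)])
print(times.mean())
```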