
Some error estimates in learning theory


A 2004 Preprint by S. Konyagin and V. Temlyakov

  • 2004:05
  • We continue investigation of some problems in learning theory in the setting formulated by F. Cucker and S. Smale [CS]. The goal is to find an estimator $f _ z$ on the base of given data $z := ((x _ 1, y _ 1), . . . , (x _ m, y _ m))$ that approximates well the regression function $f _ \rho$ of an unknown Borel probability measure $\rho$ defined on $Z = X \times Y$. Following [CS] we consider a problem of approximate recovery of a projection $f _ W$ of an unknown regression function $f _ \rho$ onto a given class of functions $W$. It is known from [CS] and [DKPT] that the behavior of the entropy numbers $\in _ n(W)$ of $W$ in the uniform norm plays an important role in the above problem. In this paper we obtain sharp (in the sense of order) estimates for the error between $f _ W$ and $f _ z$ for the classes $W$ satisfying $\in _ n(W) \leq Dn^{-r}, n = 1, 2, . . . , |f| \leq D, f \in W$. We observe that the error estimates exhibit a saturation phenomenon for the range $r > 1/2$. We improve the error estimates by imposing one additional assumption on the relation between $f _ \rho$ and $W$, namely, we assume $f _ \rho \in W$.

    We discuss one more issue in the paper: we provide a method that computes from the data $z$ an approximate value of the average variance $\int_Z (y - f_\rho(x))^2 \, d\rho$ of the random variable $y$, with a controlled error estimate.
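
    A natural empirical surrogate for this quantity (given only as an illustrative sketch under the assumption that some estimator $f_z$ is available; it need not be the method developed in the paper) is the empirical mean of the squared residuals:
$$
\hat\sigma^2(z) := \frac{1}{m} \sum_{i=1}^m \bigl(y_i - f_z(x_i)\bigr)^2 \;\approx\; \int_Z \bigl(y - f_\rho(x)\bigr)^2 \, d\rho .
$$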
