Abstract
We consider the problem of estimating a density function from a sequence of independent and identically distributed observations x_i taking values in R^d. The estimation procedure constructs a convex mixture of "basis" densities and estimates the parameters using the maximum likelihood method. Viewing the error as a combination of two terms, the approximation error measuring the adequacy of the model, and the estimation error resulting from the finiteness of the sample size, we derive upper bounds on the expected total error, thus obtaining bounds on the rate of convergence. These results then allow us to derive explicit expressions relating the sample complexity and the model complexity.
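The procedure described above, fitting a convex mixture of basis densities by maximum likelihood, can be sketched with the EM algorithm for the special case of Gaussian basis densities in one dimension. This is an illustrative assumption: the paper's analysis covers general basis densities, and the function and parameter names below are hypothetical.

```python
import numpy as np

def fit_gaussian_mixture(x, k=2, iters=50, seed=0):
    """Illustrative maximum-likelihood fit of a convex mixture of k
    Gaussian 'basis' densities via EM (a sketch; not the paper's exact
    construction, which applies to general basis densities)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize convex mixture weights, component means, and variances.
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters; each step increases the likelihood.
        nk = resp.sum(axis=0)
        w = nk / n  # weights stay nonnegative and sum to 1 (a convex mixture)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated clusters of synthetic data.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
w, mu, var = fit_gaussian_mixture(x, k=2)
```

The estimation-error term in the abstract corresponds here to the gap between these sample-based parameter estimates and the best mixture of the same size; the approximation error is the gap between that best k-component mixture and the true density.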
Full Citation
Neural Networks, vol. 10 (January 01, 1997): 99-109.