
Mistake bounds and risk bounds for on-line learning algorithms

This video was recorded at the Workshop on Modelling in Classification and Statistical Learning, Eindhoven 2004.

In statistical learning theory, risk bounds are typically obtained by manipulating suprema of empirical processes that measure the largest deviation of the empirical risk from the true risk over a class of models. In this talk we describe the alternative approach of deriving risk bounds for the ensemble of hypotheses obtained by running an arbitrary learning algorithm in an on-line fashion. This allows us to replace the uniform large deviation argument with a simpler argument based on the analysis of the empirical process engendered by the on-line learner. The large deviations of such empirical processes are easily controlled by a single application of Bernstein's inequality for martingales, and the resulting risk bounds exhibit strong data-dependence.
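For orientation, the following display sketches the flavor of bound the abstract describes; the notation is ours, not from the talk: h_{t-1} denotes the hypothesis the on-line learner holds before seeing example t, M_n its cumulative number of mistakes over n i.i.d. examples, delta a confidence parameter, and c, c' unspecified constants. Applying Bernstein's inequality for martingales to the differences between risk(h_{t-1}) and the observed mistake indicator at step t yields a bound of the form: with probability at least 1 - delta,

\[
  \frac{1}{n}\sum_{t=1}^{n} \mathrm{risk}(h_{t-1})
  \;\le\; \frac{M_n}{n}
  \;+\; c\,\sqrt{\frac{M_n}{n}\cdot\frac{\ln(1/\delta)}{n}}
  \;+\; c'\,\frac{\ln(1/\delta)}{n}.
\]

Here the deviation term scales with the learner's own mistake count M_n rather than with a worst case taken uniformly over the whole model class, which is the strong data-dependence the abstract refers to.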
