AdaBoost is Universally Consistent

This video was recorded at the Machine Learning Summer School (MLSS), Taipei 2006. We consider the risk, or probability of error, of the classifier produced by AdaBoost, and in particular the stopping strategy needed to ensure universal consistency. (A classification method is universally consistent if the risk of the classifiers it produces approaches the Bayes risk, the minimal possible risk, as the sample size grows.) Several related algorithms, namely regularized versions of AdaBoost, have been shown to be universally consistent, but AdaBoost's own universal consistency had not been established. Jiang has demonstrated that, for each probability distribution satisfying certain smoothness conditions, there is a stopping time t_n, depending on the sample size n, such that if AdaBoost is stopped after t_n iterations, its risk approaches the Bayes risk for that distribution. Our main result is that if AdaBoost is stopped after n^(1-ε) iterations, it is universally consistent, where n is the sample size and 0 < ε < 1.
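
The stopping rule in the main result is simple to apply in practice: run AdaBoost for roughly n^(1-ε) boosting rounds, where n is the training sample size. The sketch below illustrates this with scikit-learn's AdaBoostClassifier on synthetic data; the choice ε = 0.5 and the toy data-generating process are assumptions for illustration only, not part of the talk.

    # Minimal sketch (assumed, not from the talk): stop AdaBoost after
    # roughly n^(1 - eps) boosting rounds, with n the training sample size.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    n = 2000                                   # total sample size
    X = rng.uniform(-1.0, 1.0, size=(n, 2))    # toy features
    y = (X[:, 0] * X[:, 1] > 0).astype(int)    # toy labels (XOR-like pattern)

    eps = 0.5                                  # any value in (0, 1); assumption
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    t_n = max(1, int(len(X_tr) ** (1 - eps)))  # number of boosting iterations

    # The default base learner is a decision stump, the usual weak learner for AdaBoost.
    clf = AdaBoostClassifier(n_estimators=t_n, random_state=0).fit(X_tr, y_tr)
    print(f"rounds = {t_n}, test error = {1 - clf.score(X_te, y_te):.3f}")

As the sample size grows, the number of rounds grows with it, which is what the theorem requires; the particular value of ε only affects how quickly the number of rounds increases.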
