Bounding Excess Risk in Machine Learning
This video was recorded at the Machine Learning Summer School (MLSS), Chicago 2009. We will discuss a general approach to the problem of bounding the excess risk of learning algorithms based on empirical risk minimization (possibly penalized). This approach has been developed in recent years by several authors (among others: Massart; Bartlett, Bousquet, and Mendelson; Koltchinskii). It is based on powerful concentration inequalities due to Talagrand as well as on a variety of tools from empirical process theory (comparison inequalities; entropy and generic chaining bounds on Gaussian, empirical, and Rademacher processes; etc.). It provides a way to obtain sharp excess risk bounds in a number of problems such as regression, density estimation, and classification, and for many different classes of learning methods (kernel machines, ensemble methods, sparse recovery). It also provides a general way to construct sharp data-dependent bounds on excess risk that can be used in model selection and adaptation problems.
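For orientation, here is the standard formulation these bounds target, written in the usual notation of the empirical-process literature (a sketch of the setup, not taken verbatim from the lecture):

```latex
% i.i.d. sample (X_1, Y_1), \dots, (X_n, Y_n) drawn from P;
% loss function \ell; function class \mathcal{F}.

% Empirical risk minimizer:
\hat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(X_i), Y_i\bigr)

% Excess risk (the quantity the bounds control):
\mathcal{E}(\hat{f}_n) \;=\;
  \mathbb{E}\,\ell\bigl(\hat{f}_n(X), Y\bigr)
  \;-\; \inf_{f \in \mathcal{F}} \mathbb{E}\,\ell\bigl(f(X), Y\bigr)
```

A typical result in this line of work bounds \(\mathcal{E}(\hat{f}_n)\), with high probability, by a fixed point of a localized Rademacher complexity of \(\mathcal{F}\), which is what makes the bounds sharp and, in their empirical versions, data-dependent.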