The Sample-Computational Tradeoff
This video was recorded at the NIPS Workshops, Lake Tahoe 2012. When analyzing the error of a learning algorithm, it is common to decompose the error into approximation error (measuring how well the hypothesis class fits the problem) and estimation error (arising because we only receive a finite training set). In practice, we usually pay an additional error, called optimization error, because we have only limited computational power. I will describe this triple tradeoff and demonstrate how more training examples can lead to more efficient learning algorithms.
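For concreteness, here is a sketch of the standard three-term decomposition the abstract alludes to; the notation ($\mathcal{E}$ for excess risk, $h^{*}$, $\hat{h}$, $\tilde{h}$) is my own labeling and not taken from the talk. With $h^{*}$ the best hypothesis in the class, $\hat{h}$ the empirical risk minimizer on the training set, and $\tilde{h}$ the hypothesis a compute-bounded algorithm actually returns, the excess risk telescopes as

```latex
% Excess-risk decomposition (notation assumed, not from the talk):
%   h^*       = best hypothesis in the class
%   \hat{h}   = empirical risk minimizer over the finite training set
%   \tilde{h} = hypothesis actually returned under a compute budget
\[
\mathcal{E}(\tilde{h})
  = \underbrace{\mathcal{E}(h^{*})}_{\text{approximation}}
  + \underbrace{\mathcal{E}(\hat{h}) - \mathcal{E}(h^{*})}_{\text{estimation}}
  + \underbrace{\mathcal{E}(\tilde{h}) - \mathcal{E}(\hat{h})}_{\text{optimization}}
\]
```

More training examples shrink the estimation term, and, as the abstract suggests, a larger sample can also be traded against the optimization term: with more data, a cheaper algorithm can reach the same overall error.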