Putting Bayes to Sleep
This video was recorded at the Large-scale Online Learning and Decision Making (LSOLDM) Workshop, Cumberland Lodge, 2012. In online multitask learning a single algorithm faces a collection of interleaved tasks. By addressing these tasks jointly rather than in isolation, the algorithm can discover and exploit similarities between tasks, and hence learn faster and perform each task better. We present a new method for multitask learning built atop the "specialist" (sleeping experts) framework. We obtain an intriguing and efficient new update that achieves a significantly better bound. Our method has linear running time and hence scales to very large data sets.
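The abstract refers to the specialist framework without spelling it out. As a rough orientation only, the sketch below shows the classical Bayes-style specialists (sleeping-experts) update that this line of work builds on: awake experts are updated by ordinary Bayes' rule, while asleep experts are treated as if they had predicted the algorithm's own mixture, so their posterior shares stay untouched. The function name and interface are illustrative assumptions, not the authors' code, and the sketch shows the generic specialists update rather than the multitask method presented in the talk.

```python
import numpy as np

def specialists_update(weights, awake, likelihoods):
    """One Bayes-style round in the sleeping-experts ("specialists") setting.

    weights     : current weights over all experts (nonnegative, sums to 1)
    awake       : boolean mask of experts that issue a prediction this round
    likelihoods : per-expert likelihood of the observed outcome
                  (only entries where awake is True are used)

    Asleep experts are treated as if they predicted exactly what the
    algorithm predicted, so their relative posterior shares are preserved.
    Returns the updated weights and the algorithm's mixture likelihood.
    """
    w = np.asarray(weights, dtype=float)
    awake = np.asarray(awake, dtype=bool)
    lik = np.asarray(likelihoods, dtype=float)
    assert awake.any(), "at least one specialist must be awake each round"

    awake_mass = w[awake].sum()
    # Mixture prediction of the awake experts = the algorithm's own likelihood.
    mix_lik = np.dot(w[awake], lik[awake]) / awake_mass

    new_w = w.copy()
    new_w[awake] = w[awake] * lik[awake]   # ordinary Bayes on awake experts
    new_w[~awake] = w[~awake] * mix_lik    # asleep experts "copy" the mixture
    return new_w / new_w.sum(), mix_lik

# Tiny usage example with four experts, two of them asleep this round.
w = np.full(4, 0.25)
w, p = specialists_update(w, [True, True, False, False], [0.9, 0.1, 0.0, 0.0])
```

Each call touches every expert once, i.e. linear cost per trial, which is the kind of per-round efficiency the abstract alludes to.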