Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families
This video was recorded at the 26th Annual Conference on Learning Theory (COLT), Princeton, 2013. We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, which occurs if and only if the optimal strategy can be calculated without knowing the time horizon in advance. They posed the question of which families have exchangeable SNML strategies. We answer this question for one-dimensional exponential families: SNML is exchangeable only for three classes of natural exponential family distributions, namely the Gaussian, the gamma, and the Tweedie exponential family of order 3/2.
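For reference, here is a minimal sketch of the standard SNML definition and the exchangeability condition mentioned in the abstract; the notation is an assumption for illustration and is not taken from the talk itself.

```latex
% Sketch of the SNML predictive density (standard definition; notation assumed).
% After observing x_1, ..., x_{t-1}, SNML predicts x_t with density proportional
% to the maximized likelihood of the extended sequence:
\[
  p_{\mathrm{SNML}}(x_t \mid x^{t-1})
  \;=\;
  \frac{\sup_{\theta} p_\theta(x_1, \dots, x_{t-1}, x_t)}
       {\int \sup_{\theta} p_\theta(x_1, \dots, x_{t-1}, x)\, \mathrm{d}x}.
\]
% The strategy is exchangeable when the joint density it induces,
% \( \prod_{t=1}^{n} p_{\mathrm{SNML}}(x_t \mid x^{t-1}) \),
% is invariant under permutations of x_1, ..., x_n. The result described in the
% abstract characterizes the one-dimensional exponential families for which
% this invariance holds.
```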