Suboptimality of MDL and Bayes in Classification under Misspecification

This video was recorded at the Workshop on Modelling in Classification and Statistical Learning, Eindhoven 2004. We show that forms of Bayesian and MDL learning that are often applied to classification problems can be *statistically inconsistent*. We present a large family of classifiers and a distribution such that the best classifier within the model has generalization error (expected 0/1-prediction loss) of almost 0. Nevertheless, no matter how much data is observed, both the classifier inferred by MDL and the classifier based on the Bayesian posterior behave much worse than this best classifier, in the sense that their expected 0/1-prediction loss is substantially larger. Our result can be re-interpreted as showing that, under misspecification, Bayes and MDL do not always converge to the distribution in the model that is closest in KL divergence to the data-generating distribution. We compare this result with earlier results on Bayesian inconsistency by Diaconis, Freedman and Barron.
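The claim can be stated a little more formally. The notation below is an illustrative sketch and is not taken from the abstract itself: P* denotes the data-generating distribution and M the (misspecified) model.

```latex
% Sketch of the setup described in the abstract (notation assumed).
% Model M = {P_theta : theta in Theta}, with P* not in M (misspecification).
%
% The KL-optimal element of the model:
%   \tilde{\theta} \;=\; \arg\min_{\theta \in \Theta} \; D\!\left(P^* \,\|\, P_\theta\right)
%
% Generalization error (expected 0/1-prediction loss) of the classifier
% \hat{y}_\theta induced by P_theta:
%   \mathrm{err}(\theta) \;=\; \mathbb{E}_{(X,Y) \sim P^*}
%       \left[\mathbf{1}\{\hat{y}_\theta(X) \neq Y\}\right]
%
% The result: there exist M and P* with
%   \min_{\theta \in \Theta} \mathrm{err}(\theta) \approx 0,
% yet for every sample size n the MDL and Bayesian-posterior classifiers
% have expected 0/1 loss bounded away from this minimum; equivalently,
% Bayes/MDL need not converge to \tilde{\theta}.
```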

