Minimum Error Entropy Principle for Learning

This video was recorded at the International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines (ROKS): Theory and Applications, Leuven, 2013. Information theoretical learning brings ideas from information theory into the machine learning paradigm. Minimum error entropy is a principle of information theoretical learning and provides a family of supervised learning algorithms; it serves as a substitute for the classical least squares method when the noise is non-Gaussian. The idea is to extract from the data as much information as possible about the data-generating system by minimizing the entropy of the errors. In this talk we discuss minimum error entropy algorithms in a regression setting that minimize the empirical Rényi entropy of order 2. Consistency results and learning rates are presented; in particular, error estimates for heavy-tailed noise are given.
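To make the principle concrete, here is a minimal sketch of MEE regression: the empirical Rényi entropy of order 2 of the residuals is estimated with a Gaussian (Parzen) kernel, and a linear model is fitted by gradient descent on that entropy. The kernel width, learning rate, and the Student-t noise in the demo are illustrative assumptions, not details from the talk.

    import numpy as np

    def quadratic_renyi_entropy(errors, sigma=1.0):
        # Empirical Renyi entropy of order 2:
        #   H2(e) = -log( (1/n^2) * sum_{i,j} kappa(e_i - e_j) ),
        # where kappa is a Gaussian kernel of width sqrt(2)*sigma
        # (the convolution of two Parzen windows of width sigma).
        diffs = errors[:, None] - errors[None, :]
        kernel = np.exp(-diffs**2 / (4.0 * sigma**2))
        return -np.log(kernel.mean())

    def mee_linear_regression(X, y, sigma=1.0, lr=0.1, steps=1000):
        # Fit w by gradient descent on H2 of the residuals e = y - X w.
        # Entropy is shift-invariant, so the intercept is recovered at
        # the end by centering the residuals (a standard MEE post-step).
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(steps):
            e = y - X @ w
            diffs = e[:, None] - e[None, :]
            kernel = np.exp(-diffs**2 / (4.0 * sigma**2))
            ip = kernel.mean()                # information potential V
            s = (kernel * diffs).sum(axis=1)  # sum_j (e_i - e_j) * kappa_ij
            # Gradient of H2 = -log(V) with respect to w
            grad_w = -(X.T @ s) / (sigma**2 * n**2 * ip)
            w -= lr * grad_w
        b = float((y - X @ w).mean())         # intercept from centering
        return w, b

    # Demo with heavy-tailed (Student-t) noise, the regime in which
    # MEE is expected to improve on least squares:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)
    w_hat, b_hat = mee_linear_regression(X, y)

Note that minimizing H2 is equivalent to maximizing the information potential V, so practical MEE implementations often work with V directly; the negative logarithm only rescales the gradient by the positive factor 1/V.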
