Unsupervised learning

This video was recorded at the Summer Schools in Logic and Learning, Canberra, 2009. The first part of the tutorial discusses unsupervised, semi-supervised, and partially supervised learning. Convex relaxations are presented for unsupervised and semi-supervised training of support vector machines, max-margin Markov networks, log-linear models, and Bayesian networks. The concept of partially supervised training is then introduced, with convex relaxations developed for training multi-layer perceptrons and deep networks. Relationships of these methods to classical training algorithms (EM, Viterbi-EM, and self-supervised training) are discussed, and limitations of convex relaxations are considered. The tutorial then presents methods for scaling up such training algorithms. Finally, some simple approximation bounds are introduced, along with a rudimentary generalization theory for self-supervised training.
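The abstract names self-supervised training (classical self-training) among the baseline algorithms. As a rough illustration only, not material from the tutorial itself, the sketch below shows the classical self-training loop for an SVM: fit on the labeled points, adopt the classifier's own labels for unlabeled points that fall outside the margin, and refit. The toy data, margin threshold, and round count are all illustrative assumptions.

    # Illustrative sketch of classical self-training (the "self-supervised
    # training" baseline named in the abstract); data and threshold are
    # assumptions, not taken from the tutorial.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy two-class problem: two labeled points, many unlabeled ones.
    X_lab = np.array([[-2.0, 0.0], [2.0, 0.0]])
    y_lab = np.array([0, 1])
    X_unl = rng.normal(0.0, 1.0, size=(200, 2))
    X_unl[:, 0] += rng.choice([-2.0, 2.0], size=200)  # two loose clusters

    for _ in range(5):  # a handful of self-training rounds
        clf = SVC(kernel="linear").fit(X_lab, y_lab)
        if len(X_unl) == 0:
            break
        scores = clf.decision_function(X_unl)
        confident = np.abs(scores) > 1.0  # outside the margin (an assumed cutoff)
        if not confident.any():
            break
        # Adopt the classifier's own predictions as labels for the
        # confident points, then remove them from the unlabeled pool.
        X_lab = np.vstack([X_lab, X_unl[confident]])
        y_lab = np.concatenate([y_lab, (scores[confident] > 0).astype(int)])
        X_unl = X_unl[~confident]

    print(len(y_lab), "labeled points after self-training")

The tutorial's point of contrast is that loops like this one optimize a non-convex objective and can lock in early mistakes, which is what the convex relaxations it presents aim to avoid.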
