10 Year Best Paper: Combining Labeled and Unlabeled Data with Co-Training

This video was recorded at the 25th International Conference on Machine Learning (ICML), Helsinki 2008.

We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and each algorithm's predictions on new unlabeled examples are then used to enlarge the training set of the other (a minimal sketch of this loop appears below). Our goal in this paper is to provide a PAC-style analysis for this setting, and more broadly a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice.

Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered a stand-alone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improving their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone (a sketch of the closed-form class posterior this relies on appears below). Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting.
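
The co-training strategy described above is easy to state concretely. The sketch below is a minimal Python illustration, not the paper's exact experimental protocol: the choice of naive Bayes as the per-view learner, the number of rounds, and the per-round growth counts p and n are placeholder assumptions (the paper also draws candidates from a small replenished pool rather than scoring the whole unlabeled set), and the function name co_train is ours.

```python
# Minimal co-training sketch in the spirit of Blum & Mitchell (1998).
# Assumptions: binary labels {0, 1} both present in y; count-vector
# features suitable for multinomial naive Bayes; p/n/rounds are
# illustrative defaults, not the paper's settings.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def co_train(X1, X2, y, U1, U2, rounds=10, p=1, n=3):
    """X1, X2: the two views of the labeled examples; y: binary labels.
    U1, U2: the same unlabeled pool under view 1 and view 2."""
    X1, X2, y = [list(v) for v in (X1, X2, y)]
    pool = list(range(len(U1)))  # indices of still-unlabeled examples
    h1 = h2 = None
    for _ in range(rounds):
        # Train one learner per view on the current labeled set.
        h1 = MultinomialNB().fit(X1, y)
        h2 = MultinomialNB().fit(X2, y)
        for h, U in ((h1, U1), (h2, U2)):
            if len(pool) < p + n:
                return h1, h2
            conf = h.predict_proba([U[i] for i in pool])[:, 1]
            order = np.argsort(conf)
            # Each learner labels its p most confident positives and
            # n most confident negatives on the unlabeled pool...
            picks = [(j, 1) for j in order[-p:]] + [(j, 0) for j in order[:n]]
            # ...and those examples enlarge the training set seen by
            # both learners in the next round.
            for j, label in picks:
                i = pool[j]
                X1.append(U1[i]); X2.append(U2[i]); y.append(label)
            for j, _ in sorted(picks, reverse=True):
                pool.pop(j)
    return h1, h2
```

The key design point is that each learner teaches the other: an example that one view labels confidently becomes ordinary labeled data for both views in the next round, which is what lets a weak initial labeled set be amplified by cheap unlabeled data.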

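The discriminative use of RBMs rests on the fact that, for a classification RBM over an input x, a one-hot label y, and hidden units h, the class posterior p(y|x) is available in closed form, so the model can be trained by gradient ascent on log p(y|x) like any other discriminative classifier. Below is a minimal NumPy sketch of that posterior under the standard energy E(y, x, h) = -h'Wx - c'h - d'y - h'Uy; the function name and the random untrained parameters in the usage example are placeholders, not the paper's trained models.

```python
# Closed-form class posterior p(y|x) of a classification RBM.
# Shapes assumed: W (n_hidden, n_visible), U (n_hidden, n_classes),
# c (n_hidden,) hidden bias, d (n_classes,) class bias.
import numpy as np

def rbm_class_posterior(x, W, U, c, d):
    # log p(y|x) = d_y + sum_j softplus(c_j + U_jy + (Wx)_j) - log Z(x)
    act = c[:, None] + U + (W @ x)[:, None]   # (n_hidden, n_classes)
    softplus = np.logaddexp(0.0, act)         # stable log(1 + exp(.))
    logits = d + softplus.sum(axis=0)
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Usage with random, untrained parameters (placeholders):
rng = np.random.default_rng(0)
H, V, C = 8, 20, 2
p = rbm_class_posterior(rng.random(V), rng.normal(size=(H, V)),
                        rng.normal(size=(H, C)), np.zeros(H), np.zeros(C))
print(p, p.sum())  # a valid distribution over the C classes
```

Because this posterior involves only a sum over hidden units rather than an intractable partition function over all configurations, its gradient is exact, which is what makes a purely discriminative (or hybrid, or semi-supervised) training objective practical.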