
Relational Learning as Collective Matrix Factorization

This video was recorded at the Carnegie Mellon Machine Learning Lunch seminar. We present a unified view of matrix factorization models, including the singular value decomposition, non-negative matrix factorization, probabilistic latent semantic indexing, and generalizations of these models to exponential families and non-regular Bregman divergences. Relational data can be modeled as a set of matrices, where each matrix represents the value of a relation between two entity types; instead of factoring a single matrix, we factor a set of matrices with shared dimensions and tied low-rank representations. Our example domain is augmented collaborative filtering, where both user ratings and side information about items are available. To predict the value of a relation, we extend Bregman matrix factorization to a set of related matrices. Using an alternating minimization scheme, we show the existence of a practical Newton step, and we also cover the use of stochastic second-order methods for large matrices.
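The core idea of tying low-rank representations across matrices can be sketched in a few lines of NumPy. The example below is an illustrative toy, not the method from the talk: it uses squared loss (a particular Bregman divergence) and closed-form ridge-regularized alternating least squares instead of the Newton step described in the abstract. A ratings matrix `X` (users × items) and a side-information matrix `Y` (items × features) share the item factor `V`; all variable names and dimensions are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic relational data: X (users x items) and Y (items x features)
# are generated from a shared low-rank item factor V_true.
n_users, n_items, n_feats, k = 30, 40, 10, 5
U_true = rng.normal(size=(n_users, k))
V_true = rng.normal(size=(n_items, k))
Z_true = rng.normal(size=(n_feats, k))
X = U_true @ V_true.T
Y = V_true @ Z_true.T

lam = 0.1  # small ridge term keeps each least-squares solve well-posed


def solve(A, B, lam):
    """Factor update: argmin_W ||A - W B^T||_F^2 + lam ||W||_F^2."""
    return A @ B @ np.linalg.inv(B.T @ B + lam * np.eye(B.shape[1]))


U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))
Z = rng.normal(size=(n_feats, k))

for _ in range(50):
    U = solve(X, V, lam)       # fit X ~ U V^T with V fixed
    Z = solve(Y.T, V, lam)     # fit Y ~ V Z^T with V fixed
    # V is tied to both matrices, so its update stacks the two
    # regression problems into one: [X^T, Y] ~ V [U; Z]^T.
    A = np.hstack([X.T, Y])    # (items) x (users + feats)
    B = np.vstack([U, Z])      # (users + feats) x k
    V = solve(A, B, lam)

# Relative reconstruction errors on both relations.
err_X = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
err_Y = np.linalg.norm(Y - V @ Z.T) / np.linalg.norm(Y)
print(err_X, err_Y)
```

The update for the shared factor `V` is the point of the exercise: because `V` appears in both reconstruction losses, its least-squares problem concatenates the columns of both matrices, which is how information flows from the side-information relation into the ratings prediction.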


