Material Detail

New Quasi-Newton Methods for Efficient Large-Scale Machine Learning

This video was recorded at the NIPS Workshop on Efficient Machine Learning, Whistler, 2007.

The BFGS quasi-Newton method and its limited-memory variant LBFGS revolutionized nonlinear optimization, and dominate it to this day. Their application to large-scale machine learning, however, has been hindered by the fact that they assume a smooth, strictly convex, and deterministic objective function in a finite-dimensional vector space. Here we relax these assumptions one by one, and present (L)BFGS variants newly developed in our group that perform well on non-convex smooth, quasi-convex non-smooth, and non-deterministic objectives. Paradigmatic applications include parameter estimation in MLPs (non-convex smooth) and SVMs (convex non-smooth), and stochastic approximation of gradients (non-deterministic) for efficient online learning on large data sets. We are also able to lift LBFGS to an RKHS for online SVM training. In all these cases our BFGS variants outperform previous methods on a wide variety of models and data sets, from toy problems to large-scale data-mining tasks.
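For context on the machinery the talk builds on, the sketch below shows the textbook LBFGS two-loop recursion (as in Nocedal and Wright) together with a toy driver. This is not the authors' modified method; the helper names, the toy quadratic, and the Armijo backtracking step are illustrative assumptions. Note that this vanilla recipe relies on exactly the smoothness, convexity, and determinism that the variants presented in the talk relax.

```python
# Illustrative sketch of the standard LBFGS direction update (two-loop
# recursion). NOT the talk's modified variants; names and the toy problem
# below are assumptions for demonstration only.
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return the LBFGS search direction -H_k * grad from stored curvature
    pairs s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i."""
    q = grad.copy()
    alphas = []
    # First loop: walk the stored pairs from newest to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q = q - alpha * y
        alphas.append((alpha, rho, s, y))
    # Initial Hessian approximation H_0 = gamma * I (standard scaling).
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = np.dot(s, y) / np.dot(y, y)
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: walk the pairs back from oldest to newest.
    for alpha, rho, s, y in reversed(alphas):
        beta = rho * np.dot(y, r)
        r = r + (alpha - beta) * s
    return -r

def armijo_step(f, x, d, g, c=1e-4):
    """Simple backtracking line search satisfying the Armijo condition."""
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + c * t * np.dot(g, d):
        t *= 0.5
    return t

# Toy usage on a smooth, strictly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b

x, m, s_hist, y_hist = np.zeros(3), 5, [], []
for _ in range(50):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:
        break
    d = lbfgs_direction(g, s_hist, y_hist)
    t = armijo_step(f, x, d, g)
    x_new = x + t * d
    # Store the new curvature pair and keep only the last m of them.
    s_hist.append(x_new - x)
    y_hist.append(grad_f(x_new) - g)
    if len(s_hist) > m:
        s_hist.pop(0)
        y_hist.pop(0)
    x = x_new
```

The limited-memory aspect is visible in the fixed-size history of (s, y) pairs: memory and per-iteration cost stay linear in the problem dimension, which is what makes the method attractive for large-scale learning in the first place.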
