Large-Scale Learning and Inference: What We Have Learned with Markov Logic Networks
This video was recorded at NIPS Workshops, Whistler 2009. Markov logic allows very large and rich graphical models to be compactly specified. Current learning and inference algorithms for Markov logic can routinely handle models with millions of variables, billions of features, thousands of latent variables, and strong dependencies. In this talk I will give an overview of the main ideas in these algorithms, including weighted satisfiability, MCMC with deterministic dependencies, lazy inference, lifted inference, relational cutting planes, scaled conjugate gradient, relational clustering and relational pathfinding. I will also discuss the lessons learned in developing successive generations of these algorithms and promising ideas for the next round of scaling up.
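The abstract's claim that Markov logic lets rich graphical models be "compactly specified" rests on a standard idea: a Markov logic network attaches weights to first-order formulas, and a world's probability is proportional to exp of the weighted count of satisfied formula groundings, P(x) = (1/Z) exp(Σᵢ wᵢ nᵢ(x)). Below is a minimal illustrative sketch of that formula on a hypothetical two-person domain with one weighted rule, Smokes(x) ⇒ Cancer(x); the predicate names, weight, and domain are invented for illustration and are not from the talk.

```python
import itertools
import math

# Hypothetical toy MLN: one weighted formula over a two-person domain.
# Formula F1: Smokes(x) -> Cancer(x), with weight 1.5 (illustrative choice).
WEIGHT = 1.5
PEOPLE = ["A", "B"]

# Ground atoms: Smokes(A), Smokes(B), Cancer(A), Cancer(B)
atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in PEOPLE]

def n_satisfied(world):
    """Count satisfied groundings of Smokes(x) -> Cancer(x) in a world."""
    return sum(1 for p in PEOPLE
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

def unnormalized(world):
    # Markov logic: weight(world) = exp(sum_i w_i * n_i(world))
    return math.exp(WEIGHT * n_satisfied(world))

# Enumerate all 2^4 possible worlds to compute the partition function Z.
# (Feasible only for toy domains; the talk's algorithms exist precisely
# because this enumeration blows up on real-sized models.)
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized(w) for w in worlds)

def prob(world):
    """Normalized probability of a world under the toy MLN."""
    return unnormalized(world) / Z
```

A world violating the rule (a smoker without cancer) gets exponentially lower probability than one satisfying every grounding, which is the soft-constraint behavior the abstract alludes to.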