Material Detail

Dual decomposition for inference in natural language processing

This video was recorded at the NIPS Workshops, Whistler 2010. Combinatorial optimization has a long history of methods, such as dual decomposition and Lagrangian relaxation, that exploit structure in complex problems. These methods build on the observation that complex inference problems can often be decomposed into efficiently solvable subproblems. In this talk I'll describe work on inference algorithms for NLP based on dual decomposition, applied to two problems:

  • Non-projective dependency parsing, where the inference problem is NP-hard for all but the simplest models.
  • Problems that involve "intersections" of two or more dynamic programming problems. These are solvable in polynomial time, but in practice the intersected dynamic programs are far too large to be practical.

The algorithms are simple and efficient, using standard combinatorial algorithms (e.g., dynamic programming, minimum spanning tree) as oracles; they provably solve a linear programming relaxation of the original problem; and empirically they very often lead to an exact solution to the original problem. The work is related to recent methods that use dual decomposition as an alternative to belief propagation for inference in Markov random fields. This is joint work with Tommi Jaakkola, Terry Koo, Sasha Rush, and David Sontag.
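The abstract describes a generic recipe: solve each subproblem with an efficient combinatorial oracle, then use Lagrange multipliers to push the subproblems toward agreement. Below is a minimal, illustrative Python sketch of that subgradient loop, not the speakers' implementation; the oracles here maximize over unconstrained binary vectors and merely stand in for real structured oracles such as a dynamic-programming parser or a minimum-spanning-tree solver, and all names (oracle_f, oracle_g, dual_decomposition) are hypothetical.

```python
import numpy as np

def oracle_f(scores, u):
    # Stand-in for the first subproblem's combinatorial oracle:
    # argmax over binary y of (scores + u) . y, solved coordinate-wise
    # because this toy feasible set is unconstrained.
    return ((scores + u) > 0).astype(float)

def oracle_g(scores, u):
    # Stand-in for the second subproblem's oracle:
    # argmax over binary z of (scores - u) . z.
    return ((scores - u) > 0).astype(float)

def dual_decomposition(scores_f, scores_g, iterations=100, step=1.0):
    """Subgradient descent on the dual of: max f(y) + g(z) s.t. y = z."""
    u = np.zeros(len(scores_f))          # Lagrange multipliers
    for t in range(1, iterations + 1):
        y = oracle_f(scores_f, u)        # solve subproblem 1
        z = oracle_g(scores_g, u)        # solve subproblem 2
        if np.array_equal(y, z):         # agreement certifies exactness
            return y, z, True
        u -= (step / t) * (y - z)        # subgradient step, decaying rate
    return y, z, False                   # only the LP relaxation was solved

# Toy usage with made-up score vectors:
y, z, exact = dual_decomposition(np.array([1.0, -2.0, 0.5]),
                                 np.array([0.8, -1.0, -0.5]))
```

If the two oracles ever return the same structure, that solution is provably optimal for the combined problem, since the dual value then matches the value of a feasible primal solution; this is the sense in which the method very often yields an exact solution even though it only guarantees solving the LP relaxation.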
