Tractable Inference for Probabilistic Models by Free Energy Approximations
This video was recorded at the Machine Learning Workshop, Sheffield, 2004. Probabilistic models explain complex observed data through a set of unobserved, hidden random variables, based on the joint distribution of all variables. Statistical inference requires the evaluation of high-dimensional sums or integrals, so the computational cost becomes prohibitive when the number of hidden variables is large, and it is important to develop tractable approximations. I will discuss ideas for such approximations which are based on a variational formulation of inference. Quantities of interest, like marginal moments of the distribution, are found as minima of an entropic quantity often called the Gibbs Free Energy. While an exact computation of the Free Energy is computationally intractable, sensible approximations often provide quite accurate results. I will discuss applications of these techniques to the estimation of wind fields from satellite measurements, to a model of an error-correcting code in telecommunication, and to approximate resampling methods.
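
As a brief sketch of the variational formulation referred to above (notation mine, not taken from the talk): for observed data x, hidden variables h, a joint model p(x, h), and a trial distribution q(h), the Gibbs Free Energy can be written as

F[q] = \mathbb{E}_{q(h)}\!\left[\log \frac{q(h)}{p(x,h)}\right]
     = -\log p(x) + \mathrm{KL}\!\left(q(h)\,\big\|\,p(h\mid x)\right) \;\ge\; -\log p(x),

so F[q] is minimised exactly when q(h) equals the posterior p(h | x), and its minimum value is the negative log evidence. Restricting q to a tractable family (for example, a fully factorised mean-field form) turns the minimisation into a practical computation whose minimiser supplies approximate marginal moments.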

