Manifold Embeddings for Model-Based Reinforcement Learning of Neurostimulation Policies

This video was recorded at the 26th International Conference on Machine Learning (ICML), Montreal 2009. Real-world reinforcement learning problems often exhibit nonlinear, continuous-valued, noisy, partially observable state spaces that are prohibitively expensive to explore. The formal reinforcement learning framework, unfortunately, has not been successfully demonstrated in a real-world domain having all of these constraints. We approach this domain with a two-part solution. First, we overcome continuous-valued, partially observable state spaces by constructing manifold embeddings of the system's underlying dynamics, which serve as a complete state-space representation. We then define a generative model over this manifold to learn a policy off-line. The model-based approach is preferred because it lets domain knowledge simplify the learning problem. In this work we formally integrate manifold embeddings into the reinforcement learning framework, summarize a spectral method for estimating embedding parameters, and demonstrate the model-based approach in a complex domain: adaptive seizure suppression of an epileptic neural system.
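The state-space construction described in the abstract is a delay-coordinate embedding in the spirit of Takens' theorem: stacking lagged copies of a scalar observation yields, under suitable conditions, a manifold diffeomorphic to the system's underlying state space. The Python sketch below illustrates only this embedding step; the function name delay_embed, the synthetic signal, and the fixed dim/lag values are illustrative assumptions, not the talk's actual settings (the talk estimates the embedding parameters with a spectral method).

    import numpy as np

    def delay_embed(x, dim, lag):
        """Map a scalar time series x into delay vectors
        [x[t], x[t-lag], ..., x[t-(dim-1)*lag]], a standard
        delay-coordinate reconstruction of partially observed
        dynamics."""
        n = len(x) - (dim - 1) * lag
        return np.column_stack(
            [x[(dim - 1 - k) * lag : (dim - 1 - k) * lag + n]
             for k in range(dim)]
        )

    # Illustrative use: embed a noisy oscillatory observation stream.
    t = np.linspace(0, 40 * np.pi, 4000)
    obs = np.sin(t) + 0.05 * np.random.randn(t.size)  # partial observation
    states = delay_embed(obs, dim=3, lag=25)          # surrogate state space
    print(states.shape)                               # (3950, 3)

In the full approach, the embedded points would then support a generative model of the dynamics over which a policy is learned off-line; the choice of dim and lag, fixed by hand here, is what the spectral estimation method addresses.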
