Material Detail

Improved Regret Guarantees for Online Smooth Convex Optimization with Bandit Feedback

This video was recorded at the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Ft. Lauderdale, 2011. The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and Flaxman et al. (2005). This setting models a decision maker who must make decisions in the face of adversarially chosen convex loss functions, and the only feedback the decision maker receives is the loss incurred; the loss functions themselves are never revealed. In this setting, we reduce the gap between the best known lower and upper bounds for the class of smooth convex functions, i.e. convex functions with a Lipschitz continuous gradient. Building upon existing work on self-concordant regularizers and one-point gradient estimation, we give the first algorithm whose expected regret is O(T^{2/3}), ignoring constant and logarithmic factors.
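For context, the feedback-efficiency trick the abstract alludes to is one-point gradient estimation: the learner perturbs its play in a small random direction, observes the single resulting loss, and forms a gradient estimate from it. The sketch below is the classical Flaxman et al. (2005) estimator, not the paper's full self-concordant-regularizer algorithm; the function name, signature, and parameters are illustrative.

import numpy as np

def one_point_gradient_estimate(loss_query, x, delta, rng):
    # Bandit gradient estimate from a single loss evaluation
    # (Flaxman et al., 2005). loss_query returns only the scalar
    # loss at the queried point; the loss function itself is unknown.
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                      # uniform direction on the unit sphere
    observed_loss = loss_query(x + delta * u)   # the only feedback the learner sees
    return (d / delta) * observed_loss * u      # gradient of a delta-smoothed loss, in expectation

In expectation this estimator equals the gradient of a delta-smoothed version of the loss, and for smooth (Lipschitz-gradient) losses the smoothing bias shrinks linearly in delta; exploiting that smaller bias is what allows the regret to be driven down to O(T^{2/3}).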
