Abstract:
We consider an online optimization problem on a subset S of R^n (not necessarily convex), in which a decision maker chooses, at each iteration t, a probability distribution x^(t) over S, and seeks to minimize the cumulative expected loss, where each loss is a Lipschitz function revealed at the end of iteration t. Building on previous work, we propose a generalized Hedge algorithm and show an O(\sqrt{t \log t}) bound on the regret when the losses are uniformly Lipschitz and S is uniformly fat (a weaker condition than convexity). Finally, we propose a generalization of the dual averaging method to the set of Lebesgue-continuous distributions over S.
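To make the exponential-weights update concrete, the sketch below runs Hedge over a finite discretization of S. This is an illustration only, not the paper's construction (which works directly with densities over S): the grid, the loss sequence, and the learning-rate schedule eta_t = sqrt(log t / t) are all assumptions made for the example.

```python
import numpy as np

def hedge_on_grid(points, loss_fns):
    """Exponential weights (Hedge) over grid `points` against a loss sequence.

    points:   (m, n) array discretizing S.
    loss_fns: iterable of callables, each mapping an (m, n) array of points
              to an (m,) array of losses in [0, 1].
    """
    m = len(points)
    cumulative_loss = np.zeros(m)
    rng = np.random.default_rng(0)
    total_loss = 0.0
    for t, loss in enumerate(loss_fns, start=1):
        eta = np.sqrt(np.log(t + 1) / t)  # hypothetical learning-rate schedule
        # x^(t): distribution proportional to exp(-eta * cumulative loss)
        logits = -eta * cumulative_loss
        weights = np.exp(logits - logits.max())  # stabilized softmax
        x_t = weights / weights.sum()
        u_t = points[rng.choice(m, p=x_t)]       # decision sampled from x^(t)
        losses = loss(points)                    # loss revealed after playing
        total_loss += losses @ x_t               # expected loss under x^(t)
        cumulative_loss += losses
    return total_loss

# Example: S = [0, 1], adversary plays slowly shifting quadratic losses.
grid = np.linspace(0.0, 1.0, 101)[:, None]
losses = [(lambda c: lambda p: np.clip((p[:, 0] - c) ** 2, 0, 1))(0.3 + 0.01 * t)
          for t in range(50)]
print(hedge_on_grid(grid, losses))
```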
Publication date:
June 1, 2015
Publication type:
Conference Paper
Citation:
Krichene, W., Balandat, M., Tomlin, C., & Bayen, A. (2015). The Hedge Algorithm on a Continuum. Proceedings of the 32nd International Conference on Machine Learning, 824–832. https://proceedings.mlr.press/v37/krichene15.html