Abstract:
We study convergence properties of distributed learning dynamics in repeated stochastic routing games. The game is stochastic in that each player observes a stochastic loss vector whose conditional expectation equals the true loss (almost surely). In particular, we propose a model in which every player $m$ follows stochastic mirror descent dynamics with Bregman divergence $D_{\psi_m}$ and learning rates $\eta_t^m = \theta_m t^{-\alpha_m}$. We prove that if all players use the same sequence of learning rates, then their joint strategy converges almost surely to the equilibrium set. If the learning dynamics are heterogeneous, that is, different players use different learning rates, then the joint strategy converges to equilibrium in expectation, and we give upper bounds on the convergence rate. This result holds for general routing games (no smoothness or strong convexity assumptions are required). These results provide a distributed learning model that is robust to measurement noise and other stochastic perturbations, and that allows each player flexibility in the choice of learning algorithm. The results also provide estimates of convergence rates, which are confirmed in simulation.
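To make the update concrete, here is a minimal sketch of a single player's stochastic mirror descent step with the entropic (negative entropy) choice of $\psi_m$, under which the mirror step reduces to an exponentiated-gradient update on the simplex of path choices. The function and parameter names (`loss_oracle`, `theta`, `alpha`) and the Gaussian noise model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def stochastic_mirror_descent(loss_oracle, num_paths, theta, alpha, num_rounds):
    """One player's stochastic mirror descent over the simplex of path choices.

    Assumes the entropic Bregman divergence (negative entropy psi), so each
    mirror step is an exponentiated-gradient update. `loss_oracle(x)` is
    assumed to return a noisy loss vector whose conditional expectation
    equals the true path losses, matching the paper's noise model.
    """
    x = np.full(num_paths, 1.0 / num_paths)   # uniform initial strategy
    for t in range(1, num_rounds + 1):
        eta_t = theta * t ** (-alpha)          # learning rate eta_t = theta * t^(-alpha)
        noisy_loss = loss_oracle(x)            # stochastic loss observation
        x = x * np.exp(-eta_t * noisy_loss)    # entropic mirror descent step
        x /= x.sum()                           # renormalize onto the simplex
    return x

# Toy usage: fixed true losses perturbed by zero-mean measurement noise.
rng = np.random.default_rng(0)
true_loss = np.array([0.3, 0.5, 0.9])
oracle = lambda x: true_loss + 0.1 * rng.standard_normal(3)
print(stochastic_mirror_descent(oracle, num_paths=3, theta=1.0, alpha=0.5, num_rounds=1000))
```

With $\alpha = 0.5$ the strategy concentrates on the lowest-loss path despite the noise; heterogeneous players would simply run this loop with different $(\theta_m, \alpha_m)$.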
Publication date:
September 1, 2015
Publication type:
Conference Paper
Citation:
Krichene, S., Krichene, W., Dong, R., & Bayen, A. (2015). Convergence of Heterogeneous Distributed Learning in Stochastic Routing Games. 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 480–487. https://doi.org/10.1109/ALLERTON.2015.7447043