Abstract:
This paper presents a general Hamilton-Jacobi (HJ) framework for optimal control and two-player zero-sum game problems, both with state constraints. In the optimal control problem, a control signal and a terminal time are determined to minimize the given cost while satisfying the state constraints. In the game problem, the two players interact through the system dynamics: a strategy for each player and a terminal time are determined so that player A minimizes the cost and satisfies the state constraints while player B tries to prevent player A from succeeding. Dynamics, costs, and state constraints are all time-varying. HJ equations are proposed that bridge viability theory for constrained problems [1] and formulations in which the terminal time is specified [2]. A numerical algorithm for computing the solution of the proposed HJ equations is presented and demonstrated on a practical example: vehicle lane-changing while avoiding other vehicles.
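As a rough illustration of the problem class described in the abstract (a sketch with assumed notation, not the paper's exact formulation): for dynamics $\dot{x}(s) = f(s, x(s), a(s))$, a terminal cost $g$, and a state-constraint function $c$ (the constraint is satisfied when $c \le 0$), a free-terminal-time, state-constrained value function can be written as

\[
V(t, x) \;=\; \inf_{a(\cdot)} \, \inf_{\tau \in [t, T]} \, \max\Big\{ g\big(\tau, x(\tau)\big), \; \max_{s \in [t, \tau]} c\big(s, x(s)\big) \Big\},
\]

so that $V(t,x) \le 0$ certifies the existence of a control signal and a terminal time that achieve the objective without violating the state constraints along the way. In the zero-sum game version, the single infimum over $a(\cdot)$ is replaced by an inf-sup over the two players' strategies.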
Publication date:
December 1, 2020
Publication type:
Conference Paper
Citation:
Lee, D., Keimer, A., Bayen, A. M., & Tomlin, C. J. (2020). Hamilton-Jacobi Formulation for State-Constrained Optimal Control and Zero-Sum Game Problems. 2020 59th IEEE Conference on Decision and Control (CDC), 1078–1085. https://doi.org/10.1109/CDC42340.2020.9304334