Optimal control problems on dynamical systems seek a control policy that minimizes a desired objective, where the objective value depends on the future evolution of the system state, which in turn depends on the control policy. For systems composed of subsystems that are disjoint across the state variables, distributed optimization techniques exist that iteratively update the subsystems in parallel and then exchange information between subsystems sharing control variables. This article presents a method based on the asynchronous...