
Learning Dynamic-Objective Policies from a Class of Optimal Trajectories


Abstract:

Optimal state-feedback controllers, capable of changing between different objective functions, are advantageous to systems in which unexpected situations may arise. However, synthesising such controllers, even for a single objective, is a demanding process. In this paper, we present a novel and straightforward approach to synthesising these policies through a combination of trajectory optimisation, homotopy continuation, and imitation learning. We use numerical continuation to efficiently generate optimal demonstrations across several objectives and boundary conditions, and use these to train our policies. Additionally, we demonstrate the ability of our policies to effectively learn families of optimal state-feedback controllers, which can be used to change objective functions online. We illustrate this approach across two trajectory optimisation problems, an inverted pendulum swingup and a spacecraft orbit transfer, and show that the synthesised policies, when evaluated in simulation, produce trajectories that are near-optimal. These results indicate the benefit of trajectory optimisation and homotopy continuation to the synthesis of controllers in dynamic-objective contexts.
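The pipeline the abstract describes can be sketched on a toy problem. The code below is a minimal illustration, not the paper's implementation: it stands in a scalar LQR problem for the trajectory-optimisation step, blends the objective with a homotopy parameter `lam`, sweeps `lam` while warm-starting each solve from the previous solution (the continuation step), and then fits a simple parametric policy to the resulting optimal demonstrations (the imitation-learning step). All names, dynamics, and parameter values here are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical scalar LQR stand-in:  dx/dt = a*x + b*u,
# cost  integral of q(lam)*x^2 + r*u^2 dt, with the objective
# indexed by a homotopy parameter lam in [0, 1].
a, b, r = 0.0, 1.0, 1.0
q = lambda lam: 1.0 + lam  # family of objectives, blended by lam

def solve_riccati(lam, p0):
    """Newton iteration on the scalar algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q(lam) = 0, warm-started from p0.
    The warm start is what makes the sweep below a continuation."""
    p = p0
    for _ in range(50):
        f = 2 * a * p - (b**2 / r) * p**2 + q(lam)
        fp = 2 * a - 2 * (b**2 / r) * p
        p -= f / fp
    return p

# 1) Continuation: sweep lam, reusing each solution as the next guess.
lams = np.linspace(0.0, 1.0, 11)
p, gains = 1.0, []
for lam in lams:
    p = solve_riccati(lam, p)  # warm start from previous objective
    gains.append(b * p / r)    # optimal feedback gain k(lam)

# 2) Imitation learning: fit u = (th0 + th1*lam)*x to the optimal
#    demonstrations u*(x, lam) = -k(lam)*x via least squares.
xs = np.linspace(-1.0, 1.0, 21)
X, y = [], []
for lam, k in zip(lams, gains):
    for x in xs:
        X.append([x, lam * x])
        y.append(-k * x)
theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# The learned policy takes the objective parameter lam as an input,
# so the objective can be switched online by changing lam.
policy = lambda x, lam: theta[0] * x + theta[1] * lam * x
```

For this quadratic family the true gain is k(lam) = sqrt(1 + lam), so the linear-in-`lam` policy is only an approximation; the point is the structure, with the continuation sweep supplying cheap demonstrations across a whole class of objectives for a single supervised fit.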
Date of Conference: 14-18 December 2020
Date Added to IEEE Xplore: 11 January 2021
Conference Location: Jeju, Korea (South)

