Optimal control is the problem of determining the control function for a dynamical system so as to minimize a cost associated with the system trajectory. The subject has its roots in the calculus of variations, but it evolved into an independent branch of applied mathematics and engineering in the 1950s.
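To make this concrete, a standard continuous-time formulation (a common textbook statement, not necessarily the exact notation used later in the course) reads:

\[
\min_{u(\cdot)} \; J(u) = \phi\bigl(x(T)\bigr) + \int_0^T L\bigl(x(t), u(t)\bigr)\, dt
\]
subject to
\[
\dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad x(0) = x_0,
\]

where \(x(t)\) is the state trajectory, \(u(t)\) the control function to be chosen, \(L\) the running cost, and \(\phi\) the terminal cost. The "cost related to the system trajectory" mentioned above is the functional \(J\), and the dynamical system enters as the constraint \(\dot{x} = f(x, u)\).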
The overall goal of the course is to provide an understanding of the main results in the calculus of variations and optimal control, and of how these results can be applied in fields such as robotics, finance, economics, and biology.