Optimal control-1950 to 1985

Author: Bryson, A.E., Jr. (Dept. of Aeronautics & Astronautics, Stanford Univ., CA, USA)

Optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by Euler and Lagrange and in the 19th century by Legendre, Jacobi, Hamilton, and Weierstrass. In the early 20th century, Bolza and Bliss put the final touches of rigor on the subject. In 1957, Bellman gave a new view of Hamilton-Jacobi theory, which he called dynamic programming, essentially a nonlinear feedback control scheme. McShane (1939) and Pontryagin (1962) extended the calculus of variations to handle control-variable inequality constraints, the latter enunciating his elegant maximum principle. The truly enabling element for the use of optimal control theory was the digital computer, which became available commercially in the 1950s. In the 1980s, research began, and continues today, on making optimal feedback logic more robust to variations in the plant and disturbance models; one element of this research is worst-case and H-infinity control, which developed out of differential game theory.

Published in: IEEE Control Systems (Volume 16, Issue 3)