
IEEE Transactions on Automatic Control

Issue 1 • February 1967


  • [Front cover and table of contents]

    Page(s): 0
  • More issues

    Page(s): 1
  • Issue in brief

    Page(s): 2 - 3
  • Optimal investment policy: An example of a control problem in economic theory

    Page(s): 4 - 14

    A problem in mathematical economics concerning the optimal investment of resources is solved via the techniques of optimal control theory. Interesting theoretical complications include the simultaneous presence of interdependent control variable inequality constraints, state variable inequality constraints, and singularity conditions. Economic implications of the results are briefly discussed.
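
    A representative statement of this class of problem, sketched here in assumed notation rather than the paper's own, is a capital-accumulation model: maximize discounted utility of consumption, with consumption bounded by output and the capital stock required to stay nonnegative:

        \max_{c(\cdot)} \int_0^T e^{-\rho t}\, U(c(t))\, dt \quad \text{subject to} \quad \dot{k} = f(k) - c, \qquad 0 \le c \le f(k), \qquad k(t) \ge 0.

    The control bound 0 \le c \le f(k) depends on the state, which together with k \ge 0 gives interdependent control and state inequality constraints; when U is linear in c the Hamiltonian is linear in the control, producing the singularity conditions mentioned above.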

  • On a goal-keeping differential game

    Page(s): 15 - 21

    A tactical situation analogous to the goal-keeping and scoring problem in hockey is considered. The problem is formulated as a differential game and solved analytically using dynamic programming techniques. The analytic solution is represented by return function maps (in two variables). Other approaches to the problem are indicated with a discussion of their particular advantages and difficulties. Results are quoted for more complex goal-keeping games obtained by a combination of techniques.
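
    For a two-player zero-sum differential game of this kind, dynamic programming leads to the Isaacs (Hamilton-Jacobi) equation for the value function; a generic form, in notation assumed here rather than taken from the paper:

        \frac{\partial V}{\partial t} + \min_{u} \max_{v} \left[ \nabla_x V \cdot f(x, u, v) + L(x, u, v) \right] = 0,

    where u and v are the opposing controls. The return function maps mentioned above are plots of V over the two state variables.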

  • On the design of optimum distributed parameter system with boundary control function

    Page(s): 22 - 28

    The problem of optimum control of a distributed parameter system with boundary control is studied. The distributed parameter system considered is described by the N-dimensional wave equation. The error measure is quadratic. The control function is unconstrained. The Riccati equation for optimum boundary control is derived. Methods for solving the Riccati equation and calculating optimum control are discussed. The resulting control is closed loop.
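
    A sketch of the setting, in assumed notation (the paper's exact formulation is not reproduced here): the state w obeys the wave equation in a region \Omega with the control u applied on the boundary, and a quadratic error-plus-effort cost is minimized:

        w_{tt} = c^2 \nabla^2 w \ \text{in } \Omega, \qquad w = u(t) \ \text{on } \partial\Omega, \qquad J = \int_0^T \left( \| w - w_d \|_Q^2 + r\, u^2 \right) dt.

    Writing the system abstractly as \dot{z} = Az + Bu, the closed-loop optimal control takes the familiar form u = -R^{-1} B^{*} P z, with P solving an operator Riccati equation -\dot{P} = A^{*} P + P A - P B R^{-1} B^{*} P + Q.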

  • Optimum control of a class of distributed-parameter systems

    Page(s): 29 - 37

    Optimum control of a class of distributed-parameter systems subject to control signal saturation is studied. The system performance is optimized with respect to an integral criterion function. This problem is solved by making use of coordinate transformation and Butkovskii's maximum principle. For this class of systems with distinct characteristic roots, the optimum control law is found to be the solution of a Fredholm integral equation of the second kind, subject to the condition of bounded signal.
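
    For reference, a Fredholm integral equation of the second kind has the canonical form (notation assumed)

        u(t) = g(t) + \lambda \int_a^b K(t, s)\, u(s)\, ds,

    here solved with the additional requirement that the control respect the saturation bound.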

  • An approximation to bounded phase coordinate control problem for linear discrete systems

    Page(s): 37 - 42

    This paper considers the penalty function method to obtain an approximate solution to the bounded phase coordinate optimal control problem for linear discrete systems with essentially quadratic cost functionals. The penalty function assumes positive values outside the phase constraint set, and zero inside the phase constraint set. The problem is to find an optimal control from a convex compact control restraint set such that the cost functional is minimum, and the sum of the penalty function along the response is smaller than a prescribed constant. It is shown that the maximum principle is a necessary and sufficient condition for an optimal control in a number of cases, and an analytic method of finding an optimal control is given. Also, the existence of an optimal control is proved.
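
    The construction can be sketched as follows, in assumed notation. With phase constraint set X, the penalty satisfies

        p(x) = 0 \ \text{for } x \in X, \qquad p(x) > 0 \ \text{for } x \notin X,

    and the approximate problem minimizes the cost over the control restraint set subject to \sum_k p(x_k) \le \epsilon along the response, for a prescribed constant \epsilon > 0.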

  • Frequency criteria for bounded-input-bounded-output stability of nonlinear sampled-data systems

    Page(s): 46 - 53

    Sufficient conditions for absolute stability in the bounded-input-bounded-output sense are obtained for a class of nonlinear sampled-data systems. The criteria are identical to those establishing absolute stability in a global stability sense for the same class of autonomous nonlinear sampled-data systems.
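
    Criteria of this family are typically stated for a sector-bounded nonlinearity and the pulse transfer function G(e^{j\omega}) of the linear part; a representative condition of the Tsypkin type (a sketch, not necessarily the paper's exact criterion): if the nonlinearity \phi satisfies 0 \le \phi(e)\, e \le K e^2, absolute stability holds when

        \frac{1}{K} + \operatorname{Re}\, G(e^{j\omega}) \ge 0 \quad \text{for all } \omega \in [0, \pi].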

  • Finite time stability under perturbing forces and on product spaces

    Page(s): 54 - 59

    This paper continues the development of a qualitative theory of stability, recently initiated by the authors, for systems operating over finite time intervals. The theory is motivated by 1) the need for a more practical concept of stability than is provided by the classical theory; and 2) the search for methods for investigating stability of a system trajectory (either analytically or numerically given) without the necessity of performing complicated transformations of the differential equations involved. The systems studied in this paper are nonautonomous, i.e., they are under the influence of external forces, and the concept of finite time stability (precisely defined in the paper) in this case involves the bounding of trajectories within specified regions of the state space during a given finite time interval. (The input is assumed to be bounded by a known quantity during this time interval.) Sufficient conditions are given for various types of finite time stability of a system under the influence of perturbing forces which enter the system equations linearly. These conditions take the form of existence of "Liapunov-like" functions whose properties differ significantly from those of classical Liapunov functions. In particular, there is no requirement of definiteness on such functions or their derivative. The remainder of the paper deals with the problem of determining finite time stability properties of a system from knowledge of the finite time stability properties of lower-order subsystems which, when appropriately coupled, form the original system. An example is given which illustrates some of the concepts discussed in the paper.
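
    Loosely, in assumed notation: for a system \dot{x} = f(x, t) + B(t)\, w(t) with the input bounded as \| w(t) \| \le \rho on the interval of interest, finite time stability with respect to (\alpha, \beta, T), \alpha < \beta, requires

        \| x(t_0) \| \le \alpha \ \Longrightarrow \ \| x(t) \| \le \beta \quad \text{for all } t \in [t_0, t_0 + T].

    Sufficiency is established through the "Liapunov-like" functions described above, which, unlike classical Liapunov functions, need not be sign definite.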

  • Numerical computation of optimal control

    Page(s): 59 - 66

    The formal solution for dynamically optimized control of a process is specified in terms of a two-point boundary problem. The numerical computation then presents great difficulties, and speed of convergence is vital even with the most powerful of contemporary computers. The authors give a method for rapid final convergence in which the two-point boundary problem is avoided altogether, although it is in fact only slightly different from methods of second variations. Computing experience with two examples is then described: a) aircraft landing and b) control of a boiler, with which the speed of convergence is well illustrated.
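
    The two-point boundary problem referred to arises from the maximum principle: with Hamiltonian H(x, p, u) = L(x, u) + p^{\top} f(x, u) minimized over u, the optimal trajectory satisfies the canonical equations

        \dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x},

    with x specified at t = 0 but x (or transversality conditions on p) specified at t = T, so that the boundary conditions are split between the two ends.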

  • An application of an analog computer to solve the two-point boundary-value problem for a fourth-order optimal control problem

    Page(s): 67 - 75

    The application of Pontryagin's maximum principle to trajectory optimization problems results in a two-point boundary-value problem. Computationally, this problem is solved by various digital techniques which are sometimes inconvenient and costly. An analog computer can be used to solve a large class of two-point boundary-value problems if the accuracy is acceptable. In this paper, an analog computer is used in conjunction with a human operator who has a display of the phase planes of the admissible trajectories. The human operator, having a general knowledge of the behavior of the system, adjusts control law parameters until the boundary conditions of the system are satisfied. Apparently this technique has been avoided previously due to the impression that unacceptable errors would be introduced in solving the problem. This technique was applied to a time-optimal rendezvous with bounds on rocket thrust and fuel available and demonstrated that accurate analog computer solutions are possible. Solutions of the rendezvous problem were compared with an exact solution using MIMIC.

  • An approach to model-referenced adaptive control systems

    Page(s): 75 - 80

    This paper considers the model-referenced adaptive control problem. An adaption technique that is extremely simple to implement is derived analytically. The simplicity of this technique gives it a distinct advantage over other techniques that have been described in the literature and makes it attractive for practical applications. A direct approach to the problem is taken, employing the state-space point of view. By solving the differential equations of the reference model and the adaptive control system, an expression is obtained showing the explicit functional dependence of the performance error on the adaptive parameters. Manipulation of the expression for performance error yields the adaption equations which are subsequently shown to be very simple to implement. To illustrate the theory developed in this paper, a simple example is discussed and a stability analysis employing Lyapunov's second method is undertaken.
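
    A classical member of this family of adaption laws is the gradient (MIT-rule) update, sketched here in assumed notation rather than as the authors' exact equations: with performance error e = y - y_m between plant and reference model, each adjustable parameter \theta_i is moved down the gradient of the squared error,

        \dot{\theta}_i = -\gamma\, e\, \frac{\partial e}{\partial \theta_i}, \qquad \gamma > 0,

    which is why an explicit expression for e in terms of the adaptive parameters leads directly to implementable adaption equations.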

  • Control of linear plants with zeros and slowly varying parameters

    Page(s): 80 - 83

    A reduction of order technique is developed which is useful in controller design for single-input, single-output, linear plants which have transfer functions with zeros and parameters which vary slowly compared to the response time of the plant. The controller design is based on a Liapunov-like method and employs a model reference. The reduction of order technique allows the control signal to be generated from the plant output and only its first n - m - 1 derivatives if the plant transfer function has an nth-order denominator and an mth-order numerator and if the transfer function has m known fixed poles.
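
    In assumed notation, the plant transfer function is

        G(s) = \frac{b_m s^m + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0}, \qquad m < n,

    and the claim is that when m of the poles are known and fixed, the controller needs only the output and its first n - m - 1 derivatives, rather than the n - 1 derivatives a full-order design would require.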

  • On-line solution of optimal nonstationary linear systems

    Page(s): 83 - 85

    The gradient acceleration technique is described as a means for solving optimal nonstationary linear systems with unbounded control. A numerical example with a free terminal state is solved. Also, the usefulness of this method for nonlinear and bounded control linear systems is indicated.

  • Application of game theory to the sensitivity design of optimal systems

    Page(s): 85 - 87

    The theory of games is applied to the design of systems with unknown plant parameters. It is assumed that a controller structure is known and furthermore that this controller is optimal when the controller parameters are equal to the plant parameters. The performance index then becomes a function of plant and controller parameters. This function is treated as a pay-off function with the antagonists represented by the controller and plant parameters. The theory is illustrated with some simple examples.
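
    Schematically, with controller parameters \theta_c and plant parameters \theta_p (notation assumed), the design becomes the minimax problem

        \theta_c^{*} = \arg \min_{\theta_c} \max_{\theta_p} J(\theta_c, \theta_p),

    and a saddle point, when one exists, satisfies J(\theta_c^{*}, \theta_p) \le J(\theta_c^{*}, \theta_p^{*}) \le J(\theta_c, \theta_p^{*}).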

  • Linear optimal control problems with isoperimetric constraints

    Page(s): 87 - 90

    Optimum control problems with convex inequality side constraints are studied. It is shown that in quite general circumstances an optimum controller can be obtained, and necessary and sufficient conditions for its determination are derived.
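
    A generic problem of this form, in assumed notation:

        \min_{u(\cdot)} \int_0^T L(x, u, t)\, dt \quad \text{subject to} \quad \dot{x} = f(x, u, t), \qquad \int_0^T g_i(x, u, t)\, dt \le c_i, \quad i = 1, \dots, r,

    where the convex integral side constraints are the isoperimetric constraints of the title.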

  • Nonlinear feedback solution to a bounded Brachistochrone problem in a reduced state space

    Page(s): 90 - 94

    The optimal nonlinear feedback law for both an unbounded and bounded Brachistochrone problem is given. These control laws describe the Brachistochrone problem in a very simple manner. These laws are derived by using a dimensionless set of state variables which spans a lower dimensional space than the original state space. Also, using this reduced state space, a two-dimensional instead of three-dimensional field of extremals is constructed for a space-bounded Brachistochrone problem. This illustrates that the state space may be reduced so that the storage of a field of extremal trajectories will not exceed the storage capacity of a practical on-board computer. In this way, optimal feedback control may become a practical technique for a larger class of nonlinear systems.
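
    For reference, the classical unbounded Brachistochrone seeks the curve y(x) between two points down which a bead slides from rest under gravity g in minimum time,

        T = \int_{x_0}^{x_1} \sqrt{\frac{1 + y'^2}{2 g y}}\, dx,

    with y measured downward; the space-bounded version adds a state constraint such as y \le y_{\max}.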

  • A closed-loop, approximately time-optimal control method for linear systems

    Page(s): 94 - 97

    In this paper, a method is developed for the closed-loop, approximately time-optimal control of a class of linear systems with a total effort constraint. This method is based on a special class of solutions of the Hamilton-Jacobi equation, called eigenvector scalar products. The procedure to be followed is systematically presented, and the method is shown to represent an effective compromise between system complexity and speed of response. In designing a closed-loop system using this method, the controller-computer must only solve algebraic equations, and hence the control can be computed continuously. This is to be contrasted with many of the present methods which require on-line solution of two-point boundary value problems, and hence discrete control. The method is illustrated by an example.
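
    In this setting the Hamilton-Jacobi equation for the minimum time-to-go V(x) of \dot{x} = Ax + Bu reads, in standard notation,

        \min_{u \in U} \left[ \nabla V(x)^{\top} (Ax + Bu) \right] + 1 = 0, \qquad V = 0 \ \text{on the target set},

    and the special solutions used here (the eigenvector scalar products named above) reduce evaluation of the feedback to algebra.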

  • Interpolation and extrapolation schemes in dynamic programming

    Page(s): 97 - 99

    Classical dynamic programming techniques for the solution of optimal control problems require excessive computer storage capacity. A recently published variation overcomes the problem at the expense of requiring occasional extrapolation along the time axis. The present paper suggests that extrapolation in state space may give higher accuracy. Numerical results are presented in confirmation.
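
    The storage issue stems from the backward recursion of dynamic programming; in assumed notation,

        V_k(x) = \min_{u} \left[ L(x, u) + V_{k+1}\big( f(x, u) \big) \right],

    where V_{k+1} is stored only on a grid, so f(x, u) generally falls between grid points and must be recovered by interpolation or extrapolation; the suggestion here is that performing that extrapolation in the state variables, rather than along the time axis, gives higher accuracy.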

  • Some results on an inverse problem in multivariable systems

    Page(s): 99 - 101

    A system property of great interest to the control engineer concerns the system's ability to exhibit those responses which are asked of it. This capability is governed by the nature of the system equations and the set of inputs which are available to 'force' the system. Given the set of desired system responses S_y and the system equations, this paper is concerned with characterizing the set of inputs which suffice to generate S_y.

  • A note on problems with periodic target sets

    Page(s): 104 - 105

    For certain types of optimization problems, the target set may consist of an infinite number of points uniformly spaced along one or more of the state variable axes. This is true, for example, in the case of time optimal single-axis attitude control. Since the target set is noncompact, the maximum principle cannot be applied directly. However, a new set of state variables can be defined which eliminates this problem. The single-axis attitude control problem will be solved to demonstrate the method.
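
    One device of this kind, sketched here as an illustration (the paper's construction may differ): for an attitude angle \theta with noncompact target set \{ \theta = 2\pi n, \ \dot{\theta} = 0 \}, define

        z_1 = \sin\theta, \qquad z_2 = 1 - \cos\theta,

    so that \dot{z}_1 = \dot{\theta}\,(1 - z_2) and \dot{z}_2 = \dot{\theta}\, z_1, and the target becomes the single compact point z_1 = z_2 = \dot{\theta} = 0, to which the maximum principle applies directly.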


Aims & Scope

In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered.


Meet Our Editors

Editor-in-Chief
P. J. Antsaklis
Dept. Electrical Engineering
University of Notre Dame