IEEE Transactions on Automatic Control

Issue 5 • October 1969

Displaying Results 1 - 25 of 54
  • [Front cover and table of contents]

    Publication Year: 1969, Page(s): 0
    PDF (162 KB)
    Freely Available from IEEE
  • Comments on "Numerical application of Szegö's method for constructing Liapunov functions"

    Publication Year: 1969, Page(s): 602
    PDF (134 KB)

    It is pointed out that the algorithmic method recently suggested by Hewit and Storey for the construction of Lyapunov functions yields a result which is expressible in closed form.

  • [Back cover]

    Publication Year: 1969, Page(s): 0
    PDF (132 KB)
    Freely Available from IEEE
  • Computational aspects of the linear optimal regulator problem

    Publication Year: 1969, Page(s): 547 - 550
    Cited by: Papers (8)
    PDF (712 KB)

    This paper considers computational problems arising in the solution of the linear optimal regulator problem. The proposed solution for the constant feedback gain matrix is an adaptation of the eigenvector solution proposed by many authors. Techniques are given which are numerically stable and do not require complex arithmetic. These techniques offer considerable savings in computation time and eliminate many of the problems that plague more conventional methods.
    (See the sketch following this listing.)

  • Practical servomechanism design

    Publication Year: 1969, Page(s): 604 - 606
    PDF (512 KB)

  • A transformation technique for optimal control problems with a state variable inequality constraint

    Publication Year: 1969, Page(s): 457 - 464
    Cited by: Papers (27) | Patents (2)
    PDF (1168 KB)

    A slack variable is used to transform an optimal control problem with a scalar control and a scalar inequality constraint on the state variables into an unconstrained problem of higher dimension. It is shown that, for a p-th order constraint, the p-th time derivative of the slack variable becomes the new control variable. The usual Pontryagin principle or Lagrange multiplier rule gives necessary conditions of optimality. There are no discontinuities in the adjoint variables. A feature of the transformed problem is that any nominal control function produces a feasible trajectory. The optimal trajectory of the transformed problem exhibits singular arcs which correspond, in the original constrained problem, to arcs which lie along the constraint boundary.
    (See the sketch following this listing.)

  • Control of an amplifying wave on an infinite continuum

    Publication Year: 1969, Page(s): 536 - 539
    Cited by: Papers (2)
    PDF (616 KB)

    Previous work on the space-sampled feedback control of continuum instabilities in flowing systems is extended. When the continuum over which control is desired is many wavelengths long, it is often convenient to construct the control system of many sampling stations, each approximately one wavelength in size. The stability of such a system is discussed, and a new type of instability is found which does not appear when the system is small in terms of a wavelength.

  • Path integrals and Lyapunov functionals

    Publication Year: 1969, Page(s): 465 - 475
    Cited by: Papers (2)
    PDF (1736 KB)

    A method for generating Lyapunov functionals for time-delay systems by means of path integrals in state space is given. The method is derived by making use of a new description of such systems in terms of convolution equations involving distributions with compact support. The important properties of these equations are discussed and it is shown that a suitable state space can be defined. Path integrals in this state space are defined and conditions for path independence are derived. With the aid of some results dealing with the spectral factorization of entire functions of exponential order, it is shown that these path integrals can be used to define Lyapunov functionals for time-delay systems. The method given represents an extension to infinite-dimensional systems of a technique developed by Brockett for systems described by ordinary differential equations. While the present approach differs fundamentally from that used for finite-dimensional systems, the results given here are similar to, and in the special case of finite-dimensional systems reduce to, the results given by Brockett. Hence the method given can be successfully applied even without a deep understanding of either distributions or distributional convolution equations. This is illustrated by a number of examples which show the application of the results to stability analysis as well as to a class of quadratic minimization problems.
    (See the sketch following this listing.)

  • A new algorithm for suboptimal stochastic control

    Publication Year: 1969, Page(s): 533 - 536
    Cited by: Papers (12)
    PDF (616 KB)

    An apparently new stochastic control algorithm, called M-measurement-optimal feedback control, is described for discrete-time systems. This scheme incorporates M future measurements into the control computations: when M is zero, it reduces to the well-known open-loop-optimal feedback control; when M is the actual number of measurements remaining in the problem, it becomes the truly optimal stochastic control. This new algorithm may also be used to simplify computations when the plant is nonlinear, when the controls are constrained, or when the cost is nonquadratic. Simulation results are presented which show the superiority of the new algorithm over the open-loop-optimal feedback control.

  • Identification of impulse response from normal operating data using the delay line synthesizer principle

    Publication Year: 1969, Page(s): 580 - 582
    PDF (472 KB)

    A computational technique is presented here for identifying the impulse response of a linear system from normal operating noisy data. No assumption, however, is made regarding the nature of the noise. The technique derives its idea from the delay line synthesizer (DLS), though in this case the DLS coefficients which discretely represent the weighting function are computed automatically employing the steepest descent method. The method has been tried out on a first-order as well as a second-order system simulated on a digital computer; the estimated impulse response is found to be very close to the actual one.
    (See the sketch following this listing.)

  • Control of unknown plants in reduced state space

    Publication Year: 1969, Page(s): 489 - 496
    Cited by: Papers (1)
    PDF (1048 KB)

    A method is proposed in this paper for the synthesis of an adaptive controller for a class of model reference systems in which the plant is not known exactly, but which is of the following type: single variable, time varying, either linear or nonlinear, of n-th order, and capable of m-th order input differentiation. The model is linear, stable, and of n'-th order, where (n - m) \leq n' \leq n. The only knowledge of the plant that is required in this synthesis procedure is the form of the plant equation and the bounds of b_{m}(t), the coefficient of the m-th order plant input derivative. The synthesis procedure makes use of a unique function, called the characteristic variable, and Lyapunov-type synthesis. The introduction of the characteristic variable reduces the synthesis problem to one that involves a known, linear, time-invariant, lower-order plant. The control signal is generated by measuring the plant and model outputs, and their first (n - m) derivative signals. This ensures that the norm of the (n - m)-dimensional error vector is ultimately bounded by ε, an arbitrarily small positive number, provided \xi(t), the characteristic variable, is bounded. Two nontrivial simulation examples are included.

  • Pulse-frequency modulation and dynamic programming

    Publication Year: 1969, Page(s): 558 - 561
    Cited by: Papers (1)
    PDF (696 KB)

    The application of dynamic programming to pulse-frequency modulated (PFM) control systems is discussed. In the PFM systems considered the control function consists of a series of standard pulses. Because, in general, the performance index is not Markovian, which is a requirement for the application of dynamic programming, a restriction is imposed upon the control function. This restriction, which specifies the allowable instants of time where pulses may occur, may cause the resulting control function to yield a less desirable performance index than the one obtained with a control function derived by means which do not impose this restriction, i.e., the modified maximum principle [8], [9]. An example belonging to a class of systems in which the equations are separable and linear with respect to the state variables, with no final value constraints, is presented. It is shown that for this class of systems the optimum control function is independent of the initial values of the state variables. Consequently, the search for the optimum control function is simplified.

  • Spectral properties and optimum reconstruction of randomly gated stationary random signals

    Publication Year: 1969, Page(s): 564 - 567
    PDF (504 KB)

    The mathematical description of random samplers is generalized to cover randomly gated stationary random signals. It is shown that the spectral density of randomly gated signals is the sum of a complex convolution integral and its paraconjugate. When a random gating signal is generated by a parent gating signal, relationships between the characteristic functions associated with a random gate and those associated with a parent gate are given. The application of Wiener-Hopf theory to the design of optimum filters for the reconstruction of randomly gated signals is presented.

  • On optimal terminal control

    Publication Year: 1969, Page(s): 443 - 448
    Cited by: Papers (6)
    PDF (632 KB)

    This paper is concerned with the behavior, near the terminal time, of a fixed-time, fixed-endpoint linear optimal control problem. The optimal control is represented in feedback form by a time-varying gain matrix operating on the present state of the system. The gain matrix is shown to have a pole at the final time. Furthermore, the limiting behavior of the gain matrix is shown to be independent of most of the parameters of the system and to depend solely on the orders of the matrices involved. Thus, the terminal behavior for this entire class of systems is identical to the behavior of a single limiting system.
    (See the sketch following this listing.)

  • A technique for suboptimal feedback control of nonlinear systems

    Publication Year: 1969, Page(s): 530 - 533
    Cited by: Papers (9)
    PDF (624 KB)

    A simple, easily implemented method is developed for obtaining a suboptimal control law for the optimization problem associated with minimizing a quadratic cost functional for nonlinear systems. The suboptimal control law is derived using a Taylor's series representation for the feedback gain matrix after modeling the nonlinear system by a linear system at each instant of time. The resultant control law is of feedback form and is nonlinear in the state. The suboptimal control is obtained without using iterative techniques or any true optimal solutions. A second-order numerical example illustrates the effectiveness of the method and gives a comparison to the results of previous methods.

  • Issue in brief

    Publication Year: 1969, Page(s): 441 - 442
    PDF (400 KB)

  • Separation theorem for nonlinear measurements

    Publication Year: 1969, Page(s): 561 - 564
    Cited by: Papers (5)
    PDF (656 KB)

    General solutions to the optimal stochastic control problem, or the combined estimation and control problem, are extremely difficult to compute since dynamic programming is required. However, if the system is linear, if the measurements are linear, and if the cost is quadratic, then the optimal stochastic controller is separated into 1) a filter to generate the conditional mean of the state, and 2) the optimum (linear) controller that results when all uncertainties are neglected. By altering the system configuration a new separation theorem is derived for arbitrary nonlinear measurements, discrete-time linear systems, and a quadratic cost. If a feedback loop is placed around the nonlinear measurement device (e.g., an analog-to-digital converter), then the stochastic control can be found without dynamic programming and is computed by cascading a nonlinear filter and the optimum (linear) controller. The primary advantage is the significant saving in computation. The performance of this new system configuration relative to the system without feedback depends on the nonlinearity, and it is not necessarily superior. A numerical example is presented.

  • Policies and controller design for a pursuing vehicle

    Publication Year: 1969, Page(s): 482 - 488
    Cited by: Papers (2)
    PDF (1000 KB)

    Policies are developed for two pursuit-evasion differential games. In the first, the initial state is such that capture of the evader can be guaranteed. In the second, capture is not guaranteed, but the evader desires to approach a target as closely as possible. In both games, consideration is given to pursuer policies appropriate when the evader acts nonoptimally. A method is presented for designing simple state feedback controllers to realize the chosen policy.

  • Open- versus closed-loop implementation of optimal control

    Publication Year: 1969, Page(s): 570 - 572
    PDF (520 KB)

    The problem of implementing optimal control for systems affected by parametric variations is discussed. Conditions are given which enable a comparison between open-loop and closed-loop implementation without requiring any knowledge of the probability density function of the parameter values. Consideration of the simplest linear system leads to some interesting conclusions.

  • On the characteristics of the parameter-perturbation process dynamics

    Publication Year: 1969, Page(s): 540 - 542
    PDF (520 KB)

    This paper investigates the characteristics of the dynamics of a parameter-perturbation process in a continuous linear system. These characteristics are studied in the performance index-parameter space for a positive definite performance index function. In particular, the relationship between the static sensitivity function and the process quasilinear dynamics is established, and the minimum and nonminimum phase phenomena of the dynamics are examined. It is shown that the process quasilinear dynamics, from parameter variations to the dynamic response in the error-criterion function, are of the nonminimum phase type if the quiescent value of the parameter lies on the negative slope side of the static error-transfer characteristics, and of the derivative type at the optimum value of the parameter. Theoretical and analog simulation results are presented for a particular system.

  • A stability criterion of nonlinear control systems

    Publication Year: 1969, Page(s): 601 - 602
    PDF (312 KB)

    In this correspondence an alternative form of the frequency-domain stability criterion for nonlinear control systems is presented. A graphical method of applying this criterion in the gain-phase plane is also discussed.

  • Optimum step size control for Newton-Raphson solution of nonlinear vector equations

    Publication Year: 1969, Page(s): 572 - 574
    Cited by: Papers (4)
    PDF (472 KB)

    The problem of solving a nonlinear vector equation of the form x = f(\theta) for a \theta which corresponds to a given x is attacked by the Newton-Raphson method. To keep the lengthy evaluations of partial derivatives to a minimum, each step is optimized to bring the search as close to the solution as possible. Substantial savings in computation time are realized, and solutions can be obtained efficiently even when the initial guess is not close to the ultimate answer.
    (See the sketch following this listing.)

  • An adaptive control algorithm for linear systems

    Publication Year: 1969, Page(s): 497 - 503
    Cited by: Papers (1)
    PDF (1168 KB)

    A modified gradient procedure is presented for adjusting parameters in a linear control system in the absence of complete knowledge of the plant dynamic characteristics. The algorithm operates to make discrete-time changes in the adjustable parameters during the normal course of system operation and incorporates the best available information on the unknown quantities. Sufficient conditions for the error corrective properties of the algorithm are derived, and the results of a simulation study are discussed.

  • A note on the estimation of state variables and unknown parameters of a nonlinear system

    Publication Year: 1969, Page(s): 585 - 587
    Cited by: Papers (3)
    PDF (424 KB)

    The problem of estimating the state variables and unknown parameters of a nonlinear system from noisy measurements is presented. The estimate that maximizes the a posteriori probability density function of the state variables and unknown parameters, conditioned upon the noisy measurements, is computed approximately using recursive and corrective formulas. Results are presented for a simple system in which the estimates from the linearized recursive method and the proposed method are compared.

  • On the use of Bernoulli's equation as a comparison equation in stability problems

    Publication Year: 1969, Page(s): 597 - 599
    PDF (464 KB)

    Sufficient conditions for finding stability regions are derived. The approach is based on comparison theorems, with Bernoulli's equation used as the scalar comparison equation.
    (See the sketch following this listing.)

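For "Computational aspects of the linear optimal regulator problem" (pp. 547 - 550), the sketch below is a minimal illustration, not the authors' algorithm, of obtaining the constant feedback gain from the stable invariant subspace of the Hamiltonian matrix. It uses an ordered real Schur decomposition so that no complex arithmetic is needed, a hypothetical second-order plant, and SciPy's Riccati solver as a cross-check.

    # Minimal sketch: constant LQR gain from the stable invariant subspace of the
    # Hamiltonian matrix, via an ordered real Schur form (real arithmetic only).
    import numpy as np
    from scipy.linalg import schur, solve_continuous_are

    # Hypothetical second-order plant and weights, for illustration only.
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],     # Hamiltonian matrix of the regulator problem
                  [-Q, -A.T]])

    # Ordered real Schur form: left-half-plane eigenvalues are sorted to the front,
    # so the first n columns of Z span the stable invariant subspace of H.
    T, Z, sdim = schur(H, output='real', sort='lhp')
    X, Y = Z[:n, :n], Z[n:, :n]
    P = Y @ np.linalg.inv(X)                # stabilizing Riccati solution
    K = Rinv @ B.T @ P                      # constant feedback gain, u = -K x

    # Cross-check against SciPy's built-in Riccati solver.
    assert np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8)
    print("K =", K)
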
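For "A transformation technique for optimal control problems with a state variable inequality constraint" (pp. 457 - 464), the following is a minimal worked sketch of the slack-variable idea for a first-order constraint, written out from the abstract's description; the symbols $S$, $\alpha$, and $f$ are introduced here only for illustration.

    Given a scalar state constraint $S(x(t)) \leq 0$, introduce a slack variable
    $\alpha(t)$ through
    \[
        S(x(t)) + \tfrac{1}{2}\alpha^{2}(t) = 0 ,
    \]
    so that every real $\alpha$ corresponds to a feasible state. For a first-order
    constraint (one in which the control $u$ first appears in $\dot S$),
    differentiating once along $\dot x = f(x,u)$ gives
    \[
        \frac{\partial S}{\partial x} f(x,u) + \alpha \dot{\alpha} = 0 ,
    \]
    which is solved for $u$ with $\dot{\alpha}$ treated as the new, unconstrained
    control; for a $p$-th order constraint the same construction makes
    $\alpha^{(p)}$ the new control. Arcs on which $\alpha \equiv 0$ are singular
    arcs of the transformed problem and correspond to arcs along the constraint
    boundary $S(x) = 0$ of the original problem.
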
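For "Path integrals and Lyapunov functionals" (pp. 465 - 475), the finite-dimensional construction that the paper extends can be sketched as follows; this is the standard path-integral (variable-gradient) form for ordinary differential equations, not the paper's distributional machinery.

    For $\dot{x} = f(x)$ on $\mathbb{R}^{n}$, choose a vector field $g(x)$ and define
    \[
        V(x) = \int_{0}^{x} g(\sigma)^{T}\, d\sigma ,
    \]
    a line integral in state space. The integral is path independent exactly when
    the Jacobian $\partial g / \partial x$ is symmetric; in that case $V$ is well
    defined, $\nabla V = g$, and along trajectories
    \[
        \dot{V}(x) = g(x)^{T} f(x) ,
    \]
    so $V$ serves as a Lyapunov function whenever $g$ can be chosen to make $V$
    positive definite and $g^{T} f$ negative definite. The paper carries this
    construction over to time-delay systems, whose state space is
    infinite-dimensional.
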
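For "Identification of impulse response from normal operating data using the delay line synthesizer principle" (pp. 580 - 582), the sketch below shows steepest-descent adjustment of delay-line coefficients from noisy operating records. The plant, record length, and step size are hypothetical, and the update is a generic gradient (LMS-type) step rather than the authors' exact scheme.

    # Minimal sketch: steepest-descent identification of a discretized weighting
    # function (delay-line coefficients) from noisy input/output operating records.
    import numpy as np

    rng = np.random.default_rng(0)

    N, M = 5000, 32                       # record length, number of delay-line taps
    u = rng.standard_normal(N)            # plant input during normal operation
    h_true = 0.8 ** np.arange(M)          # "unknown" discretized impulse response
    y = np.convolve(u, h_true)[:N] + 0.05 * rng.standard_normal(N)   # noisy output

    h_hat = np.zeros(M)                   # adjustable delay-line coefficients
    mu = 0.005                            # gradient step size
    for k in range(M, N):
        u_vec = u[k - M + 1:k + 1][::-1]  # tapped-delay-line contents at time k
        e = y[k] - h_hat @ u_vec          # output prediction error
        h_hat += mu * e * u_vec           # gradient step on the squared error

    print("largest tap error:", np.abs(h_hat - h_true).max())
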
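For "On optimal terminal control" (pp. 443 - 448), the pole of the feedback gain at the terminal time is already visible in the simplest fixed-endpoint example; the scalar problem below is illustrative and is not taken from the paper.

    Minimize $\tfrac{1}{2}\int_{t}^{T} u^{2}\, d\tau$ subject to $\dot{x} = u$ and the
    fixed endpoint $x(T) = 0$. Starting from the state $x(t)$, the minimum-energy
    control is constant on $[t, T]$ (by the Cauchy-Schwarz inequality), namely
    $u \equiv -x(t)/(T - t)$, so in feedback form
    \[
        u(t) = -K(t)\, x(t), \qquad K(t) = \frac{1}{T - t} ,
    \]
    and the gain has a simple pole at the terminal time $t = T$. Near $T$ the gain
    depends only on $T - t$ and not on the particular initial data, which echoes
    the limiting behavior established in general in the paper.
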
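For "Optimum step size control for Newton-Raphson solution of nonlinear vector equations" (pp. 572 - 574), the sketch below is a Newton-Raphson iteration on x = f(\theta) in which the step length is chosen, by a coarse one-dimensional search, to bring the residual as close to zero as possible at each step. The test function, finite-difference Jacobian, and search grid are illustrative stand-ins, not the authors' step-size rule.

    # Minimal sketch: Newton-Raphson on x = f(theta) with the step length chosen
    # at each iteration to minimize the residual norm.
    import numpy as np

    def f(theta):                       # hypothetical nonlinear vector function
        return np.array([theta[0] + theta[1] ** 2,
                         np.sin(theta[0]) + theta[1]])

    def jacobian(theta, eps=1e-6):      # finite-difference Jacobian of f
        n = theta.size
        J = np.zeros((n, n))
        f0 = f(theta)
        for i in range(n):
            d = np.zeros(n)
            d[i] = eps
            J[:, i] = (f(theta + d) - f0) / eps
        return J

    def newton_with_step_control(x_target, theta, tol=1e-10, max_iter=50):
        for _ in range(max_iter):
            r = x_target - f(theta)
            if np.linalg.norm(r) < tol:
                break
            step = np.linalg.solve(jacobian(theta), r)    # full Newton step
            # Coarse search over step lengths; keep the one with smallest residual.
            alphas = np.linspace(0.1, 2.0, 20)
            costs = [np.linalg.norm(x_target - f(theta + a * step)) for a in alphas]
            theta = theta + alphas[int(np.argmin(costs))] * step
        return theta

    theta_hat = newton_with_step_control(np.array([1.5, 1.2]), np.array([0.0, 0.0]))
    print("theta =", theta_hat, " f(theta) =", f(theta_hat))
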
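For "On the use of Bernoulli's equation as a comparison equation in stability problems" (pp. 597 - 599), what makes Bernoulli's equation convenient as a scalar comparison equation is that it is solvable in closed form; the reduction below is standard material rather than a result of the paper.

    The Bernoulli equation
    \[
        \dot{y} = a(t)\, y + b(t)\, y^{n}, \qquad n \neq 1 ,
    \]
    is reduced to a linear equation by the substitution $z = y^{1-n}$:
    \[
        \dot{z} = (1-n)\bigl(a(t)\, z + b(t)\bigr) ,
    \]
    which can be integrated explicitly. A differential inequality of this form for
    a scalar Lyapunov-like function $V(x(t))$ therefore yields, through the
    comparison theorems, an explicit bound on $V$ and hence an estimate of a
    stability region.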

Aims & Scope

In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered.

Meet Our Editors

Editor-in-Chief
P. J. Antsaklis
Dept. of Electrical Engineering
University of Notre Dame