IEEE Transactions on Systems Science and Cybernetics

Popular Articles (January 2015)

Includes the top 50 most frequently downloaded documents for this publication according to the most recent monthly usage statistics.
  • 1. A Formal Basis for the Heuristic Determination of Minimum Cost Paths

    Publication Year: 1968 , Page(s): 100 - 107
    Cited by:  Papers (612)  |  Patents (31)

    Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
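
    This is the paper that introduced what is now known as the A* algorithm: expand nodes in order of f(n) = g(n) + h(n), the cost accrued so far plus a heuristic estimate of the cost remaining. A minimal Python sketch; the graph encoding and heuristic interface here are illustrative assumptions, not the paper's notation:

        import heapq

        def astar(graph, h, start, goal):
            # graph: node -> iterable of (successor, nonnegative edge cost)
            # h: admissible heuristic, h(n) <= true cost from n to goal
            frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
            best_g = {start: 0}
            while frontier:
                f, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return g, path
                for succ, cost in graph.get(node, ()):
                    g2 = g + cost
                    if g2 < best_g.get(succ, float('inf')):
                        best_g[succ] = g2
                        heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
            return None   # goal unreachable

    With an admissible h, the first time the goal is popped from the frontier its path is optimal, which is the optimality property the paper proves.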

  • 2. Feature Selection in Pattern Recognition

    Publication Year: 1970 , Page(s): 33 - 39
    Cited by:  Papers (21)

    The problem of feature selection in pattern recognition is briefly reviewed. Feature selection techniques discussed include 1) the information-theoretic approach, 2) direct estimation of error probability, 3) feature-space transformation, and 4) an approach using a stochastic automata model. These techniques are applied to the selection of features in the crop classification problem. Computer simulation results are presented and compared.

  • 3. Information Value Theory

    Publication Year: 1966 , Page(s): 22 - 26
    Cited by:  Papers (45)  |  Patents (3)

    The information theory developed by Shannon was designed to place a quantitative measure on the amount of information involved in any communication. The early developers stressed that the information measure was dependent only on the probabilistic structure of the communication process. For example, if losing all your assets in the stock market and having whale steak for supper have the same probability, then the information associated with the occurrence of either event is the same. Attempts to apply Shannon's information theory to problems beyond communications have, in the large, come to grief. The failure of these attempts could have been predicted because no theory that involves just the probabilities of outcomes without considering their consequences could possibly be adequate in describing the importance of uncertainty to a decision maker. It is necessary to be concerned not only with the probabilistic nature of the uncertainties that surround us, but also with the economic impact that these uncertainties will have on us. In this paper the theory of the value of information that arises from considering jointly the probabilistic and economic factors that affect decisions is discussed and illustrated. It is found that numerical values can be assigned to the elimination or reduction of any uncertainty. Furthermore, it is seen that the joint elimination of the uncertainty about a number of even independent factors in a problem can have a value that differs from the sum of the values of eliminating the uncertainty in each factor separately.
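
    The central quantity is the difference between the best one can do with an uncertainty removed and the best one can do without. A small numerical sketch (the probabilities and payoffs are hypothetical):

        probs = [0.3, 0.7]                     # hypothetical event probabilities
        payoffs = {'act_a': [100, -20],        # payoff of each action under each event
                   'act_b': [10, 10]}

        # Best expected payoff acting now, under uncertainty.
        best_without = max(sum(p * v for p, v in zip(probs, vs)) for vs in payoffs.values())
        # Expected payoff with clairvoyance: learn the event first, then act optimally.
        with_clairvoyance = sum(p * max(vs[i] for vs in payoffs.values())
                                for i, p in enumerate(probs))
        value_of_information = with_clairvoyance - best_without   # 37 - 16 = 21 here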

  • 4. Prior Probabilities

    Publication Year: 1968 , Page(s): 227 - 241
    Cited by:  Papers (126)  |  Patents (1)

    In decision theory, mathematical analysis shows that once the sampling distribution, loss function, and sample are specified, the only remaining basis for a choice among different admissible decisions lies in the prior probabilities. Therefore, the logical foundations of decision theory cannot be put in fully satisfactory form until the old problem of arbitrariness (sometimes called "subjectiveness") in assigning prior probabilities is resolved. The principle of maximum entropy represents one step in this direction. Its use is illustrated, and a correspondence property between maximum-entropy probabilities and frequencies is demonstrated. The consistency of this principle with the principles of conventional "direct probability" analysis is illustrated by showing that many known results may be derived by either method. However, an ambiguity remains in setting up a prior on a continuous parameter space because the results lack invariance under a change of parameters; thus a further principle is needed. It is shown that in many problems, including some of the most important in practice, this ambiguity can be removed by applying methods of group theoretical reasoning which have long been used in theoretical physics. By finding the group of transformations on the parameter space which convert the problem into an equivalent one, a basic desideratum of consistency can be stated in the form of functional equations which impose conditions on, and in some cases fully determine, an "invariant measure" on the parameter space.
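
    For a finite outcome set constrained to a given mean, the maximum-entropy distribution has the exponential form p_i proportional to exp(lam * x_i). A small solver sketch; the die with a shifted average is a standard illustration of the principle, and the bisection solver is an assumption of this sketch:

        import math

        def maxent(values, target_mean, lo=-50.0, hi=50.0):
            # p_i ∝ exp(lam * x_i); bisect on lam until the mean constraint holds
            # (the constrained mean is monotonically increasing in lam).
            def mean(lam):
                w = [math.exp(lam * x) for x in values]
                return sum(x * wi for x, wi in zip(values, w)) / sum(w)
            for _ in range(200):
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if mean(mid) < target_mean else (lo, mid)
            lam = (lo + hi) / 2
            w = [math.exp(lam * x) for x in values]
            return [wi / sum(w) for wi in w]

        # A die whose long-run average is 4.5 rather than 3.5:
        print(maxent([1, 2, 3, 4, 5, 6], 4.5))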

  • 5. A Tutorial Introduction to Decision Theory

    Publication Year: 1968 , Page(s): 200 - 210
    Cited by:  Papers (8)

    Decision theory provides a rational framework for choosing between alternative courses of action when the consequences resulting from this choice are imperfectly known. Two streams of thought serve as the foundations: utility theory and the inductive use of probability theory. The intent of this paper is to provide a tutorial introduction to this increasingly important area of systems science. The foundations are developed on an axiomatic basis, and a simple example, the "anniversary problem," is used to illustrate decision theory. The concept of the value of information is developed and demonstrated. At times mathematical rigor has been subordinated to provide a clear and readily accessible exposition of the fundamental assumptions and concepts of decision theory. A sampling of the many elegant and rigorous treatments of decision theory is provided among the references.

  • 6. On the Inverse of Linear Dynamical Systems

    Publication Year: 1969 , Page(s): 43 - 48
    Cited by:  Papers (14)

    The problem considered is that of determining the inverse of a linear time-invariant dynamical system characterized by a first-order vector differential equation. This problem has application to various problems in control and estimation, where a state space representation is utilized. A necessary and sufficient condition is given for the existence of an inverse system and an algorithm is developed for the inverse system when one exists. The inverse algorithm generates a system composed of a differentiation system cascaded with a dynamical system.
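
    A sketch of the simplest special case, assuming a direct feedthrough term D that is square and invertible (an assumption of this sketch; the paper's algorithm covers systems without one by cascading differentiators with a dynamical system):

        import numpy as np

        def inverse_system(A, B, C, D):
            # For x' = Ax + Bu, y = Cx + Du with D invertible, the system
            #   x' = (A - B D^-1 C) x + (B D^-1) y,   u = (-D^-1 C) x + D^-1 y
            # driven by the output y reproduces the input u.
            Dinv = np.linalg.inv(D)
            return A - B @ Dinv @ C, B @ Dinv, -Dinv @ C, Dinv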

  • 7. The Foundations of Decision Analysis

    Publication Year: 1968 , Page(s): 211 - 219
    Cited by:  Papers (27)

    Decision analysis has emerged from theory to practice to form a discipline for balancing the many factors that bear upon a decision. Unusual features of the discipline are the treatment of uncertainty through subjective probability and of attitude toward risk through utility theory. Capturing the structure of problem relationships occupies a central position; the process can be visualized in a graphical problem space. These features are combined with other preference measures to produce a useful conceptual model for analyzing decisions, the decision analysis cycle. In its three phases (deterministic, probabilistic, and informational), the cycle progressively determines the importance of variables in deterministic, probabilistic, and economic environments. The ability to assign an economic value to the complete or partial elimination of uncertainty through experimentation is a particularly important characteristic. Recent applications in business and government indicate that the increased logical scope afforded by decision analysis offers new opportunities for rationality to those who wish it.

  • 8. Natural Bilinear Control Processes

    Publication Year: 1970 , Page(s): 192 - 197
    Cited by:  Papers (11)

    A nonlinear class of models for biological and physical processes is surveyed. It is shown that these so-called bilinear systems have a variable dynamical structure that makes them quite controllable. While control systems are classically designed so there are no unstable modes, bilinear systems may utilize appropriately controlled unstable modes of response to enhance controllability.

  • 9. Adaptive Estimation with Mutually Correlated Training Sequences

    Publication Year: 1970 , Page(s): 12 - 19
    Cited by:  Papers (27)

    The linear least-mean-square error (LMS) estimate of a scalar random variable given an observation of a vector-valued random variable (data) is well known. Computation of the estimate requires knowledge of the data correlation matrix. Algorithms have been proposed by Griffiths [9] and by Widrow [7] for iterative determination of the estimate of each element from a sequence of scalar random variables given an observation of the corresponding element from a sequence of data vectors when the data correlation matrix is not known. These algorithms are easy to implement, require little storage, and are suitable for real-time processing. Past convergence studies of these algorithms have assumed that the data vectors were mutually independent. In this study some asymptotic properties of these and other related algorithms are derived for a sequence of mutually correlated data vectors. A generalized algorithm is defined for analytic purposes. It is demonstrated for this generalized algorithm that excess mean-square error (as defined by Widrow) can be made arbitrarily small for large values of time in the correlated case. The analysis can be applied to a particular estimation scheme if 1) the particular algorithm can be placed in the generalized form, and 2) the given assumptions are satisfied. The analysis of the generalized algorithm requires that the data vectors possess only a few properties; foremost among these are ergodicity and a form of asymptotic independence. This analysis does not assume any particular probability distribution function nor any particular form of mutual correlation for the data vectors.
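
    The Widrow algorithm referred to is the LMS recursion, which needs no knowledge of the data correlation matrix. A minimal sketch (the step size mu is a hypothetical choice):

        import numpy as np

        def lms(data, targets, mu=0.01):
            # data: array of observation vectors (rows); targets: scalars to estimate.
            w = np.zeros(data.shape[1])
            for x, d in zip(data, targets):
                e = d - w @ x        # instantaneous estimation error
                w += mu * e * x      # stochastic-gradient correction
            return w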

  • 10. Optimal Estimation in the Presence of Unknown Parameters

    Publication Year: 1969 , Page(s): 38 - 43
    Cited by:  Papers (32)

    An adaptive approach is presented for optimal estimation of a sampled stochastic process with finite-state unknown parameters. It is shown that, for processes with an implicit generalized Markov property, the optimal (conditional mean) state estimates can be formed from 1) a set of optimal estimates based on known parameters, and 2) a set of "learning" statistics which are recursively updated. The formulation thus provides a separation technique which simplifies the optimal solution of this class of nonlinear estimation problems. Examples of the separation technique are given for prediction of a non-Gaussian Markov process with unknown parameters and for filtering the state of a Gauss-Markov process with unknown parameters. General results are given on the convergence of optimal estimation systems operating in the presence of unknown parameters. Conditions are given under which a Bayes optimal (conditional mean) adaptive estimation system will converge in performance to an optimal system which is "told" the value of unknown parameters.
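
    A sketch of the separation idea for a finite parameter set: run one estimator per candidate parameter value and recursively update posterior weights over the candidates. Gaussian measurement likelihoods and the scalar setup are assumptions of this sketch, not the paper's general formulation:

        import math

        def adaptive_step(weights, estimates, variances, z):
            # weights: current posterior over candidate parameter values;
            # estimates/variances: predicted measurement mean and variance from
            # the estimator matched to each candidate; z: the new measurement.
            likes = [math.exp(-(z - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                     for m, v in zip(estimates, variances)]
            posts = [w * l for w, l in zip(weights, likes)]
            total = sum(posts)
            posts = [p / total for p in posts]
            # Overall conditional-mean estimate: weighted average of matched estimates.
            return posts, sum(w * m for w, m in zip(posts, estimates))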

  • 11. A Formulation of Fuzzy Automata and Its Application as a Model of Learning Systems

    Publication Year: 1969 , Page(s): 215 - 223
    Cited by:  Papers (33)

    Based on the concept of fuzzy sets defined by Zadeh, a class of fuzzy automata is formulated similar to Mealy's formulation of finite automata. A fuzzy automaton behaves in a deterministic fashion. However, it has many properties similar to those of stochastic automata. Its application as a model of learning systems is discussed. A nonsupervised learning scheme in automatic control and pattern recognition is proposed with computer simulation results presented. An advantage of employing a fuzzy automaton as a learning model is its simplicity in design and computation.
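
    One common formulation propagates state membership grades by max-min composition; a sketch under that assumption (the paper's exact transition rule may differ):

        def fuzzy_step(membership, transition, symbol):
            # membership: state -> grade in [0, 1]
            # transition: (state, symbol, next_state) -> grade in [0, 1]
            # New grade of t: max over s of min(membership(s), transition(s, symbol, t)).
            states = list(membership)
            return {t: max(min(membership[s], transition[(s, symbol, t)]) for s in states)
                    for t in states}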

  • 12. Feature Extraction on Binary Patterns

    Publication Year: 1969 , Page(s): 273 - 278
    Cited by:  Papers (5)

    The objects and methods of automatic feature extraction on binary patterns are briefly reviewed. An intuitive interpretation for geometric features is suggested whereby such a feature is conceived of as a cluster of component vectors in pattern space. A modified version of the Isodata or K-means clustering algorithm is applied to a set of patterns originally proposed by Block, Nilsson, and Duda, and to another artificial alphabet. Results are given in terms of a figure-of-merit which measures the deviation between the original patterns and the patterns reconstructed from the automatically derived feature set.
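
    For reference, the unmodified K-means core the paper builds on: alternate between assigning vectors to the nearest cluster center and recomputing each center as the mean of its cluster (a sketch; the paper's modifications are not shown):

        import numpy as np

        def kmeans(vectors, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
            for _ in range(iters):
                # Assign each vector to its nearest center (squared Euclidean distance).
                labels = np.argmin(((vectors[:, None] - centers) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = vectors[labels == j].mean(axis=0)
            return centers, labels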

  • 13. Nonlinear Smoothing Theory

    Publication Year: 1970 , Page(s): 63 - 71
    Cited by:  Papers (9)

    Differential equations are developed for the smoothing density function and for the smoothed expectation of an arbitrary function of the state. The exact equations developed herein are difficult to solve except in trivially simple cases. Approximations to these equations are developed for the smoothed expectation of the state and the smoothing covariance matrix. For linear systems these equations are shown to reduce to previously derived results. An iterative technique is suggested for even greater accuracy in approximations for severely nonlinear systems.

  • 14. The Development of a Corporate Risk Policy for Capital Investment Decisions

    Publication Year: 1968 , Page(s): 279 - 300
    Cited by:  Papers (3)  |  Patents (2)

    A corporate utility function plays a key role in the application of decision theory. This paper describes how such a utility function was evolved as a risk policy for capital investment decisions. First, 36 corporate executives were interviewed and their risk attitudes were quantified. From the responses of the interviewees, a mathematical function was developed that could reflect each interviewee's attitude. The fit of the function was tested by checking the reaction of the interviewees to adjusted responses. The functional form that led the interviewees to prefer the adjusted responses to their initial responses was finally accepted. The mathematical form of the function was considered a flexible pattern for a risk policy. The assumption was made that the corporate risk policy would be of this pattern. With the pattern for a risk policy set, it was possible to simplify the method of deriving a particular individual's risk attitude. Using the simplified method, the corporate policy makers were interviewed once more. The results from these interviews were then used as a starting point in two negotiation sessions. As a result of these negotiation sessions, the policy makers agreed on a risk policy for trial purposes. They also agreed to develop a number of major projects using the concepts of risk analysis and the certainty equivalent.

  • 15. A Case Study of Citizen Complaints as Social Indicators

    Publication Year: 1970 , Page(s): 265 - 272

    The purpose is to illustrate the applicability of the approach and the techniques of systems engineering to certain urban problems. Systems engineering can be an effective tool in the design and operation of organizations to accomplish such urban activities as police scheduling, waste disposal, river purification, fire house location, etc. The relatively unploughed ground of applying systems engineering to the quality of urban life is addressed here. The quality of urban life, an elusive but intuitively satisfying concept, is operationally useful to the extent that a city can identify and move toward achieving the goals of its citizenry. Social indicators measure the extent to which these goals have been achieved. For such indicators to be usable as on-line inputs for determining changes in urban subsystems, they must respond rapidly and sensitively to the citizenry's changing perception of the gap between goals and actual achievements. Indicators aggregated over long intervals of time, large physical areas, or population groups tend to be sluggish and historical. It is shown how unsolicited complaints and comments from the citizenry may help to define such operationally useful social indicators. A conceptual framework emphasizing adaptive urban subsystems is presented, and data are used to illustrate the feasibility of the approach.

  • 16. The Present Status and Trends in Systems Engineering

    Publication Year: 1966 , Page(s): 1 - 2

    This paper is intended as an introduction to this issue which is devoted primarily to systems engineering methodology. As such, it explains the design of the issue, cites a few current events, points to the problems and opportunities of the field, and attempts to motivate further development of the field.

  • 17. Learning Applied to Successive Approximation Algorithms

    Publication Year: 1970 , Page(s): 97 - 103
    Cited by:  Papers (11)

    A linear reinforcement learning technique is proposed to provide a memory and thus accelerate the convergence of successive approximation algorithms. The learning scheme is used to update weighting coefficients applied to the components of the correction terms of the algorithm. A direction of the search approaching the direction of a "ridge" will result in a gradient peak-seeking method which considerably accelerates convergence to a neighborhood of the extremum. In a stochastic approximation algorithm the learning scheme provides the required memory to establish a consistent direction of search insensitive to perturbations introduced by the random variables involved. The accelerated algorithms and the respective proofs of convergence are presented. Illustrative examples demonstrate the validity of the proposed algorithms.

  • 18. On How Often the Supervisor Should Sample

    Publication Year: 1970 , Page(s): 140 - 145
    Cited by:  Papers (17)

    A procedure is presented for specifying how long a supervisor or monitor of a process should wait between input samples to maximize a given value or payoff function, assuming he resets the controls with each sample as a function of the best information he has. The procedure is based upon Bayesian preposterior information analysis.

  • 19. Pattern Recognition Approach to Medical Diagnosis

    Publication Year: 1970 , Page(s): 173 - 178
    Cited by:  Papers (6)

    A sequential method of pattern recognition was used to recognize hyperthyroidism in a sample of 2208 patients being treated at the Straub Clinic in Honolulu, Hawaii. For this, the method of class featuring information compression (CLAFIC) [1] was used, introducing some significant improvements in computer medical diagnosis, which by its very nature is a pattern recognition problem. A unique subspace characterizes each class at every decision stage, and the most prominent class features are selected. Thus the symptoms which best distinguish hyperthyroidism are extracted at every step and the number of tests required to reach a diagnosis is reduced.
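
    CLAFIC is a subspace method: each class is represented by the span of the leading principal directions of its (uncentered) correlation matrix, and a sample is assigned to the class whose subspace captures most of its energy. A minimal sketch of that core (the paper's sequential selection of tests is not shown):

        import numpy as np

        def clafic_fit(class_samples, dim):
            # class_samples: list of samples-by-features arrays, one per class.
            bases = []
            for X in class_samples:
                _, _, vt = np.linalg.svd(X, full_matrices=False)
                bases.append(vt[:dim])     # orthonormal basis of the class subspace
            return bases

        def clafic_classify(x, bases):
            # The class subspace with the largest projection norm wins.
            return int(np.argmax([np.linalg.norm(b @ x) for b in bases]))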

  • 20. Systems Analysis as a Tool for Urban Planning

    Publication Year: 1970 , Page(s): 258 - 265
    Cited by:  Papers (5)

    New ways are becoming available for analyzing our social systems. These permit the design of revised policies to improve the behavior of the systems within which we live. Industrial dynamics relates system structure to behavior. Industrial dynamics belongs to the same general subject area as feedback systems, servomechanisms theory, and cybernetics. Industrial dynamics is the study of how the feedback loop structure of a system produces the dynamic behavior of that system. In managerial terms industrial dynamics makes possible the structuring of the components and policies of a system to show how the resulting dynamic behavior is produced. In terms of social systems it deals with the forces that arise within a system to cause changes through time. The structure of an urban area has been organized into a system model to show the life cycle dynamics of urban growth and decay. The results suggest that most of today's popular urban policies lie between neutral and detrimental in their effectiveness. Quite different policies are suggested when one comes to an understanding of why urban areas evolve as they do. The city shows the general characteristics of a complex system.

  • 21. A stochastic automaton model for the synthesis of learning systems

    Publication Year: 1966 , Page(s): 109 - 114
    Cited by:  Papers (1)

    A class of stochastic automaton models for the synthesis of a learning system to operate in a random environment is proposed. These models are based on defining a learning algorithm which relates the probability distribution of the response and the corresponding performance of the system. For different forms of the learning algorithm which satisfy specified requirements, with particular emphasis on a linear algorithm, the following desired learning behavior is shown to hold. 1) The mean performance converges monotonically to an extreme value, and 2) a criterion is available for determining the best response in the time limit. The learning models provide the desired learning behavior in an on-line manner while requiring little a priori knowledge and/or assumptions concerning the environment. Some applications of the learning models to engineering systems are considered.
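
    One standard linear scheme of this kind is reward-inaction: on a favorable environment response, move probability mass toward the chosen response; otherwise leave the distribution alone. A sketch of a single update (not necessarily the paper's exact algorithm):

        def linear_reward_step(probs, action, rewarded, a=0.1):
            # probs: response probability distribution; a: learning rate in (0, 1).
            if rewarded:
                probs = [p + a * (1 - p) if i == action else p * (1 - a)
                         for i, p in enumerate(probs)]
            return probs   # still sums to 1 after the update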

  • 22. On the Modeling of Police Patrol Operations

    Publication Year: 1970 , Page(s): 276 - 281

    An analytical approach to several operational problems of urban police departments is introduced. In particular, the focus is on urban police patrol forces and the two important activities of a patrol car (or patrol unit): 1) answering a call for police service, and 2) performing crime-preventive patrol. After reviewing the traditional allocation method, a patrol travel time model and a preventive patrol model are developed. The first depicts the time required for a patrol unit to travel from its position at time of dispatch until arrival at the scene of the incident. The second relates the frequency of patrols to physical parameters and can be used to estimate the probability that a patrolling unit will intercept a crime while in progress. Applications of the models are discussed.

  • 23. Systems Theory from an Operations Research Point of View

    Publication Year: 1965 , Page(s): 9 - 13
    Cited by:  Papers (2)

    Operations Research's interest in systems is confined to normative aspects of systems which are at least partially self-controllable. A mathematical scheme is developed for representing the structure and behavior of such systems. By use of this scheme it is possible to isolate and measure inefficiencies caused by structure, communications, decision making, and by any combination of these. The effect of structure on inefficiency due to communications and decision making is also subject to analysis. Finally, two methods of centralized control of "less-than-optimally" structured systems are described: adjustment of parameters and the use of constraints.

  • 24. Bayesian Decision Models for System Engineering

    Publication Year: 1965 , Page(s): 36 - 40
    Cited by:  Papers (10)

    This paper shows how modern developments in statistical decision theory can be applied to a typical systems engineering problem. The problem is how to design an experiment to evaluate a reliability parameter for a device and then make a decision about whether to accept a contract for the development and maintenance of a system of these devices. We introduce the concept of subjective probability distribution to permit encoding prior knowledge about the uncertainty in the process. The expected value of clairvoyance is computed as an upper bound to the value of any experimental program. The structure of decision trees serves as a means for establishing the optimum size and type of experimentation and for acting on the basis of experimental results. The subjective probability approach to decision processes allows us to consider and solve problems that previously we could not even formulate.

  • 25. A State-Space Model for Resource Allocation in Higher Education

    Publication Year: 1968 , Page(s): 108 - 118
    Cited by:  Papers (3)

    A state-space model describes the behavioral characteristics of a system as a set of relationships among time functions representing its inputs, outputs, and internal state. The model presented describes the utilization of a university's basic resources of personnel, space, and technological equipment in the production of degree programs, research, and public or technical services. It is intended as an aid in achieving an optimal allocation of resources in higher education and in predicting future needs. The internal state of the system is defined as the distribution of students into levels and fields of study, with associated unit "costs" of education received. The model is developed by interconnecting, with appropriate constraints, independent submodels of major functional segments of university activity. The development of computer programs for estimation of parameters with continual updating and for simulation of the system behavior is described. This description includes a review of machine-addressable data files needed to implement the programs. The state model provides a natural form for approaching problems of system optimization and control. The paper discusses the question of control inputs and the feasibility of developing a formal optimal control policy for a university with essentially "open door" admissions.

  • 26. Probabilistic Information Processing Systems: Design and Evaluation

    Publication Year: 1968 , Page(s): 248 - 265
    Cited by:  Papers (8)

    A Probabilistic Information Processing System (PIP) uses men and machines in a novel way to perform diagnostic information processing. Men estimate likelihood ratios for each datum and each pair of hypotheses under consideration or a sufficient subset of these pairs. A computer aggregates these estimates by means of Bayes' theorem of probability theory into a posterior distribution that reflects the impact of all available data on all hypotheses being considered. Such a system circumvents human conservatism in information processing, the inability of men to aggregate information in such a way as to modify their opinions as much as the available data justify. It also fragments the job of evaluating diagnostic information into small separable tasks. The posterior distributions that are a PIP's output may be used as a guide to human decision making or may be combined with a payoff matrix to make decisions by means of the principle of maximizing expected value. A large simulation-type experiment compared a PIP with three other information processing systems in a simulated strategic war setting of the 1970's. The difference between PIP and its competitors was that in PIP the information was aggregated by computer, while in the other three systems, the operators aggregated the information in their heads. PIP processed the information dramatically more efficiently than did any competitor. Data that would lead PIP to give 99:1 odds in favor of a hypothesis led the next best system to give 4½:1 odds.
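
    The aggregation step is Bayes' theorem in odds form: posterior odds equal prior odds times the product of the operator-estimated likelihood ratios. A one-line sketch:

        from math import prod

        def pip_posterior_odds(prior_odds, likelihood_ratios):
            # Each likelihood ratio is a human judgment of P(datum | H1) / P(datum | H2).
            return prior_odds * prod(likelihood_ratios)

        print(pip_posterior_odds(1.0, [2.0, 3.0, 1.5]))   # -> 9.0, i.e., 9:1 for H1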

  • 27. Decision Analysis for Product Development

    Publication Year: 1968 , Page(s): 342 - 354

    The decision analysis described considers four major installation development alternatives upon which management decisions were required. The analysis has a number of phases. The technical-economic system is first modeled deterministically to describe the characteristics of the business. The deterministic model, or expected value business model, is then exercised in a sensitivity phase to determine what parameters are most influential to the outcomes. The analysis uses present worth methods for the time preference for money. Pricing strategies and market feedback capability are included. Adjunct businesses that develop as a result of the original business have been modeled in the deterministic model. The deterministic or nominal value outcomes show a clear progression of improvement over a 20 year period for the more technically advanced installation developments. The cost of development is an influential contributor to deterministic outcomes. The influence of development costs on near-term outcomes was sufficient to make the time value of outcomes a very significant issue. Identifying short-term and long-term outcomes on the basis of the time value of outcomes proved a significant contribution to decision making. The thinking that went into choosing the parameters used in the uncertainty analysis proved to be an important means of clarifying what uncertainty analysis of a given development alternative really consists of.

  • 28. Three-Dimensional Morphology of Systems Engineering

    Publication Year: 1969 , Page(s): 156 - 160
    Cited by:  Papers (23)

    A study of the structure and form of systems engineering using the technique of morphological analysis is presented. The result is a model of the field of systems engineering that may be rich in applications. Three uses given for illustration are in taxonomy, discovery of new sets of activities, and systems science curriculum design.

  • 29. Visual Feature Extraction by a Multilayered Network of Analog Threshold Elements

    Publication Year: 1969 , Page(s): 322 - 333
    Cited by:  Papers (15)  |  Patents (1)

    A new type of visual feature extracting network has been synthesized, and the response of the network has been simulated on a digital computer. This research has been done as a first step towards the realization of a recognizer of handwritten characters. The design of the network was suggested by biological systems, especially the visual systems of cat and monkey. The network is composed of analog threshold elements connected in layers. Each analog threshold element receives inputs from a large number of elements in the neighbouring layers and performs its own special functions. It takes care of one restricted part of the photoreceptor layer, on which an input pattern is presented, and it responds to one particular feature of the input pattern, such as brightness contrast, a dot in the pattern, a line segment of a particular orientation, or an end of the line. This means that the network performs parallel processing of the information. With the propagation of the information through the layered network, the input pattern is successively decomposed into dots, groups of line segments of the same orientation, and the ends of these line segments.

  • 30. On the Extraction of Pattern Features from Continuous Measurements

    Publication Year: 1970 , Page(s): 110 - 115
    Cited by:  Papers (5)

    A suboptimum method of extracting features, by linear operations, from continuous data belonging to M pattern classes is presented. The set of features selected minimizes bounds on the probability of error obtained from the Bhattacharyya distance and the Hajek divergence. The random processes associated with the pattern classes are assumed to be Gaussian with different means and covariance functions. For M=2, in the two special cases in which, respectively, the means and the covariance functions are the same, both the above distance measures yield the same answer. The results obtained represent an extension of the existing results for two pattern classes with the same means and different covariance functions.

  • 31. Systems Engineering in the Steel Industry

    Publication Year: 1966 , Page(s): 9 - 15
    Cited by:  Papers (1)

    The American steel industry is constantly facing challenges presented by new competitive materials, rising foreign imports, increasing labor costs, and new more complex technology. Steel industry customers are demanding and receiving tighter tolerances on their steel strip, sheet, and plate products. Steel is an ancient industry compared to today's space industry. Simple, cheap solutions are pretty well exhausted. Plant processes are both extremely expensive and productive. While change has been a way of life for years in the steel industry, the opportunities for profitable change today are fantastic compared with five years ago, due mostly to digital computers. The self-regulation and tighter control achievable with automatic feedback, in addition to the unifying concepts of systems engineering, provide a proved technical approach to the solution of today's steel plant manufacturing and production control problems. The logic and discipline of critical task network planning are being used to assure profitable on-schedule implementation of automation systems that fully utilize the potentialities of both the present day digital computers and the current engineering technology.

  • 32. A Recognition Algorithm for Handprinted Arabic Numerals

    Publication Year: 1970 , Page(s): 246 - 250
    Cited by:  Papers (2)

    A recognition algorithm for handprinted Arabic numerals is proposed. The algorithm is applied to a set of test samples and the test results are presented.

  • 33. System Study of Emission Control for Passenger Cars

    Publication Year: 1970 , Page(s): 311 - 321

    The results of a total system analysis leading to the optimal design of an automotive exhaust air pollution control device are presented. The purpose is to demonstrate a capability for effectively optimizing the design parameters of a subsystem in an operating environment, subject to three optimality criteria: minimum cost for fixed performance; maximum effectiveness (emission reduction) per total cost dollar; and maximum society benefit per total cost dollar. The cost and performance models of the emission control device are developed. Controlled emissions of a representative vehicle are compared to federal standard levels and extrapolated to the population of vehicles assuming an inspection and maintenance policy. Total emissions are distributed in the air shed, and ground level concentrations are determined. These concentrations are compared with those of an uncontrolled population, and the average benefits of control are determined. The total costs to society of providing maintained control devices are compared to the benefits. Design optimization is performed based on the costs and benefits subject to objective function constraints using a general-purpose optimization language and computer executive SLANG/CUE.

  • 34. The Use of Bayesian Inference in the Design of an Endpoint Control System for the Basic Oxygen Steel Furnace

    Publication Year: 1970 , Page(s): 339 - 348
    Cited by:  Papers (2)

    A digital simulation of the basic oxygen steel furnace was previously developed, and its output was compared with the available data taken from the literature. The output concentrations were within 10 percent of the literature data, while the simulated temperature was within 0.5 percent. The simulation is used as an off-line model of the process to design an endpoint control system which makes use of the available feedback from the process. Feedback consists of previously existing instrumentation for effluent gas analysis and an instrument designed for quick carbon analysis. The same instrument, independently conceived by Bethlehem Steel, has been proved effective by them. The control system uses Bayesian inference to evaluate process feedback optimally. Equations have been developed and a computational algorithm designed enabling real-time calculation of the probability of a carbon-temperature state given any control action and imperfect measurements. Because the objective function is almost symmetric and the cost of control is minimal compared to the value of an endpoint state, optimal control drives the expected state vector to the center of the tolerance region.

  • 35. On a Simple Minkowski Metric Classifier

    Publication Year: 1970 , Page(s): 360 - 362
    Cited by:  Papers (2)

    A classifier which, in general, implements a nonlinear decision boundary is shown to be equivalent to a linear discriminant function when the measurements are binary valued; its relation to the Bayes classifier is derived. The classifier requires less computation than a similar one based on the Euclidean distance and can perform equally well.
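
    A sketch of the idea, assuming classification by city-block (Minkowski order-1) distance to stored class means; on binary-valued measurements the resulting decision rule is linear in x:

        import numpy as np

        def l1_classify(x, class_means):
            # Nearest class mean under the city-block metric; no squares or
            # square roots are needed, unlike the Euclidean version.
            return int(np.argmin([np.abs(x - m).sum() for m in class_means]))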

  • 36. Systems Engineering from an Engineering Viewpoint

    Publication Year: 1965 , Page(s): 4 - 8
    Cited by:  Papers (2)

    This paper considers various ways of defining systems engineering and surveys the systems engineering process. Part of the process is then applied to the problem of the Systems Science Committee.

  • 37. General Systems and Systems Engineering

    Publication Year: 1966 , Page(s): 3 - 7

    One of the main lines of thought in General Systems is the transfer of system concepts from one field to another. There has been fruitful application of such theoretical concepts from engineering to other fields such as biology. In this paper, an attempt is made to apply concepts from biology to the practice of systems engineering. The relation of systems engineering to other fields is discussed, utilizing an intellectual framework based on the concepts of speciation and competition between species. The internal social structure of the profession and of individual organizations is considered, using the concept of competition within a species. Examples are drawn from biology to illustrate the points at issue.

  • 38. Decomposition, Coordination, and Multilevel Systems

    Publication Year: 1966 , Page(s): 36 - 40
    Cited by:  Papers (7)

    General approaches to the design of hierarchical systems are indicated which are particularly relevant to problems of optimal control of discrete systems. A decomposition technique for interacting linear dynamic systems is shown to lead to an optimum 2-level technique having equivalent performance. Comments on computational algorithms for realizing this technique are included.

  • 39. An Experimental Investigation of Process Identification by Competitive Evolution

    Publication Year: 1967 , Page(s): 11 - 16
    Cited by:  Papers (8)

    The feasibility of using evolutionary techniques to construct a model for a given process was investigated. Programs were written to synthesize six competing models and to adjust, combine, and eliminate these according to their performance with respect to the actual process. Results show that although the use of evolutionary techniques is promising for identifying processes having nonanalytic properties, it is also very costly with respect to computer time.

  • 40. A Stochastic Differential Game with Controllable Statistical Parameters

    Publication Year: 1967 , Page(s): 17 - 20
    Cited by:  Papers (6)

    A differential game is described where the opponents are a missile and a radar. The problem is formulated so that linear theory can be used in finding the optimal strategies of both opponents. The missile uses a mixed strategy by adding white noise into its controller. The statistics of this noise (the covariance completely describes the noise) can be controlled. In finding the optimal strategy for the covariance of the controller noise, the solution by means of the calculus of variations involves a singular arc.

  • 41. Systems Engineering Problems in Computer-Driven CRT Displays for Man-Machine Communication

    Publication Year: 1967 , Page(s): 47 - 54

    Computer-driven cathode-ray tube (CRT) displays are becoming an important means of on-line man-machine communication, particularly for graphical input/output in laboratory investigations of computer-aided design techniques. Their operation, however, often requires so much of the computational resources of the associated computer that they are not yet considered economic or practical for general industrial use. This paper discusses the systems engineering problems in designing and using display systems, with emphasis on the hardware-software tradeoffs. As an example, a display specifically developed for computer-aided design applications is described which has unusual special-purpose computing capabilities for dynamic picture manipulations, including rotation, scaling, and translation of 3-dimensional images. It is concluded that there is much work ahead, and that the proper hardware-software organization for these complexes of computers, communication links, terminals, and men is a fertile field for the systems engineer.

  • 42. Extremization of Constrained Multivariable Function: Structural Programming

    Publication Year: 1967 , Page(s): 105 - 111

    This paper describes how to find the extremum of a constrained multivariable function by taking advantage of its structure. Three steps lead to the solution. The first step discusses restructuring a given problem, reformulating it in a way which makes possible the calculations performed in the next two steps. The second step presents the conversion of the restructured problem into a block diagram. The last step presents the problem solution from the block diagram obtained.

  • 43. On the Efficiency of Learning Machines

    Publication Year: 1967 , Page(s): 111 - 116

    The efficiency of the learning process is investigated in terms of the efficiency of abstract learning machines. The measure of efficiency defined gives an estimate of the minimum period for a system to absorb a given quantity of information, makes possible a quantitative comparison of different learning systems, and establishes a quantity which designers of adaptive machines should attempt to maximize.

  • 44. Two Viewpoints of k-Tuple Pattern Recognition

    Publication Year: 1967 , Page(s): 117 - 120
    Cited by:  Papers (1)

    This paper presents two viewpoints of the k-tuple pattern recognition scheme proposed by Browning and Bledsoe. The first shows that k-tuple pattern recognition is a statistical approximation technique. In effect, the recognition is accomplished by approximating a higher order probability distribution by use of the first-order distributions. Using this viewpoint, and Lewis' measure of characteristic selection, several alternative approximations are offered. The second viewpoint is that recognition is a special case, or subclass, of a Φ learning machine. It can be shown that if the input pattern vector X is first processed by a Φ-processor (in this case a kth-order polynomial) and then certain terms discarded, the resulting learning machine is identical to a k-tuple pattern recognition machine.
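
    In the Browning-Bledsoe scheme, k-tuples of pixel positions are sampled, and each class memorizes which k-bit configurations its training patterns produce; recognition scores a pattern by the number of matches per class. A minimal sketch (the tuple count, k, and the interfaces are illustrative assumptions):

        import random
        from collections import defaultdict

        def ktuple_train(patterns_by_class, n_bits, k=3, n_tuples=20, seed=1):
            rng = random.Random(seed)
            tuples = [tuple(rng.sample(range(n_bits), k)) for _ in range(n_tuples)]
            seen = defaultdict(set)   # (class, tuple index) -> configurations seen
            for c, patterns in patterns_by_class.items():
                for p in patterns:
                    for i, t in enumerate(tuples):
                        seen[(c, i)].add(tuple(p[j] for j in t))
            return tuples, seen

        def ktuple_classify(p, tuples, seen, classes):
            # Score each class by how many stored k-tuple configurations match p.
            score = {c: sum(tuple(p[j] for j in t) in seen[(c, i)]
                            for i, t in enumerate(tuples)) for c in classes}
            return max(score, key=score.get)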

  • 45. Learning Games through Pattern Recognition

    Publication Year: 1968 , Page(s): 12 - 16
    Cited by:  Papers (3)

    The objective of this research was to investigate a technique for machine learning that would be useful in solving problems involving forcing states. In games or control problems a forcing state is one from which the final goal can always be reached, regardless of what disturbances may arise. A program that learns forcing states in a class of games (in a game-independent format) by working backwards from a previous loss has been written. The class of positions that ultimately results in the opponent's win is learned by the program (using a specially designed description language) and stored in its memory together with the correct move to be made when this pattern reoccurs. These patterns are searched for during future plays of the game. If they are formed by the opponent, the learning program blocks them before the opponent's win sequence can begin. If it forms the patterns first, the learning program initiates the win sequence. The class of games for which the program is effective includes Qubic, Go-Moku, Hex, and the Shannon network games, including Bridge-it. The description language enables the learning program to generalize from one example of a forcing state to all other configurations that are strategically equivalent.

  • 46. A Feature-Detection Program for Patterns with Overlapping Cells

    Publication Year: 1968 , Page(s): 16 - 23
    Cited by:  Papers (3)

    An attempt is made to extract feature information automatically from patterns which may consist of open lines, partially overlapping cells, and cells that may lie entirely inside another cell. The usual pattern-recognition techniques, such as the linear threshold logic technique and the masking or template technique, are not practical here, if not entirely impossible. In this paper, a direct-search computer program using a heuristic approach is described. A test pattern is used to illustrate the capability of the program. The subject should be of general interest to those in the field of automation and cybernetics.

  • 47. Testing the NORAD Command and Control System

    Publication Year: 1968 , Page(s): 47 - 51
    Cited by:  Papers (1)

    This paper describes closed-loop testing of the NORAD Command and Control System's ability to perform its assigned task: provide support for CINCNORAD in directing the air defense of North America. Special techniques were required to perform this testing since human beings were an integral part of the feedback loop. System test design and results are treated.

  • 48. On Expediency and Convergence in Variable-Structure Automata

    Publication Year: 1968 , Page(s): 52 - 60
    Cited by:  Papers (33)

    A stochastic automaton responds to the penalties from a random environment through a reinforcement scheme by changing its state probability distribution in such a way as to reduce the average penalty received. In this manner the automaton is said to possess a variable structure and the ability to learn. This paper discusses the efficiency of learning for an m-state automaton in terms of expediency and convergence, under two distinct types of reinforcement schemes: one based on penalty probabilities and the other on penalty strengths. The functional relationship between the successive probabilities in the reinforcement scheme may be either linear or nonlinear. The stability of the asymptotic expected values of the state probability is discussed in detail. The conditions for optimal and expedient behavior of the automaton are derived. Reduction of the probability of suboptimal performance by adopting the Beta model of the mathematical learning theory is discussed. Convergence is discussed in the light of variance analysis. The initial learning rate is used as a measure of the overall convergence rate. Learning curves can be obtained by solving nonlinear difference equations relating the successive expected values. An analytic expression concerning the convergence behavior of the linear case is derived. It is shown that by a suitable choice of the reinforcement scheme it is possible to increase the separation of asymptotic state probabilities.

  • 49. Duality and Decomposition in Mathematical Programming

    Publication Year: 1968 , Page(s): 86 - 100
    Cited by:  Papers (11)

    The problem considered is that of obtaining solutions to large nonlinear mathematical programs by coordinated solution of smaller subproblems. If all functions in the original problem are additively separable, this can be done by finding a saddle point for the associated Lagrangian function. Coordination is then accomplished by shadow prices, with these prices chosen to solve a dual program. Characteristics of the dual program are investigated, and an algorithm is proposed in which subproblems are solved for given shadow prices. These solutions provide the value and gradient of the dual function, and this information is used to update the shadow prices so that the dual problem is brought closer to solution. Application to two classes of problems is given. The first class is one whose constraints describe a system of coupled subsystems; the second is a class of multi-item inventory problems whose decision variables may be discrete.
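
    A sketch of the price-coordination loop, under the assumption of coupling constraints of the form "total usage <= resource"; the subproblem interface is hypothetical, and each subproblem is taken to return the resource usage of its own optimal solution at the current shadow prices:

        import numpy as np

        def coordinate(subproblems, resource, steps=100, step0=1.0):
            # Dual (sub)gradient ascent on the shadow prices: the gradient of the
            # dual function is the aggregate constraint violation at the current
            # subproblem solutions, so prices rise where the resource is overused.
            prices = np.zeros_like(resource, dtype=float)
            for t in range(1, steps + 1):
                usage = sum(sub(prices) for sub in subproblems)
                prices = np.maximum(0.0, prices + (step0 / t) * (usage - resource))
            return prices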

  • 50. The Widget Problem Revisited

    Publication Year: 1968 , Page(s): 241 - 248
    Cited by:  Papers (3)

    The Jaynes "widget problem" is reviewed as an example of an application of the principle of maximum entropy in the making of decisions. The exact solution yields an unusual probability distribution. The problem illustrates why some kinds of decisions can be made intuitively and accurately, but would be difficult to rationalize without the principle of maximum entropy.
