# 1999 Ninth International Conference on Artificial Neural Networks ICANN 99. (Conf. Publ. No. 470)

Displaying Results 1 - 25 of 90
• ### Recurrent learning of input-output stable behaviour in function space: A case study with the Roessler attractor

Publication Year: 1999, Page(s):761 - 766 vol.2
Cited by:  Papers (1)
PDF (404 KB)

We analyse the stability of the input-output behaviour of a recurrent network. It is trained to implement an operator implicitly given by the chaotic dynamics of the Roessler attractor. Two of the attractor's coordinate functions are used as network input and the third defines the reference output. Using previously developed methods we show that the trained network is input-output stable and comput…

• ### One sensor learning from another

Publication Year: 1999, Page(s):755 - 760 vol.2
PDF (536 KB)

Sensor interpretation in mobile robots often involves an inverse sensor model of the sensors used. Building inverse sensor models for sonar sensor assemblies is a particularly difficult problem that has received much attention in past years. A common solution is to train neural networks using supervised learning. However, large amounts of training data are typically needed, consisting, for example…

• ### An analysis of initial state dependence in generalized LVQ

Publication Year: 1999, Page(s):928 - 933 vol.2
Cited by:  Papers (5)
PDF (348 KB)

The author proposed a new formulation of learning vector quantisation (LVQ) called generalized LVQ (GLVQ) based on the minimum classification error criterion. In this paper, the initial state dependence in GLVQ is discussed, and it is clarified that the learning rule should be modified to make it insensitive to the initial values of reference vectors. The robustness of the modified GLVQ for the in…
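The reference-vector update whose initial-state sensitivity is at issue can be illustrated with a single step of basic LVQ (a hedged sketch of standard LVQ1, not the modified GLVQ rule the paper derives; the prototypes and sample below are illustrative):

```python
def lvq1_step(prototypes, labels, x, y, lr=0.3):
    """One step of basic LVQ1: the nearest prototype (reference vector)
    moves toward sample x if its label matches y, away otherwise.
    Which prototype starts nearest decides the outcome -- the kind of
    initial-state dependence the paper analyses for GLVQ."""
    win = min(range(len(prototypes)),
              key=lambda i: sum((p - xi) ** 2
                                for p, xi in zip(prototypes[i], x)))
    sign = 1.0 if labels[win] == y else -1.0
    prototypes[win] = [p + sign * lr * (xi - p)
                       for p, xi in zip(prototypes[win], x)]
    return win

prototypes = [[0.0], [1.0]]   # initial reference vectors (illustrative)
labels = [0, 1]
win = lvq1_step(prototypes, labels, x=[0.2], y=0)
print(win, prototypes)        # prototype 0 wins and moves toward 0.2
```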

• ### Selecting features in neurofuzzy modelling by multiobjective genetic algorithms

Publication Year: 1999, Page(s):749 - 754 vol.2
Cited by:  Papers (3)  |  Patents (1)
PDF (476 KB)

Empirical modelling in high dimensional spaces is usually preceded by a feature selection stage. Irrelevant or noisy features unnecessarily increase the complexity of the problem and can degrade modelling performance. Here, multiobjective genetic algorithms are proposed as effective means of evolving a diverse population of alternative feature sets with various accuracy/complexity trade-offs. They…

• ### Active topographic mapping of proximities

Publication Year: 1999, Page(s):952 - 957 vol.2
Cited by:  Papers (5)
PDF (452 KB)

We deal with the question of how to reduce the computational costs of obtaining and clustering dissimilarity data. We show that for pairwise clustering, a large portion of the dissimilarity data can be neglected without incurring a serious deterioration of the clustering solution. This fact can be exploited by selecting the dissimilarity values that are supposed to be most relevant in a well-direc…

• ### Nonlinear dimensionality reduction with input distances preservation

Publication Year: 1999, Page(s):922 - 927 vol.2
PDF (300 KB)

A new error term for dimensionality reduction, which clearly improves the quality of nonlinear principal component analysis neural networks, is introduced, and some illustrative examples are given. The method maintains the original data structure by preserving the distances between data points.
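The abstract does not give the exact error term, so the sketch below shows a generic distance-preservation error of the same flavour (a Sammon-style sum of squared distance differences; an assumption, not the paper's formula):

```python
import math

def distance_preservation_error(X, Y):
    """Sum of squared differences between all pairwise distances in the
    original space X and the reduced space Y.  (A generic Sammon-style
    term; the paper's exact error term is not given in the abstract.)"""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    n = len(X)
    return sum((dist(X[i], X[j]) - dist(Y[i], Y[j])) ** 2
               for i in range(n) for j in range(i + 1, n))

X = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(distance_preservation_error(X, X))  # 0.0: identity preserves distances
```

An error of zero means the mapping preserves all pairwise distances exactly; any distortion contributes positively.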

• ### Learning error-correcting output codes from data

Publication Year: 1999, Page(s):743 - 748 vol.2
Cited by:  Papers (6)
PDF (392 KB)

A polychotomizer which assigns the input to one of K⩾3 classes is constructed using a set of dichotomizers, each of which assigns the input to one of two classes. Classes are defined in terms of the dichotomizers by a binary decomposition matrix of size K×L, where each of the K⩾3 classes is written as an error-correcting output code (ECOC), i.e., an array of the responses of the binary decisions made …
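The decoding step that ECOC relies on can be sketched in a few lines (standard minimum-Hamming-distance decoding; the codes below are a plain one-vs-rest decomposition, not codes learned from data as the paper proposes):

```python
def ecoc_decode(code_matrix, outputs):
    """Assign the class whose codeword is nearest, in Hamming distance,
    to the L dichotomizer outputs.  (Standard ECOC decoding; the codes
    used here are a simple one-vs-rest decomposition, not learned.)"""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    distances = [hamming(row, outputs) for row in code_matrix]
    return distances.index(min(distances))

# K = 3 classes, L = 3 dichotomizers (one-vs-rest codewords).
codes = [[1, 0, 0],
         [0, 1, 0],
         [0, 0, 1]]
print(ecoc_decode(codes, [0, 1, 0]))  # clean outputs: class 1
print(ecoc_decode(codes, [1, 1, 0]))  # one flipped bit: nearest class wins
```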

• ### Optimization of surface component mounting on the printed circuit board using SOM-TSP method

Publication Year: 1999, Page(s):643 - 648 vol.2
PDF (508 KB)

We propose the application of the self-organising map travelling salesman problem (SOM-TSP) method to optimising the efficiency of surface mounting of electronic parts on the printed circuit board. From the numerical experiment, it was found that the time required for mounting electronic parts can be decreased by our proposed method compared to the mounting system's built-in method. As a result,…

• ### Recognition of gene regulatory sequences by bagging of neural networks

Publication Year: 1999, Page(s):988 - 993 vol.2
PDF (500 KB)

The authors use an ensemble of multilayer perceptrons to build a model for a type of gene regulatory sequence called a G-box. A variant of the bagging method (bootstrap-and-aggregate) improves the performance of the ensemble over that of a single network. Through a decomposition of the generalization error of the ensemble into bias and variance components, the authors estimate this error from the…
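The bootstrap-and-aggregate scheme can be sketched with a toy base learner standing in for the multilayer perceptrons (the threshold learner and the one-dimensional data below are illustrative assumptions, not the paper's setup):

```python
import random

def bootstrap(data, rng):
    """Resample the training set with replacement (the 'bootstrap' part)."""
    return [rng.choice(data) for _ in data]

def train_threshold(sample):
    """Toy base learner standing in for an MLP: threshold halfway between
    the two class means.  Falls back to a constant classifier when a
    bootstrap sample happens to contain a single class."""
    xs0 = [x for x, y in sample if y == 0]
    xs1 = [x for x, y in sample if y == 1]
    if not xs0 or not xs1:
        label = sample[0][1]
        return lambda x: label
    thr = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: int(x > thr)

def bagged_predict(models, x):
    """The 'aggregate' part: majority vote over the ensemble."""
    votes = sum(m(x) for m in models)
    return int(2 * votes >= len(models))

rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
models = [train_threshold(bootstrap(data, rng)) for _ in range(15)]
print(bagged_predict(models, 0.15), bagged_predict(models, 0.85))
```

Averaging over bootstrap replicates mainly reduces the variance component of the generalization error, which is why the bias/variance decomposition mentioned in the abstract is the natural analysis tool.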

• ### Local gain adaptation in stochastic gradient descent

Publication Year: 1999, Page(s):569 - 574 vol.2
Cited by:  Papers (12)
PDF (444 KB)

Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton's work on linear systems (1992) to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration te…
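The correlation-monitoring scheme whose limitations the paper discusses can be sketched as follows (a generic delta-bar-delta-style rule based on sign agreement of successive gradients; not Sutton's algorithm or the authors' nonlinear extension):

```python
def sgd_with_local_gains(grad, w, steps, eta=0.1, up=1.2, down=0.5):
    """Gradient descent with one adaptive gain per weight: a gain grows
    when successive gradients for that weight agree in sign and shrinks
    when they disagree -- the correlation-monitoring idea the abstract
    refers to (an illustrative sketch only)."""
    gains = [1.0] * len(w)
    prev = [0.0] * len(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            if g[i] * prev[i] > 0:
                gains[i] *= up      # consistent direction: accelerate
            elif g[i] * prev[i] < 0:
                gains[i] *= down    # oscillation: damp
            w[i] -= eta * gains[i] * g[i]
            prev[i] = g[i]
    return w

# Minimise a badly scaled quadratic f(w) = w0**2 + 10 * w1**2.
grad = lambda w: [2 * w[0], 20 * w[1]]
w = sgd_with_local_gains(grad, [1.0, 1.0], steps=50)
print(w)
```

Per-weight gains let the steep and shallow directions of the quadratic be handled with different effective step sizes, which a single global learning rate cannot do.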

• ### Recurrent neural network learning for text routing

Publication Year: 1999, Page(s):898 - 903 vol.2
Cited by:  Papers (6)  |  Patents (1)
PDF (444 KB)

We describe recurrent plausibility networks with internal recurrent hysteresis connections. These recurrent connections in multiple layers encode the sequential context of word sequences. We show how these networks can support text routing of noisy newswire titles according to different given categories. We demonstrate the potential of these networks using an 82 339-word corpus from the Reuters news…

• ### A self-organizing map for clustering probabilistic models

Publication Year: 1999, Page(s):946 - 951 vol.2
Cited by:  Papers (1)  |  Patents (2)
PDF (444 KB)

We present a general framework for self-organizing maps, which store probabilistic models in map units. We introduce the negative log probability of the data sample as the error function and motivate its use by showing its correspondence to the Kullback-Leibler distance between the unknown true distribution of data and our empirical models. We present a general winner search procedure based on thi…

• ### A constructivist neural network model of German verb inflection in agrammatic aphasia

Publication Year: 1999, Page(s):916 - 921 vol.2
PDF (564 KB)

We present a constructivist neural network that closely models the performance of agrammatic aphasics on German participle inflection. The network constructs a modular architecture leading to a double dissociation between regular and irregular verbs, and lesioning the trained network accounts for data obtained from aphasic subjects. We analyze the internal structure of the network with respect to…

• ### Covariance-based weighting for optimal combination of model predictions

Publication Year: 1999, Page(s):826 - 831 vol.2
Cited by:  Papers (1)
PDF (384 KB)

This paper introduces a method for calculating the covariance between different neural network solutions. It is based on a generalisation of the delta method for calculating the network Hessian and generates what we call the 'cross-covariance' matrix (its inverse is the 'cross-Hessian'). Using this matrix we are able to estimate the covariance between network predictions at each point in input spa…
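Once such a covariance matrix between model predictions is available, the textbook minimum-variance combination gives the weights; the sketch below shows only that final weighting step (the cross-Hessian construction itself is not reproduced here):

```python
def optimal_weights(cov):
    """Minimum-variance combination weights w = C^{-1} 1 / (1^T C^{-1} 1)
    for model errors with covariance matrix C -- the standard result,
    not the paper's cross-Hessian derivation.  Solves C x = 1 by
    Gaussian elimination without pivoting (fine for this small,
    well-conditioned illustration)."""
    n = len(cov)
    a = [row[:] + [1.0] for row in cov]     # augmented system [C | 1]
    for i in range(n):
        piv = a[i][i]
        for j in range(i, n + 1):
            a[i][j] /= piv
        for k in range(n):
            if k != i:
                factor = a[k][i]
                for j in range(i, n + 1):
                    a[k][j] -= factor * a[i][j]
    x = [a[i][n] for i in range(n)]
    s = sum(x)
    return [xi / s for xi in x]

# Two uncorrelated predictors with error variances 1 and 4:
# weights come out proportional to 1/variance, i.e. [0.8, 0.2].
w = optimal_weights([[1.0, 0.0], [0.0, 4.0]])
print(w)
```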

• ### Feature extraction algorithms for pattern classification

Publication Year: 1999, Page(s):738 - 742 vol.2
Cited by:  Patents (1)
PDF (320 KB)

Feature extraction is often an important preprocessing step in classifier design, in order to overcome the problems associated with having a large input space. A common way of doing this is to use principal component analysis to find the most important features. However, it has been recognised that this may not produce an optimal set of features in some problems since the method relies on the seco…
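The second-order step in question, principal component analysis of the sample covariance matrix, can be sketched with power iteration (a standard textbook computation, included only to make concrete what the abstract says may be suboptimal):

```python
def principal_component(data, iters=100):
    """Leading principal component via power iteration on the sample
    covariance matrix -- the purely second-order-statistics step whose
    possible suboptimality the abstract points out."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(centered[k][i] * centered[k][j] for k in range(n)) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d                       # arbitrary starting direction
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread along the line y = x: the first PC aligns with (1, 1).
pc = principal_component([(-2.0, -2.0), (-1.0, -1.0), (1.0, 1.0), (2.0, 2.0)])
print(pc)
```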

• ### A new adaptive architecture: Analogue synthesiser of orthogonal functions

Publication Year: 1999, Page(s):714 - 719 vol.2
Cited by:  Papers (1)  |  Patents (1)
PDF (312 KB)

A new adaptive nonlinear (neural-like) architecture, an analogue synthesiser of orthogonal functions which is able to produce a plurality of mutually orthogonal signals as functions of time such as Legendre, Chebyshev and Hermite polynomials, cosine basis of functions, smoothed cosine basis, etc., is proposed. A proof-of-concept breadboard version of the analogue synthesiser is described. The devi…

• ### Application of a reduced Hopfield neural net on dynamic routing in real time communication network

Publication Year: 1999, Page(s):637 - 642 vol.2
Cited by:  Papers (2)
PDF (360 KB)

We propose a virtue token algorithm for finding an optimal route in a real-time communication network, and use a Hopfield neural net to implement it. Our Hopfield neural net routing method can not only satisfy the routing requirement of a dynamic communication network, but can also be implemented in hardware. Moreover, our Hopfield neural net can be reduced to a much smaller scale, thus making the…

• ### A neural network for scene segmentation based on compact astable oscillators

Publication Year: 1999, Page(s):690 - 695 vol.2
Cited by:  Papers (2)
PDF (404 KB)

We show the feasibility of building a neural network for scene segmentation made of astable oscillators. The network is based on Wang and Terman's algorithm, LEGION. However, much simpler astable circuits are substituted for the original oscillators so that they meet analog microelectronic requirements and can achieve a high integration level. The correct behavior of the modified network and some of its…

• ### KBANNs and the classification of 31P MRS of malignant mammary tissues

Publication Year: 1999, Page(s):982 - 987 vol.2
PDF (468 KB)

Knowledge-based artificial neural networks (KBANNs) constitute a hybrid methodology that combines knowledge of a domain, in the form of simple rules, with connectionist learning. This combination allows the use of small sets of data (typical of medical diagnosis tasks) to train the network. The initial structure is set from the dependencies of a set of rules and it is only necessary to refine these rules by…

• ### A spatio-temporal neural network applied to visual speech recognition

Publication Year: 1999, Page(s):797 - 802 vol.2
Cited by:  Papers (3)
PDF (424 KB)

We present a new neural architecture called the spatio-temporal neural network (STNN). In this work, we have utilised the Hermitian distance as the basis of spatio-temporal data comparison to adapt a supervised (RCE) and an unsupervised (K-means) learning algorithm for training the STNN weights. A visual speech recognition (automatic lip-reading) system based on the STNN is developed, and the results obta…

• ### VC dimension bounds for higher-order neurons

Publication Year: 1999, Page(s):563 - 568 vol.2
Cited by:  Papers (4)
PDF (448 KB)

We investigate the sample complexity for learning using higher-order neurons. We calculate upper and lower bounds on the Vapnik-Chervonenkis dimension and the pseudo dimension for higher-order neurons that allow unrestricted interactions among the input variables. In particular, we show that the degree of interaction is irrelevant for the VC dimension and that the individual degree of the variable…

• ### Rule extraction from binary neural networks

Publication Year: 1999, Page(s):515 - 520 vol.2
PDF (408 KB)

A new constructive learning algorithm, called Hamming clustering (HC), for binary neural networks is proposed. It is able to generate a set of rules in if-then form underlying an unknown classification problem starting from a training set of samples. The performance of HC has been evaluated through a variety of artificial and real-world benchmarks. In particular, its application in the diagnosis o…
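To give a flavour of if-then rule extraction from binary data (a generic covering sketch, not the Hamming clustering procedure itself), one can keep only the input positions on which all same-class training patterns agree:

```python
def extract_rule(patterns, label):
    """Build one if-then rule from same-class binary patterns by keeping
    only the bit positions on which every pattern agrees.  (A generic
    covering sketch; HC's actual rule generation differs.)"""
    n = len(patterns[0])
    conditions = {i: patterns[0][i] for i in range(n)
                  if all(p[i] == patterns[0][i] for p in patterns)}
    return conditions, label

rule, cls = extract_rule([[1, 0, 1], [1, 1, 1], [1, 0, 0]], "class A")
print(rule, cls)   # bit 0 is 1 in every pattern: "if bit 0 == 1 then class A"
```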

• ### Stochastic models for surface information extraction in texts

Publication Year: 1999, Page(s):892 - 897 vol.2
Cited by:  Papers (1)
PDF (532 KB)

We describe the application of numerical machine learning techniques to the extraction of information from a collection of textual data. More precisely, we consider the modeling of text sequences with hidden Markov models and multilayer perceptrons and show how these models can be used to perform specific surface extraction tasks (i.e. tasks which do not need in-depth syntactic or semantic analysi…

• ### Fast winner search for SOM-based monitoring and retrieval of high-dimensional data

Publication Year: 1999, Page(s):940 - 945 vol.2
Cited by:  Papers (7)  |  Patents (1)
PDF (432 KB)

Self-organizing maps (SOMs) are widely used in engineering and data-analysis tasks, but so far rarely in very large-scale problems. The reason is the amount of computation. Winner search, finding the position of a data sample on the map, is the computational bottleneck: comparison between the data vector and all of the model vectors of the map is required. In this paper a method is proposed for re…
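The bottleneck, and one common shortcut, can be sketched on a one-dimensional map (the restricted search below is an illustrative assumption; the paper's exact speed-up scheme may differ):

```python
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def full_winner(models, x):
    """Exhaustive winner search: compare x against every model vector
    (the computational bottleneck the abstract describes)."""
    return min(range(len(models)), key=lambda i: sq_dist(models[i], x))

def local_winner(models, x, prev, radius=1):
    """Restricted search around the previous winner on a 1-D map: one
    common way to cut the comparison cost, exploiting the fact that
    consecutive samples tend to map to nearby units."""
    lo, hi = max(0, prev - radius), min(len(models), prev + radius + 1)
    return min(range(lo, hi), key=lambda i: sq_dist(models[i], x))

models = [(0.0,), (1.0,), (2.0,), (3.0,), (4.0,)]  # tiny 1-D map
x = (2.2,)
print(full_winner(models, x))           # unit 2, after 5 comparisons
print(local_winner(models, x, prev=3))  # unit 2 again, after only 3
```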

• ### Maximizing information about a noisy signal with a single non-linear neuron

Publication Year: 1999, Page(s):581 - 586 vol.2
Cited by:  Papers (1)
PDF (484 KB)

For noise-free information maximization, the output signal entropy must be maximized. This is not true for a noisy input: rather, it is the difference between this entropy and the residual output uncertainty that must be maximized. A definition of information density is introduced, which provides a discrete local measure of bandwidth efficiency. Novel training rules are proposed which enforce a uniformity of this d…
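The noisy-case objective stated here is the standard mutual information I(X;Y) = H(Y) - H(Y|X), i.e. output entropy minus residual output uncertainty; a minimal numeric check on discrete channels (the paper's information-density measure is not reproduced):

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def mutual_information(px, channel):
    """I(X;Y) = H(Y) - H(Y|X): output entropy minus residual output
    uncertainty, the noisy-case objective the abstract states.
    channel[i][j] = P(Y = j | X = i)."""
    py = [sum(px[i] * channel[i][j] for i in range(len(px)))
          for j in range(len(channel[0]))]
    h_y_given_x = sum(px[i] * entropy(channel[i]) for i in range(len(px)))
    return entropy(py) - h_y_given_x

noise_free = mutual_information([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
noisy = mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
print(noise_free, noisy)   # 1.0 bit noise-free; strictly less with 10% flips
```

In the noise-free channel H(Y|X) = 0, so maximizing output entropy alone suffices, exactly as the abstract's first sentence says; with noise the subtracted term matters.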