IEEE Transactions on Neural Networks

Issue 3 • May 1999

Displaying Results 1-25 of 28
  • Guest Editorial Overview of Pulse Coupled Neural Network (PCNN) Special Issue

    Publication Year: 1999, Page(s): 461-463
    Cited by: Papers (3)
    Freely Available from IEEE
  • Call for papers

    Publication Year: 1999, Page(s): 722
    Freely Available from IEEE
  • Microcode optimization with neural networks

    Publication Year: 1999, Page(s): 698-703
    Cited by: Papers (1)

    Microcode optimization is an NP-complete combinatorial optimization problem. This paper proposes a new method based on the Hopfield neural network for optimizing the wordwidth in the control memory of a microprogrammed digital computer. We present two methodologies, viz., the maximum clique approach and a cost-function-based method that minimizes an objective function. The maximum clique approach, albeit near O(1) in complexity, is limited in its use to small problem sizes, since it only partitions the data based on the compatibility between the microoperations and does not minimize the cost function. We therefore use this approach to condition the data initially (to form compatibility classes) and then use the proposed second method to optimize the cost function. The latter method is then able to discover better solutions than other schemes for the benchmark data set.

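    The Hopfield approach described above minimizes a quadratic energy function over binary states. As a hedged illustration of that general mechanism only (the random symmetric weights and biases below are placeholders, not the paper's wordwidth cost function or its clique-based conditioning step), a minimal discrete Hopfield energy-descent loop looks like this:

```python
import numpy as np

# Minimal discrete Hopfield energy descent on a generic binary problem.
# The weights and biases are random placeholders, not the paper's cost function.
rng = np.random.default_rng(0)

n = 12                                   # number of binary decision variables
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                        # symmetric connection weights
np.fill_diagonal(W, 0.0)                 # no self-connections
b = rng.normal(size=n)                   # bias (external input) terms

def energy(s):
    return -0.5 * s @ W @ s - b @ s

s = rng.integers(0, 2, size=n).astype(float)    # random initial state in {0, 1}
for _ in range(200):                            # asynchronous threshold updates
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s + b[i] > 0 else 0.0  # each update never raises the energy

print("final state:", s.astype(int), "energy:", round(float(energy(s)), 3))
```
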
  • An accelerator for neural networks with pulse-coded model neurons

    Publication Year: 1999, Page(s): 527-538
    Cited by: Papers (8)

    The labeling of features by synchronization of spikes seems to be a very efficient encoding scheme for a visual system. Simulation of a vision system with millions of pulse-coded model neurons, however, is almost impossible on the basis of available processors, including parallel processors and neurocomputers. A “one-to-one” silicon implementation of pulse-coded model neurons suffers from communication problems and low flexibility. On the other hand, acceleration of the simulation algorithm for pulse-coded leaky integrator neurons has proved to be straightforward, flexible, and very efficient. Thus we decided to develop an accelerator for a special version of the French and Stein (1970) neurons with modulatory inputs, which are advantageous for the simulation of synchronization mechanisms. Moreover, our accelerator also provides a Hebbian-like learning rule and supports adaptivity. Up to 128 K neurons with a total of 16 M freely allocatable synapses are simulated within one system. The size of networks, however, is not at all limited by these numbers, as the system may be arbitrarily expanded. Simulation speed obviously depends on the number of interconnections and on the average activity within the network. In the case of locally interconnected networks for the simulation of vision mechanisms, there is only a very low percentage of simultaneously active neurons: stimuli are not simultaneously presented in all orientations and at all positions of the visual field. In these cases, our accelerator provides close to real-time behavior if one second of a biological neuron is simulated by 1000 time slots.

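    The kind of time-slot simulation such an accelerator speeds up can be sketched in a few lines of conventional code. The sketch below is a generic leaky-integrator network with binary spike outputs and made-up parameters; it is not the French and Stein model with modulatory inputs, nor does it include the Hebbian-like learning the system supports:

```python
import numpy as np

# Discrete time-slot simulation of leaky integrator neurons with binary spike
# outputs. Network size, weights, and external drive are illustrative placeholders.
rng = np.random.default_rng(1)

n_neurons, n_slots = 200, 1000
decay, threshold = 0.9, 1.0
W = rng.normal(scale=0.05, size=(n_neurons, n_neurons))   # synaptic weights

potential = np.zeros(n_neurons)
spikes = np.zeros(n_neurons)
spike_count = 0
for t in range(n_slots):
    drive = rng.random(n_neurons) < 0.02         # sparse random external input
    potential = decay * potential + W @ spikes + drive
    spikes = (potential > threshold).astype(float)
    potential[spikes > 0] = 0.0                  # reset membrane after a spike
    spike_count += int(spikes.sum())

print("spikes per neuron per slot:", round(spike_count / (n_neurons * n_slots), 4))
```
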
  • Multilayer feedforward networks with adaptive spline activation function

    Publication Year: 1999, Page(s): 672-683
    Cited by: Papers (24)

    In this paper, a new adaptive spline activation function neural network (ASNN) is presented. Due to the ASNN's high representation capabilities, networks with a small number of interconnections can be trained to solve real-time pattern recognition and data processing problems. The main idea is to use a Catmull-Rom cubic spline as the neuron's activation function, which ensures a simple structure suitable for both software and hardware implementation. Experimental results demonstrate improvements in terms of generalization capability and learning speed in both pattern recognition and data processing tasks.

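    The Catmull-Rom evaluation itself is standard, so a piecewise spline activation with adaptable control points can be sketched directly; how the ASNN actually parameterizes, constrains, and trains its control points is in the paper, and the uniform knot layout and end clamping below are simplifying assumptions:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Standard Catmull-Rom cubic interpolation between p1 and p2, t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

def spline_activation(x, knots_x, knots_y):
    """Piecewise Catmull-Rom activation defined by adaptable control points.
    knots_x must be uniformly spaced and sorted; clamping at the ends is a
    simplification, not necessarily the paper's boundary handling."""
    i = np.clip(np.searchsorted(knots_x, x) - 1, 1, len(knots_x) - 3)
    t = (x - knots_x[i]) / (knots_x[i + 1] - knots_x[i])
    return catmull_rom(knots_y[i - 1], knots_y[i], knots_y[i + 1], knots_y[i + 2], t)

# Example: control points initially tracing a sigmoid-like shape;
# in an ASNN the heights knots_y would be trainable parameters.
kx = np.linspace(-3, 3, 7)
ky = np.tanh(kx)
print(spline_activation(0.5, kx, ky))
```
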
  • Dynamics of selective recall in an associative memory model with one-to-many associations

    Publication Year: 1999, Page(s): 704-713
    Cited by: Papers (7)

    The dynamics of selective recall in an associative memory model are analyzed in the scenario of one-to-many association. The present model, which can deal with one-to-many association, consists of a heteroassociative network and an autoassociative network. In the heteroassociative network, a mixture of associative items in one-to-many association is recalled by a key item. In the autoassociative network, the selective recall of one of the associative items is examined by providing a seed of a target item either to the heteroassociative network (Model 1) or to the autoassociative network (Model 2). We show that the critical similarity of Model 2 is not sensitive to changes in the dimension ratio of key vectors to associative vectors, and that it has a smaller critical similarity than Model 1 for a large initial overlap. On the other hand, we show that Model 1 has a smaller critical similarity for a small initial overlap. We also show that unreachable equilibrium states exist in the proposed model.

  • Class 1 neural excitability, conventional synapses, weakly connected networks, and mathematical foundations of pulse-coupled models

    Publication Year: 1999, Page(s): 499-507
    Cited by: Papers (47) | Patents (3)

    Many scientists believe that all pulse-coupled neural networks are toy models that are far removed from biological reality. We show, however, that a huge class of biophysically detailed and biologically plausible neural-network models can be transformed into a canonical pulse-coupled form by a piecewise continuous, possibly noninvertible, change of variables. Such transformations exist when a network satisfies a number of conditions; e.g., it is weakly connected, the neurons are Class 1 excitable (i.e., they can generate action potentials with an arbitrarily small frequency), and the synapses between neurons are conventional (i.e., axo-dendritic and axo-somatic). Thus, the difference between studying the pulse-coupled model and Hodgkin-Huxley-type neural networks is just a matter of a coordinate change. Therefore, any piece of information about the pulse-coupled model is valuable, since it tells us something about all weakly connected networks of Class 1 neurons. For example, we show that the pulse-coupled network of identical neurons does not synchronize in-phase. This confirms Ermentrout's (1996) result that weakly connected Class 1 neurons are difficult to synchronize, regardless of the equations that describe the dynamics of each cell.

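    For readers unfamiliar with the canonical form of Class 1 excitability, the Ermentrout-Kopell theta neuron is its standard representative: the firing rate goes to zero as the input current approaches threshold from above. A minimal Euler integration, with an arbitrary constant input and without the pulse coupling derived in the paper:

```python
import numpy as np

# Theta neuron, the canonical model of Class 1 excitability:
#   dtheta/dt = (1 - cos theta) + (1 + cos theta) * I
# A spike is registered when theta crosses pi; the firing period scales like
# pi / sqrt(I), so the rate can be made arbitrarily small as I -> 0+.
dt, T, I = 1e-3, 10.0, 0.25          # arbitrary constant input current
theta = -np.pi / 2
spike_times = []
for step in range(int(T / dt)):
    theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
    if theta >= np.pi:               # phase crossing pi = an action potential
        spike_times.append(round(step * dt, 3))
        theta -= 2.0 * np.pi         # wrap the phase back
print("spike times (s):", spike_times)
```
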
  • Weakly pulse-coupled oscillators, FM interactions, synchronization, and oscillatory associative memory

    Publication Year: 1999, Page(s): 508-526
    Cited by: Papers (73) | Patents (2)

    We study pulse-coupled neural networks that satisfy only two assumptions: each isolated neuron fires periodically, and the neurons are weakly connected. Each such network can be transformed by a piecewise continuous change of variables into a phase model, whose synchronization behavior and oscillatory associative properties are easier to analyze and understand. Using the phase model, we can predict whether a given pulse-coupled network has oscillatory associative memory, or what minimal adjustments should be made so that it can acquire memory. In the search for such minimal adjustments we obtain a large class of simple pulse-coupled neural networks that can memorize and reproduce synchronized temporal patterns the same way a Hopfield network does with static patterns. The learning occurs via modification of synaptic weights and/or synaptic transmission delays.

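    A minimal phase-model simulation in the spirit of this entry, with a Kuramoto-type sinusoidal interaction standing in for the interaction functions derived in the paper, and an arbitrary Hebbian-like +/- connection matrix:

```python
import numpy as np

# Phase model for N weakly coupled oscillators:
#   dphi_i/dt = omega_i + eps * sum_j C_ij * sin(phi_j - phi_i)
# The sinusoidal interaction and the +/- connection matrix are placeholders.
rng = np.random.default_rng(2)
N, eps, dt, steps = 50, 0.05, 0.01, 5000
omega = 1.0 + 0.01 * rng.normal(size=N)       # nearly identical natural frequencies
C = rng.choice([1.0, -1.0], size=(N, N))      # Hebbian-like excitatory/inhibitory pattern
phi = 2 * np.pi * rng.random(N)

for _ in range(steps):
    diffs = phi[None, :] - phi[:, None]       # phi_j - phi_i for all pairs
    phi += dt * (omega + eps * np.sum(C * np.sin(diffs), axis=1))

coherence = np.abs(np.mean(np.exp(1j * phi))) # Kuramoto order parameter in [0, 1]
print("phase coherence:", round(float(coherence), 3))
```
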
  • Range image segmentation using a relaxation oscillator network

    Publication Year: 1999, Page(s): 564-573
    Cited by: Papers (16)

    A locally excitatory globally inhibitory oscillator network (LEGION) is constructed and applied to range image segmentation, where each oscillator has excitatory lateral connections to the oscillators in its local neighborhood, as well as a connection with a global inhibitor. A feature vector, consisting of depth, surface normal, and mean and Gaussian curvatures, is associated with each oscillator and is estimated from local windows at its corresponding pixel location. A context-sensitive method is applied in order to obtain more reliable and accurate estimates. The lateral connection between two oscillators is established based on a similarity measure of their feature vectors. The emergent behavior of the LEGION network gives rise to segmentation. Due to the flexible representation through phases, our method needs no assumption about the underlying structures in the image data and no prior knowledge regarding the number of regions. More importantly, the network is guaranteed to converge rapidly under general conditions. These unique properties may lead to a real-time approach for range image segmentation in machine perception.

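    LEGION networks are typically built from Wang-Terman relaxation oscillators; a bare-bones Euler integration of one such oscillator, with constants chosen only for illustration, is sketched below. The lateral coupling, global inhibitor, and feature-similarity weights that actually produce segmentation are described in the paper and omitted here:

```python
import numpy as np

# Single Wang-Terman relaxation oscillator (Euler integration):
#   dx/dt = 3x - x^3 + 2 - y + I
#   dy/dt = eps * (gamma * (1 + tanh(x / beta)) - y)
# Constants are illustrative; I > 0 puts the oscillator in its oscillatory regime.
eps, gamma, beta, I = 0.02, 6.0, 0.1, 0.8
dt, steps = 0.01, 50000
x, y = -2.0, 0.0
trace = np.empty(steps)
for k in range(steps):
    dx = 3.0 * x - x ** 3 + 2.0 - y + I
    dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)
    x, y = x + dt * dx, y + dt * dy
    trace[k] = x
print("x oscillates between", round(trace.min(), 2), "and", round(trace.max(), 2))
```
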
  • Separation of speech from interfering sounds based on oscillatory correlation

    Publication Year: 1999, Page(s): 684-697
    Cited by: Papers (83) | Patents (3)

    A multistage neural model is proposed for an auditory scene analysis task: segregating speech from interfering sound sources. The core of the model is a two-layer oscillator network that performs stream segregation on the basis of oscillatory correlation. In the oscillatory correlation framework, a stream is represented by a population of synchronized relaxation oscillators, each of which corresponds to an auditory feature, and different streams are represented by desynchronized oscillator populations. Lateral connections between oscillators encode harmonicity and proximity in frequency and time. Prior to the oscillator network are a model of the auditory periphery and a stage in which mid-level auditory representations are formed. The model has been systematically evaluated using a corpus of voiced speech mixed with interfering sounds, and it produces improvements in terms of signal-to-noise ratio for every mixture. A number of issues, including biological plausibility and real-time implementation, are also discussed.

  • PCNN models and applications

    Publication Year: 1999, Page(s): 480-498
    Cited by: Papers (116)

    Pulse coupled neural network (PCNN) models are described. The linking field modulation term is shown to be a universal feature of any biologically grounded dendritic model. Applications and implementations of PCNNs are reviewed. Application-based variations and simplifications are summarized. The PCNN image decomposition (factoring) model is described in detail.

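    For orientation, the discrete Eckhorn-derived PCNN iteration reviewed in this entry couples a feeding input, a linking input (whose modulation term is the universal feature noted above), and a dynamic threshold; a compact sketch with placeholder decay, gain, and linking constants and a random stand-in image:

```python
import numpy as np

# Compact discrete PCNN iteration (Eckhorn-derived form used in image processing).
# Decay/gain constants, the linking strength beta, and the input are placeholders.
def neighbor_sum(Y):
    """8-neighborhood sum of the binary pulse image (periodic borders for brevity)."""
    out = np.zeros_like(Y)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                out += np.roll(np.roll(Y, dr, axis=0), dc, axis=1)
    return out

def pcnn(S, n_iter=20, aF=0.1, aL=1.0, aT=0.5, VF=0.01, VL=1.0, VT=20.0, beta=0.2):
    F = np.zeros_like(S)                 # feeding compartment
    L = np.zeros_like(S)                 # linking compartment
    Y = np.zeros_like(S)                 # pulse (binary) output
    T = np.full_like(S, VT)              # dynamic threshold
    pulses = []
    for _ in range(n_iter):
        F = np.exp(-aF) * F + VF * neighbor_sum(Y) + S    # feeding input
        L = np.exp(-aL) * L + VL * neighbor_sum(Y)        # linking input
        U = F * (1.0 + beta * L)                          # modulatory internal activity
        Y = (U > T).astype(float)                         # neurons pulse where U exceeds T
        T = np.exp(-aT) * T + VT * Y                      # threshold decays, jumps on firing
        pulses.append(Y.copy())
    return pulses

img = np.random.default_rng(3).random((64, 64))           # stand-in "image"
waves = pcnn(img)
print("pixels firing per iteration:", [int(Y.sum()) for Y in waves])
```
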
  • Perfect image segmentation using pulse coupled neural networks

    Publication Year: 1999, Page(s): 591-598
    Cited by: Papers (86) | Patents (1)

    This paper describes a method for segmenting digital images using pulse coupled neural networks (PCNN). The pulse coupled neuron (PCN) model used in the PCNN is a modification of the cortical neuron model of Eckhorn et al. (1990). A single-layer, laterally connected PCNN is capable of perfectly segmenting digital images even when there is considerable overlap in the intensity ranges of adjacent regions. Conditions for perfect image segmentation are derived. It is also shown that the addition of an inhibition receptive field to the neuron model increases the possibility of perfect segmentation. The inhibition input reduces the overlap of intensity ranges of adjacent regions by effectively compressing the intensity range of each region.

  • Physiologically motivated image fusion for object detection using a pulse coupled neural network

    Publication Year: 1999, Page(s): 554-563
    Cited by: Papers (50) | Patents (6)

    This paper presents the first physiologically motivated pulse coupled neural network (PCNN)-based image fusion network for object detection. Primate vision processing principles, such as expectation-driven filtering, state-dependent modulation, temporal synchronization, and multiple processing paths, are applied to create a physiologically motivated image fusion network. PCNNs are used to fuse the results of several object detection techniques to improve object detection accuracy. Image processing techniques (wavelets, morphological, etc.) are used to extract target features, and PCNNs are used to focus attention by segmenting and fusing the information. The object detection property of the resulting image fusion network is demonstrated on mammograms and forward-looking infrared (FLIR) images. The network removed 94% of the false detections without removing any true detections in the FLIR images, and removed 46% of the false detections while removing only 7% of the true detections in the mammograms. The model exceeded the accuracy obtained by any of the individual filtering methods or by logically ANDing the individual object detection results.

  • Analog implementation of pulse-coupled neural networks

    Publication Year: 1999, Page(s): 539-544
    Cited by: Papers (18) | Patents (5)

    This paper presents a compact architecture for the analog CMOS hardware implementation of voltage-mode pulse-coupled neural networks (PCNN). The hardware implementation exhibits inherent fault tolerance and high speed, usually more than an order of magnitude over the software counterpart. The computational style described in this article mimics a biological neural network using pulse-stream signaling and analog summation and multiplication; the pulse-stream encoding technique uses pulse streams to carry information and control analog circuitry, while storing further analog information on the time axis. The main feature of the proposed neuron circuit is that the structure is compact, yet it exhibits all the basic properties of natural biological neurons. Functional and structural forms of neural and synaptic functions are presented along with simulation results. Finally, the proposed design is applied to image processing to demonstrate successful restoration of images and their features.

  • Finding the shortest path in the shortest time using PCNN's

    Publication Year: 1999, Page(s): 604-606
    Cited by: Papers (23)

    A pulse coupled neural network (PCNN) can run mazes nondeterministically (taking all possible paths) with constant time per step. Thus, when a signal emerges, it has taken the shortest path in the shortest time.

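    The behavior can be emulated with a synchronous wavefront expansion on a maze grid: every frontier cell fires once and ignites its open neighbors on the next step, so the first wave to reach the goal has travelled a shortest path. The grid, start, and goal below are made up, and the PCNN itself does this with pulses and thresholds rather than an explicit frontier list:

```python
import numpy as np

# Synchronous wavefront expansion on a maze grid; arrival times equal
# shortest-path lengths, mirroring the constant-time-per-step pulse wave.
maze = np.array([[0, 0, 0, 1, 0],
                 [1, 1, 0, 1, 0],
                 [0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]])          # 0 = open, 1 = wall (illustrative)
start, goal = (0, 0), (4, 4)

arrival = np.full(maze.shape, -1)
arrival[start] = 0
frontier = [start]
step = 0
while frontier and arrival[goal] < 0:
    step += 1
    nxt = []
    for r, c in frontier:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < maze.shape[0] and 0 <= cc < maze.shape[1]
                    and maze[rr, cc] == 0 and arrival[rr, cc] < 0):
                arrival[rr, cc] = step
                nxt.append((rr, cc))
    frontier = nxt
print("shortest-path length to goal:", arrival[goal])
```
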
  • Reformulated radial basis neural networks trained by gradient descent

    Publication Year: 1999, Page(s): 657-671
    Cited by: Papers (80) | Patents (1)

    This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, including those employing Gaussian RBFs. The form of the RBFs is determined by a generator function. New RBF models can be developed according to the proposed approach by selecting generator functions other than exponential ones, which lead to Gaussian RBFs. This paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed using the proposed approach. A sensitivity analysis of the proposed algorithm relates the properties of the RBFs to the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.

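    A minimal gradient-descent loop for the Gaussian (exponential-generator) case on a toy one-dimensional regression problem; the generator-function framework and sensitivity analysis of the paper are not reproduced, and the network size, learning rate, and width clipping below are arbitrary choices:

```python
import numpy as np

# Gaussian RBF network trained by plain gradient descent on a toy 1-D regression.
# Centers, widths, and output weights are all adapted; hyperparameters are arbitrary.
rng = np.random.default_rng(4)
x = np.linspace(-3, 3, 200)[:, None]         # inputs, shape (200, 1)
y = np.sin(2 * x[:, 0])                      # target function

K, lr = 10, 0.05
centers = rng.uniform(-3, 3, size=(K, 1))
widths = np.ones(K)
w = rng.normal(scale=0.1, size=K)

def activations(x, centers, widths):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances
    return d2, np.exp(-d2 / (2 * widths ** 2))                  # Gaussian responses

for epoch in range(2000):
    d2, phi = activations(x, centers, widths)
    err = phi @ w - y                                           # prediction error
    # gradients of 0.5 * mean squared error w.r.t. weights, centers, and widths
    grad_w = phi.T @ err / len(x)
    grad_c = ((err[:, None] * w[None, :] * phi / widths ** 2)[:, :, None]
              * (x[:, None, :] - centers[None, :, :])).mean(0)
    grad_s = (err[:, None] * w[None, :] * phi * d2 / widths ** 3).mean(0)
    w -= lr * grad_w
    centers -= lr * grad_c
    widths = np.clip(widths - lr * grad_s, 0.1, None)           # keep widths positive

_, phi = activations(x, centers, widths)
print("final MSE:", round(float(np.mean((phi @ w - y) ** 2)), 4))
```
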
  • Direct adaptive control of partially known nonlinear systems

    Publication Year: 1999, Page(s): 714-721
    Cited by: Papers (9)

    A direct adaptive control strategy for a class of single-input/single-output nonlinear systems is presented. The major advantage of the proposed method is that a detailed dynamic nonlinear model is not required for controller design. The only information required about the plant is measurements of the state variables, the relative degree, and the sign of a Lie derivative which appears in the associated input-output linearizing control law. Unknown controller functions are approximated using locally supported radial basis functions that are introduced only in regions of the state space where the closed-loop system actually evolves. Lyapunov stability analysis is used to derive parameter update laws which ensure (under certain assumptions) that the state vector remains bounded and that the plant output asymptotically tracks the output of a linear reference model. The technique is successfully applied to a nonlinear biochemical reactor model.

  • Smart adaptive optic systems using spatial light modulators

    Publication Year: 1999, Page(s): 599-603
    Cited by: Papers (1)

    Many factors contribute to the aberrations induced in an optical system. Atmospheric turbulence between the object and the imaging system, physical or thermal perturbations in optical elements that degrade the system's point spread function, and misaligned optics are the primary sources of aberrations that affect image quality. The design of a nonconventional real-time adaptive optic system using a micro-mirror device for wavefront correction is presented. The unconventional compensated imaging system offers advantages in speed, cost, power consumption, and weight. A pulse-coupled neural network is used as a preprocessor to enhance the performance of the wavefront sensor for low-light applications. Modeling results that characterize the system performance are presented.

  • A retina with parallel input and pulsed output, extracting high-resolution information

    Publication Year: 1999, Page(s): 574-583
    Cited by: Papers (7)

    Animal eyes resolve images 10-100 times better than either the acceptance angle of a single photoreceptor or the center-to-center distance between neighboring photoreceptors. A new model of the fly's visual system emulates this improved performance, offering a different approach to subpixel resolution. That an animal without a cortex is capable of this performance suggests that high-level computation is not involved. The model takes advantage of a photoreceptor cell's internal structure for capturing and transducing light. This organelle is a waveguide. Neurocircuitry exploits the waveguide's optical nonlinearities, namely the shoulder region of its Gaussian angular-sensitivity profile, to extract high-resolution information from the visual scene. The receptive fields of optically disparate inputs overlap in space. Photoreceptor input is continuous rather than discretely sampled. The output of the integrating module is a signal proportional to the position of the target within the detector array. Input imbalance at the level of the photodiode modules is detected by circuitry connecting neighboring visual elements. A pulsed network of these connections forms a parallel array that segments the edges of an object and continuously reports its position to the underlying layer of feature extractors, offering a new approach to real-time processing with high resolution and reduced computational load.

  • Implementation of pulse-coupled neural networks in a CNAPS environment

    Publication Year: 1999, Page(s): 584-590
    Cited by: Papers (8)

    Pulse coupled neural networks (PCNN) are biologically inspired algorithms very well suited for image/signal preprocessing. While several analog implementations have been proposed, we suggest a digital implementation in an existing environment, the connected network of adapted processors system (CNAPS). The reason for this is twofold. First, CNAPS is a commercially available chip which has been used for several neural-network implementations. Second, the PCNN is, in almost all applications, a very efficient component of a system requiring subsequent and additional processing. This may include gating, Fourier transforms, neural classifiers, data mining, etc., with or without feedback to the PCNN.

  • Foveation by a pulse-coupled neural network

    Publication Year: 1999, Page(s): 621-625
    Cited by: Papers (15)

    Humans do not stare at an image; they foveate. Their eyes move about points of interest within the image, collecting clues as to its content. Object shape is one of the driving forces of foveation. These foveation points are generally corners and, to a lesser extent, edges. The pulse-coupled neural network (PCNN) has the inherent ability to segment an image. The corners and edges of the PCNN segments are similar to the foveation points. Thus, it is a natural extension of PCNN technology to use it as a foveation engine. The paper presents theory and examples of foveation through the use of a PCNN, and also demonstrates that it can be quite useful in image recognition.

  • Inherent features of wavelets and pulse coupled networks

    Publication Year: 1999, Page(s): 607-614
    Cited by: Papers (28)

    Biologically inspired image/signal processing methods, namely the pulse coupled neural network (PCNN) and the wavelet (packet) transforms, are described. The two methods are applied to two-dimensional data in order to demonstrate the features of each method and to pinpoint differences as well as similarities. The inherent properties of the two approaches (with respect to filtering, segmentation, etc.) are discussed in the context of detectors for physics experiments as well as remote sensing.

  • Frequency-based multilayer neural network with on-chip learning and enhanced neuron characteristics

    Publication Year: 1999, Page(s): 545-553
    Cited by: Papers (34) | Patents (1)

    A new digital architecture for a frequency-based multilayer neural network (MNN) with on-chip learning is proposed. As the signal level is expressed by frequency, the multiplier is replaced by a simple frequency converter, and the neuron unit uses a voting circuit as the nonlinear adder to improve the nonlinear characteristic. In addition, a pulse multiplier is employed to enhance the neuron characteristics. The backpropagation algorithm is modified for on-chip learning. The proposed MNN architecture is implemented on field programmable gate arrays (FPGA), and various experiments are conducted to test the performance of the system. The experimental results show that the proposed neuron has a very good nonlinear characteristic owing to the voting circuit. The learning behavior of the MNN with on-chip learning is also tested by experiments, which show that the proposed MNN has good learning and generalization capabilities. The simple and modular structure of the proposed MNN leads to a massively parallel and flexible network architecture, which is well suited for VLSI implementation.

  • Neural mechanisms of scene segmentation: recordings from the visual cortex suggest basic circuits for linking field models

    Publication Year: 1999, Page(s): 464-479
    Cited by: Papers (50)

    Synchronization of neural activity has been proposed to code feature linking. This was supported by the discovery of synchronized neural activities in cat and monkey visual cortex, which occurred stimulus-dependently, either oscillatory (30-100 Hz) or nonrhythmic, and either internally generated or stimulus dominated. The area in visual space covered by the receptive fields of an actually synchronized assembly of neurons was termed the “linking field”. The present paper aims at relating signals of stimulus-dependent synchronization and desynchronization, observed by the authors in the visual cortex of monkeys, to models of basic neural circuits that explain the measured signals and extend the authors' former linking field model. The circuits include: (1) a model neuron with the capability of fast mutual spike linking and decoupling which does not degrade the receptive field properties; (2) linking connections for fast synchronization in neighboring assemblies driven by the same stimulus; (3) feedback inhibition in local assemblies via a common interneuron subserving synchronization, desynchronization, and suppression of uncorrelated signals; and (4) common-input connectivity among members of local and distant assemblies supporting zero-delay phase differences in distributed assemblies. Other recently observed cortical effects that potentially support scene segmentation are briefly reviewed to stimulate further ideas for models. Finally, the linking field hypothesis is critically discussed, including contradictory psychophysical work and new supportive neurophysiological evidence.

  • Cost functions to estimate a posteriori probabilities in multiclass problems

    Publication Year: 1999, Page(s): 645-656
    Cited by: Papers (26)

    The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions which satisfy two common properties: symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.

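    As a concrete instance of the well-known special cases mentioned in this entry, minimizing the cross-entropy cost with a stochastic gradient rule produces outputs that estimate the a posteriori class probabilities; a single-layer softmax sketch on synthetic Gaussian data, with all hyperparameters made up:

```python
import numpy as np

# Single-layer softmax network trained by stochastic gradient descent on the
# cross-entropy cost; at the minimum its outputs estimate P(class | x).
rng = np.random.default_rng(5)
n_classes, dim, n_samples = 3, 2, 600
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
labels = rng.integers(n_classes, size=n_samples)
X = means[labels] + rng.normal(size=(n_samples, dim))      # synthetic Gaussian classes

W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
lr = 0.05
for epoch in range(30):
    for i in rng.permutation(n_samples):                   # stochastic (per-sample) updates
        logits = X[i] @ W + b
        p = np.exp(logits - logits.max())
        p /= p.sum()                                        # softmax output = posterior estimate
        grad = p.copy()
        grad[labels[i]] -= 1.0                              # d(cross-entropy)/d(logits)
        W -= lr * np.outer(X[i], grad)
        b -= lr * grad

logits = X @ W + b
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
print("mean estimated posterior of the true class:",
      round(float(P[np.arange(n_samples), labels].mean()), 3))
```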

Aims & Scope

IEEE Transactions on Neural Networks is devoted to the science and technology of neural networks; it publishes work that discloses significant technical knowledge, exploratory developments, and applications of neural networks, from biology to software to hardware.

 

This Transactions ceased production in 2011. The current retitled publication is IEEE Transactions on Neural Networks and Learning Systems.
