Bio-Inspired Networks of Visual Sensors, Neurons, and Oscillators

Authors: Bijoy K. Ghosh (Dept. of Math. & Stat., Texas Tech Univ., Lubbock, TX), Ashoka D. Polpitiya, Wenxue Wang

Animals routinely rely on their eyes to localize fixed and moving targets. Such a localization process might include predicting a target's future location, recalling a sequence of previously visited places, or, for the motor control circuit, actuating a successful movement. Typically, target localization is carried out by fusing images from two eyes, in the case of binocular vision, wherein the challenge is to calibrate the images before fusion. In the field of machine vision, a typical problem of interest is to localize the position and orientation of a network of mobile cameras (a sensor network) that are distributed in space and are simultaneously tracking a target. Inspired by the animal visual circuit, we study the problem of binocular image fusion for the purpose of localizing an unknown target in space. Guided by the dynamics of "eye rotation", we introduce control strategies that could be used to build machines with multiple sensors. In particular, we address the problem of how a group of visual sensors can be optimally controlled in a formation. We also address how images from multiple sensors are encoded using a set of basis functions, choosing a "larger than minimum" number of basis functions so that the resulting code that represents the image is sparse. We address the problem of how a sparsely encoded visual data stream is internally represented by a pattern of neural activity. In addition to the control mechanism, the synaptic interaction between cells is also subjected to "adaptation", which enables the activity waves to respond with greater sensitivity to visual input. We study how the rat hippocampal place cells are used to form a cognitive map of the environment so that the animal's location can be determined from its place cell activity. Finally, we study the problem of "decoding" the location of moving targets from the neural activity wave in the cortex.
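The "larger than minimum" encoding described above is an overcomplete sparse code: the image is represented with more basis functions (dictionary atoms) than signal dimensions, and an L1 penalty keeps most coefficients at zero. As a minimal illustrative sketch (not the authors' algorithm), this can be computed with the iterative shrinkage-thresholding algorithm (ISTA); all names and parameter values here are assumptions for the toy example:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.05, n_iter=200):
    """Sparse-code signal x with an overcomplete dictionary D via ISTA.

    Minimizes 0.5*||x - D a||^2 + lam*||a||_1; the soft-threshold step
    drives most coefficients of `a` to exactly zero (a sparse code).
    """
    n, m = D.shape                                 # m > n: overcomplete
    step = 1.0 / np.linalg.norm(D, 2) ** 2         # 1/L, L = Lipschitz constant
    a = np.zeros(m)
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                   # gradient of the quadratic term
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return a

# Toy example: a 2x-overcomplete random dictionary with unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]                      # signal built from two atoms
x = D @ a_true
a_hat = ista_sparse_code(x, D)
print(np.count_nonzero(np.abs(a_hat) > 1e-3))      # only a few active coefficients
```

The overcompleteness (32 atoms for a 16-dimensional signal) is what makes sparsity possible: many dictionaries could reconstruct the signal, and the L1 term selects a representation using only a handful of atoms.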

Published in:

Proceedings of the IEEE (Volume: 95, Issue: 1)