Verification of the nonparametric characteristics of backpropagation neural networks for image classification

Author: Weiyang Zhou (GDE Syst. Inc., San Diego, CA, USA)

Experiments have been conducted with backpropagation neural networks for Landsat Thematic Mapper (TM) image classification. Specifically, two nonparametric characteristics of the neural networks were tested. The first test demonstrated the flexibility of the networks by comparing the results from three classifications with different schemes of target classes. Within each classification scheme, target classes with different separability from the others were defined using pixels with different degrees of homogeneity (or purity, compactness, and similarity) in terms of their distribution in the spectral bands. On one hand, the neural networks' performance on pixels that were well represented by the training pixels was consistently satisfactory, as indicated by the high average classification accuracy for both training and testing pixels. On the other hand, with different training pixel sets, the neural networks performed inconsistently on pixels that were not well represented by the training pixels: only a small portion of those pixels were classified into the same category under all three classification schemes. For the second test, additional input bands with known characteristics were classified together with the TM bands. Using a new method for interpreting the weights of a trained network, it was shown that neural networks adjust their weights in accordance with the importance of the role each input data source plays during classification. In other words, when data from different sources are used for classification, it is not necessary to know their relative importance in advance. Instead, by interpreting the weights after training, the importance of each data source can be ranked by its contribution to the classification, so that the source that contributed least can be left out of future classification runs to save computation time.
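The abstract does not spell out the weight-interpretation method, but a common proxy for the idea it describes is to score each input band by the total magnitude of the first-layer weights attached to it; the band with the smallest score is the candidate to drop. A minimal sketch, with made-up weights and hypothetical band names (the actual bands and values are not given in the abstract):

```python
def rank_inputs_by_weight(first_layer_weights, input_names):
    """Rank input sources by the sum of absolute first-layer weights.

    first_layer_weights[j][i] is the weight from input i to hidden unit j.
    This is one simple proxy for per-input contribution, not necessarily
    the interpretation method used in the paper.
    """
    importance = [0.0] * len(input_names)
    for hidden_row in first_layer_weights:
        for i, w in enumerate(hidden_row):
            importance[i] += abs(w)
    # Sort input names by descending total weight magnitude.
    return sorted(zip(input_names, importance), key=lambda p: -p[1])

# Hypothetical trained weights: 3 hidden units, 4 input bands.
weights = [
    [0.9, -1.2, 0.10, 0.4],
    [-0.8, 1.1, 0.05, 0.3],
    [1.0, -0.9, 0.15, 0.2],
]
bands = ["TM1", "TM2", "TM3", "elevation"]
for name, score in rank_inputs_by_weight(weights, bands):
    print(name, round(score, 2))
```

With these illustrative numbers the last-ranked band would be the one left out of future runs to save computation time.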

Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume: 37, Issue: 2)