
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Issue 2, Part 2 • April 2013


Displaying Results 1 - 25 of 37
  • [Front cover]

    Page(s): C1
  • IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing publication information

    Page(s): C2
  • Table of contents

    Page(s): 457 - 458
  • Foreword to the special issue on hyperspectral remote sensing: Theory, methods, and applications

    Page(s): 459 - 465
  • Reconstruction From Random Projections of Hyperspectral Imagery With Spectral and Spatial Partitioning

    Page(s): 466 - 472

    Random projections have recently been proposed to enable dimensionality reduction in resource-constrained sensor devices such that the computational burden is shifted to the receiver side of the system in the form of a reconstruction process. While a number of compressed-sensing algorithms can provide such reconstruction, the principal-component-based compressive-projection principal component analysis (CPPCA) algorithm has been shown to offer better performance for hyperspectral imagery. CPPCA is extended to incorporate both spectral and spatial partitioning of the hyperspectral dataset, with experimental results evaluating reconstruction quality not only in terms of squared-error and spectral-angle fidelity but also via the performance of the reconstructed data in classification and unmixing tasks. The experimental results demonstrate that either form of partitioning yields significantly better reconstruction than the original, non-partitioned algorithm, and that CPPCA using spectral and spatial partitioning together outperforms either of the two used alone.

  • Effect of Denoising in Band Selection for Regression Tasks in Hyperspectral Datasets

    Page(s): 473 - 481

    This paper presents a comparative analysis of six band selection methods applied to hyperspectral datasets for biophysical variable estimation problems, and also analyzes the effect of denoising on band selection performance. In particular, we consider four hyperspectral datasets and three regressors of different natures (ε-SVR, regression trees, and kernel ridge regression). Results show that denoising improves the band selection quality of all the tested methods. We show that noise filtering is more beneficial for selection methods that use an estimator based on the whole dataset to predict the output than for methods that use strategies based on local information (neighboring points).
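
    The interplay between denoising and band selection described above can be illustrated with a small, hypothetical sketch: spectra are optionally smoothed along the spectral axis, and a greedy forward search keeps the bands that most reduce the cross-validated error of a kernel ridge regressor. This illustrates the general idea only; the function names, smoothing window, and regressor settings are illustrative assumptions, not the paper's six selection methods.

```python
# Minimal sketch: greedy band selection for regression, with optional
# spectral denoising first (illustrative only; not the paper's methods).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

def select_bands(X, y, n_bands=5, denoise=True):
    """X: (n_samples, n_channels) spectra; y: biophysical variable."""
    if denoise:
        # smooth each spectrum along the spectral axis
        X = savgol_filter(X, window_length=9, polyorder=2, axis=1)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_bands):
        scores = []
        for b in remaining:
            cols = selected + [b]
            model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
            score = cross_val_score(model, X[:, cols], y, cv=3,
                                    scoring="neg_mean_squared_error").mean()
            scores.append((score, b))
        _, best_band = max(scores)
        selected.append(best_band)
        remaining.remove(best_band)
    return selected
```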

  • The Effect of Correlation on Determining the Intrinsic Dimension of a Hyperspectral Image

    Page(s): 482 - 487

    Determining the intrinsic dimension of a hyperspectral image is an important step in the spectral unmixing process, and under- or over-estimation of this number may lead to incorrect unmixing for unsupervised methods. Most methods for estimating the intrinsic dimension require an estimate of the noise in the image, and noise estimates are often inaccurate in the presence of spectrally correlated noise. Since hyperspectral images are known to contain such correlated noise, the intrinsic dimension may be overestimated. In this paper we discuss the effect of this correlation, as well as possible methods for overcoming the resulting limitations. For instance, correlated bands may be removed prior to noise estimation, or spatially based noise approximation methods may be used in place of statistical methods. These suggestions are implemented on synthetic and real images, including images acquired by AVIRIS, Hyperion and SpecTIR.

  • A Comparative Study on Linear Regression-Based Noise Estimation for Hyperspectral Imagery

    Page(s): 488 - 498

    In the traditional signal model, the signal is assumed to be deterministic, while the noise is assumed to be random, additive, and uncorrelated with the signal component. A hyperspectral image has high spatial and spectral correlation, so a pixel can be well predicted from its spatial and/or spectral neighbors; any prediction error can be attributed to noise. Using this concept, several algorithms have been developed for noise estimation in hyperspectral images. However, these algorithms have not been rigorously analyzed within a unified scheme. In this paper, we conduct a comparative study of such linear regression-based algorithms using simulated images with different signal-to-noise ratios (SNR) and real images with different land cover types. Based on the experimental results, instructive guidance for their practical application is provided.
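
    A minimal sketch of the spectral-neighbor idea, assuming a hypothetical cube of shape (rows, cols, bands): each band is predicted from its adjacent bands by ordinary least squares, and the residual standard deviation is taken as that band's noise estimate. The specific regressors, neighborhoods, and blocking strategies differ among the algorithms compared in the paper.

```python
import numpy as np

def estimate_band_noise(cube):
    """cube: hyperspectral image of shape (rows, cols, bands).
    Returns a per-band noise standard deviation estimate."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)              # pixels as rows
    sigma = np.zeros(bands)
    for b in range(bands):
        # predict band b from its spectral neighbors (edge bands use one)
        nbrs = [i for i in (b - 1, b + 1) if 0 <= i < bands]
        A = np.column_stack([X[:, nbrs], np.ones(X.shape[0])])
        coef, *_ = np.linalg.lstsq(A, X[:, b], rcond=None)
        residual = X[:, b] - A @ coef
        sigma[b] = residual.std()            # prediction error taken as noise
    return sigma
```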

  • Hyperspectral Imagery Restoration Using Nonlocal Spectral-Spatial Structured Sparse Representation With Noise Estimation

    Page(s): 499 - 515

    Noise reduction is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, we develop a sparse representation based noise reduction method for hyperspectral imagery, which relies on the assumption that the non-noise component of an observed signal can be sparsely decomposed over a redundant dictionary while the noise component cannot. The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of hyperspectral imagery into sparse representation. Non-locality refers to the self-similarity of an image, by which a whole image can be partitioned into groups of similar patches. The similar patches in each group are sparsely represented with a shared subset of atoms from a dictionary, making the true signal and the noise easier to separate. Sparse representation with spectral-spatial structure can exploit the joint spectral and spatial correlations of hyperspectral imagery by using 3-D blocks instead of 2-D patches for sparse coding, which also makes the true signal and the noise more distinguishable. Moreover, hyperspectral imagery contains both signal-independent and signal-dependent noise, so a mixed Poisson and Gaussian noise model is used. In order to make the sparse representation insensitive to the varying noise distributions in different blocks, a variance-stabilizing transformation (VST) is used to make their variances comparable. The advantages of the proposed method are validated on both synthetic and real hyperspectral remote sensing data sets.
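
    The variance-stabilizing step can be illustrated with the generalized Anscombe transform, a standard VST for mixed Poisson-Gaussian noise; the paper may use a different stabilizer, and the parameters here (Poisson gain alpha, Gaussian mean mu and standard deviation sigma) are assumed known.

```python
import numpy as np

def generalized_anscombe(z, alpha=1.0, sigma=0.0, mu=0.0):
    """Approximately stabilize the variance of z = alpha*Poisson + Gaussian,
    so that the transformed noise variance is roughly constant."""
    arg = alpha * z + (3.0 / 8.0) * alpha**2 + sigma**2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```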

  • Fast Implementation of Maximum Simplex Volume-Based Endmember Extraction in Original Hyperspectral Data Space

    Page(s): 516 - 521

    Endmember extraction (EE) is a prerequisite for spectral analysis of hyperspectral imagery. Among EE algorithms, maximum simplex volume-based ones, such as the simplex growing algorithm (SGA) and the N-FINDR algorithm, have been widely used because they are fully automated and effective. However, these algorithms require dimension reduction of the original data and involve a very large number of volume calculations, which makes them slow and limits their applications. In this paper, a simple distance measure is presented, and fast SGA and fast N-FINDR algorithms are constructed on top of it; they require no dimension reduction and use the distance measure instead of volume evaluation to speed up the computation. The complexity of the proposed methods is compared with that of the original algorithms by theoretical analysis. Experiments show that the two improved EE algorithms run much faster than the two original maximum simplex volume-based EE algorithms.
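
    A minimal sketch of the distance-in-place-of-volume idea: because the volume added by a new vertex is proportional to its distance from the affine hull of the current endmembers, a simplex-growing search can simply pick, at each step, the pixel farthest from that affine hull, with no dimensionality reduction and no determinant evaluation. This illustrates the principle rather than the paper's exact algorithms; the initialization shown is an arbitrary assumption.

```python
import numpy as np

def distance_to_affine_hull(X, E):
    """Distance of each column of X (bands x pixels) to the affine hull
    of the columns of E (bands x k current endmembers)."""
    base = E[:, :1]
    if E.shape[1] == 1:
        return np.linalg.norm(X - base, axis=0)
    B = E[:, 1:] - base                          # spanning directions
    coef, *_ = np.linalg.lstsq(B, X - base, rcond=None)
    return np.linalg.norm((X - base) - B @ coef, axis=0)

def fast_simplex_growing(X, n_endmembers):
    """X: bands x pixels. Greedy endmember extraction via distances."""
    idx = [int(np.argmax(np.linalg.norm(X, axis=0)))]  # simple initialization
    while len(idx) < n_endmembers:
        d = distance_to_affine_hull(X, X[:, idx])
        idx.append(int(np.argmax(d)))
    return idx
```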

  • Improvements in the Ant Colony Optimization Algorithm for Endmember Extraction From Hyperspectral Images

    Page(s): 522 - 530

    Endmember extraction is a vital step in the spectral unmixing of hyperspectral images. The Ant Colony Optimization (ACO) algorithm has recently been applied to endmember extraction from hyperspectral data. However, this algorithm may converge to a local optimum for some hyperspectral images when no prior information is available, and its computational performance is limited. Therefore, in this paper, we propose several methods to improve the ACO algorithm for endmember extraction (ACOEE). First, the heuristic information is optimized to improve accuracy: in the improved ACOEE, only the pheromones are used as heuristic information when no prior information about the hyperspectral data is available. Then, to enhance computational performance, an elitist strategy is proposed to reduce the number of iterations without reducing accuracy, and a parallel implementation of ACOEE on graphics processing units (GPUs) is used to shorten the computational time per iteration. Experiments on real hyperspectral data demonstrate that both the endmember extraction accuracy and the computational performance of ACOEE benefit from these methods.

  • Kernel-Based Weighted Abundance Constrained Linear Spectral Mixture Analysis for Remotely Sensed Images

    Page(s): 531 - 553

    Linear spectral mixture analysis (LSMA) is a theory for spectral unmixing for which three major techniques, least squares orthogonal subspace projection (LSOSP), non-negativity constrained least squares (NCLS), and fully constrained least squares (FCLS), have been developed. These three techniques were subsequently extended to Fisher's LSMA (FLSMA), weighted abundance constrained LSMA (WAC-LSMA), and kernel-based LSMA (K-LSMA). This paper combines the K-LSMA and WAC-LSMA approaches to derive the most general version of LSMA, kernel-based WAC-LSMA (KWAC-LSMA), which includes all of the above-mentioned LSMA techniques as special cases. In particular, a new kernelization of FLSMA, referred to as kernel FLSMA (K-FLSMA), can also be developed to enhance FLSMA performance by replacing the weighting matrix used in WAC-LSMA with a matrix specified by the within-class scatter matrix. The utility of KWAC-LSMA is further demonstrated by multispectral and hyperspectral experiments for performance analysis.
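
    For reference, the FCLS building block mentioned above is often implemented with a common trick: append a heavily weighted sum-to-one row to the endmember matrix and solve a non-negative least squares problem. A hedged sketch is below; the weighted and kernel variants discussed in the paper build on this.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, x, delta=1e-3):
    """Fully constrained linear unmixing for one pixel.
    E: bands x p endmember matrix, x: band vector.
    Non-negativity via NNLS; sum-to-one enforced (softly) by an extra
    row weighted by 1/delta (smaller delta = stricter constraint)."""
    p = E.shape[1]
    E_aug = np.vstack([E, np.ones((1, p)) / delta])
    x_aug = np.concatenate([x, [1.0 / delta]])
    abundances, _ = nnls(E_aug, x_aug)
    return abundances
```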

  • An Endmember Dissimilarity Constrained Non-Negative Matrix Factorization Method for Hyperspectral Unmixing

    Page(s): 554 - 569

    Non-negative matrix factorization (NMF) has been introduced into the field of hyperspectral unmixing over the last ten years. To alleviate the non-convexity of NMF, different constraints have been imposed on it. In this paper, a new constraint, termed the endmember dissimilarity constraint (EDC), is proposed. The proposed constraint measures the difference between signatures while also constraining them to be smooth. The result is a set of smooth spectra within the data space that are as different from one another as possible, which can be regarded as endmembers. The experimental performance of our method and of other state-of-the-art constrained NMF algorithms was obtained and analyzed, showing that the proposed method outperforms the other NMF unmixing methods.
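
    As background, a minimal unconstrained NMF unmixing loop with the classical multiplicative updates is sketched below; the endmember dissimilarity constraint proposed in the paper would enter as an additional penalty term in the update of the endmember matrix, which is not reproduced here.

```python
import numpy as np

def nmf_unmix(V, p, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF: V (bands x pixels) ~ W (bands x p) @ H (p x pixels),
    W, H >= 0, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    bands, pixels = V.shape
    W = rng.random((bands, p)) + eps
    H = rng.random((p, pixels)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # The endmember dissimilarity/smoothness penalty on W would add an
        # extra term to the numerator/denominator of the W update here.
    return W, H
```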

  • Hyperspectral Intrinsic Dimensionality Estimation With Nearest-Neighbor Distance Ratios

    Page(s): 570 - 579

    The first task in most hyperspectral unmixing chains is estimating the number of endmembers. Several techniques for this problem have been proposed, but the class of fractal techniques for intrinsic dimensionality estimation is often overlooked. In this paper, we study an intrinsic dimensionality estimation technique based on the known scaling behavior of nearest-neighbor distance ratios, and its performance on the spectral unmixing problem. We present the relation between intrinsic manifold dimensionality and the number of endmembers in a mixing model, and investigate the effects of denoising and of the data statistics on the algorithm. The algorithm is compared with several alternative methods, such as HySime, virtual dimensionality, and several fractal-dimension-based techniques, on both artificial and real data sets. Robust behavior in the presence of noise, and independence of the spectral dimensionality, are demonstrated. Furthermore, due to its construction, the algorithm can also be used for non-linear mixing models.
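
    One simple member of this family of estimators, based on the ratio of each point's second to first nearest-neighbor distance, is sketched below for orientation; the paper's estimator and its handling of noise may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_ratio_intrinsic_dim(X):
    """X: (n_samples, n_features) pixels as points in spectral space.
    Estimate intrinsic dimension from second-to-first nearest-neighbor
    distance ratios (maximum-likelihood form of the ratio estimator)."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dist[:, 1], dist[:, 2]          # column 0 is the point itself
    mu = r2 / np.maximum(r1, 1e-12)
    return len(X) / np.sum(np.log(mu))
```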

  • Sub-Pixel Mapping Based on a MAP Model With Multiple Shifted Hyperspectral Imagery

    Page(s): 580 - 593

    Sub-pixel mapping is a technique used to obtain the spatial distribution of different classes at the sub-pixel scale by transforming fraction images into a classification map with a higher resolution. Traditional sub-pixel mapping algorithms utilize only a low-resolution image, whose information is not sufficient to obtain a high-resolution land-cover map. The accuracy of sub-pixel mapping can be improved by incorporating auxiliary datasets, such as multiple shifted images of the same area, to provide more sub-pixel land-cover information. In this paper, a sub-pixel mapping framework based on a maximum a posteriori (MAP) model is proposed to utilize the complementary information of multiple shifted images. In the proposed framework, sub-pixel mapping is cast as a regularization problem, and the MAP model is used to make it well-posed by adding prior information, such as a Laplacian model. The proposed algorithm was compared with a traditional sub-pixel mapping algorithm based on a single image, and with another sub-pixel mapping method based on multiple shifted images, using both synthetic and real hyperspectral images. Experimental results demonstrate that the proposed approach outperforms the traditional sub-pixel mapping algorithms and hence provides an effective option for improving the accuracy of sub-pixel mapping for hyperspectral imagery.
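
    The MAP regularization idea can be illustrated on a toy 1-D problem: a fine-scale abundance profile is recovered from a block-averaged (coarse) observation by minimizing a data-fidelity term plus a Laplacian smoothness prior. Multiple shifted observations would simply stack extra rows into D and y; all sizes and weights below are arbitrary assumptions.

```python
import numpy as np

n_fine, scale, lam = 32, 2, 0.1
n_coarse = n_fine // scale

# block-averaging (coarse-pixel) operator D and a discrete Laplacian prior L
D = np.kron(np.eye(n_coarse), np.full((1, scale), 1.0 / scale))
L = -2 * np.eye(n_fine) + np.eye(n_fine, k=1) + np.eye(n_fine, k=-1)

rng = np.random.default_rng(0)
x_true = (np.sin(np.linspace(0, 3 * np.pi, n_fine)) > 0).astype(float)
y = D @ x_true + 0.01 * rng.standard_normal(n_coarse)   # coarse fractions

# MAP estimate under a Gaussian likelihood and Laplacian smoothness prior:
#   x_hat = argmin_x ||y - D x||^2 + lam * ||L x||^2
x_hat = np.linalg.solve(D.T @ D + lam * L.T @ L, D.T @ y)
```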

  • Spectral Derivative Features for Classification of Hyperspectral Remote Sensing Images: Experimental Evaluation

    Page(s): 594 - 601

    Derivatives of spectral reflectance signatures can capture salient features of different land-cover classes. Such information has been used, along with spectral reflectance, for the supervised classification of remote sensing data. In this paper, we study how supervised classification of hyperspectral remote sensing data can benefit from the use of derivatives of spectral reflectance without the aid of other techniques, such as dimensionality reduction or data fusion. An empirical conclusion is drawn from a large number of experimental evaluations carried out on three real hyperspectral remote sensing data sets. The experimental results show that when the training data set is small or the data quality is poor, adding first-order derivatives to the original spectral features can significantly improve classification accuracy when using classifiers that can avoid the “curse of dimensionality,” such as the SVM algorithm.
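
    A minimal sketch of the feature construction evaluated above: first-order spectral derivatives (finite differences along the band axis) are appended to the original reflectance features and fed to an SVM. The classifier settings are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def add_derivative_features(X):
    """X: (n_samples, n_bands) reflectance spectra.
    Append first-order spectral derivatives to the original features."""
    dX = np.diff(X, axis=1)                  # finite-difference derivative
    return np.hstack([X, dX])

# usage, assuming X_train and y_train exist:
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
# clf.fit(add_derivative_features(X_train), y_train)
```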

  • Using OWA Fusion Operators for the Classification of Hyperspectral Images

    Page(s): 602 - 614

    In this paper, we propose a novel ensemble-based classification system for improving the classification accuracy of hyperspectral images. To generate the ensemble, we run the mean-shift (MS) algorithm several times on different bands randomly selected from the hyperspectral cube and with distinct kernel width parameters. The resulting set of MS maps is then successively labeled via a pairwise labeling procedure with respect to a spectral-based classification map generated by the support vector machine (SVM) classifier. To this end, for each region in the MS maps, the weighted-majority-voting (WMV) rule is applied to the corresponding pixels in the SVM map. The output of this step is a set of spectral-spatial classification maps termed SVM-MS maps. To generate the final classification result, we propose to aggregate this set of SVM-MS maps using the ordered weighted averaging (OWA) operator, with the associated weights determined using the idea of a stress function. The performance of the proposed classification system is assessed on three hyperspectral datasets acquired by the Reflective Optics System Imaging Spectrometer (ROSIS-03), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
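
    The final aggregation step uses the ordered weighted averaging operator, whose core is small enough to sketch directly: the inputs (here, per-class scores coming from the different SVM-MS maps) are sorted in decreasing order and combined with a weight vector. The stress-function weighting used in the paper is not reproduced here.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the inputs in decreasing order,
    then take their weighted sum. weights should be >= 0 and sum to 1."""
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(np.dot(ordered, weights))

# e.g. owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2]) weights the largest score most
```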

  • Statistical Classification for Assessing PRISMA Hyperspectral Potential for Agricultural Land Use

    Page(s): 615 - 625

    The upcoming launch of the next generation of hyperspectral satellites (PRISMA, EnMAP, HyspIRI, etc.) will meet the agricultural community's increasing demand for available and accessible hyperspectral information on agricultural land use. To this purpose, algorithms for the classification of remotely sensed images are considered here for the agricultural monitoring of cultivated areas, exploiting remotely sensed high-spectral-resolution images. Classification is accomplished by procedures based on discriminant analysis tools that are well suited to hyperspectral data, circumventing what in statistics is called “the curse of dimensionality”. As a byproduct of classification, a full assessment of the sensor's spectral bands is obtained, ranking them in order to understand their role in segmentation and classification. The methodology has been validated on two independent image datasets gathered by the MIVIS (Multispectral Infrared and Visible Imaging Spectrometer) sensor, for which ground validations were available. A comparison with the popular multiclass SVM (Support Vector Machines) classifier is also presented. Results show that good classification (a minimum global success rate of 95% across all experiments) is achieved by using the 10 spectral bands selected as the most discriminant by the proposed procedure; moreover, nonparametric techniques generally outperform parametric ones. The present study confirms that the new generation of hyperspectral satellite data, such as PRISMA, can mature into an end-user application for the agricultural land use of cultivated areas.
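
    As a simple stand-in for the band ranking described above (not the paper's procedure), the sketch below scores each spectral band with a univariate Fisher discriminant ratio, i.e., between-class variance over within-class variance, and ranks the bands accordingly.

```python
import numpy as np

def fisher_band_ranking(X, y):
    """X: (n_samples, n_bands) spectra, y: class labels.
    Rank bands by the univariate Fisher ratio
    (between-class variance / within-class variance)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    ratio = between / np.maximum(within, 1e-12)
    return np.argsort(ratio)[::-1]           # most discriminant bands first
```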

  • A Kernel-Based Target-Constrained Interference-Minimized Filter for Hyperspectral Sub-Pixel Target Detection

    Page(s): 626 - 637

    The target-constrained interference-minimized filter (TCIMF) has been successfully applied to various hyperspectral target detection applications. This paper presents a nonlinear version of TCIMF, called kernel-based TCIMF (KTCIMF), which employs the kernel method to address nonlinear endmember mixing in hyperspectral images (HSI). Input data are implicitly mapped into a high-dimensional feature space, where target signals are assumed to be more separable from background signals. Conventional TCIMF performs well in suppressing undesired signatures whose spectra are similar to those of the targets, thereby enhancing performance with fewer false alarms. KTCIMF not only takes the nonlinear endmember mixture into consideration but also fully exploits other spectrally similar interference signatures. In this way, it is effective in suppressing both the background and those undesired signatures that may cause false alarms in traditional methods. Experimental results with both simulated and real hyperspectral image data confirm KTCIMF's performance on intimately mixed data. Compared with conventional kernel detectors, KTCIMF shows improved ROC curves and better separability between targets and backgrounds.
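
    For reference, the linear TCIMF filter that KTCIMF kernelizes has a closed form: with desired signatures D, undesired signatures U, and the sample correlation matrix R, the filter constrains its response to 1 on D and 0 on U while minimizing output energy. A hedged sketch of the linear version follows; the kernel version replaces inner products with kernel evaluations.

```python
import numpy as np

def tcimf_filter(X, D, U):
    """Linear TCIMF.
    X: bands x pixels data, D: bands x p desired, U: bands x q undesired.
    Returns w = R^-1 [D U] ([D U]^T R^-1 [D U])^-1 c with c = [1..1, 0..0]."""
    bands = X.shape[0]
    R = X @ X.T / X.shape[1] + 1e-6 * np.eye(bands)  # small ridge for stability
    M = np.hstack([D, U])
    c = np.concatenate([np.ones(D.shape[1]), np.zeros(U.shape[1])])
    RiM = np.linalg.solve(R, M)
    return RiM @ np.linalg.solve(M.T @ RiM, c)       # detector output: w.T @ x
```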

  • A New Target Detector for Hyperspectral Data Using Cointegration Theory

    Page(s): 638 - 643

    This paper introduces cointegration theory to address the problem of adaptive target detection in hyperspectral imagery. Cointegration theory aims at mining a long-term equilibrium relationship, which refers to the condition that an appropriate linear combination of several non-stationary series is stationary as long as they have similar or related drift. Hyperspectral response sequences, which are highly non-stationary, exhibit similar patterns for the same material. Treating each hyperspectral curve as a time series, it is matched against the reference spectrum via the Johansen cointegration test, and the test statistic is then used for target detection. Experimental results indicate that the proposed method is effective and has a strong capacity to distinguish objects of interest from their background.
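
    To make the idea concrete, the sketch below scores each pixel spectrum against a reference spectrum with a cointegration test. For simplicity it uses the two-series Engle-Granger test from statsmodels rather than the Johansen test used in the paper; a more negative test statistic indicates a stronger equilibrium relationship, i.e., a likelier target.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

def cointegration_scores(cube, reference):
    """cube: (rows, cols, bands); reference: (bands,) target spectrum.
    Treat each spectrum as a 'time series' over bands and test for
    cointegration with the reference spectrum."""
    rows, cols, bands = cube.shape
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            t_stat, p_value, _ = coint(cube[i, j, :], reference)
            scores[i, j] = -t_stat           # larger score = likelier target
    return scores
```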

  • Multiple-Window Anomaly Detection for Hyperspectral Imagery

    Page(s): 644 - 658

    Due to advances in hyperspectral imaging sensors, many unknown and subtle targets that cannot be resolved by multispectral imagery can now be uncovered by hyperspectral imagery. These targets generally cannot be identified by visual inspection or prior knowledge, yet they provide crucial information for data exploitation. One such type of target is anomalies, which have recently received considerable interest in hyperspectral image analysis. Many anomaly detectors have been developed, and most of them are based on the widely used Reed-Yu algorithm, known as the RX detector (RXD). However, a key issue in making RXD-like anomaly detectors effective is how to utilize the spectral information provided by data samples, e.g., the sample covariance matrix used by RXD. Recently, a dual-window-based eigen separation transform (DWEST) was developed to address this issue. This paper extends the concept of DWEST to develop a new approach, called multiple-window anomaly detection (MWAD), which uses multiple windows to perform anomaly detection adaptively. As a result, MWAD is able to detect anomalies of various sizes, so that local spectral variations can be characterized and extracted by different window sizes. By virtue of this newly developed MWAD, many existing RXD-like anomaly detectors, including DWEST, can be derived as special cases of MWAD.
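
    For orientation, the global RX detector that these methods build on assigns each pixel the Mahalanobis distance to the background mean and covariance; a minimal sketch is below. Dual- and multiple-window variants such as DWEST and MWAD replace the global statistics with statistics computed in nested local windows around each pixel.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector.
    cube: (rows, cols, bands). Returns a (rows, cols) anomaly score map:
    the Mahalanobis distance of each pixel to the global statistics."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for stability
    centered = X - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)
```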

  • Detection of Anomalies Produced by Buried Archaeological Structures Using Nonlinear Principal Component Analysis Applied to Airborne Hyperspectral Image

    Page(s): 659 - 669

    In this paper, airborne hyperspectral data have been exploited by means of Nonlinear Principal Component Analysis (NLPCA) to test their effectiveness as a tool for archaeological prospection, evaluating their potential for detecting anomalies related to buried archaeological structures. In the literature, NLPCA has been used to decorrelate the information contained in a hyperspectral image; the resulting nonlinear principal components (NLPCs) carry information related to different land cover types and biophysical properties, such as vegetation coverage or soil wetness. From this point of view, NLPCA applied to airborne hyperspectral data was exploited to test its capability to highlight anomalies related to buried archaeological structures. Each component obtained from the NLPCA was interpreted in order to assess any tonal anomalies. Since every analyzed component exhibited anomalies differing in size and intensity, the Separability Index (SI) was applied to measure the tonal difference of the anomalies with respect to the surrounding area and to determine the anomaly detection potential of each component. Airborne Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) images, collected over the archaeological Park of Selinunte, were analyzed for this purpose. In this area, the presence of remains not yet excavated had been reported by archaeologists, and a previous analysis of this image, carried out to highlight the buried structures, appears to match the archaeological prospection. The results demonstrate that the NLPCA technique, compared to previous approaches, emphasizes the ability of airborne hyperspectral images to identify buried structures. In particular, the adopted processing chain (i.e., the NLPCA and SI techniques, data resampling criteria, and anomaly evaluation criteria), applied to the MIVIS airborne hyperspectral data collected over the Selinunte Archaeological Park, highlighted the ability of the NLPCA technique to emphasize anomalies related to the presence of buried structures.

  • Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression

    Page(s): 670 - 681

    Graphics Processing Units (GPUs) are becoming a widespread tool for general-purpose scientific computing, and are attracting interest for future onboard satellite image processing payloads due to their ability to perform massively parallel computations. This paper describes the GPU implementation of an algorithm for onboard lossy hyperspectral image compression, and proposes an architecture that accelerates the compression task by parallelizing it on the GPU. The selected algorithm is amenable to parallel computation owing to its block-based operation, and has been optimized here to facilitate GPU implementation, incurring a negligible overhead with respect to the original single-threaded version. In particular, a parallelization strategy has been designed for both the compressor and the corresponding decompressor, which are implemented on a GPU using Nvidia's CUDA parallel architecture. Experimental results on several hyperspectral images with different spatial and spectral dimensions are presented, showing significant speed-ups with respect to a single-threaded CPU implementation. These results highlight the significant benefits of GPUs for onboard image processing, and particularly image compression, demonstrating the potential of GPUs as a future hardware platform for very high data rate instruments.

  • Critical Nitrogen Curve and Remote Detection of Nitrogen Nutrition Index for Corn in the Northwestern Plain of Shandong Province, China

    Page(s): 682 - 689

    The nitrogen nutrition index (NNI) is calculated from the measured N concentration and the critical nitrogen (N) curve. It can be used to determine the N required by a crop and is helpful for optimizing N application in the field. Our objectives were to validate the existing corn critical N curve for the northwestern plain of Shandong Province and to design a more accurate remote detection method for the NNI. For this purpose, field measurements were conducted weekly to acquire biomass and N concentrations during the 2011 corn growing season, and nearly 60 corn canopy spectra were collected during field campaigns. First, limiting and non-limiting N points were selected from the sampled data and used to validate the existing critical N curve. Second, an NNI estimation model based on Principal Component Analysis and a Back Propagation Artificial Neural Network (PCA-BP-ANN) was established. The collected canopy spectra and the corresponding NNI were used to compare the performance of this method with that of other NNI estimation methods. The results showed that the N curve proposed in the literature is suitable for the study region. Among the three remote detection methods, PCA-BP-ANN provided the best results, with the highest R value and the lowest root mean square error.
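
    A hedged sketch of the PCA-BP-ANN idea using scikit-learn: canopy spectra are compressed with PCA and a small back-propagation network regresses the NNI. The number of components, network size, and preprocessing are illustrative assumptions, not the paper's configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# spectra: (n_samples, n_channels) canopy reflectance; nni: measured NNI values
pca_bp_ann = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                     # illustrative choice
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
# pca_bp_ann.fit(spectra_train, nni_train)
# nni_pred = pca_bp_ann.predict(spectra_test)
```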

  • Detecting Aphid Density of Winter Wheat Leaf Using Hyperspectral Measurements

    Page(s): 690 - 698

    The wheat aphid, Sitobion avenae F., is one of the most destructive pests in northwest China, emerging almost every year and impacting the production of winter wheat. Hyperspectral remote sensing has been demonstrated to be superior to traditional methods in detecting diseases and pests. In this study, spectral features (SFs) were examined by four methods to detect the aphid density of wheat leaves, and a model was established to estimate aphid density using partial least squares regression (PLSR). A total of 60 wheat leaves with different aphid densities were selected. The aphid density of the leaves was first visually estimated, and the reflectance of the leaves was then measured in the spectral range of 350-2500 nm using a spectroradiometer coupled with a leaf clip. A total of 48 spectral features were obtained and examined via correlation analysis, an independent t-test with the spectral derivative method, the continuum removal method, continuous wavelet analysis (CWA), and commonly used vegetation indices for stress detection. Based on variable importance in projection (VIP), five spectral features (VIP ≥ 1) were selected from 17 spectral features, owing to their strong correlation with aphid density (R2 ≥ 0.5), to establish the PLSR model for estimating aphid density. The results showed that the model has great potential for detecting aphid density, with a relative root mean square error (RMSE) of 15 and a coefficient of determination (R2) of 0.77.
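
    A minimal sketch of the PLSR-plus-VIP step: a partial least squares model is fit on the spectral features and the standard VIP formula is used to keep features with VIP ≥ 1. The feature matrix and aphid-density vector are assumed inputs; the number of components is an illustrative choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsr_vip(X, y, n_components=3):
    """Fit PLSR and return the model plus VIP scores (one per feature)."""
    pls = PLSRegression(n_components=n_components).fit(X, y)
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    n_features = W.shape[0]
    ssy = np.diag(T.T @ T @ Q.T @ Q)         # variance explained per component
    w_norm = W / np.linalg.norm(W, axis=0)
    vip = np.sqrt(n_features * (w_norm**2 @ ssy) / ssy.sum())
    return pls, vip

# pls, vip = plsr_vip(features, aphid_density)
# selected = np.where(vip >= 1.0)[0]         # keep features with VIP >= 1
```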


Aims & Scope

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (J-STARS) addresses current issues and techniques in applied remote and in situ sensing, their integration, and applied modeling and information creation for understanding the Earth.


Meet Our Editors

Editor-in-Chief
Dr. Jocelyn Chanussot
Grenoble Institute of Technology