# IEEE Transactions on Geoscience and Remote Sensing

Includes the top 50 most frequently accessed documents for this publication, according to monthly usage statistics.

• ### Deep Recurrent Neural Networks for Hyperspectral Image Classification

Publication Year: 2017, Page(s):3639 - 3655
Cited by:  Papers (36)  |  Patents (1)

In recent years, vector-based machine learning algorithms, such as random forests, support vector machines, and 1-D convolutional neural networks, have shown promising results in hyperspectral image classification. Such methodologies, nevertheless, can lead to information loss in representing hyperspectral pixels, which intrinsically have a sequence-based data structure. A recurrent neural network... View full abstract»
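The sequence view of a hyperspectral pixel that this abstract alludes to can be sketched in a few lines of numpy: each spectral band becomes one time step fed to a vanilla recurrent cell. This is an illustrative toy with random placeholder weights, not the paper's architecture.

```python
import numpy as np

def rnn_classify_pixel(pixel, Wxh, Whh, Why, bh, by):
    """Run a vanilla RNN over the spectral bands of one pixel.

    Each band value is one time step; the final hidden state is mapped
    to class scores. Illustrative only -- the paper's recurrent model
    and its trained weights are more elaborate.
    """
    h = np.zeros(Whh.shape[0])
    for band in pixel:                      # one scalar reflectance per step
        h = np.tanh(Wxh * band + Whh @ h + bh)
    return Why @ h + by                     # unnormalized class scores

rng = np.random.default_rng(0)
n_bands, hidden, n_classes = 10, 8, 3
scores = rnn_classify_pixel(
    rng.random(n_bands),                    # toy pixel spectrum
    rng.normal(size=hidden),                # Wxh: scalar input -> hidden
    rng.normal(size=(hidden, hidden)) * 0.1,
    rng.normal(size=(n_classes, hidden)),
    np.zeros(hidden),
    np.zeros(n_classes),
)
print(scores.shape)  # (3,)
```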

• ### When Deep Learning Meets Metric Learning: Remote Sensing Image Scene Classification via Learning Discriminative CNNs

Publication Year: 2018, Page(s):2811 - 2821
Cited by:  Papers (9)

Remote sensing image scene classification is an active and challenging task driven by many applications. More recently, with the advances of deep learning models especially convolutional neural networks (CNNs), the performance of remote sensing image scene classification has been significantly improved due to the powerful feature representations learnt through CNNs. Although great success has been... View full abstract»

• ### Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks

Publication Year: 2016, Page(s):6232 - 6251
Cited by:  Papers (176)

Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image ... View full abstract»

• ### Generative Adversarial Networks for Hyperspectral Image Classification

Publication Year: 2018, Page(s):5046 - 5063
Cited by:  Papers (2)

A generative adversarial network (GAN) usually contains a generative network and a discriminative network in competition with each other. The GAN has shown its capability in a variety of applications. In this paper, the usefulness and effectiveness of GAN for classification of hyperspectral images (HSIs) are explored for the first time. In the proposed GAN, a convolutional neural network (CNN) is ... View full abstract»
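The generator/discriminator competition described here follows the standard GAN objective, which can be written out directly. This is a generic sketch of the two minimax losses, not the paper's CNN-based model.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Binary cross-entropy objectives for the two competing networks.

    d_real / d_fake are discriminator outputs in (0, 1) for real samples
    and generated ones. Illustrates the standard GAN game only; the
    paper's hyperspectral-specific architecture is not reproduced.
    """
    eps = 1e-12                               # numerical guard for log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))   # generator wants d_fake -> 1
    return d_loss, g_loss

# a discriminator that is mostly right incurs a small but positive loss
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
```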

• ### Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification

Publication Year: 2017, Page(s):645 - 657
Cited by:  Papers (94)

We propose an end-to-end framework for the dense, pixelwise classification of satellite imagery with convolutional neural networks (CNNs). In our framework, CNNs are directly trained to produce classification maps out of the input images. We first devise a fully convolutional architecture and demonstrate its relevance to the dense classification problem. We then address the issue of imperfect trai... View full abstract»

• ### Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks

Publication Year: 2017, Page(s):2486 - 2498
Cited by:  Papers (25)

In this paper, we focus on tackling the problem of automatic accurate localization of detected objects in high-resolution remote sensing images. The two major problems for object localization in remote sensing images caused by the complex context information such images contain are achieving generalizability of the features used to describe objects and achieving accurate object locations. To addre... View full abstract»

• ### The Importance of Physical Quantities for the Analysis of Multitemporal and Multiangular Optical Very High Spatial Resolution Images

Publication Year: 2014, Page(s):6241 - 6256
Cited by:  Papers (29)

The analysis of multitemporal very high spatial resolution imagery is too often limited to the sole use of pixel digital numbers which do not accurately describe the observed targets between the various collections due to the effects of changing illumination, viewing geometries, and atmospheric conditions. This paper demonstrates both qualitatively and quantitatively that not only physically based... View full abstract»

• ### Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images

Publication Year: 2016, Page(s):7405 - 7415
Cited by:  Papers (168)

Object detection in very high resolution optical remote sensing images is a fundamental problem in remote sensing image analysis. Due to the advances of powerful feature representations, machine-learning-based object detection is receiving increasing attention. Although numerous feature representations exist, most of them are handcrafted or shallow-learning-based features. As the object det... View full abstract»

• ### Unsupervised Spectral–Spatial Feature Learning via Deep Residual Conv–Deconv Network for Hyperspectral Image Classification

Publication Year: 2018, Page(s):391 - 406
Cited by:  Papers (5)

Supervised approaches classify input data using a set of representative samples for each class, known as training samples. The collection of such samples is expensive and time demanding. Hence, unsupervised feature learning, which has a quick access to arbitrary amounts of unlabeled data, is conceptually of high interest. In this paper, we propose a novel network architecture, fully Conv-Deconv ne... View full abstract»

• ### Unsupervised Deep Feature Extraction for Remote Sensing Image Classification

Publication Year: 2016, Page(s):1349 - 1362
Cited by:  Papers (129)

This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application to multi- and hyperspectral imagery of supervised (shallow or deep) convolutional networks is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsuperv... View full abstract»

• ### Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach

Publication Year: 2016, Page(s):4544 - 4554
Cited by:  Papers (127)

In this paper, we propose a spectral-spatial feature based classification (SSFC) framework that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively. In this framework, a balanced local discriminant embedding algorithm is proposed for spectral feature extraction from high-dimensional hyperspectral data sets. In the meantime, convol... View full abstract»

• ### Hyperspectral Image Classification With Deep Learning Models

Publication Year: 2018, Page(s):5408 - 5423

Deep learning has achieved great successes in conventional computer vision tasks. In this paper, we exploit deep learning techniques to address the hyperspectral image classification problem. In contrast to conventional computer vision tasks that only examine the spatial context, our proposed method can exploit both spatial context and spectral correlation to enhance hyperspectral image classifica... View full abstract»

• ### Interferometric Processing of Sentinel-1 TOPS Data

Publication Year: 2016, Page(s):2220 - 2234
Cited by:  Papers (54)

Sentinel-1 (S-1) has an unparalleled mapping capacity. In interferometric wide swath (IW) mode, three subswaths imaged in the novel Terrain Observation by Progressive Scans (TOPS) SAR mode result in a total swath width of 250 km. S-1 has become the European workhorse for large area mapping and interferometric monitoring at medium resolution. The interferometric processing of TOPS data however requ... View full abstract»

• ### Target Classification Using the Deep Convolutional Networks for SAR Images

Publication Year: 2016, Page(s):4806 - 4817
Cited by:  Papers (116)

The algorithm of synthetic aperture radar automatic target recognition (SAR-ATR) is generally composed of the extraction of a set of features that transform the raw input into a representation, followed by a trainable classifier. The feature extractor is often hand designed with domain knowledge and can significantly impact the classification accuracy. By automatically learning hierarchies of feat... View full abstract»

• ### Self-Taught Feature Learning for Hyperspectral Image Classification

Publication Year: 2017, Page(s):2693 - 2705
Cited by:  Papers (10)

In this paper, we study self-taught learning for hyperspectral image (HSI) classification. Supervised deep learning methods are currently state of the art for many machine learning problems, but these methods require large quantities of labeled data to be effective. Unfortunately, existing labeled HSI benchmarks are too small to directly train a deep supervised network. Alternatively, we used self... View full abstract»

• ### Predicting Missing Values in Spatio-Temporal Remote Sensing Data

Publication Year: 2018, Page(s):2841 - 2853

Continuous, consistent, and long time-series from remote sensing are essential to monitoring changes on Earth's surface. However, analyzing such data sets is often challenging due to missing values introduced by cloud cover, missing orbits, sensor geometry artifacts, and so on. We propose a new and accurate spatio-temporal prediction method to replace missing values in remote sensing data sets. Th... View full abstract»
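The gap-filling setting this abstract describes can be illustrated with the simplest possible baseline, per-pixel linear interpolation in time. This is a deliberately crude stand-in for the paper's spatio-temporal predictor, shown only to make the problem concrete.

```python
import numpy as np

def fill_missing_temporal(series):
    """Fill NaN gaps in one pixel's time series by linear interpolation.

    A far simpler baseline than the paper's spatio-temporal method;
    it ignores spatial neighbors entirely.
    """
    t = np.arange(len(series))
    ok = ~np.isnan(series)                 # observed (cloud-free) dates
    return np.interp(t, t[ok], series[ok])

s = np.array([1.0, np.nan, 3.0, np.nan, 5.0])
print(fill_missing_temporal(s))  # [1. 2. 3. 4. 5.]
```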

• ### Classification of hyperspectral remote sensing images with support vector machines

Publication Year: 2004, Page(s):1778 - 1790
Cited by:  Papers (1431)  |  Patents (3)

This paper addresses the problem of the classification of hyperspectral remote sensing images by support vector machines (SVMs). First, we propose a theoretical discussion and experimental analysis aimed at understanding and assessing the potentialities of SVM classifiers in hyperdimensional feature spaces. Then, we assess the effectiveness of SVMs with respect to conventional feature-reduction-ba... View full abstract»
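The margin-based classification the abstract assesses can be sketched with a tiny linear SVM trained by subgradient descent on the hinge loss. This is a toy stand-in for the kernel SVMs the paper studies in hyperdimensional feature spaces; the data below are synthetic placeholders.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Tiny linear SVM via subgradient descent on the regularized hinge loss.

    X rows are pixel spectra, y in {-1, +1}. Illustrates the margin idea
    only; the paper evaluates full kernel SVMs.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                         # margin-violating samples
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(0) / len(X)
        grad_b = -y[viol].sum() / len(X)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

# two well-separated synthetic "spectral" clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```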

• ### Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework

Publication Year: 2018, Page(s):847 - 858
Cited by:  Papers (7)

In this paper, we designed an end-to-end spectral-spatial residual network (SSRN) that takes raw 3-D cubes as input data without feature engineering for hyperspectral image classification. In this network, the spectral and spatial residual blocks consecutively learn discriminative features from abundant spectral signatures and spatial contexts in hyperspectral imagery (HSI). The proposed SSRN is a... View full abstract»

• ### Hyperspectral Image Classification With Deep Feature Fusion Network

Publication Year: 2018, Page(s):3173 - 3184
Cited by:  Papers (3)

Recently, deep learning has been introduced to classify hyperspectral images (HSIs) and achieved good performance. In general, deep models adopt a large number of hierarchical layers to extract features. However, excessively increasing network depth will result in some negative effects (e.g., overfitting, gradient vanishing, and accuracy degrading) for conventional convolutional neural networks. I... View full abstract»

• ### Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification

Publication Year: 2017, Page(s):7177 - 7188
Cited by:  Papers (9)

Following the great success of deep convolutional neural networks (CNNs) in computer vision, this paper proposes a complex-valued CNN (CV-CNN) specifically for synthetic aperture radar (SAR) image interpretation. It utilizes both amplitude and phase information of complex SAR imagery. All elements of CNN including input-output layer, convolution layer, activation function, and pooling layer are ex... View full abstract»

• ### 3-D Deep Learning Approach for Remote Sensing Image Classification

Publication Year: 2018, Page(s):4420 - 4434

Recently, a variety of approaches have been enriching the field of remote sensing (RS) image processing and analysis. Unfortunately, existing methods remain limited to the rich spatiospectral content of today's large data sets. It would seem intriguing to resort to deep learning (DL)-based approaches at this stage with regard to their ability to offer accurate semantic interpretation of the data. ... View full abstract»

• ### Efficient Thermal Noise Removal for Sentinel-1 TOPSAR Cross-Polarization Channel

Publication Year: 2018, Page(s):1555 - 1565
Cited by:  Papers (2)

The intensity of a Sentinel-1 Terrain Observation with Progressive Scans synthetic aperture radar image is disturbed by additive thermal noise, particularly in the cross-polarization channel. Although the European Space Agency provides calibrated noise vectors for noise power subtraction, residual noise contributions are significant when considering the relatively narrow backscattering distributio... View full abstract»

• ### Compressed-Domain Ship Detection on Spaceborne Optical Image Using Deep Neural Network and Extreme Learning Machine

Publication Year: 2015, Page(s):1174 - 1185
Cited by:  Papers (116)

Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infr... View full abstract»

• ### Large-Scale Remote Sensing Image Retrieval by Deep Hashing Neural Networks

Publication Year: 2018, Page(s):950 - 965
Cited by:  Papers (5)

As one of the most challenging tasks of remote sensing big data mining, large-scale remote sensing image retrieval has attracted increasing attention from researchers. Existing large-scale remote sensing image retrieval approaches are generally implemented by using hashing learning methods, which take handcrafted features as inputs and map the high-dimensional feature vector to the low-dimensional... View full abstract»

• ### Complex-Image-Based Sparse SAR Imaging and its Equivalence

Publication Year: 2018, Page(s):5006 - 5014

Using sparse signal processing to replace matched filtering (MF) in synthetic aperture radar (SAR) imaging has shown significant potential to improve image quality. Due to the huge computational cost needed, it is difficult to apply conventional observation-matrix-based sparse SAR imaging method for large-scene reconstruction. The azimuth-range decouple method is able to minimize the computational... View full abstract»

• ### Rotation-Insensitive and Context-Augmented Object Detection in Remote Sensing Images

Publication Year: 2018, Page(s):2337 - 2348
Cited by:  Papers (6)

Most existing deep-learning-based methods struggle to deal effectively with the challenges of geospatial object detection, such as rotation variations and appearance ambiguity. To address these problems, this paper proposes a novel deep-learning-based object detection framework including a region proposal network (RPN) and a local-contextual feature fusion network designed for remote... View full abstract»

• ### Convolutional SVM Networks for Object Detection in UAV Imagery

Publication Year: 2018, Page(s):3107 - 3118

Nowadays, unmanned aerial vehicles (UAVs) are viewed as effective acquisition platforms for several civilian applications. They can acquire images with an extremely high level of spatial detail compared to standard remote sensing platforms. However, these images are highly affected by illumination, rotation, and scale changes, which further increases the complexity of analysis compared to those ob... View full abstract»

• ### Learning Spatial–Spectral Features for Hyperspectral Image Classification

Publication Year: 2018, Page(s):5138 - 5147

Combining spatial information with spectral information for classifying hyperspectral images can dramatically improve the performance. This paper proposes a simple but innovative framework to automatically generate spatial–spectral features for hyperspectral image classification. Two unsupervised learning methods, K-means and ... View full abstract»

• ### Hyperspectral Image Classification Using Deep Pixel-Pair Features

Publication Year: 2017, Page(s):844 - 853
Cited by:  Papers (69)

The deep convolutional neural network (CNN) is of great interest recently. It can provide excellent performance in hyperspectral image classification when the number of training samples is sufficiently large. In this paper, a novel pixel-pair method is proposed to significantly increase such a number, ensuring that the advantage of CNN can be actually offered. For a testing pixel, pixel-pairs, con... View full abstract»
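The combinatorial augmentation behind pixel-pair features can be made concrete: a small labeled set is expanded into all pairs, where same-class pairs keep that label and mixed pairs get a distinct "different" label. The helper below is an assumed illustration of the pairing step only; the paper then trains a CNN on the paired inputs.

```python
import numpy as np
from itertools import combinations

def make_pixel_pairs(X, y):
    """Expand labeled pixels into labeled pixel pairs.

    Same-class pairs keep the class label; cross-class pairs are
    labeled -1 ("different"). Sketches the augmentation idea; the
    paper's CNN and voting scheme are not shown.
    """
    pairs, labels = [], []
    for i, j in combinations(range(len(X)), 2):
        pairs.append(np.concatenate([X[i], X[j]]))  # concatenated spectra
        labels.append(y[i] if y[i] == y[j] else -1)
    return np.array(pairs), np.array(labels)

X = np.arange(8, dtype=float).reshape(4, 2)   # 4 pixels, 2 bands
y = np.array([0, 0, 1, 1])
P, L = make_pixel_pairs(X, y)
print(P.shape)  # (6, 4)
```

Four labeled pixels already yield six training pairs, which is the point: the number of pairs grows quadratically in the number of labeled samples.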

• ### Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines

Publication Year: 2018, Page(s):1579 - 1597
Cited by:  Papers (5)

Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in the last four decades from being a sparse research tool into a commodity product available to a broad user community. Specially, in the last 10 years, a large number of new techniques able to take into account the special properties of hyperspectral data have been introduced for hyperspectral data processing, where ... View full abstract»

• ### Deep Supervised and Contractive Neural Network for SAR Image Classification

Publication Year: 2017, Page(s):2442 - 2459
Cited by:  Papers (16)

The classification of a synthetic aperture radar (SAR) image is a significant yet challenging task, due to the presence of speckle noises and the absence of effective feature representation. Inspired by deep learning technology, a novel deep supervised and contractive neural network (DSCNN) for SAR image classification is proposed to overcome these problems. In order to extract spatial features, a... View full abstract»

• ### Supervised Deep Feature Extraction for Hyperspectral Image Classification

Publication Year: 2018, Page(s):1909 - 1921
Cited by:  Papers (4)

Hyperspectral image classification has become a research focus in recent literature. However, well-designed features are still open issues that impact on the performance of classifiers. In this paper, a novel supervised deep feature extraction method based on siamese convolutional neural network (S-CNN) is proposed to improve the performance of hyperspectral image classification. First, a CNN with... View full abstract»

• ### A Deep Neural Network With Spatial Pooling (DNNSP) for 3-D Point Cloud Classification

Publication Year: 2018, Page(s):4594 - 4604

The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. Most works in deep learning have achieved great success on regular input representations, but they are hard to apply directly to point cloud classification due to the irregularity and inhomogeneity of the data. In this paper, a... View full abstract»

• ### Oil Spill Segmentation via Adversarial $f$-Divergence Learning

Publication Year: 2018, Page(s):4973 - 4988
Cited by:  Papers (1)

We develop an automatic oil spill segmentation method in terms of $f$-divergence minimization. We exploit $f$-divergence for measuring the disagreement between the distributions of ground-truth and generated oil spill segmentations. To render tractable optimiz... View full abstract»

• ### Hyperspectral Image Classification Using Dictionary-Based Sparse Representation

Publication Year: 2011, Page(s):3973 - 3985
Cited by:  Papers (467)  |  Patents (1)

A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond... View full abstract»
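The observation the abstract relies on, that a pixel is a sparse combination of a few dictionary atoms and is assigned to the class whose atoms reconstruct it best, can be sketched with a minimal orthogonal matching pursuit. This is a generic sparse-representation-classification sketch with a toy dictionary, not the paper's structured-dictionary algorithm.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k dictionary atoms."""
    residual, idx = x.astype(float).copy(), []
    for _ in range(k):
        scores = np.abs(D.T @ residual)
        scores[idx] = -np.inf                     # never reuse an atom
        idx.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    c = np.zeros(D.shape[1])
    c[idx] = coef
    return c

def src_classify(D, classes, x, k=2):
    """Sparse-representation classification: the class whose atoms
    reconstruct x with the smallest residual wins."""
    c = omp(D, x, k)
    labels = np.unique(classes)
    residuals = [np.linalg.norm(x - D[:, classes == cl] @ c[classes == cl])
                 for cl in labels]
    return int(labels[np.argmin(residuals)])

# toy dictionary: two atoms per class, unit-norm columns
D = np.array([[1.0, 0.8, 0.0, 0.6],
              [0.0, 0.6, 1.0, 0.8]])
classes = np.array([0, 0, 1, 1])
pred = src_classify(D, classes, np.array([1.0, 0.0]))
```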

• ### Multisource Remote Sensing Data Classification Based on Convolutional Neural Network

Publication Year: 2018, Page(s):937 - 949
Cited by:  Papers (5)

As a list of remotely sensed data sources is available, how to efficiently exploit useful information from multisource data for better Earth observation becomes an interesting but challenging problem. In this paper, the classification fusion of hyperspectral imagery (HSI) and data from other multiple sensors, such as light detection and ranging (LiDAR) data, is investigated with the state-of-the-a... View full abstract»

• ### Spatial Group Sparsity Regularized Nonnegative Matrix Factorization for Hyperspectral Unmixing

Publication Year: 2017, Page(s):6287 - 6304
Cited by:  Papers (3)

In recent years, blind source separation (BSS) has received much attention in the hyperspectral unmixing field due to the fact that it allows the simultaneous estimation of both endmembers and fractional abundances. Although great performances can be obtained by the BSS-based unmixing methods, the decomposition results are still unstable and sensitive to noise. Motivated by the first law of geogra... View full abstract»

• ### On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance

Publication Year: 2006, Page(s):2207 - 2218
Cited by:  Papers (449)

The 16-day revisit cycle of Landsat has long limited its use for studying global biophysical processes, which evolve rapidly during the growing season. In cloudy areas of the Earth, the problem is compounded, and researchers are fortunate to get two to three clear images per year. At the same time, the coarse resolution of sensors such as the Advanced Very High Resolution Radiometer and Moderate R... View full abstract»

• ### Fusing Meter-Resolution 4-D InSAR Point Clouds and Optical Images for Semantic Urban Infrastructure Monitoring

Publication Year: 2017, Page(s):14 - 26
Cited by:  Papers (16)

Using synthetic aperture radar (SAR) interferometry to monitor long-term millimeter-level deformation of urban infrastructures, such as individual buildings and bridges, is an emerging and important field in remote sensing. In the state-of-the-art methods, deformation parameters are retrieved and monitored on a pixel basis solely in the SAR image domain. However, the inevitable side-looking imagin... View full abstract»

• ### Large-Area Land Use and Land Cover Classification With Quad, Compact, and Dual Polarization SAR Data by PALSAR-2

Publication Year: 2018, Page(s):5550 - 5557

In this paper, we demonstrated the possibility of performing land use and land cover (LULC) classification over a wide area by an L-band polarimetric synthetic aperture radar (SAR). In previous studies, there has been scant LULC classification by polarimetric SAR data over a wide area. We used satellite-based SAR data with an area of ca. 320 000 km² obtained by the Phased Array type L-ba... View full abstract»

• ### The SMOS Soil Moisture Retrieval Algorithm

Publication Year: 2012, Page(s):1384 - 1403
Cited by:  Papers (376)

The Soil Moisture and Ocean Salinity (SMOS) mission is the European Space Agency's (ESA) second Earth Explorer Opportunity mission, launched in November 2009. It is a joint program between ESA, the Centre National d'Etudes Spatiales (CNES), and the Centro para el Desarrollo Tecnologico Industrial. SMOS carries a single payload, an L-band 2-D interferometric radiometer in the 1400-1427 MHz protected band. This w... View full abstract»

• ### Hyperspectral Image Classification Based on Deep Deconvolution Network With Skip Architecture

Publication Year: 2018, Page(s):4781 - 4791

Convolution neural network (CNN) utilizes alternating convolutional and pooling layers to learn representative spatial information when the training samples are sufficient. However, for pixelwise classification of hyperspectral image, some important information is neglected by CNN, such as the erased information by the pooling operation and the appearance information from lower layers. Moreover, t... View full abstract»

• ### A New Spatial–Spectral Feature Extraction Method for Hyperspectral Images Using Local Covariance Matrix Representation

Publication Year: 2018, Page(s):3534 - 3546
Cited by:  Papers (3)

In this paper, a novel local covariance matrix (CM) representation method is proposed to fully characterize the correlation among different spectral bands and the spatial-contextual information in the scene when conducting feature extraction (FE) from hyperspectral images (HSIs). Specifically, our method first projects the HSI into a subspace, using the maximum noise fraction method. Then, for eac... View full abstract»

• ### Unsupervised Rapid Flood Mapping Using Sentinel-1 GRD SAR Images

Publication Year: 2018, Page(s):3290 - 3299

We present a new methodology for rapid flood mapping exploiting Sentinel-1 synthetic aperture radar data. In particular, we propose the usage of ground range detected (GRD) images, i.e., preprocessed products made available by the European Space Agency, which can be quickly treated for information extraction through simple and poorly demanding algorithms. The proposed framework is based on two pro... View full abstract»

• ### SAR Automatic Target Recognition Based on Multiview Deep Learning Framework

Publication Year: 2018, Page(s):2196 - 2210
Cited by:  Papers (1)

It is a feasible and promising way to utilize deep neural networks to learn and extract valuable features from synthetic aperture radar (SAR) images for SAR automatic target recognition (ATR). However, it is too difficult to effectively train the deep neural networks with limited raw SAR images. In this paper, we propose a new approach to do SAR ATR, in which a multiview deep learning framework wa... View full abstract»

• ### Fully Convolutional Networks for Multisource Building Extraction From an Open Aerial and Satellite Imagery Data Set

Publication Year: 2018, Page(s):1 - 13

The application of the convolutional neural network has shown to greatly improve the accuracy of building extraction from remote sensing imagery. In this paper, we created and made open a high-quality multisource data set for building detection, evaluated the accuracy obtained in most recent studies on the data set, demonstrated the use of our data set, and proposed a Siamese fully convolutional n... View full abstract»

• ### Algorithmic Chain for Lightning Detection and False Event Filtering Based on the MTG Lightning Imager

Publication Year: 2018, Page(s):5115 - 5124

Meteosat Third Generation (MTG) is the next generation of European meteorological geostationary satellites, set to be launched in 2021. Besides ensuring continuity with Meteosat Second Generation imagery mission, the new series will feature new instruments, such as the Lightning Imager (LI), a high-speed optical detector providing near real-time lightning detection capabilities over Europe and Afr... View full abstract»

• ### Human Activity Classification Based on Micro-Doppler Signatures Using a Support Vector Machine

Publication Year: 2009, Page(s):1328 - 1337
Cited by:  Papers (290)

The feasibility of classifying different human activities based on micro-Doppler signatures is investigated. Measured data of 12 human subjects performing seven different activities are collected using a Doppler radar. The seven activities include running, walking, walking while holding a stick, crawling, boxing while moving forward, boxing while standing in place, and sitting still. Six features ... View full abstract»

• ### A Critical Comparison Among Pansharpening Algorithms

Publication Year: 2015, Page(s):2565 - 2586
Cited by:  Papers (221)

Pansharpening aims at fusing a multispectral and a panchromatic image, featuring the result of the processing with the spectral resolution of the former and the spatial resolution of the latter. In the last decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking,... View full abstract»
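One of the classic component-substitution baselines that surveys of this kind compare is the Brovey transform, which rescales each multispectral band so the per-pixel intensity matches the panchromatic image. The sketch below is that generic baseline, not the paper's benchmark code.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey component-substitution pansharpening.

    ms:  (bands, H, W) multispectral image, pre-upsampled to the pan grid
    pan: (H, W) panchromatic image
    Each band is multiplied by the ratio of pan to the per-pixel mean
    intensity, injecting the pan's spatial detail into every band.
    """
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + 1e-12))   # guard against zero intensity

# toy images: flat multispectral at 0.2, pan at 0.4 -> bands scale to 0.4
ms = np.full((3, 2, 2), 0.2)
pan = np.full((2, 2), 0.4)
sharp = brovey_pansharpen(ms, pan)
```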

• ### A Case Study on the Correction of Atmospheric Phases for SAR Tomography in Mountainous Regions

Publication Year: 2018, Page(s):1 - 16

Synthetic aperture radar (SAR) tomography with repeat-pass acquisitions generally requires a priori phase calibration of the interferometric data stack by compensating for the atmosphere-induced phase delay variations. These variations act as a disturbance in tomographic focusing. In mountainous regions, the mitigation of these disturbances is particularly challenging due to strong spatial variati... View full abstract»

## Aims & Scope

IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space, and on the processing, interpretation, and dissemination of this information. The journal publishes technical papers disclosing new and significant research. Experimental data must be complete and include a sufficient description of the experimental apparatus, methods, and relevant experimental conditions.


## Meet Our Editors

Editor-in-Chief
Simon H. Yueh
Jet Propulsion Laboratory