Learning Shared Cross-modality Representation Using Multispectral-LiDAR and Hyperspectral Data

Due to the ever-growing diversity of data sources, multi-modality feature learning has attracted increasing attention. However, most existing methods jointly learn feature representations from multi-modalities that exist in both the training and test sets, and they are less investigated when certain modalities are absent in the test phase. To this end, in this letter, we propose to learn a shared feature space across multi-modalities in the training process. In this way, out-of-sample data from any of the modalities can be directly projected onto the learned space for a more effective cross-modality representation. More significantly, the shared space is regarded as a latent subspace in our proposed method, which connects the original multi-modal samples with the label information to further improve the feature discrimination. Experiments are conducted on the multispectral-LiDAR and hyperspectral dataset provided by the 2018 IEEE GRSS Data Fusion Contest to demonstrate the effectiveness and superiority of the proposed method in comparison with several popular baselines.


I. INTRODUCTION
Remote sensing (RS) is one of the most common ways to extract relevant information about the Earth and our environment. RS acquisitions can be performed by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. The complementarity of the data acquired by different platforms can be helpful for characterizing land use more accurately [1]. In this letter, we focus on the joint use of multispectral-Lidar (MS-Lidar) data, which provide detailed information about ground elevation, and hyperspectral images (HSI), which provide information on the physical nature of the sensed materials, using a general cross-modality learning (CML) framework [2].
Intuitively, most existing multi-modality feature learning methods basically follow a concatenation-based fusion strategy [3]. However, neither early nor late fusion can effectively address the aforementioned challenge, as completely paired multi-modality samples are lacking across the whole dataset. This problem setting naturally motivates us to find a latent shared feature space by learning modality-specific projections from the training samples.
For this purpose, some tentative works have been proposed based on joint dimensionality reduction or alignment learning. In [2], principal component analysis (PCA) is used to simultaneously project multi-modal data into a common subspace. Rasti et al. [4] fused HSI and Lidar data using total variation component analysis for land-cover and land-use mapping. Hong et al. [5] jointly embedded the spatial-spectral information for HSI classification. Besides, manifold alignment (MA) has proven to be another powerful solution. Following it, Tuia et al. [6] proposed to align multi-view RS imagery on manifolds to reduce the gap between multi-modalities. More generally, Banerjee et al. [7] transferred the samples of the source and target domains into a shared latent domain where the learned features of both domains are expected to be consistent. Du et al. [8] for the first time cast multimodal RS data analysis as an unsupervised multi-task learning problem, and proposed a state-of-the-art blind source separation algorithm for multi-modal and HS data processing. Although the methods mentioned above provide feasible ways to handle CML-related issues, their ability to extract discriminative features remains limited. This possibly results from the lack of direct modeling of the latent subspace and the label information.
To facilitate the improvement of feature discrimination, we propose to simultaneously learn the shared subspace and regress the labels from the learned subspace in a joint fashion. Inspired by MA, we also enforce a graph-based alignment constraint on the multi-modal data, aiming at a more compact subspace learning. In fact, the proposed method in this letter is an extended version of common subspace learning (CoSpace) presented in [2]. The main differences lie in two aspects. On the one hand, we emphatically analyze the effects of different regression strategies, such as ridge regression (ℓ2-penalty) and sparse regression (ℓ1-penalty). On the other hand, we further investigate the potential of CoSpace-based models when handling heterogeneous data (e.g., MS-Lidar and HS).

II. METHODOLOGY
Fig. 1 illustrates the proposed cross-modality feature learning framework. In this section, we start with a review of the existing CoSpace model, and then discuss and analyze the potential of using sparse regression in CoSpace. Finally, an optimizer based on the alternating direction method of multipliers (ADMM) is briefly introduced to solve the extended CoSpace.

A. Brief Motivation
Although operational optical satellites, e.g., Sentinel-2 and Landsat-8, make MS data openly and widely available on a global scale, MS data fail to distinguish similar classes due to their few spectral bands. In contrast, the HSI is acquired with rich spectral information, enabling the materials to be identified more easily and accurately, but its spatial coverage is far narrower than that of MS data. This naturally motivates us to investigate a general but interesting question: can a limited amount of HS data partially overlapping with MS data improve the classification performance on the extra large-scale and non-overlapping MS data? This is a typical CML-based problem setting.

B. Review of CoSpace
For this purpose, we proposed a feasible solution in [2], namely ℓ2-CoSpace. The proposed method, supporting a multi-modal input in the training phase, aims at learning a common subspace from multi-modalities, where the learned features are expected to be discriminative by fusing the different modality-specific information as much as possible. Theoretically speaking, through the shared feature space, the different modalities can be arbitrarily translated into each other. We also connected the learned features and the label information by means of regression techniques for a more discriminative representation. Simultaneously considering the above strategies leads to the following joint model.
Given two modalities X1 ∈ R^(d1×N) and X2 ∈ R^(d2×N), namely HS data with d1 bands by N pixels and MS-Lidar data with d2 bands by N pixels in our case, CoSpace can be modeled by optimizing the following objective function:

min_{P,Θ} (1/2)‖Y − PΘX‖_F² + (α/2)‖P‖_F² + (β/2) tr(ΘX L XᵀΘᵀ),  s.t. ΘΘᵀ = I,   (1)

where X = blkdiag(X1, X2) ∈ R^((d1+d2)×2N) stacks the two modalities block-diagonally, and Θ = [Θ1, Θ2] ∈ R^(d×(d1+d2)) represents the subspace projections with respect to X1 and X2, d being the dimension of the learned subspace. The variable P ∈ R^(L×d) is the regression matrix, which connects the latent subspace and the label information for a discriminative feature representation. Y ∈ R^(L×2N) denotes the one-hot encoded label matrix, duplicated for the two modality copies, and L = D − W ∈ R^(2N×2N) is the graph Laplacian of the alignment graph, where the adjacency matrix W links stacked samples sharing the same class label and D is the degree matrix computed by D_ii = Σ_j W_ij.

For the model solution in Eq. (1), we adopt an iterative alternating optimization strategy [9] to convert the nonconvexity of Eq. (1) into convex subproblems in each of the variables P and Θ. The optimization subproblem with respect to the variable P is a typical least-squares problem with Tikhonov regularization, which has the analytical solution P = (YQᵀ)(QQᵀ + αI)⁻¹, where Q = ΘX. The optimization problem of Θ can be effectively and efficiently solved by ADMM. Please refer to [2] for more details.
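As a concrete illustration, the block-diagonal stacking of the inputs and the closed-form P-update can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the function names are ours, and the label-driven adjacency used to build W is one plausible choice rather than the exact graph of [2].

```python
import numpy as np

def build_inputs(X1, X2, y):
    """Assemble stacked CoSpace inputs from two pixel-aligned modalities.
    X1: (d1, N) HS samples, X2: (d2, N) MS-Lidar samples, y: (N,) labels."""
    d1, N = X1.shape
    d2 = X2.shape[0]
    X = np.zeros((d1 + d2, 2 * N))            # block-diagonal stacking
    X[:d1, :N], X[d1:, N:] = X1, X2
    classes = np.unique(y)
    Y1 = (y[None, :] == classes[:, None]).astype(float)
    Y = np.hstack([Y1, Y1])                   # one-hot labels, duplicated
    yy = np.hstack([y, y])                    # alignment graph over 2N samples
    W = (yy[None, :] == yy[:, None]).astype(float)
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian L = D - W
    return X, Y, L

def update_P_ridge(Y, Theta, X, alpha):
    """Closed-form ridge update: P = (Y Q^T)(Q Q^T + alpha*I)^(-1), Q = Theta X."""
    Q = Theta @ X
    return (Y @ Q.T) @ np.linalg.inv(Q @ Q.T + alpha * np.eye(Q.shape[0]))
```

The block-diagonal structure keeps each projection modality-specific while the shared P ties both modalities to the same label space.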

C. Sparse Regression based CoSpace ( 1 -CoSpace)
To further improve CoSpace's representation ability, we propose to model a sparsity-promoting regression matrix, yielding ℓ1-CoSpace. Unlike the original CoSpace model with ℓ2-regularized ridge regression (ℓ2-CoSpace for short), ℓ1-CoSpace learns a sparse regression matrix to connect the latent subspace and the label space. More specifically, the advantages of the ℓ1-norm over the ℓ2-norm can be summarized as follows:
• Compared to the ℓ2-norm, it is well known that the ℓ1-norm plays the role of feature selection, which makes the learned features more robust and further enhances the model's generalization ability.
• As introduced in [10], sparsity-based learning or regression techniques are capable of better interpreting the intrinsic structure of the data (or feature) space. This might effectively excavate or discover the underlying correspondences between selected features and certain classes, thereby yielding more effective feature learning.
Accordingly, the resulting ℓ1-CoSpace can be formulated as

min_{P,Θ} (1/2)‖Y − PΘX‖_F² + α Σ_k ‖p_k‖₁ + (β/2) tr(ΘX L XᵀΘᵀ),  s.t. ΘΘᵀ = I,   (4)

where ‖p_k‖₁, the ℓ1-norm of the k-th row of P, is used to approximate the sparsity.
Similar to ℓ2-CoSpace, problem (4) can be separated into two convex subproblems for the variables P and Θ, respectively. Moreover, the optimization problem of P can be quickly solved by the well-known soft-thresholding operator [11] under the ADMM framework, while the solution for the variable Θ is the same as that in ℓ2-CoSpace. Algorithm 1 details the specific solutions for problems (1) and (4).
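The soft-thresholding-based P-update can be sketched as follows. This is our own sketch of a standard ADMM splitting (P = Z); the penalty weight `rho` and the iteration count are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def soft_threshold(V, tau):
    """Element-wise soft-thresholding S_tau(V), the proximal operator
    of tau*||.||_1 used in the ADMM update of P."""
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def update_P_sparse(Y, Q, alpha, rho=1.0, n_iter=200):
    """ADMM sketch for min_P 0.5*||Y - P Q||_F^2 + alpha*||P||_1
    with the splitting P = Z (Q = Theta X is held fixed)."""
    d = Q.shape[0]
    P = np.zeros((Y.shape[0], d))
    Z, U = np.zeros_like(P), np.zeros_like(P)
    Ginv = np.linalg.inv(Q @ Q.T + rho * np.eye(d))
    for _ in range(n_iter):
        P = (Y @ Q.T + rho * (Z - U)) @ Ginv    # quadratic subproblem
        Z = soft_threshold(P + U, alpha / rho)  # proximal (sparsifying) step
        U = U + P - Z                           # dual ascent
    return Z
```

Because the soft-thresholding step zeroes out small entries, the returned regression matrix is sparse, which is exactly the feature-selection behavior argued for above.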

III. EXPERIMENTS
To assess the performance of the CoSpace-related methods (ℓ2-CoSpace and ℓ1-CoSpace) compared to several state-of-the-art baselines, we explore classification as a potential application in terms of Overall Accuracy (OA), Average Accuracy (AA), and the Kappa Coefficient (κ). Two popular classifiers, linear support vector machines (LSVM) and canonical correlation forests (CCF) [12], are used in our case.
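For reference, the three reported criteria can be computed from a confusion matrix as follows (a standard textbook sketch, not tied to the authors' evaluation code):

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes):
    """Overall Accuracy, Average Accuracy, and Cohen's kappa coefficient."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1                         # confusion matrix: rows = truth
    n = C.sum()
    oa = np.trace(C) / n                     # fraction of correct predictions
    aa = (np.diag(C) / C.sum(axis=1)).mean() # mean per-class recall
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

AA complements OA on unbalanced data because it weights every class equally, which matters for the dataset described below.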

A. Data Description
We conducted the experiments on the MS-Lidar and HS data provided by the 2018 IEEE GRSS data fusion contest (DFC2018) [13], where the MS-Lidar data and HSI were acquired by an Optech Titan MW (14SEN/CON340) with a Lidar sensor and an ITRES CASI 1500 sensor, respectively. The MS-Lidar data were collected at three different wavelengths (1550 nm, 1064 nm, and 532 nm) at a 50 cm ground sampling distance (GSD). They consist of 1202 × 4768 pixels with nine bands (three downsampled RGB bands, three intensity bands, and three DEM bands). Note that these bands are stacked [14] as the model input. The corresponding HSI, with 48 spectral bands covering the range from 380 nm to 1050 nm, was collected at a 1-m GSD.
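The band stacking used for the model input amounts to flattening a multi-band raster into a bands-by-pixels matrix; a minimal sketch (the function name is ours):

```python
import numpy as np

def stack_bands(cube):
    """Flatten an (H, W, B) multi-band raster into a (B, H*W) sample
    matrix, i.e. the stacked-band input form used for the nine
    MS-Lidar bands (RGB, intensity, and DEM rasters)."""
    H, W, B = cube.shape
    return cube.reshape(H * W, B).T
```

Each column of the result is one pixel's stacked feature vector, matching the d-by-N matrices used in Section II.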

B. Experimental Configuration
All models involved in the performance comparison are trained on MS-Lidar and HS data and tested only in the presence of MS-Lidar data, to meet our CML problem setting. To obtain a pixel-to-pixel alignment of the two different modalities, we downsampled the MS-Lidar data and the ground truth to the HSI's spatial resolution using nearest-neighbor interpolation. Notably, the sample distribution between the classes in the used dataset is extremely unbalanced. To provide a more reasonable and meaningful performance analysis, we evenly select 200 training samples per class.

The baseline yields poor classification performance, due to the limitation of its feature representation ability. By jointly embedding MS-Lidar and HS data, P-JDR tends to obtain a higher classification accuracy than the baseline, particularly using the CCF classifier (nearly 3% improvement). Owing to fully considering the local topological structure of the input, L-USMA performs better than the baseline and P-JDR. Similarly, L-SMA constructs an LDA-like graph based on the available labels, achieving better performance than L-USMA. Different from the MA-based approaches (e.g., L-USMA, L-SMA) that attempt to directly find an aligned latent space from the different modalities, the CoSpace-based models aim at jointly learning a latent subspace and a regression matrix bridging the learned subspace with the labels. This might make the learned features more discriminative, thereby yielding the best classification performance. Remarkably, we found a further improvement of ℓ1-CoSpace over ℓ2-CoSpace. A possible reason is the use of a sparse regression matrix, which readily implements sparsity-promoting structural learning.
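The even per-class selection described above can be sketched as follows (our own sketch; the letter does not spell out its exact sampling protocol, so the tie-breaking for small classes is an assumption):

```python
import numpy as np

def sample_per_class(y, n_per_class, seed=0):
    """Draw an equal number of training indices from every class to
    counter a strongly unbalanced class distribution."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(y):
        members = np.flatnonzero(y == c)
        take = min(n_per_class, members.size)  # cap at class size (assumption)
        idx.extend(rng.choice(members, size=take, replace=False))
    return np.sort(np.array(idx))
```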

IV. CONCLUSION
In this letter, we investigate a CML-related problem using heterogeneous RS data (MS-Lidar and HS data). Concretely, we propose a novel joint sparse subspace learning model (ℓ1-CoSpace), an improved version of CoSpace, which simultaneously learns a shared feature subspace and a sparse regression matrix. Benefiting from sparse modeling, the proposed ℓ1-CoSpace can interpret and mine the intrinsic structure of the data more effectively, resulting in a further performance improvement.

Fig. 1. An illustration of the proposed cross-modality feature learning. The CML problem specifically refers to learning the model using multi-modalities in the training phase and testing the model using only one of the modalities (please see Section II.A: Brief Motivation for more details).

Fig. 2. False-color images of the two datasets used (HS and MS-Lidar data).

Fig. 3. Classification maps predicted by different methods under the two classifiers (LSVM and CCF).
Algorithm 1: CoSpace-based solution
Input: Y, X, L, and parameters α, β, maxIter.
Output: P, Θ.
1: t = 1, ζ = 1e−4;
2: Initialize P and Θ;
3: while not converged and t ≤ maxIter do
4:   Fix Θ and update P: via the closed-form ridge solution for ℓ2-CoSpace, or via the soft-thresholding operator under ADMM for ℓ1-CoSpace;
5:   Fix P and update Θ by ADMM;
6:   Compute the objective function value E^(t+1) and check the convergence condition: if |E^(t+1) − E^(t)|/E^(t) < ζ then stop;
7:   t = t + 1;
8: end while
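The alternating scheme of Algorithm 1 can be sketched as follows. This is only a structural sketch under our own assumptions: the Θ-subproblem, which the letter solves by ADMM, is replaced here by a projected gradient step that re-orthonormalizes Θ via an SVD, and the step size is illustrative.

```python
import numpy as np

def cospace_skeleton(X, Y, L, d, alpha=0.1, beta=0.1,
                     max_iter=50, zeta=1e-4, step=1e-3):
    """Structural sketch of Algorithm 1. The P-step uses its closed
    form; the Theta-step (ADMM in the letter) is replaced by a
    projected-gradient stand-in keeping Theta's rows orthonormal."""
    rng = np.random.default_rng(0)
    Qm, _ = np.linalg.qr(rng.standard_normal((X.shape[0], d)))
    Theta = Qm.T                                  # Theta @ Theta.T = I
    E_prev = None
    for _ in range(max_iter):
        Q = Theta @ X
        # P-step: ridge closed form P = (Y Q^T)(Q Q^T + alpha I)^(-1)
        P = (Y @ Q.T) @ np.linalg.inv(Q @ Q.T + alpha * np.eye(d))
        # Theta-step stand-in: gradient of the smooth objective ...
        G = -P.T @ (Y - P @ Q) @ X.T + beta * Q @ L @ X.T
        # ... followed by projection back onto the constraint set
        U, _, Vt = np.linalg.svd(Theta - step * G, full_matrices=False)
        Theta = U @ Vt
        Q = Theta @ X
        E = (0.5 * np.linalg.norm(Y - P @ Q) ** 2
             + 0.5 * alpha * np.linalg.norm(P) ** 2
             + 0.5 * beta * np.trace(Q @ L @ Q.T))
        # stopping rule |E_{t+1} - E_t| / E_t < zeta, as in Algorithm 1
        if E_prev is not None and abs(E_prev - E) < zeta * max(E_prev, 1e-12):
            break
        E_prev = E
    return P, Theta, E
```

At test time, an out-of-sample pixel from either modality is mapped into the shared space by its own block of Θ and classified through P, which is what enables the single-modality test phase of the CML setting.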