Bilateral Joint-Sparse Regression for Hyperspectral Unmixing

Abstract—Sparse hyperspectral unmixing has been a hot topic in recent years. Joint sparsity assumes that each pixel in a small neighborhood of a hyperspectral image (HSI) is composed of the same endmembers, which results in a few nonzero rows in the abundance matrix. Recall that a plethora of unmixing algorithms transform a 3-D HSI into a 2-D matrix with vertical priority. The transformation makes matrix computation easier; it is, however, hard to maintain the horizontal spatial information of HSIs in many cases. To make further use of the spatial information of HSIs, in this article we propose a bilateral joint-sparse structure for hyperspectral unmixing in an attempt to exploit the local joint sparsity of the abundance matrix in both the vertical and horizontal directions. In particular, we introduce a permutation matrix to realize the bilateral joint-sparse representation, and there is no need to construct the matrix explicitly. Moreover, we propose to simultaneously impose the bilateral joint-sparse structure and low rankness on the abundance and develop a new algorithm named bilateral joint-sparse and low-rank unmixing (BiJSpLRU). The proposed algorithm is based on the alternating direction method of multipliers (ADMM) framework and employs a reweighting strategy. The convergence of the proposed algorithm is analyzed. Simulated and real experiments show the effectiveness of the proposed algorithm.


I. INTRODUCTION
Spectral unmixing of hyperspectral images (HSIs) has attracted much attention in different scientific fields [1]-[3]. It is the task of identifying the spectral signatures of distinct materials (endmembers) and estimating the fractions (abundances) of the materials for each pixel in an HSI. The mixing process can be characterized as either linear [1] or nonlinear [4], [5]. In particular, bilinear mixture models have been proposed and are commonly used in practice [6]-[8]. The linear mixture model (LMM) assumes that the measured spectrum of each pixel is a linear combination of the spectral signatures of the endmembers [3]. Due to its simplicity and tractability, we adopt the LMM in spectral unmixing for the remainder of this article. Commonly, the abundance vector of a mixed pixel should satisfy the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) for physical meaning [9].
Nonnegative matrix factorization for spectral unmixing aims to find two nonnegative matrices, one for an endmember dictionary and another for fractional abundances, such that their product equals the target HSI matrix [10]-[13]. Recently, nonnegative tensor factorization has been proposed for hyperspectral unmixing to further exploit the spatial information of HSIs and improve unmixing performance [14]-[16]. With the availability of spectral dictionaries, sparse unmixing provides an alternative way for spectral unmixing. It assumes that, compared with the size of the dictionary, only a few spectral signatures participate in the LMM of each pixel, leading to a sparse structure in the abundance matrix [17]. Diverse sparse structures have been exploited in the literature, including the standard ℓ1-norm regularization [17], the ℓ2,1-norm regularization [18], and their variants [10], [19]-[27].
Exploiting spatial information in HSIs significantly improves the accuracy of spectral unmixing. Typically, the total variation (TV) regularization imposes piecewise smoothness on each abundance map [28]-[31]. It suppresses noise and generates smooth abundance maps with preserved edges. In many cases, however, it is overstrict to assume that neighboring pixels have both similar mixing endmembers and similar abundance fractions. The low-rank representation provides another perspective on spatial correlation for spectral unmixing [6], [32]-[34]. In this vein, the high spatial correlation of mixed pixels is transferred into linear dependence among their corresponding abundance vectors. Simultaneously imposing the low-rank characteristic with sparsity has offered stimulating results [32]. Moreover, it is shown that the unmixing performance can be further improved by adopting a joint-sparse-blocks structure in low-rank representation based unmixing [26]. In addition, nonlocal spatial information has been utilized to improve the performance of abundance estimation [35]-[40]. Broadly speaking, simultaneously exploiting spatial information and sparsity provides an effective way for spectral unmixing. Notice that most of the above-mentioned sparse unmixing algorithms first transform an HSI, a third-order tensor, into a matrix, and then estimate the abundance matrix under the LMM. Thus, spatial information of HSIs might be lost in the transformation process [14]-[16], [41], [42].
In this article, we propose a bilateral joint-sparse regression for hyperspectral unmixing to make further use of the spatial information of HSIs. On the one hand, we consider a local joint-sparsity structure of the abundance matrix, which is obtained, as usual, by arranging the HSI into a matrix along the vertical direction. On the other hand, we consider the local joint sparsity on a reshaped abundance matrix obtained by arranging the HSI along the horizontal direction. In this way, we utilize the spatial information along both the vertical and horizontal directions of HSIs. In particular, we introduce a permutation matrix to realize the bilateral joint-sparse representation, and there is no need to build this permutation matrix explicitly. In addition, as previously stated, low rankness is a natural characteristic of the abundance matrix. Thus, we propose to simultaneously impose the bilateral joint-sparse structure and low rankness on the abundance matrix. We then develop an algorithm called bilateral joint-sparse and low-rank unmixing (BiJSpLRU) under the alternating direction method of multipliers (ADMM) framework combined with a reweighting strategy. The convergence of BiJSpLRU is analyzed. Simulated and real-data experiments show that the proposed algorithm outperforms other state-of-the-art sparse unmixing algorithms.
The organization of this article is as follows. Section II introduces the bilateral joint-sparse structure and derives the BiJSpLRU algorithm with convergence results. Then, the performance of the proposed algorithm is demonstrated by simulated experiments in Section III and a real-data experiment in Section IV. Finally, concluding remarks are given in Section V.

II. BIJSPLRU
In this section, we first briefly review several sparse unmixing models and then propose a bilateral joint-sparse structure and an unmixing model. After that, we present an algorithm to solve the proposed model and finally investigate the convergence of the proposed algorithm.

A. Sparse Unmixing Models
Let Y ∈ R^{l×n} be the observed data matrix with l bands and n pixels and A ∈ R^{l×m} be the spectral dictionary with m endmembers. The LMM assumes that

Y = AX + E (1)

where X ∈ R^{m×n} is the abundance matrix and E ∈ R^{l×n} is an independent and identically distributed (i.i.d.) zero-mean Gaussian noise matrix. Two physical constraints, the ANC and the ASC, i.e.,

X ≥ 0 and 1^T X = 1^T

respectively, are often imposed on X [17]. Here we relax the ASC due to the variability of the endmembers, similarly as in [18], [28], [32]; see more details in [17]. With large spectral dictionaries available, many sparse unmixing algorithms have been well studied in the literature. Typically, the sparse unmixing algorithm via variable splitting and augmented Lagrangian (SUnSAL) [17] considers the following model:

min_X (1/2)||AX − Y||_F^2 + λ||X||_1 s.t. X ≥ 0

where ||X||_1 = Σ_{i,j} |x_ij| and x_ij is the (i, j)th element of X. Note that SUnSAL exploits the sparsity in each abundance vector, whereas the collaborative SUnSAL (CLSUnSAL) [18] pays more attention to the joint sparsity, or row sparsity, of the abundance matrix. The CLSUnSAL model is

min_X (1/2)||AX − Y||_F^2 + λ||X||_{2,1} s.t. X ≥ 0

where ||X||_{2,1} = Σ_{i=1}^{m} ||x^i||_2 and x^i is the ith row of X. In addition, many sparse regularization terms, such as the weighted ℓ1-norm, the ℓp-norm, and the ℓp,q-norm, have been utilized to enhance the sparsity of the abundance [19], [24], [25], [43], [44].
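The ℓ1 and ℓ2,1 penalties above are typically handled through their proximal operators. As a minimal illustrative sketch (not the authors' code), the elementwise soft-thresholding operator solves the ℓ1-regularized proximal step used by SUnSAL-type solvers, and its row-wise analogue gives the joint-sparse shrinkage used by CLSUnSAL-type solvers:

```python
import numpy as np

def soft_threshold(X, alpha):
    """Prox of alpha*||X||_1: elementwise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - alpha, 0.0)

def row_soft_threshold(X, alpha):
    """Prox of alpha*||X||_{2,1}: shrink each row in its l2 norm,
    zeroing whole rows (joint sparsity) when their norm is below alpha."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(norms - alpha, 0.0) / np.maximum(norms, 1e-16)
    return scale * X
```

Note how the row-wise operator either keeps or discards a whole row, which is exactly the row-sparsity behavior the ℓ2,1-norm promotes.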
Recently, low-rank unmixing algorithms have attracted more and more attention. The alternating direction sparse and low-rank unmixing (ADSpLRU) algorithm [32] simultaneously exploits the sparsity and low rankness of the abundance, leading to the unmixing model

min_X (1/2)||AX − Y||_F^2 + λ||W ⊙ X||_1 + τ||X||_{w,*} s.t. X ≥ 0

where ⊙ denotes the elementwise product, W is a nonnegative weighting matrix, ||X||_{w,*} = Σ_{i=1}^{r} w_i σ_i(X) is the weighted nuclear norm with σ_i(X) the ith singular value of X, and w = [w_1, ..., w_r] is a nonnegative weighting vector with r being the rank of X.
Considering instead the local joint sparsity, the joint-sparse-blocks and low-rank unmixing (JSpBLRU) algorithm [26] first partitions the abundance matrix as X = [X_1, X_2, ..., X_s] and then imposes joint sparsity on each column block X_j, where X_j ∈ R^{m×d_j} with Σ_{j=1}^{s} d_j = n. Thus, the unmixing model becomes

min_X (1/2)||AX − Y||_F^2 + λ Σ_{j=1}^{s} Σ_{i=1}^{m} w_ij ||X_j^i||_2 + τ||X||_{w,*} s.t. X ≥ 0

where X_j^i is the ith row of the jth block of X and w_j = [w_1j, w_2j, ..., w_mj] is a nonnegative weighting vector computed using a reweighting strategy. It is shown in [26], [32] that simultaneously exploiting sparsity and low rankness provides new insight into the abundance matrix and that structural sparsity has great potential to achieve better abundance estimation results. In this vein, we aim to further exploit structural sparsity by utilizing the spatial information in the HSI.
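For intuition, the joint-sparse-blocks penalty (shown here without the weights w_ij, which are updated by the reweighting strategy) can be evaluated by summing the ℓ2 norms of the rows within each column block. A small sketch, assuming equal block width d:

```python
import numpy as np

def block_l21(X, d):
    """Unweighted joint-sparse-blocks penalty: partition the columns of X
    into blocks of width d (the last block may be narrower) and sum the
    l2 norms of the rows of each block."""
    m, n = X.shape
    total = 0.0
    for start in range(0, n, d):
        block = X[:, start:start + d]
        # One l2 norm per row of the block; a zero row costs nothing,
        # which is what encourages shared (joint) row supports per block.
        total += np.linalg.norm(block, axis=1).sum()
    return total
```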

B. Bilateral Joint-Sparse Structure
We now introduce a bilateral joint-sparse structure for hyperspectral unmixing. The key idea is to incorporate more spatial information of HSIs into the structural sparsity. Recall that the main task of many sparse unmixing algorithms is to estimate the abundance of a reshaped HSI matrix Y as in (1). That is, a 3-D HSI, which is a third-order tensor, is first transformed into a 2-D matrix, and then the abundance matrix is estimated. In this vein, it is hard to make full use of the spatial information of HSIs.
For example, consider a third-order tensor Y ∈ R^{3×4×2} from [41]. If we view Y as an HSI in hyperspectral unmixing, then the task in [17], [18], [26], [28], [32], to name a few, is to unmix the matrix Y_(3). In this vein, to utilize the sparsity of the abundance, SUnSAL and ADSpLRU consider the sparsity of each abundance vector, whereas JSpBLRU emphasizes the joint sparsity of neighboring abundance vectors. Observe that the mode-3 unfolding Y_(3) is obtained by arranging the mode-3 fibers of Y, as in Fig. 1(c), to be the columns of Y_(3). In particular, the vertical direction gets priority over the horizontal direction in the arrangement of the frontal slice Y_1 (and likewise for Y_2); see Fig. 2(a). It follows that the local joint sparsity on the abundance matrix of Y_(3) exploits the spatial information only along the vertical direction of Y.
In an attempt to further utilize the spatial information of the HSI, we propose to consider the mode-3 matricization of Y along the vertical and horizontal directions simultaneously. To this end, besides the construction of Y_(3) in (10) with vertical priority, we construct another mode-3 unfolding of Y with horizontal priority, in which the mode-3 fibers are arranged row by row rather than column by column. Following this analysis, for hyperspectral unmixing, we propose to impose the joint-sparse-blocks structure on the abundance matrices of the mode-3 unfoldings along both the vertical and horizontal directions, and we name this a bilateral joint-sparse structure. To see it more clearly, we show two simple and extreme cases in Fig. 3, where we consider nine pixels in the sliding window. Case 1 has a clear spatial characteristic along the horizontal direction and case 2 shows a vertical characteristic. From Fig. 3 we see that the vertical and horizontal expansions have their respective advantages. In particular, we can mathematically realize the bilateral matricization via a permutation matrix. Partition the abundance matrix X of the mode-3 unfolding of the pixels with vertical priority as X = [X_1, X_2, ..., X_9], where X_i is the ith column of X. Then the abundance matrix with horizontal priority is obtained by permuting the columns of X, that is, XP, where P ∈ R^{9×9} is a permutation matrix [46]. Note that we can reshape the abundance matrix to perform the permutation, so there is no need to construct the matrix P explicitly, which is time-saving and easy to implement. We then realize the bilateral joint-sparse structure by simultaneously imposing the joint-sparse-blocks structure on X and XP. In this vein, we incorporate both the vertical and horizontal spatial information of HSIs into the sparse structure of the abundance matrix.
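The reshape trick for computing XP without forming P can be sketched as follows (an illustrative sketch; the function name and argument layout are our own, assuming the pixel axis of X follows vertical-priority scanning of a rows × cols grid):

```python
import numpy as np

def permute_vertical_to_horizontal(X, rows, cols):
    """Reorder the columns of X (m x rows*cols, vertical-priority order)
    into horizontal-priority order, i.e. compute X @ P without forming P.

    Vertical priority means pixel (r, c) sits at column index c*rows + r;
    the result enumerates pixels row by row instead.
    """
    m = X.shape[0]
    # View the pixel axis as a (cols, rows) grid: each "page" along axis 1
    # is one image column, since columns were scanned first.
    cube = X.reshape(m, cols, rows)
    # Swap the spatial axes and flatten back: rows now vary slowest,
    # which is exactly the horizontal-priority (row-by-row) ordering.
    return cube.transpose(0, 2, 1).reshape(m, rows * cols)
```

Applying the function twice with the grid dimensions swapped recovers the original ordering, reflecting that P is a permutation with PP^T = I.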
In practical situations, utilizing bilateral spatial information provides an advantage over either the vertical or the horizontal expansion alone, as will be demonstrated in Sections III and IV.

C. Proposed Model and BiJSpLRU Algorithm
Now we propose an unmixing model that considers both the bilateral joint-sparse structure, via the weighted ℓ2,1-norm, and the low-rank characteristic, via the weighted nuclear norm. Reconsider the LMM in (1) and the abundance matrix X ∈ R^{m×n}. Note that X corresponds to the vertical expansion of the HSI. Let XP ∈ R^{m×n} be the abundance matrix with respect to the horizontal expansion of the HSI, where P ∈ R^{n×n} is the corresponding permutation matrix. We impose the joint-sparse-blocks structure on both X and XP. That is, we first partition X and XP as

X = [X_1, X_2, ..., X_s], XP = [(XP)_1, (XP)_2, ..., (XP)_s] (14)

where X_j and (XP)_j ∈ R^{m×d_j} are the jth column blocks of X and XP, respectively, with Σ_{j=1}^{s} d_j = n. We note that X and XP share the same partition strategy for model simplicity. Then the proposed optimization problem is

min_X (1/2)||AX − Y||_F^2 + λ_1 Σ_{j=1}^{s} Σ_{i=1}^{m} (W_1)_ij ||(XP)_j^i||_2 + λ_2 Σ_{j=1}^{s} Σ_{i=1}^{m} (W_2)_ij ||X_j^i||_2 + τ||X||_{W_3,*} s.t. X ≥ 0 (15)

where λ_1, λ_2, and τ are nonnegative regularization parameters, W_1, W_2, and W_3 are nonnegative weighting matrices, and W_3 is a diagonal matrix of order m. Here, for any matrix T, (T)_ij denotes the (i, j)th element of T.
We will use a reweighting strategy to automatically update W_1, W_2, and W_3 at each iteration. Notice that the model in (15) has three regularization parameters, so it may be time-consuming to choose optimal parameters in real applications. In our experience, λ_1 and λ_2 attain the same optimal value in many cases. Therefore, for simplicity, we set λ_1 = λ_2 = λ in (15), and the proposed model becomes

min_X (1/2)||AX − Y||_F^2 + λ Σ_j Σ_i (W_1)_ij ||(XP)_j^i||_2 + λ Σ_j Σ_i (W_2)_ij ||X_j^i||_2 + τ||X||_{W_3,*} s.t. X ≥ 0. (18)

In the following, we show how to solve the proposed model in (18) under the ADMM framework [47]. To this end, we first introduce four auxiliary variables V_1, V_2, V_3, and V_4, and partition V_1 and V_2 as XP and X in (14), respectively. Then, using variable replacement, we equivalently transform (18) into the constrained model

min f(X) + g(V, W) s.t. V_1 = XP, V_2 = X, V_3 = X, V_4 = X (20)

where ι_{R+}(x) is the indicator function of the nonnegative orthant, i.e., ι_{R+}(x) = 0 if x ≥ 0 and +∞ otherwise, which enforces the ANC through V_4. Define G = [P, I, I, I], so that the constraints read XG = V with V = [V_1, V_2, V_3, V_4], where I is an n-by-n identity matrix. Clearly, (20) then becomes a standard two-block ADMM problem. We note that a reweighting strategy is employed to update W at each iteration.
Define the augmented Lagrangian function

L(X, V, W, Λ) = f(X) + g(V, W) − ⟨Λ, XG − V⟩ + (μ/2)||XG − V||_F^2

where Λ = [Λ_1, Λ_2, Λ_3, Λ_4] is the Lagrange multiplier, μ > 0 is the penalty parameter, and ⟨R, T⟩ = Tr(R^T T) is the standard inner product between matrices R, T ∈ R^{n_1×n_2}. Partition X and XP as in (14) and partition the multipliers Λ_1 and Λ_2 conformally as in (25). Then the ADMM framework with a reweighting strategy is derived as in (26), where ε = 10^{-16} is a small constant added to avoid singularities. Now we solve each subproblem.
• Update X. The X-subproblem is

min_X (1/2)||AX − Y||_F^2 + (μ/2)||XG − V^k − Λ^k/μ||_F^2.

Clearly, the optimality condition is

A^T(AX − Y) + μ(XG − V^k − Λ^k/μ)G^T = 0.

Recall that P is a permutation matrix with PP^T = I, and so GG^T = 4I. Then we have

X^{k+1} = (A^T A + 4μÎ)^{-1} (A^T Y + μ(V^k + Λ^k/μ)G^T)

where Î is an m-by-m identity matrix.
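Because GG^T = 4I, the coefficient matrix of the X-update is fixed across iterations. A minimal sketch of this solve (the name B for the aggregated right-hand-side term (V^k + Λ^k/μ)G^T is ours, for illustration):

```python
import numpy as np

def update_X(A, Y, B, mu):
    """Solve (A^T A + 4*mu*I) X = A^T Y + mu*B for the X-subproblem.

    B stands for (V + Lambda/mu) G^T, i.e. the sum of the four auxiliary
    terms, with the V_1 term multiplied by P^T via the reshape trick.
    Since the coefficient matrix does not change between iterations, it
    could also be factorized once and reused.
    """
    m = A.shape[1]
    H = A.T @ A + 4.0 * mu * np.eye(m)
    return np.linalg.solve(H, A.T @ Y + mu * B)
```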
• Update W. With the obtained X^{k+1}, we update W_1, W_2, and W_3 as in (26), which will be used in the following V-subproblem. We note that the weighting coefficients in W_1 and W_2 are computed, similarly as in [26], to enhance the row sparsity of V_1 and V_2, respectively. The weighting coefficients in W_3 are used to enhance the sparsity of the singular values of V_3, similarly as in [26], [32], [45], [48].
• Update V. Clearly, the minimization problem with respect to V can be decoupled, so we solve the V_i-subproblems, i = 1, . . ., 4, separately.
• Update V_1. For the V_1-subproblem, after dropping constant terms, we obtain

min_{V_1} λ Σ_j Σ_i (W_1^{k+1})_ij ||V_{1,j}^i||_2 + (μ/2)||X^{k+1}P − V_1 − Λ_1^k/μ||_F^2. (30)

Partition X^{k+1}P, V_1, and Λ_1 as in (14), (19), and (25), respectively. Define the vect-soft operator for x ∈ R^n and α > 0 as

vect-soft(x, α) = x (||x||_2 − α)_+ / ((||x||_2 − α)_+ + α).

Then from [49] we obtain that each subproblem in (31) has a unique solution, and its ith row can be obtained by

(V_{1,j}^{k+1})^i = vect-soft( (X^{k+1}P − Λ_1^k/μ)_j^i, λ(W_1^{k+1})_ij/μ ).

• Update V_2. The V_2-subproblem is similar to the V_1-subproblem. Partition Λ_2 as in (25); then, similarly, the ith row of V_{2,j}^{k+1} is obtained by applying the vect-soft operator to (X^{k+1} − Λ_2^k/μ)_j^i with threshold λ(W_2^{k+1})_ij/μ.

• Update V_3. Let X = Ũ Diag(σ_1, . . ., σ_r) Ṽ^T be the singular value decomposition (SVD) of X, where σ_i is the ith singular value of X. Define the weighted singular value thresholding operator SVT on X as

SVT_{S,β}(X) = Ũ Diag((σ_1 − βs_11)_+, . . ., (σ_r − βs_rr)_+) Ṽ^T (36)

where (·)_+ = max(·, 0), S = Diag(s_11, . . ., s_rr) is a nonnegative weighting diagonal matrix, and β is a positive parameter. Then, for the V_3-subproblem, we obtain the closed-form solution

V_3^{k+1} = SVT_{W_3^{k+1}, τ/μ}(X^{k+1} − Λ_3^k/μ).

• Update V_4. The V_4-subproblem involves only the indicator function and the quadratic term, and it is easy to obtain

V_4^{k+1} = max(X^{k+1} − Λ_4^k/μ, 0).

• Update multipliers. The update rules of the multipliers are

Λ_1^{k+1} = Λ_1^k − μ(X^{k+1}P − V_1^{k+1}), Λ_i^{k+1} = Λ_i^k − μ(X^{k+1} − V_i^{k+1}), i = 2, 3, 4.

To clarify, we summarize the proposed BiJSpLRU algorithm in the following. We remark that JSpBLRU and BiJSpLRU have the same computational complexity. Compared with JSpBLRU, however, BiJSpLRU mainly has one extra ℓ2,1-norm subproblem, i.e., the V_1-subproblem.
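The two proximal operators used above admit short NumPy sketches (illustrative, assuming the common forms of the vect-soft and weighted singular value thresholding operators):

```python
import numpy as np

def vect_soft(x, alpha):
    """Vect-soft operator: shrink the l2 norm of x by alpha, zeroing x
    entirely when its norm is below alpha (row-sparsity inducing)."""
    nrm = np.linalg.norm(x)
    return x * max(nrm - alpha, 0.0) / max(nrm, 1e-16)

def weighted_svt(X, weights, beta):
    """Weighted singular value thresholding SVT_{S,beta}(X): soft-threshold
    the ith singular value of X by beta * weights[i] and reassemble."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - beta * weights[:len(s)], 0.0)
    # (U * s_shrunk) scales the columns of U by the shrunk singular values.
    return (U * s_shrunk) @ Vt
```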

D. Convergence of BiJSpLRU
There are many convergence results for ADMM iterates in the literature, such as [50]-[53]. Following the analysis in [54], we now consider the residual and objective convergence of BiJSpLRU under the following assumption.
Assumption 1. The unaugmented Lagrangian L_0 has a saddle point. That is, there exists (X*, V*, W*, Λ*) for which

L_0(X*, V*, W*, Λ) ≤ L_0(X*, V*, W*, Λ*) ≤ L_0(X, V, W, Λ*)

holds for all X, V, W, and Λ.
From Assumption 1, we get that L_0(X*, V*, W*, Λ*) is finite and that (X*, V*, W*) is a solution to (23). Then we have the following convergence results for the sequence (X^k, V^k, W^k, Λ^k) generated by (26).
Theorem 1. Let (X^k, V^k, W^k, Λ^k) be the sequence generated by (26). Under Assumption 1, and assuming W^{k+1} ≤ W* for all k, the sequence satisfies the following:
(1) Residual convergence. The residual r^k → 0 as k → ∞.
(2) Objective convergence. The objective function of the iterates approaches the optimal value: f(X^k) + g(V^k, W^k) → p* as k → ∞.
Proof. The proof can be found in the Appendix.
Finally, we remark that in a similar way, one can obtain the convergence results of ADSpLRU and JSpBLRU, which are also based on ADMM and a reweighting strategy.

III. EXPERIMENTS ON SIMULATED DATA

A. Experiment Setting
In our experiments, two spectral dictionaries are used: 1) A_1 ∈ R^{224×240}: a dictionary of minerals extracted from the U.S. Geological Survey (USGS) library. The reflectance values are given in 224 spectral bands distributed uniformly over the interval 0.4-2.5 µm. 2) A_2 ∈ R^{99×120}: it consists of 120 materials and 99 spectral bands and comes from the National Aeronautics and Space Administration Johnson Space Center Spacecraft Materials Spectral Database. We use the root-mean-square error (RMSE) and the signal-to-reconstruction error (SRE) to measure the quality of the estimated abundance matrices. They are defined, respectively, as

RMSE = sqrt( (1/(mn)) Σ_{i=1}^{n} ||x̂_i − x_i||_2^2 ), SRE (dB) = 10 log_10( Σ_{i=1}^{n} ||x_i||_2^2 / Σ_{i=1}^{n} ||x̂_i − x_i||_2^2 )

where x̂_i and x_i are the estimated and true abundance vectors of the ith pixel. Generally speaking, lower RMSEs and higher SREs indicate better estimations. For all six algorithms, regularization parameters are chosen to obtain the maximum SREs. Similarly as in [26], we set d = 3 for both JSpBLRU and BiJSpLRU in the experiments. That is, for n pixels in an HSI, let s = ⌊n/d⌋ be the largest integer no greater than n/d; then each of the first s − 1 blocks in (14) contains d columns and the last block contains the remaining columns. We select optimal regularization parameters for the low-rank unmixing algorithms ADSpLRU, JSpBLRU, and BiJSpLRU from

λ ∈ {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5},
τ ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100}.
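The two metrics can be computed directly; a sketch assuming the common global forms of RMSE and SRE over the whole abundance matrix:

```python
import numpy as np

def rmse(X_true, X_est):
    """Root-mean-square error over all m*n abundance entries."""
    return np.sqrt(np.mean((X_true - X_est) ** 2))

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB: ratio of signal energy to
    reconstruction-error energy, on a log scale."""
    num = np.sum(X_true ** 2)
    den = np.sum((X_true - X_est) ** 2)
    return 10.0 * np.log10(num / den)
```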
For BiJSpLRU, recall the primal residual r^k = X^k G − V^k and define the dual residual at the kth iteration as s^k = μ(V^k − V^{k−1})G^T. When both termination conditions ||r^k||_F ≤ ζ and ||s^k||_F ≤ ζ are satisfied, or the number of iterations reaches 300, the iterations are stopped. Here we set ζ = (3m + l)n ζ_rel as in [26], where ζ_rel = 5 × 10^{-6}.

B. Comparison With Different Algorithms
In the following, we show the performance of BiJSpLRU and other unmixing algorithms on three simulated examples.
Example 1: In this experiment, we consider a toy problem to highlight the significance of the proposed approach, i.e., the bilateral joint-sparse structure. We use the spectral library A_1 and randomly select five signatures from it to generate a 15 × 15-pixel data cube according to the LMM. There are five abundance maps, two of which are shown in the first column of Fig. 4; the remaining three are randomly generated subject to the ASC. Note that the true abundance maps in Fig. 4 show a clear spatial characteristic along the vertical or horizontal direction. The generated data cube is contaminated by white Gaussian i.i.d. noise with a signal-to-noise ratio (SNR) of 30 dB.
Example 2: This example shows the performance of BiJSpLRU on a widely used simulated data set, which has been used in [26], [28], [34]. The spectral library A_1 is used. Five signatures are randomly chosen from A_1 and then used to generate a 75 × 75-pixel data cube according to the LMM. Fig. 5 shows the corresponding true abundance maps of the five selected endmembers. The generated data cube is contaminated by white Gaussian i.i.d. noise with SNR = 20, 30, and 40 dB, respectively.
Example 3: We use the spectral library A_2 in this example. The data set contains 100 × 100 pixels and is generated according to the LMM from nine spectral signatures randomly selected from A_2. The corresponding true abundance maps are shown in Fig. 6. Then, the true data cube is contaminated by white Gaussian i.i.d. noise with the same SNR values adopted for Example 2.
Results and Discussion: Fig. 4 shows the abundance maps estimated by different unmixing algorithms for Example 1. From this figure, we see that the sparse unmixing algorithms SUnSAL and CLSUnSAL maintain the overall vertical and horizontal spatial information, but with many low abundance values that are supposed to be zero. By exploiting spatial information, SUnSAL-TV provides better estimations. Among the low-rank unmixing algorithms, JSpBLRU utilizes the vertical spatial information, so it provides more accurate abundance estimates for endmember #2 compared with ADSpLRU. For endmember #1, however, the abundance map estimated by JSpBLRU also contains many low abundance values that are supposed to be zero. By simultaneously exploiting vertical and horizontal spatial information, the proposed BiJSpLRU algorithm provides accurate estimations for both endmembers #1 and #2. For further comparison, Table I shows the average SREs and RMSEs after 20 runs of the different algorithms for Example 1. It is clear that BiJSpLRU gives higher SRE and lower RMSE values, which is consistent with the visual observations from Fig. 4. Fig. 7 shows the estimated abundance maps for endmembers #1 and #5 obtained by different unmixing algorithms for Example 2. Abundance maps for other endmembers support a similar conclusion, so we omit them here for space considerations. Compared with SUnSAL and CLSUnSAL, SUnSAL-TV maintains the piecewise smooth behavior in the backgrounds, especially for endmember #5, but its abundance map is over-smooth for endmember #1. The low-rank unmixing algorithms delineate more square regions for endmember #1. Among them, BiJSpLRU produces the most square blocks. This can be expected, since BiJSpLRU utilizes more spatial information than ADSpLRU and JSpBLRU, especially for pixels at the boundaries.
The abundance maps estimated by different algorithms for Example 3 are shown in Fig. 8. In particular, the regions in red boxes are zoomed in and displayed in Fig. 9. SUnSAL and CLSUnSAL estimate abundance maps with low accuracy. Note that each of these two algorithms has only one regularization parameter, so selecting optimal parameter values is less time-consuming than for the other algorithms. In addition, they are fast since there is no need to compute the TV or an SVD at each iteration. SUnSAL-TV provides piecewise smooth estimations in which many details are lost. Clearly, the backgrounds of the abundance maps estimated by JSpBLRU are clearer than those by ADSpLRU. However, when we zoom in on the abundance maps of JSpBLRU, we can see considerable banded vertical noise in the backgrounds. The reason is, in part, that JSpBLRU utilizes the local joint-sparsity structure only along the vertical direction of the HSI. BiJSpLRU addresses this problem by utilizing both vertical and horizontal spatial information. We can see that BiJSpLRU provides accurate abundance maps with clearer backgrounds. Finally, Table II shows the SRE and RMSE values obtained by the different unmixing algorithms for Examples 2 and 3. We see that BiJSpLRU provides the best results, which is consistent with the observations from Figs. 7-9.

C. Parameters Discussion
Now we discuss the influence of the regularization parameters λ and τ and of the number of iterations of BiJSpLRU. Recall that in the simulated experiments we test all possible combinations of parameters for BiJSpLRU and choose the ones giving the best SREs. The optimal τ is far greater than the optimal λ, which shows the effectiveness of the low-rank regularizer. Fig. 11 plots the SRE values against the iteration number for Example 3. Clearly, for BiJSpLRU it is enough to set the maximum number of iterations to 300. In addition, we see from Fig. 11 that, for different noise levels, the convergence behavior of BiJSpLRU becomes stable as the iteration number increases.

IV. EXPERIMENTS ON REAL DATA
In this test, we show the performance of BiJSpLRU on the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite data set. Fig. 12 shows a mineral map produced in 1995 by the USGS, in which different minerals were mapped by the Tetracorder software product [55]. We use a subscene of the Cuprite data with 350 × 350 pixels and 188 spectral bands. The spectral dictionary A of size 188 × 240 is generated from the USGS spectral library. Similarly as in [17], [18], [28], we set the regularization parameters λ = 0.001 for SUnSAL, λ = 0.01 for CLSUnSAL, and λ = λ_TV = 0.001 for SUnSAL-TV. Similarly as in [26] and [32], we set λ = τ = 0.001 for ADSpLRU, JSpBLRU, and BiJSpLRU. Fig. 13 shows the abundance maps estimated by the different unmixing algorithms. Due to the low noise of the real data, all the algorithms produce overall similar results. From the enlarged area in the bottom right corner of Fig. 13, however, we can see that BiJSpLRU gives comparable or higher abundance estimates in the regions identified as the respective materials. This again shows the effectiveness of BiJSpLRU, which simultaneously utilizes the bilateral joint-sparse structure and the low rankness of the abundance matrix.
V. CONCLUSION

In this article, we have proposed a bilateral joint-sparse structure to utilize the vertical and horizontal spatial information of HSIs. In particular, we realize the bilateral joint-sparse structure by introducing a permutation matrix, which need not be constructed explicitly. We have then proposed an unmixing model that imposes the bilateral joint-sparse structure and the low-rank property on the abundance matrix via the weighted ℓ2,1-norm and the weighted nuclear norm, respectively. We solve the proposed model under the ADMM framework with an iterative reweighting strategy, leading to an algorithm called BiJSpLRU. The residual and objective convergence results of BiJSpLRU have been provided. The simulated and real-data experiments show the effectiveness of the proposed algorithm compared with other state-of-the-art unmixing algorithms.
The proposed bilateral joint-sparse structure has the potential to contribute to other image processing problems involving sparse representation, such as (hyperspectral) image restoration, super-resolution, and blind hyperspectral unmixing. In addition, this article provides a way to obtain theoretical convergence results for algorithms that are similarly based on the ADMM framework but employ a reweighting strategy.

APPENDIX
In order to prove Theorem 1 about the convergence of the sequence (X^k, V^k, W^k, Λ^k) generated by (26), we first give three lemmas.
Lemma 2. Under the same assumptions as in Lemma 1, if W^{k+1} ≤ W*, then the stated inequality holds.

Proof. From (26), we see that X^{k+1} minimizes L(X, V^k, W^k, Λ^k). Since f is differentiable, the corresponding optimality condition implies that X^{k+1} also minimizes the associated linearized function, and thus we obtain the first inequality. Similarly, V^{k+1} minimizes L(X^{k+1}, V, W^{k+1}, Λ^k). We remark that the minimization problem about V_3 can be transformed into a convex minimization problem about its singular values; see [48] for more details. Since h is subdifferentiable, the optimality condition for (57) yields

g(V^{k+1}, W^{k+1}) − ⟨Λ^{k+1}, V^{k+1}⟩ ≤ g(V*, W^{k+1}) − ⟨Λ^{k+1}, V*⟩. (60)

Under the assumption W^{k+1} ≤ W*, from the definition of g we have g(V*, W^{k+1}) ≤ g(V*, W*). Combining this with (55) and (61), we obtain the desired inequality, which concludes the proof.
Here Y_1 and Y_2 denote the two frontal slices of Y. Recall that the mode-n unfolding of a tensor Y, denoted by Y_(n), arranges the mode-n fibers to be the columns of the resulting matrix; the mode-n fibers of Y are shown in Fig. 1. The three mode-n unfoldings of Y then follow from [41].

Fig. 12: The USGS map showing the location of different minerals in the Cuprite mining district in Nevada.

Fig. 13: Estimated abundance maps for (top row) Alunite, (middle row) Buddingtonite, and (bottom row) Muscovite by different unmixing algorithms. The demarcated area is enlarged in the bottom right corner for better visualization.

TABLE I: Average of SRE (dB) and RMSE after 20 runs by different unmixing algorithms for Example 1 with SNR = 30 dB.

TABLE II: SRE (dB) and RMSE by different unmixing algorithms for Examples 2 and 3.