IEEE Journal of Selected Topics in Signal Processing - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #4200690, 21 January 2019

[Front cover]
IEEE Journal of Selected Topics in Signal Processing publication information
Table of contents (pp. 1125-1126)

Introduction to the Issue on Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications (pp. 1127-1130)

Adaptive L1-Norm Principal-Component Analysis With Online Outlier Rejection (pp. 1131-1143)

Robust Multinomial Logistic Regression Based on RPCA (pp. 1144-1154)
  Abstract (excerpt): …supervised way. Although the problem is nonconvex and nonsmooth, convergence is guaranteed by recent theoretical advances in the alternating direction method of multipliers. Experimental analysis on synthetic and real-world data demonstrates that our method outperforms other state-of-the-art methods in terms of classification accuracy.

Compressed Randomized UTV Decompositions for Low-Rank Matrix Approximations (pp. 1155-1169)
  Abstract (excerpt): …$m \times n$ with numerical rank $k$, where $k \ll \min\lbrace m, n\rbrace$, CoR-UTV requires only a few passes over the data and runs in $O(mnk)$ floating-point operations. Furthermore, CoR-UTV can exploit modern computational platforms and, consequently, can be optimized for maximum efficiency. CoR-UTV is simple and accurate, and outperforms reported alternative methods in terms of efficiency and accuracy. Simulations with synthetic data, as well as real data in image reconstruction and robust principal component analysis applications, support our claims.

Low-Rank Matrix Recovery With Simultaneous Presence of Outliers and Sparse Corruption (pp. 1170-1181)
  Abstract (excerpt): …$\mathbf{D} \in \mathbb{R}^{N_1 \times N_2}$ can be expressed as $\mathbf{D} = \mathbf{L} + \mathbf{S} + \mathbf{C}$, where $\mathbf{L}$ is a low-rank matrix, $\mathbf{S}$ is an elementwise-sparse matrix, and $\mathbf{C}$ is a matrix whose nonzero columns are outlying data points.
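The three-term data model in the entry above is easy to picture with a tiny synthetic example. This is a hypothetical construction (the sizes, sparsity level, and outlier count are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 60, 80, 3  # ambient dimensions and the rank of L (arbitrary)

# Low-rank component: L = U V^T has rank r
L = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Elementwise-sparse component: roughly 2% of entries carry large spikes
S = np.zeros((n1, n2))
mask = rng.random((n1, n2)) < 0.02
S[mask] = 5.0 * rng.standard_normal(mask.sum())

# Column-outlier component: a few whole columns are outlying data points
C = np.zeros((n1, n2))
outlier_cols = rng.choice(n2, size=4, replace=False)
C[:, outlier_cols] = rng.standard_normal((n1, 4))

D = L + S + C  # the observed data matrix of the model above
print(np.linalg.matrix_rank(L), int((S != 0).sum()), len(outlier_cols))
```

A robust PCA method that models only $\mathbf{L}+\mathbf{S}$ or only $\mathbf{L}+\mathbf{C}$ would misread such a matrix, which is the gap this entry addresses.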
  To date, robust principal component analysis (PCA) algorithms have considered models with either $\mathbf{S}$ or $\mathbf{C}$, but not both, so existing algorithms cannot account for simultaneous elementwise and columnwise corruption. In this paper, a new robust PCA algorithm that is robust to both types of corruption simultaneously is proposed. Our approach hinges on the sparse approximation of a sparsely corrupted column: the sparse expansion of a column with respect to the other data points is used to distinguish a sparsely corrupted inlier column from an outlying data point. We also develop a randomized design that provides a scalable implementation of the proposed approach. The core idea of sparse approximation is analyzed, and we show that the underlying $\ell_1$-norm minimization can recover the representation of an inlier in the presence of sparse corruption.

Turbo-Type Message Passing Algorithms for Compressed Robust Principal Component Analysis (pp. 1182-1196)
  Abstract (excerpt): …a low-rank matrix $\boldsymbol{L}$ and a sparse matrix $\boldsymbol{S}$ are recovered from an underdetermined number of noisy linear measurements of their sum $\boldsymbol{L} + \boldsymbol{S}$; the problem arises in applications such as face recognition and video foreground/background separation, and can be solved by iterative algorithms based on Bayesian inference. However, most existing Bayesian algorithms factorize $\boldsymbol{L}$ into the product of two rank-$r$ matrices and estimate those two matrices (rather than $\boldsymbol{L}$ itself) in the iterative process, where $r$ is the rank of $\boldsymbol{L}$. On one hand, this factorization is not essential to the original problem and may cause a performance loss.
  On the other hand, existing Bayesian algorithms assume a certain probability model for the low-rank matrix $\boldsymbol{L}$ and the sparse matrix $\boldsymbol{S}$, whereas such a model is usually difficult to acquire in real applications. In this paper, we develop a Bayesian message passing algorithm, termed turbo-type message passing (TMP), for the compressed RPCA problem. We show that the proposed TMP algorithm significantly outperforms state-of-the-art compressed RPCA algorithms while requiring much lower computational complexity. TMP assumes no prior probability model for $\boldsymbol{L}$ and $\boldsymbol{S}$, and does not even require environmental information such as the rank of $\boldsymbol{L}$ or the sparsity level of $\boldsymbol{S}$. TMP is therefore a promising approach for real applications of compressed RPCA.

Low-Complexity Adaptive Algorithms for Robust Subspace Tracking (pp. 1197-1212)
  Abstract (excerpt): …$\alpha$-stable noise). Finally, a "detect-and-skip" approach is adopted, in which corrupted measurements are detected and treated as "missing" data; the resulting algorithm is particularly effective when the data are affected by sparse outliers. All of these approaches are analyzed and their convergence properties investigated, and the proposed subspace tracking algorithms are compared in simulated experiments with state-of-the-art methods under different noise/outlier conditions.

Wasserstein Stationary Subspace Analysis (pp. 1213-1223)

Subspace Change-Point Detection: A New Model and Solution (pp. 1224-1239)

Subspace Estimation From Incomplete Observations: A High-Dimensional Analysis (pp. 1240-1252)
  Abstract (excerpt): …$n$ tends to infinity. Moreover, the limiting processes can be characterized exactly as the unique solutions of certain ordinary differential equations (ODEs). A finite-sample bound is also given, showing that the rate of convergence toward these limits is $\mathcal{O}(1/\sqrt{n})$.
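Since the entry above establishes an asymptotic equivalence between Oja's method and GROUSE, a minimal rank-one Oja iteration may help fix ideas. This sketch uses arbitrary dimensions, step size, and noise levels; it is not the paper's algorithm or analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50  # ambient dimension (arbitrary)

# Ground-truth principal direction
u_true = rng.standard_normal(d)
u_true /= np.linalg.norm(u_true)

# Random unit-norm initialization
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

eta = 0.01  # constant step size (arbitrary)
for _ in range(5000):
    # One streaming sample: strong component along u_true plus isotropic noise
    x = 3.0 * rng.standard_normal() * u_true + 0.3 * rng.standard_normal(d)
    w += eta * x * (x @ w)  # Oja's update: stochastic ascent on the Rayleigh quotient
    w /= np.linalg.norm(w)  # project back onto the unit sphere

print(abs(w @ u_true))  # alignment with the true direction (sign-invariant)
```

GROUSE plays the same streaming role but performs incremental gradient steps on the Grassmannian and, unlike this toy loop, tolerates missing entries in each sample.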
  In addition to providing asymptotically exact predictions of the dynamic performance of the algorithms, our high-dimensional analysis yields several insights, including an asymptotic equivalence between Oja's method and GROUSE, and a precise scaling relationship linking the amount of missing data to the signal-to-noise ratio. By analyzing the solutions of the limiting ODEs, we also establish phase-transition phenomena associated with the steady-state performance of these techniques.

Binary Matrix Factorization via Dictionary Learning (pp. 1253-1262)

Unsupervised Joint Subspace and Dictionary Learning for Enhanced Cross-Domain Person Re-Identification (pp. 1263-1275)

M-Estimation-Based Subspace Learning for Brain Computer Interfaces (pp. 1276-1285)

Successive Convex Approximation Algorithms for Sparse Signal Estimation With Nonconvex Regularizations (pp. 1286-1302)

PF-FELM: A Robust PCA Feature Selection for Fuzzy Extreme Learning Machine (pp. 1303-1312)

Moving Object Detection Through Robust Matrix Completion Augmented With Objectness (pp. 1313-1323)

Multi-Attribute Robust Component Analysis for Facial UV Maps (pp. 1324-1337)
  Abstract (excerpt): …age and identity. We evaluate the proposed method on problems such as UV denoising, UV completion, facial expression synthesis, and age progression, where MA-RCA outperforms competing techniques.

Enhance Neighbor Reversibility in Subspace Learning for Image Retrieval (pp. 1338-1350)
  Abstract (excerpt): …subspace learning. In the proposed method, we use a small number of training images to learn a low-dimensional subspace in which the NR correlation is best preserved. Images are then mapped from their global features into the low-dimensional representation with the learned mapping function, which improves training efficiency.
  Experiments on four public retrieval datasets demonstrate that our method compares favorably with baseline subspace learning methods for image retrieval.

SULoRA: Subspace Unmixing With Low-Rank Attribute Embedding for Hyperspectral Data Analysis (pp. 1351-1363)

Tensor Nuclear Norm-Based Low-Rank Approximation With Total Variation Regularization (pp. 1364-1377)

Improved Robust Tensor Principal Component Analysis via Low-Rank Core Matrix (pp. 1378-1389)

Low-M-Rank Tensor Completion and Robust Tensor PCA (pp. 1390-1404)

t-Schatten-$p$ Norm for Low-Rank Tensor Recovery (pp. 1405-1419)
  Abstract (excerpt): …$p$ norm (t-Schatten-$p$ norm) based on the t-SVD, and prove that this norm has properties similar to those of the matrix Schatten-$p$ norm. More importantly, the t-Schatten-$p$ norm better approximates the $\ell_1$ norm of the tensor multi-rank when $0 < p < 1$, so it can serve as a tighter regularizer for low-rank tensor recovery problems. We further prove a tensor multi-Schatten-$p$ norm surrogate theorem and give an efficient algorithm accordingly. By decomposing the target tensor into many small-scale tensors, the nonconvex optimization problem $(0 < p < 1)$ is transformed equivalently into many convex subproblems, which greatly improves computational efficiency for large-scale tensors. Finally, we provide conditions for exact recovery in the noiseless case and give the corresponding error bounds for the noisy case.
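For context on the matrix norm that the entry above generalizes: the matrix Schatten-$p$ norm is the $\ell_p$ norm of the singular values (a quasi-norm for $0 < p < 1$). The following check uses plain matrix linear algebra, not the paper's t-SVD-based tensor construction:

```python
import numpy as np

def schatten_p(A, p):
    # l_p "norm" of the singular values of A; a quasi-norm for 0 < p < 1
    s = np.linalg.svd(A, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))  # rank-5 matrix

# Sanity checks: p = 1 is the nuclear norm, p = 2 the Frobenius norm
print(np.isclose(schatten_p(A, 1.0), np.linalg.norm(A, 'nuc')))
print(np.isclose(schatten_p(A, 2.0), np.linalg.norm(A, 'fro')))
```

Choosing $0 < p < 1$ weights small singular values more heavily than the nuclear norm does, which is the sense in which such quasi-norms approximate rank more tightly; the entry's contribution is extending this to tensors via the t-SVD.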
  Experimental results on both synthetic and real-world datasets demonstrate the superiority of the t-Schatten-$p$ norm in tensor robust principal component analysis and tensor completion problems.

Tensor Completion From Structurally-Missing Entries by Low-TT-Rankness and Fiber-Wise Sparsity (pp. 1420-1434)

Robust Tensor Approximation With Laplacian Scale Mixture Modeling for Multiframe Image and Video Denoising (pp. 1435-1448)

Distributed Differentially Private Algorithms for Matrix and Tensor Factorization (pp. 1449-1464)

Fast and Flexible Large Graph Embedding Based on Anchors (pp. 1465-1475)
  Abstract (excerpt): …$O(ndm)$, where $n$ is the number of samples, $d$ is the number of dimensions, and $m$ is the number of anchors. Interestingly, locality preserving projection and principal component analysis are two special cases of FFLGE. Experiments on several large-scale public datasets demonstrate the effectiveness and efficiency of the proposed method.

Semi-Supervised Tensorial Locally Linear Embedding for Feature Extraction Using PolSAR Data (pp. 1476-1490)

Unsupervised Feature Extraction for Hyperspectral Imagery Using Collaboration-Competition Graph (pp. 1491-1503)
  Abstract (excerpt): …$\ell_2$-norm regularization with a locality-constrained property into graph construction, named collaboration-competition preserving graph embedding. First, an undirected, weighted graph is constructed to exploit the data structure. Then, the edge-weight matrix of the graph is built by formulating the combined collaborative-competitive representation as a convex optimization problem. The constructed graph is expected to reveal both the local intrinsic manifold and the global geometry of the hyperspectral data.
  The superiority of the proposed graph-based unsupervised feature extraction method over traditional and state-of-the-art methods is demonstrated by classification accuracy on four typical hyperspectral datasets.

A General Framework for Understanding Compressed Subspace Clustering Algorithms (pp. 1504-1519)
  Abstract (excerpt): …i.e., sparse SC (SSC), SSC-orthogonal matching pursuit (SSC-OMP), and thresholding-based SC (TSC), as representatives, and analyze their performance using the proposed framework. Our results are consistent with those of previous researchers, but our methodology is more direct and has more universal implications. Finally, the practicality and efficiency of CSC are verified by numerical experiments.

On Geometric Analysis of Affine Sparse Subspace Clustering (pp. 1520-1533)
  Abstract (excerpt): …affine SSC (ASSC), for the problem of clustering data from a union of affine subspaces. Our contributions include a new concept, called affine independence, for capturing the arrangement of a collection of affine subspaces. Under the affine independence assumption, we show that ASSC is guaranteed to produce a subspace-preserving affinity. Moreover, inspired by the phenomenon that $\ell_1$ regularization no longer induces sparsity when the solution is nonnegative, we show that subspace-preserving recovery can be achieved under much weaker conditions for all data points other than the extreme points of the samples from each subspace. In addition, we confirm the curious observation that the affinity produced by ASSC may be subspace-dense, a property that can still guarantee correct clustering under rather weak conditions.
  We validate the theoretical findings on carefully designed synthetic data and evaluate the performance of ASSC on several real datasets.

Evolutionary Self-Expressive Models for Subspace Clustering (pp. 1534-1546)

Data Recovery and Subspace Clustering From Quantized and Corrupted Measurements (pp. 1547-1560)

Graph and Sparse-Based Robust Nonnegative Block Value Decomposition for Clustering (pp. 1561-1574)
  Abstract (excerpt): …$\ell_{2,1}$-norm for the NBVD structure that compensates for samples not conforming to NBVD. To exploit the connection between the learned matrix and its coefficients through sparse representation, we enforce sparsity constraints on the middle matrix of the R-NBVD framework, yielding SR-NBVD. To carry the geometric information of the data space into the new space, we add a regularized graph-representation term to the objective, in a compact form called GSR-NBVD. We then prove the convergence of the proposed methods and visualize the effectiveness of G-NBVD and GSR-NBVD step by step. Finally, we evaluate the proposed clustering methods on several kinds of datasets; the experimental results confirm that our methods outperform several state-of-the-art methods on different metrics.

Improving $K$-Subspaces via Coherence Pursuit (pp. 1575-1588)
  Abstract (excerpt): …$K$-Subspaces (KSS), an alternating algorithm that mirrors $K$-means, is a classical approach to clustering with this model. Like $K$-means, KSS is highly sensitive to initialization, yet KSS has two major handicaps beyond this issue. First, unlike $K$-means, the KSS objective is NP-hard to approximate within any finite factor for a large enough subspace rank. Second, the $\ell_2$ subspace estimation step is known to be faulty when an estimated cluster contains points from multiple subspaces. In this paper, we demonstrate both of these drawbacks, provide a proof for the former, and offer a solution to the latter through a robust subspace recovery (RSR) method known as coherence pursuit (CoP). While many RSR methods have been developed in recent years, few can handle outliers that are themselves low-rank; we prove that CoP can. This property and its low computational complexity make CoP ideal to incorporate into the subspace estimation step of KSS. We demonstrate on synthetic data that CoP successfully rejects low-rank outliers, and show that combining CoP with $K$-Subspaces yields state-of-the-art clustering performance on canonical benchmark datasets.

Coded Aperture Design for Compressive Spectral Subspace Clustering (pp. 1589-1600)

Deep Multimodal Subspace Clustering Networks (pp. 1601-1614)

Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers (pp. 1615-1627)

2018 List of Reviewers (pp. 1628-1634)

IEEE Journal of Selected Topics in Signal Processing information for authors (pp. 1635-1637)

2018 Index IEEE Journal of Selected Topics in Signal Processing Vol. 12 (pp. 1638-1656)

IEEE Signal Processing Society Information
In this paper, we demonstrate both of these additional drawbacks, provide a proof for the former, and offer a solution to the latter through the use of a robust subspace recovery (RSR) method known as coherence pursuit (CoP). While many RSR methods have been developed in recent years, few can handle the case where the outliers are themselves low rank. We prove that CoP can handle low-rank outliers. This and its low computational complexity make it ideal to incorporate into the subspace estimation step of KSS. We demonstrate on synthetic data that CoP successfully rejects low-rank outliers and show that combining CoP with $K$-Subspaces yields state-of-the-art clustering performance on canonical benchmark datasets.]]>12615751588719<![CDATA[Coded Aperture Design for Compressive Spectral Subspace Clustering]]>1261589160011324<![CDATA[Deep Multimodal Subspace Clustering Networks]]>126160116144343<![CDATA[Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers]]>126161516273417<![CDATA[2018 List of Reviewers]]>1261628163457<![CDATA[IEEE Journal of Selected Topics in Signal Processing information for authors]]>1261635163766<![CDATA[2018 Index IEEE Journal of Selected Topics in Signal Processing Vol. 12]]>12616381656248<![CDATA[IEEE Signal Processing Society Information]]>126C3C357<![CDATA[[Blank page]]]>126C4C44