We address the problem of maximum likelihood subspace learning and detection in the presence of Laplacian noise and interference whose subspace may be known or unknown. For subspace learning, the Laplacian problem reduces to a max-min convex mathematical program with a polyhedral cost. The minimization projects the measurements onto a subspace orthogonal to the signal and interference subspaces and has linear constraints, while the maximization produces the subspace and has polyhedral constraints. The Laplacian noise model for subspace detection and estimation, motivated by applications in functional magnetic resonance imaging and applicable in other areas, yields maximum likelihood detectors and learned subspaces with a unique structure owing to the corners of the polyhedrally convex optimization. For instance, the optimal learned subspace can consist of vectors whose elements take values of +1 or -1 only. The emergence of such quantization attests to the robustness of Laplacian learning: the solution is insensitive to perturbations of the data set. The resulting detectors are similarly robust to false alarms and are computationally attractive.
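The robustness claim has a familiar one-dimensional analogue not spelled out above: under i.i.d. Laplacian noise, maximum likelihood estimation minimizes the L1 norm of the residual rather than the L2 norm, so the ML location estimate is the sample median instead of the mean. A minimal sketch (illustrative only, not the paper's algorithm) shows why the L1 solution is insensitive to a perturbed measurement:

```python
import numpy as np

# Gaussian ML location estimate: minimizes sum_i (y_i - x)^2, i.e. the mean.
def ml_location_gaussian(y):
    return float(np.mean(y))

# Laplacian ML location estimate: minimizes sum_i |y_i - x|, i.e. the median.
# The L1 cost is polyhedral; its minimizer sits at a "corner" and barely
# moves when a single observation is grossly perturbed.
def ml_location_laplacian(y):
    return float(np.median(y))

y = np.array([0.9, 0.95, 1.0, 1.05, 1.1])
y_corrupted = np.append(y, 100.0)  # one wildly perturbed measurement

print(ml_location_gaussian(y), ml_location_gaussian(y_corrupted))
# mean jumps from 1.0 to 17.5
print(ml_location_laplacian(y), ml_location_laplacian(y_corrupted))
# median moves only from 1.0 to 1.025
```

The same mechanism underlies the subspace result: the polyhedral L1 cost places its minimizers at corners of the constraint set, which is how quantized (+1/-1) subspace vectors can emerge.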