
A Novel Approach to Quantized Matrix Completion Using Huber Loss Measure



Abstract:

In this paper, we introduce a novel and robust approach to quantized matrix completion. First, we propose a rank minimization problem with constraints induced by quantization bounds. Next, we form an unconstrained optimization problem by regularizing the rank function with the Huber loss. The Huber loss is leveraged to control violations of the quantization bounds owing to two properties: first, it is differentiable; second, it is less sensitive to outliers than the quadratic loss. A smooth rank approximation is utilized to promote a lower rank on the genuine data matrix. Thus, an unconstrained optimization problem with a differentiable objective function is obtained, allowing us to take advantage of the gradient descent technique. A novel and rigorous theoretical analysis of the problem model and of the convergence of our algorithm to the global solution is provided. Another contribution of this letter is that, unlike state-of-the-art methods, our method does not require projections or an initial rank estimation. In the Numerical Experiments section, we illustrate, as the main contribution, that the proposed method noticeably outperforms state-of-the-art methods from the literature in both learning accuracy and computational complexity.
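For reference, the standard Huber function with threshold parameter \(\delta > 0\) (the exact parameterization adopted in the paper may differ) is

\[
H_{\delta}(t) \;=\;
\begin{cases}
\tfrac{1}{2}\, t^{2}, & |t| \le \delta,\\[2pt]
\delta \left( |t| - \tfrac{\delta}{2} \right), & |t| > \delta,
\end{cases}
\]

which is differentiable everywhere and grows only linearly for large \(|t|\); these are precisely the two properties (differentiability and reduced sensitivity to outliers relative to the quadratic loss) exploited above.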
Published in: IEEE Signal Processing Letters ( Volume: 26, Issue: 2, February 2019)
Page(s): 337 - 341
Date of Publication: 06 January 2019


I. Introduction

In this paper, we extend the Matrix Completion (MC) problem [1]–[3] to the Quantized Matrix Completion (QMC) problem. In QMC, the accessible entries are quantized rather than continuous, and the rest are missing. The purpose is to recover the original continuous-valued matrix under certain assumptions. According to [7], the QMC problem addresses a wide variety of applications, including collaborative filtering [4], sensor networks [5], and learning and content analysis [6]. A special case of QMC, one-bit MC, is considered by several authors in [8]–[10]. However, the scope of this paper is not confined to one-bit MC and addresses multi-level QMC.

We review multi-level QMC methodologies in the literature below. In [11], a robust QMC method based on the Projected Gradient (PG) approach is introduced in order to optimize a constrained log-likelihood problem. Novel QMC algorithms are introduced in [7] and [12]. First, a Maximum Likelihood (ML) estimation under an exact rank constraint is considered, and an Approximate PG (APG) method is introduced. Next, the log-likelihood term is penalized with a log-barrier function, and bilinear factorization is utilized along with the Gradient Descent (GD) technique to optimize the resulting unconstrained problem, which leads to the Logarithmic Barrier Gradient (LBG) method. These methods may suffer from local minima or saddle point issues. In [13], the authors consider a trace-norm-regularized ML estimation with a likelihood function for the categorical distribution and establish theoretical upper and lower bounds on the error. In [14], the Augmented Lagrangian Method (ALM) and bilinear factorization are utilized to address QMC; the method proposed in [14], QMC-BIF, leads to enhanced recovery accuracy compared to previous works. In [15], a new method for MC from quantized and erroneous measurements is proposed, which includes a sparse additive error in the model; this is an extension of the APG method in [7]. Later, the authors introduce a more robust version of their quantized recovery algorithm in [16]. By ignoring the sparse error in their proposed algorithm, we obtain a new QMC algorithm for our assumed model, denoted AG. However, the aforementioned algorithms depend on knowledge of bounds on the rank (an initial rank estimation), whereas our proposed algorithm does not. Applications of QMC can be found in [17]–[20].
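As a minimal sketch of the kind of procedure described above (under assumptions of our own, not the authors' exact formulation): each observed entry is known only up to its quantization bin bounds, violations of those bounds are penalized by the Huber loss, a smooth spectral surrogate stands in for the rank, and the resulting differentiable objective is minimized by plain gradient descent from a zero initialization, so no projection step or initial rank estimate is required. The surrogate sum_i (1 - exp(-sigma_i^2 / gamma)), the residual construction, and all function names below are illustrative assumptions.

import numpy as np

def huber(t, delta=1.0):
    # Standard Huber function: quadratic near zero, linear in the tails.
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def huber_grad(t, delta=1.0):
    # Elementwise derivative of the Huber function.
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def objective_and_grad(X, lower, upper, mask, lam=1.0, delta=1.0, gamma=1.0):
    # lower/upper hold the per-entry quantization bin bounds of the observed
    # entries; mask is 1 on observed entries and 0 elsewhere.
    viol = (np.minimum(X - lower, 0.0) + np.maximum(X - upper, 0.0)) * mask
    loss = huber(viol, delta).sum()
    grad_loss = huber_grad(viol, delta)

    # Smooth rank surrogate sum_i (1 - exp(-sigma_i^2 / gamma)): one common
    # choice, not necessarily the surrogate used in the paper.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    surrogate = np.sum(1.0 - np.exp(-s**2 / gamma))
    grad_surr = (U * (2.0 * s / gamma * np.exp(-s**2 / gamma))) @ Vt

    return loss + lam * surrogate, grad_loss + lam * grad_surr

def recover(lower, upper, mask, n_iter=500, step=0.1, **kw):
    # Plain gradient descent from a zero initialization; no rank estimate needed.
    X = np.zeros_like(lower, dtype=float)
    for _ in range(n_iter):
        _, g = objective_and_grad(X, lower, upper, mask, **kw)
        X -= step * g
    return X

In practice the step size and the parameters delta, gamma, and lam would need tuning, and the per-iteration SVD dominates the computational cost.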
