I. Introduction
In this paper, we extend the Matrix Completion (MC) problem [1]–[3] to the Quantized Matrix Completion (QMC) problem. In QMC, the accessible entries are quantized rather than continuous, and the remaining entries are missing; the goal is to recover the original continuous-valued matrix under certain assumptions (a toy instance of this observation model is sketched at the end of this section). The QMC problem arises in a wide variety of applications, including collaborative filtering [4], sensor networks [5], and learning and content analysis [6], as noted in [7]. A special case of QMC, one-bit MC, is considered by several authors in [8]–[10]. The scope of this paper, however, is not confined to one-bit MC and addresses multi-level QMC.

We review multi-level QMC methods from the literature below. In [11], a robust QMC method based on the Projected Gradient (PG) approach is introduced to optimize a constrained log-likelihood problem. Novel QMC algorithms are introduced in [7], [12]: first, maximum likelihood (ML) estimation under an exact rank constraint is considered and the Approximate PG (APG) method is introduced; next, the log-likelihood term is penalized with a log-barrier function, and bilinear factorization together with the Gradient Descent (GD) technique is employed to optimize the resulting unconstrained problem, yielding the Logarithmic Barrier Gradient (LBG) method (a simplified sketch of such a factored gradient scheme is also given at the end of this section). These methods may suffer from local minima or saddle-point issues. In [13], the authors consider trace-norm-regularized ML estimation with a likelihood function for the categorical distribution, and they establish theoretical upper and lower bounds on the recovery error. In [14], the Augmented Lagrangian Method (ALM) and bilinear factorization are utilized to address the QMC problem; the proposed method, QMC-BIF, achieves improved recovery accuracy compared to previous works. In [15], a new method for MC from quantized and erroneous measurements is proposed, which accounts for a sparse additive error in the model and extends the APG method of [7]. Later, the authors introduce a more robust version of their quantized recovery algorithm in [16]. By ignoring the sparse error term in their algorithm, we obtain a new QMC algorithm for our assumed model, denoted AG. However, the aforementioned algorithms depend on knowledge of a bound on the rank (an initial rank estimate), whereas our proposed algorithm does not. Further applications of QMC can be found in [17]–[20].
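To make the observation model concrete, the following is a minimal sketch of a multi-level QMC measurement process. The Bernoulli sampling mask, the quantile-based thresholds, and the number of quantization levels are illustrative assumptions for this sketch, not the exact setup analyzed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank ground-truth matrix M = U0 @ V0.T of rank r (illustrative sizes).
n1, n2, r = 100, 80, 5
U0 = rng.standard_normal((n1, r))
V0 = rng.standard_normal((n2, r))
M = U0 @ V0.T

# Multi-level quantizer: Q levels separated by Q-1 interior thresholds
# (chosen here as empirical quantiles of M, purely for illustration).
Q = 4
thresholds = np.quantile(M, np.linspace(0, 1, Q + 1)[1:-1])
labels = np.digitize(M, thresholds)      # quantized levels in {0, ..., Q-1}

# Each quantized entry is observed independently with probability p;
# the remaining entries are missing (marked with -1).
p = 0.5
mask = rng.random((n1, n2)) < p
Y = np.where(mask, labels, -1)
```

Recovery then amounts to estimating the continuous-valued matrix M from the pair (Y, mask) alone.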
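The following is a simplified sketch of likelihood-based recovery via bilinear factorization and gradient descent, in the spirit of the LBG-style methods discussed above. It assumes a Gaussian noise model, so that the probability of observing level q at an entry with underlying value x is Phi((t_{q+1} - x)/sigma) - Phi((t_q - x)/sigma) for the standard normal CDF Phi; the log-barrier penalty of the actual LBG objective is omitted, and all names, step sizes, and iteration counts are illustrative rather than taken from the cited works.

```python
import numpy as np
from scipy.stats import norm

def qmc_factored_gd(Y, mask, thresholds, r, sigma=1.0, lr=1e-3, iters=500, seed=0):
    """Sketch: estimate X ~ U @ V.T from quantized labels Y (levels 0..Q-1 on mask)."""
    n1, n2 = Y.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n1, r))
    V = 0.1 * rng.standard_normal((n2, r))
    # Pad thresholds with -inf/+inf so every level q has edges t[q], t[q+1].
    t = np.concatenate(([-np.inf], np.asarray(thresholds, float), [np.inf]))
    Yq = np.where(mask, Y, 0)  # dummy level for missing entries (masked out below)
    for _ in range(iters):
        X = U @ V.T
        lo = (t[Yq] - X) / sigma       # standardized lower edge of each observed level
        up = (t[Yq + 1] - X) / sigma   # standardized upper edge
        prob = np.maximum(norm.cdf(up) - norm.cdf(lo), 1e-12)
        # Gradient of the negative log-likelihood w.r.t. X, restricted to observed entries.
        G = mask * (norm.pdf(up) - norm.pdf(lo)) / (sigma * prob)
        gU, gV = G @ V, G.T @ U        # chain rule through X = U @ V.T
        U -= lr * gU
        V -= lr * gV
    return U @ V.T
```

On the toy data above, X_hat = qmc_factored_gd(Y, mask, thresholds, r) returns a continuous-valued estimate of M. Note that the rank input r is exactly the kind of prior knowledge that, as discussed above, our proposed algorithm does not require.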