On the Best Lattice Quantizers

A lattice quantizer approximates an arbitrary real-valued source vector with a vector taken from a specific discrete lattice. The quantization error is the difference between the source vector and the lattice vector. In a classic 1996 paper, Zamir and Feder show that the globally optimal lattice quantizer (which minimizes the mean square error) has white quantization error: for a uniformly distributed source, the covariance of the error is the identity matrix, multiplied by a positive real factor. We generalize the theorem, showing that the same property holds (i) for any lattice whose mean square error cannot be decreased by a small perturbation of the generator matrix, and (ii) for an optimal product of lattices that are themselves locally optimal in the sense of (i). We derive an upper bound on the normalized second moment (NSM) of the optimal lattice in any dimension, by proving that any lower- or upper-triangular modification to the generator matrix of a product lattice reduces the NSM. Using these tools and employing the best currently known lattice quantizers to build product lattices, we construct improved lattice quantizers in dimensions 13 to 15, 17 to 23, and 25 to 48. In some dimensions, these are the first reported lattices with normalized second moments below the best known upper bound.


I. INTRODUCTION
Lattices are regular arrays of points in R^n. They are obtained as arbitrary linear combinations of (at most n) linearly independent basis vectors, with integer coefficients. Hence, a lattice is a countably infinite set of vectors, closed under addition. The remarkable book by Conway and Sloane [1] provides a comprehensive review of lattices and their properties.
As fundamental geometric structures, lattices have found applications in a variety of disciplines, including digital communications [2], experimental design [3], data analysis [4], and particle physics [5]. In each application, the problem of designing the best lattice for a given purpose arises. Such optimization challenges often reduce to familiar mathematical problems such as sphere packing, sphere covering, or quantization [1, Chs. 1-2].
In this paper, we are concerned with the quantization problem, which can be defined as follows. Random vectors in R^n are drawn from some (source) probability distribution, and approximated by their closest lattice points. This approximation (or quantization) process creates a round-off (or quantization) error: the difference between the vector and its closest lattice point. Among all lattices having the same number of lattice points per unit volume, the optimal lattice quantizer is the lattice with the minimum mean square error. This is equivalent to minimizing the normalized second moment (NSM), which is a scale-invariant measure of this mean square error.
As in most work on lattice quantization, we assume that the lattice is sufficiently dense so that the source probability distribution is approximately constant over each Voronoi region. In this case, the optimal lattice does not depend upon the source distribution of the random vectors.
Tables of the NSMs of the best known lattices for quantization in various dimensions are given in [6], [7], [1, p. 61], and the quantization performance of some additional lattices is computed in [8]-[13]. Yet, proofs of optimality are known only in dimensions up to three [6], [14].
In a pioneering 1996 paper [15], Zamir and Feder show that the optimal lattice quantizer in any dimension has a white quantization error. More precisely, the error defined above (vector difference between a random source vector and its closest lattice vector) has a covariance matrix which is the identity matrix, scaled by a positive real constant.
In this paper, we extend the Zamir and Feder result to locally optimal lattices. These are lattices whose NSM cannot be reduced by a small perturbation of the lattice generator matrix. We also consider product lattices, which are the Cartesian product of two or more lower-dimensional lattices. The NSM of a product lattice depends on the relative scaling between the component lattices. A closed-form expression for the optimal scale factors is derived, and we call a product lattice using such scale factors an optimal product. If each of the lower-dimensional lattices is locally optimal, then we prove that the optimal product is the one for which the quantization error is white.
Lastly, we apply these methods to explicitly design some product lattices and analytically optimize their scale factors. These provide constructive upper bounds on the quantization performance of the optimal lattices in their respective dimensions. This simple construction yields better lattice quantizers than previously reported in all dimensions above 12 except for 16 and 24. We also prove that further optimization is possible: the NSM of such product lattices is a saddle point in the space of generator matrices, and can be further reduced by certain perturbations of the generator matrix.

II. MATHEMATICAL PRELIMINARIES AND METHOD
Notation: Bold lowercase letters x denote row vectors, while bold uppercase letters X denote either matrices or random vectors. An all-zero vector or matrix of an arbitrary size (inferred from the context) is denoted by 0, and identity matrices are denoted by I. Sets are denoted by uppercase Greek letters Ω, apart from the integers Z and real numbers R. Arithmetical operations on sets should be understood as operating per element, e.g., Ω + λ ≜ {x + λ : x ∈ Ω}. Definitions are indicated by ≜.
Without loss of generality, we consider n-dimensional lattices Λ that are generated by square invertible n × n generator matrices B. The lattice consists of the set of points uB for all row vectors u with integer components. The all-zero row vector 0 belongs to all lattices. The cubic lattice Z^n is the special case for which B is the identity matrix.
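As a concrete illustration (a minimal sketch of our own; the A_2 generator below is one standard choice, not taken from this paper), the lattice points uB can be enumerated for integer coefficient vectors u in a small window:

```python
import numpy as np

# Generator matrix of the hexagonal lattice A2 (one common choice; any basis
# related by an integer unimodular transform generates the same lattice).
B = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])

# Lattice points uB for integer row vectors u in a 5x5 window of coefficients.
rng = range(-2, 3)
points = np.array([[u1, u2] @ B for u1 in rng for u2 in rng])

print(len(points))                    # 25 points from the 5x5 coefficient window
print([0.0, 0.0] in points.tolist())  # True: the all-zero vector belongs to every lattice
```

Enumerating a window of coefficients like this is only a visualization aid; the lattice itself is the countably infinite set obtained as u ranges over all of Z^2.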
Until now, we have used "quantization" to denote the map from a vector in R^n to the closest lattice point. However, for the proofs in this paper, we consider more general mappings. A quantization rule or quantizer for a lattice Λ is a function Q_Λ : R^n → Λ satisfying

Q_Λ(x) = 0 for all x ∈ Ω(Q_Λ), (1)
Q_Λ(x + λ) = Q_Λ(x) + λ for all x ∈ R^n and λ ∈ Λ, (2)

where

Ω(Q_Λ) ≜ {x ∈ R^n : Q_Λ(x) = 0} (3)

is the fundamental decision region. The quantizer's properties are completely determined by its behavior in the fundamental decision region, since (2) may then be used to determine its action anywhere.
The translate Ω(Q_Λ) + λ of the fundamental decision region is called the decision region of the lattice point λ. All decision regions have the same volume [2, Prop. 2.2.1]

V_Λ ≜ |det B|. (4)

As indicated by the notation V_Λ, this volume depends upon the lattice Λ, but is independent of the quantization rule. Taken together, the decision regions of all lattice points cover R^n without overlap.
As mentioned in the Introduction, the performance of lattice quantizers does not depend upon the source distribution if the lattice is sufficiently dense. To prove this, consider a source probability density function (pdf) p_X(x), normalized by ∫ p_X(x) d^n x = 1. The mean square quantization error of the quantization rule is

E(Q_Λ) ≜ ∫_{R^n} ||x − Q_Λ(x)||² p_X(x) d^n x. (5)

Since the translates Ω(Q_Λ) + λ, ∀λ ∈ Λ, cover R^n without overlap, the mean square error can be written as

E(Q_Λ) = ∫_{Ω(Q_Λ)} ||ξ||² Σ_{λ∈Λ} p_X(λ + ξ) d^n ξ, (6)

where we use (2) to write this as an integral over the fundamental decision region and (1) to set Q_Λ(x) = 0 inside that region. If now Λ is sufficiently dense, then Σ_{λ∈Λ} p_X(λ + ξ) is approximately constant, independent of ξ. Such a probability distribution can be obtained by rescaling a smooth base pdf p of compact support, for example as p_X(x) = α^n p(αx) in the limit as α → 0. In such a limit, Σ_{λ∈Λ} p_X(λ + ξ) = 1/V_Λ for any ξ,¹ and the mean square error (6) approaches [16]

(1/V_Λ) ∫_{Ω(Q_Λ)} ||x||² d^n x. (7)

Hence, in what follows, the pdf of the source does not appear, and we take (7) as the defining expression for the mean square error,

E(Q_Λ) ≜ (1/V_Λ) ∫_{Ω(Q_Λ)} ||x||² d^n x. (8)

Important quantities that are closely related to the mean square error are the NSM or quantizer constant G(Q_Λ) and the correlation matrix R(Q_Λ), which are [6], [2, pp. 48, 71]

G(Q_Λ) ≜ E(Q_Λ) / (n V_Λ^{2/n}), (9)
R(Q_Λ) ≜ (1/V_Λ) ∫_{Ω(Q_Λ)} xᵀ x d^n x. (10)

Note that the NSM G(Q_Λ) is "dimensionless" in the sense that it is invariant under uniform rescaling of the lattice. From (8) and (10), it follows that

tr R(Q_Λ) = E(Q_Λ), (11)

so the trace of the correlation matrix gives the mean square error.
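These definitions are easy to check numerically for the cubic lattice Z^n, where the minimum-distance quantizer is componentwise rounding and V_Λ = 1. A hedged Monte Carlo sketch of our own (not code from the paper), using the standard relation G = E/(n V^{2/n}):

```python
import numpy as np

def nsm_cubic(n, samples=200_000, seed=0):
    """Monte Carlo estimate of G(Z^n): quantize by rounding, average ||error||^2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.5, 0.5, size=(samples, n)) * 10  # arbitrary bounded support
    err = x - np.round(x)                               # minimum-distance quantizer for Z^n
    E = np.mean(np.sum(err**2, axis=1))                 # mean square error; V = 1
    return E / n                                        # G = E / (n V^(2/n)) with V = 1

for n in (1, 2, 8):
    print(n, nsm_cubic(n))   # each close to 1/12 ~ 0.08333, independent of n
```

The estimate clusters around 1/12 in every dimension, consistent with the scale-invariant definition (9): Z^n is simply the n-fold product of Z, and its NSM does not improve with n.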
It follows immediately from the definition (10) that the correlation matrix R is real, symmetric, and positive definite. If the quantization error is not white, then R provides "preferred directions" in the space, for example corresponding to the eigenvector with the largest or the smallest eigenvalue. In the case of white quantization error, however, R is proportional to the identity, and does not generate preferred directions, since every vector is an eigenvector with the same positive real eigenvalue.
For a given lattice Λ, the most common and important rule is the minimum-distance quantization rule, denoted by a hat:

Q̂_Λ(x) ≜ arg min_{λ∈Λ} ||x − λ||. (12)

For any vector x, it returns the closest vector in the lattice. Ties can be broken by any criterion that respects condition (2). This quantization rule is special because, for a given lattice Λ, it minimizes E(Q_Λ) and G(Q_Λ). This follows immediately from (7), because the squared error ||x − Q_Λ(x)||² is minimized for every x. Hence, Q̂_Λ is the optimal decision rule for a given lattice.
For this rule, the fundamental decision region (3) is the Voronoi region

Ω(Q̂_Λ) = {x ∈ R^n : ||x|| ≤ ||x − λ||, ∀λ ∈ Λ}, (13)

which geometrically consists of all points in R^n whose closest lattice point is the origin. An important property of the Voronoi region of any lattice is that it is symmetric about 0: the center of gravity satisfies ∫_{Ω(Q̂_Λ)} x d^n x = 0. Hence, the correlation matrix R(Q_Λ) is equal to the covariance matrix whenever Q_Λ = Q̂_Λ (but not for arbitrary quantization rules Q_Λ).
Throughout this paper, the word "optimal" is used in several senses. For a given lattice, the optimal decision rule is the one which minimizes the NSM, i.e., (12). Among all lattices of a given dimension, the optimal lattice is the one with the smallest NSM. The optimal product of given lattices is the one that minimizes the NSM among all Cartesian products of those lattices, by varying the relative scales between them.
Our main theorem-proving technique is, as in [2, Sec. 4.3], to construct different decision rules for a given lattice, exploiting the fact that their NSMs are equal to or greater than the NSM of the optimal decision rule Q̂_Λ. For example, in Sec. III, if the quantization error of a lattice Λ is not white, then R provides preferred directions (say, the eigenvectors with the largest eigenvalue). With these, we construct a family of lattices Λ̃ with nonoptimal decision rules, whose NSM is smaller than that of the original lattice Λ. Since the NSM of Λ̃ with an optimal decision rule cannot be larger, we have thus shown that the original lattice is not optimal. A similar proof technique is applied in Sec. V. Starting with a product lattice, we generate a new non-product lattice, with a nonoptimal decision rule, but whose NSM is equal to that of the original starting lattice. Hence, the optimal decision rule on the new non-product lattice must yield a smaller NSM than that of the original product.

III. LOCALLY OPTIMAL LATTICES
Our starting point is the following theorem, which states that the globally optimal quantizer lattice has a white quantization error: a covariance matrix proportional to the identity.
Theorem 1 ([15], [2, Sec. 4.3]): For the optimal lattice Λ in any dimension n,

R(Q̂_Λ) = (E(Q̂_Λ)/n) I. (14)

We now generalize this to the locally optimal case. A locally optimal lattice is a lattice Λ whose NSM G(Q̂_Λ) cannot be decreased by an infinitesimal perturbation of the generator matrix B [14]. The extension to Theorem 1 is:

Theorem 2: Any locally optimal lattice satisfies (14).

Proof: Our proof is constructive. If the covariance matrix of Λ is not proportional to the identity matrix, we use it to build a nearby lattice Λ̃ with a smaller NSM than the NSM of Λ.
Let Λ̃ ≜ Λ A_β, where A_β is an invertible n × n matrix and β is a real parameter to be defined later. As in [15], we consider the minimum-distance quantizer Q̂_Λ(x) on Λ and the suboptimal quantizer Q̃_Λ̃(x) ≜ Q̂_Λ(x A_β^{−1}) A_β on Λ̃. It is straightforward to show that Q̃_Λ̃(x) satisfies (1)-(2) and has a fundamental decision region Ω(Q̃_Λ̃) = Ω(Q̂_Λ) A_β. Note that while Ω(Q̂_Λ) is the Voronoi region of Λ, the fundamental decision region Ω(Q̃_Λ̃) is generally not the Voronoi region of Λ̃. By (4), it has volume V_Λ |det A_β|. The covariance matrices of the two quantization rules are easily related using the change of variables (mapping) provided by A_β. From (10), the covariance matrix of Q̃_Λ̃ is R(Q̃_Λ̃) = A_βᵀ R(Q̂_Λ) A_β [15]. Hence, the NSM is

G(Q̃_Λ̃) = tr(R(Q̂_Λ) A_β A_βᵀ) / (n (V_Λ |det A_β|)^{2/n}), (15)

where we have used the cyclic property of the trace.
To select A_β, we follow the approach described earlier, using R(Q̂_Λ) to obtain preferred directions. Let R̃ denote the traceless part, which by assumption is nonzero:

R̃ ≜ R(Q̂_Λ) − (tr R(Q̂_Λ)/n) I ≠ 0, (16)

and let A_β ≜ exp(β R̃). This choice of mapping is volume-preserving, since for any square matrix M, det(exp(M)) = exp(tr M) [18, p. 16]. Thus, det A_β = exp(β tr R̃) = 1. Note that because the covariance matrix is symmetric and real, both R̃ and A_β are symmetric and real.
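The volume-preserving property is easy to verify numerically. The sketch below (our own; the matrix R̃ is an arbitrary traceless symmetric example) computes exp(β R̃) by eigendecomposition, which is valid for any real symmetric matrix, and checks that the determinant is exactly 1:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(w)) @ V.T

# An arbitrary traceless symmetric perturbation direction (illustration only).
R_tilde = np.array([[0.3, 0.1],
                    [0.1, -0.3]])
assert abs(np.trace(R_tilde)) < 1e-12  # traceless, as in (16)

beta = 0.05
A = expm_sym(beta * R_tilde)
print(np.linalg.det(A))   # det A = exp(beta * tr R_tilde) = 1: the map preserves volume
```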
For the proof, we only need A_β for infinitesimal β:

A_β = I + β R̃ + O(β²). (17)

Substituting A_β from (17) and R(Q̂_Λ) = I tr R(Q̂_Λ)/n + R̃ from (16) into (15), the NSM becomes

G(Q̃_Λ̃) = G(Q̂_Λ) + (2β/(n V_Λ^{2/n})) tr(R̃²) + O(β²), (18)

where we have distributed the trace over additions and used tr R̃ = 0.
It is clear from (18) that for negative β near zero, we have G(Q̃_Λ̃) < G(Q̂_Λ). This follows because, since R̃ is a non-vanishing real symmetric matrix, tr R̃² must be positive. Since the NSM of the minimum-distance quantization rule (12) on Λ̃ satisfies G(Q̂_Λ̃) ≤ G(Q̃_Λ̃), we have established that for negative β near zero, G(Q̂_Λ̃) < G(Q̂_Λ).
To test Theorem 2, we examine a large number of numerically optimized lattice quantizers. These were designed in 1996 using an iterative algorithm, which converges to different locally optimal lattices [11]. A total of 90 locally optimal lattices are available as online supplementary material to the 1998 article [19]; we estimate the covariance matrices R(Q̂_Λ) of their quantization errors using Monte Carlo integration. In all cases, consistent with the theorem, the obtained covariance matrices are proportional to the identity matrix, apart from minor round-off errors.
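The same check is easy to reproduce for the hexagonal lattice A_2, which is known to be the optimal (hence locally optimal) two-dimensional quantizer. A Monte Carlo sketch of our own, with a brute-force closest-point search, yields a correlation matrix close to a multiple of the identity:

```python
import numpy as np

B = np.array([[1.0, 0.0], [0.5, np.sqrt(3)/2]])   # hexagonal lattice A2
Binv = np.linalg.inv(B)

def closest_point(x):
    """Minimum-distance quantizer by brute force over integer coefficients
    near the rounded solution (adequate for a well-conditioned 2-D basis)."""
    u0 = np.round(x @ Binv)
    best, dmin = None, np.inf
    for d1 in (-1, 0, 1):
        for d2 in (-1, 0, 1):
            cand = (u0 + [d1, d2]) @ B
            d = np.sum((x - cand)**2)
            if d < dmin:
                best, dmin = cand, d
    return best

rng = np.random.default_rng(1)
xs = rng.uniform(-4, 4, size=(50_000, 2))
errs = np.array([x - closest_point(x) for x in xs])
R = errs.T @ errs / len(errs)   # estimated correlation matrix R
print(R)                        # close to a multiple of the identity (white error)
```

Up to Monte Carlo noise, the diagonal entries agree and the off-diagonal entries vanish, as Theorem 2 predicts.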
Theorem 2 establishes that local optimality is a sufficient condition for a white quantization error (14). Is it also a necessary condition? In other words, is any lattice that satisfies (14) locally optimal? In the next two sections, we will show that this is false, using product lattices as a counterexample.

IV. PRODUCT LATTICES
In this section, we study lattices that are formed as the Cartesian products of lower-dimensional lattices. Gersho applied this technique to obtain upper bounds on the optimal NSM for n = 5 and n = 100, without formalizing the expressions [6, Sec. VII].
Let k lattices in dimensions n_1, . . ., n_k be denoted by Λ_1, . . ., Λ_k, and consider their product

Λ_p ≜ Λ_1 × ⋯ × Λ_k, (19)

generated by the block-diagonal matrix B_p ≜ diag(B_1, . . ., B_k), where B_i is a generator matrix of Λ_i for i = 1, . . ., k. The Voronoi region, volume, and other properties of a product lattice are as follows.

Proposition 3: For any k ≥ 1 and any lattices Λ_1, . . ., Λ_k,

Ω = Ω_1 × ⋯ × Ω_k, (20)
V = V_1 ⋯ V_k, (21)
E = E_1 + ⋯ + E_k, (22)
G = (1/n) (V_1 ⋯ V_k)^{−2/n} Σ_{i=1}^k n_i V_i^{2/n_i} G_i, (23)
R = diag(R_1, . . ., R_k), (24)

where Ω_i, V_i, E_i, G_i, and R_i denote the corresponding properties of Λ_i. An example of a 3-dimensional Voronoi region Ω, constructed according to Proposition 3 as the Cartesian product of two lower-dimensional Voronoi regions Ω_1 and Ω_2, is illustrated in Fig. 1.
Proof: Writing x = [x_1 ⋯ x_k] and λ = [λ_1 ⋯ λ_k] ∈ Λ_p, we have

||x − λ||² = Σ_{i=1}^k ||x_i − λ_i||² (25)

for all x and λ. If x ∈ Ω, then for a fixed i we set λ_j = 0 in (25) for all j ≠ i, which shows that x_i ∈ Ω_i; conversely, if x_i ∈ Ω_i for all i, then every term in (25) is minimized by λ_i = 0. This implies (20), which in turn proves (21).
Fig. 1: The Voronoi region Ω of a three-dimensional product lattice Λ_1 × Λ_2, where Λ_1 is the two-dimensional hexagonal lattice A_2 and Λ_2 is the one-dimensional integer lattice Z. The origin 0 belongs to all three lattices and is the centroid of all three Voronoi regions. The top and bottom facets of Ω are shifted copies of Ω_1, and the six vertical edges are shifted copies of Ω_2.
The definition (8), applied to a product lattice using (20) and (21), implies that

E = Σ_{i=1}^k (1/V_i) ∫_{Ω_i} ||x_i||² d^{n_i} x_i = Σ_{i=1}^k E_i, (26)

which proves (22). Equation (23) follows by substituting E = nV^{2/n}G and the corresponding expressions for E_i into (22), and simplifying using (21) and

n = n_1 + ⋯ + n_k. (27)

Lastly, to prove (24), we use (20) in (10) to obtain R as a block matrix of integrals. For the submatrices on the diagonal, whose integrands have the form x_iᵀ x_i, the volumes cancel as in (26), leaving R_i. The off-diagonal submatrices with integrands x_iᵀ x_j for i ≠ j are separable into products of two integrals such as ∫_{Ω_i} x_i d^{n_i} x_i. These vanish because (as pointed out after (13)) the Voronoi region Ω_i is symmetric about zero and thus has its center of gravity at the origin.
We now generalize the product construction by introducing a list of real positive scale factors a = [a_1, . . ., a_k] to build a family of product lattices Λ(a), generated by

B(a) ≜ diag(a_1 B_1, . . ., a_k B_k). (28)

The properties of Λ(a) follow by replacing Ω_i by a_i Ω_i, V_i by a_i^{n_i} V_i, E_i by a_i² E_i, and R_i by a_i² R_i in Proposition 3, while G_i, due to its scale-invariant definition (9), remains unchanged for i = 1, . . ., k. These substitutions result in

V(a) = Π_{i=1}^k a_i^{n_i} V_i, (29)
E(a) = Σ_{i=1}^k a_i² E_i = Σ_{i=1}^k n_i a_i² V_i^{2/n_i} G_i, (30)
R(a) = diag(a_1² R_1, . . ., a_k² R_k), (31)
G(a) = E(a) / (n V(a)^{2/n}). (32)

For given Λ_1, . . ., Λ_k, what scale factors a produce the optimal product Λ(a), in the sense of minimizing G(a)? We note that G(a) has at least one minimum for finite and positive a_1, . . ., a_k, because if one of these scale factors is varied while keeping the others fixed, then (32) diverges to infinity as the scale factor tends to either zero or infinity. This minimum is unique up to a linear scale factor, and has a closed form as follows.
Theorem 4: For given lattices Λ_1, . . ., Λ_k, varying only a, Λ(a) is an optimal product if and only if

a_i = C / (V_i^{1/n_i} G_i^{1/2}), i = 1, . . ., k, (34)

for an arbitrary real constant C > 0. An equivalent condition is

a_1² V_1^{2/n_1} G_1 = ⋯ = a_k² V_k^{2/n_k} G_k, (35)

i.e., equal mean square error per dimension in all components. The optimal NSM G(a) is given by

G(a) = G_1^{n_1/n} ⋯ G_k^{n_k/n}. (36)

Proof: Differentiating (32) with respect to a_i gives

∂G(a)/∂a_i = (∂E(a)/∂a_i) / (n V(a)^{2/n}) − (2 n_i / (n a_i)) G(a). (37)

Substituting

∂E(a)/∂a_i = 2 n_i a_i V_i^{2/n_i} G_i, (38)

which follows from (30), into (37) and simplifying yields

∂G(a)/∂a_i = (2 n_i / (a_i n V(a)^{2/n})) (a_i² V_i^{2/n_i} G_i − E(a)/n). (39)

Equating (39) to zero for i = 1, . . ., k reveals that a_i² V_i^{2/n_i} G_i must take the same value for all i, which is (35). Denoting this common value by C² yields (34), and substituting (34) into (32) yields (36).
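Theorem 4 is easy to sanity-check numerically. The sketch below (our own, using the exact NSMs of A_2 and Z) evaluates the product NSM (32) on a grid of scale factors and compares the minimum with the geometric-mean formula (36):

```python
import numpy as np

# Component data: A2 (n=2) and Z (n=1); both NSMs are known in closed form.
G1, n1, V1 = 5 / (36 * np.sqrt(3)), 2, np.sqrt(3) / 2   # hexagonal A2
G2, n2, V2 = 1 / 12, 1, 1.0                             # integer lattice Z
n = n1 + n2

def G_product(a1, a2):
    """NSM of the scaled product (a1*A2) x (a2*Z), following (32)."""
    V = (a1**n1 * V1) * (a2**n2 * V2)
    return (n1 * a1**2 * V1**(2/n1) * G1 +
            n2 * a2**2 * V2**(2/n2) * G2) / (n * V**(2/n))

# Closed form of Theorem 4: the optimal NSM is the geometric mean (36).
G_opt = G1**(n1/n) * G2**(n2/n)

# Numeric check: since G(a) depends only on the ratio a2/a1, fix a1 = 1
# and scan a2; the grid minimum should match the closed form.
a2_grid = np.linspace(0.5, 2.0, 20001)
G_min = min(G_product(1.0, a2) for a2 in a2_grid)
print(G_opt, G_min)   # both ~ 0.0812, between G(A2) ~ 0.0802 and G(Z) ~ 0.0833
```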
An interesting special case is when the sublattices Λ_i are locally optimal for all i = 1, . . ., k. If the scale factors a are optimally chosen according to Theorem 4, which we denote by a_opt, then Λ(a_opt) also has white quantization error. We state this in a way similar to Theorems 1 and 2, but with different conditions.
Corollary 5: If Λ_1, . . ., Λ_k are locally optimal lattices, then the optimal product Λ(a_opt) satisfies (14), i.e., its quantization error is white. Furthermore, G(a_opt) is locally minimal with respect to any perturbations in the submatrices on the diagonal of (28).
Proof: The whiteness of the quantization error follows from (31) and Theorem 2: each R_i is proportional to the identity with factor E_i/n_i, and by (35) the scaled factors a_i² E_i/n_i are equal for all i, so R(a_opt) is proportional to the identity.

To prove the local optimality of G(a_opt), we let B̄_i ≜ B_i / V_i^{1/n_i}, where as before V_i = det B_i. Then the submatrices on the diagonal of (28) can be written as a_i B_i = ā_i B̄_i for all i, where ā_i ≜ a_i V_i^{1/n_i} and V̄_i ≜ det B̄_i = 1. We will consider variations in ā_i and B̄_i separately. First, if ā_i is varied for any fixed (not necessarily optimal) B̄_i, then Theorem 4 applies and the minimal NSM is attained when ā_i = C/√G_i. Consequently, the optimal scale factors a_i = ā_i / V_i^{1/n_i} follow (34), and (39) is zero. Second, we consider variations in B̄_i for any fixed (not necessarily optimal) ā_i, keeping det B̄_i = 1. Then the NSM in (32) is locally minimal when G_i is locally minimal, i.e., when Λ_i is a locally optimal lattice.

Theorem 2 and Corollary 5 are curiously related to each other: both give sufficient (but not necessary) conditions for a white quantization error. Is Corollary 5 perhaps a special case of Theorem 2? In other words, is the optimal product Λ(a_opt), to which Theorem 4 and Corollary 5 apply, also a locally optimal lattice, as defined in Sec. III? We will see in the next section that the answer is "no"; for any lattice of the form (28) with a = a_opt, the NSM can be decreased by perturbing any of the off-diagonal submatrices written as 0 in (28).

V. UPPER BOUND
In this section, we show that the NSM of any lattice is bounded above by that of a product lattice, which is given by (23) or (32). For brevity, we develop the case k = 2 explicitly and then show that k > 2 follows by induction.
Consider a square generator matrix of the form

B = [ B_1  0
      H    B_2 ], (41)

where B_1 and B_2 are n_1 × n_1 and n_2 × n_2 generator matrices of two lattices Λ_1 and Λ_2, and H is an arbitrary n_2 × n_1 matrix. Let Λ denote the lattice generated by B, and let Λ_p be the product lattice (19) with k = 2.

Lemma 6: For given B_1, B_2, and any H, G(Q̂_Λ) ≤ G(Q̂_{Λ_p}), with equality if and only if H = 0.

Proof: The method of proof is to define a suboptimal quantizer Q̃_Λ for which G(Q̃_Λ) = G(Q̂_{Λ_p}). The lemma then follows, because by the definition of Q̂_Λ, G(Q̂_Λ) ≤ G(Q̃_Λ). The decision regions for the three different quantizers are illustrated in Fig. 2.
We construct the suboptimal quantization rule from the optimal quantization rules for Λ_1 and Λ_2. As earlier, write the source vector as x = [x_1 x_2], where x_1 and x_2 have respective dimensions n_1 and n_2. Our suboptimal quantization rule is defined by the following four-step algorithm:

λ_2 ≜ Q̂_{Λ_2}(x_2), (42)
h ≜ λ_2 B_2^{−1} H, (43)
x̃_1 ≜ x_1 − h, (44)
λ_1 ≜ Q̂_{Λ_1}(x̃_1), (45)

and the output is Q̃_Λ(x) ≜ [λ_1 + h, λ_2] ∈ Λ. Quantities with subscript "1" lie in the subspace spanned by the first n_1 coordinates (horizontal in Fig. 2) and the quantities with subscript "2" lie in the subspace spanned by the final n_2 coordinates (vertical in Fig. 2). We first show that this is a quantization rule: it satisfies the conditions (1)-(2).
A direct calculation using (42)-(45) shows that the quantization error is x − Q̃_Λ(x) = [x̃_1 − λ_1, x_2 − λ_2], so that

Ω(Q̃_Λ) = Ω(Q̂_{Λ_1}) × Ω(Q̂_{Λ_2}) = Ω(Q̂_{Λ_p}). (50)

Thus, the fundamental decision region of the (suboptimal) quantization rule for Λ is identical to the fundamental decision region of the optimal quantization rule for Λ_p. This can be intuitively understood by comparing Figs. 2(a) and 2(b). Since the fundamental decision regions of Q̃_Λ and Q̂_{Λ_p} are identical, so are all properties derived from these regions, e.g., E(Q̃_Λ) = E(Q̂_{Λ_p}), R(Q̃_Λ) = R(Q̂_{Λ_p}), and G(Q̃_Λ) = G(Q̂_{Λ_p}). But since the optimal decision rule for Λ satisfies G(Q̂_Λ) ≤ G(Q̃_Λ), our proof is complete: G(Q̂_Λ) ≤ G(Q̂_{Λ_p}). Equality if and only if H = 0 follows, since B in (41) and B_p in (19) are equal if and only if H = 0.
In digital communications, the suboptimal quantization rule Q̃_Λ is known as successive interference cancellation [20]. In that scenario, x_1 and x_2 represent information received on two parallel communication channels, which interfere with each other. If x_2 is detected first (42), its effect on x_1 can be calculated (43) and cancelled (44) before x_1 is detected in (45). If (42)-(45) are extended to n steps of one-dimensional quantization, then the resulting suboptimal quantization rule yields the Babai point [21].
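For the one-dimensional case n_1 = n_2 = 1 with Λ_1 = Λ_2 = Z (the setting of Fig. 2), steps (42)-(45) can be sketched in a few lines. This is our own illustration, with h an arbitrary offset; the mean square error matches that of the product Z × Z for any h, because the fundamental decision regions coincide:

```python
import numpy as np

# Lower-triangular generator (41) with n1 = n2 = 1: B1 = B2 = 1, offset H = h.
h = 0.4

def Q_sic(x1, x2):
    """Suboptimal quantizer (42)-(45): detect x2, cancel its effect, detect x1."""
    lam2 = round(x2)              # (42): minimum-distance quantizer for Z
    c = lam2 * h                  # (43): effect of lam2 on the first coordinate
    x1c = x1 - c                  # (44): interference cancellation
    lam1 = round(x1c)             # (45): detection in the first coordinate
    return np.array([lam1 + c, lam2])   # a point of the lattice generated by (41)

rng = np.random.default_rng(2)
xs = rng.uniform(-5, 5, size=(100_000, 2))
errs = np.array([[x1, x2] - Q_sic(x1, x2) for x1, x2 in xs])
mse = np.mean(np.sum(errs**2, axis=1))
print(mse)   # ~ 1/6 = 2 * (1/12), the MSE of the product Z x Z, for any h
```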
The extension to k > 2 follows immediately. Let Λ be the lattice generated by the lower block-triangular matrix

B = [ B_1      0        ⋯  0
      H_{2,1}  B_2      ⋯  0
      ⋮                 ⋱  ⋮
      H_{k,1}  H_{k,2}  ⋯  B_k ], (51)

let Λ_i be the lattice generated by B_i for i = 1, . . ., k, and let Λ_p be their product (19).

Theorem 7: For given B_1, . . ., B_k and any H_{i,j}, G(Q̂_Λ) ≤ G(Q̂_{Λ_p}), with equality if H_{i,j} = 0 for all i, j.
Proof: By induction. If the theorem holds for the upper-left (k − 1) × (k − 1) part of (51), then it extends to k × k by Lemma 6.
Like Proposition 3, Theorem 7 can also be extended by scale factors a. Specifically, if B 1 , . . ., B k in (51) are multiplied by scale factors a = [a 1 , . . ., a k ], then the NSM G(a) of the resulting lattice is bounded by (32) for arbitrary given a or by (36) for optimal a.
We now return to the question of whether the optimal product Λ(a) generated by B(a) in (28) is always locally optimal at a = a_opt. It was observed in Corollary 5 that if Λ_1, . . ., Λ_k are locally optimal, then G(a_opt) is locally minimal with respect to perturbations in any submatrix a_i B_i about its local optimum. On the other hand, it follows from Theorem 7 that G(a_opt) is locally maximal with respect to perturbations about 0 in any submatrix below the block diagonal of B(a_opt). The same theorem holds if (51) is replaced by an upper-triangular matrix B. This proves that G(a_opt) is also locally maximal with respect to perturbations about 0 above the block diagonal of B(a_opt). Since the NSM increases for any perturbations in the submatrices on the diagonal of (28) and decreases for any perturbations in submatrices either below or above the block diagonal, the first derivative of G with respect to these entries must vanish: the NSM has a saddle point at B(a_opt). In conclusion, not all lattices that fulfill (14) are locally optimal.
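The saddle-point behavior can be probed numerically. Starting from the product Z × Z and perturbing the below-diagonal entry of the generator lowers the NSM, as Lemma 6 predicts (a hedged Monte Carlo sketch of our own; the perturbation 0.5 is an arbitrary choice, large enough for a visible effect):

```python
import numpy as np

def nsm_2d(B, samples=400_000, seed=3):
    """Monte Carlo NSM of a 2-D lattice with generator B (brute-force quantizer)."""
    Binv = np.linalg.inv(B)
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-4, 4, size=(samples, 2))
    u0 = np.round(xs @ Binv)                 # Babai rounding estimate
    dmin = np.full(samples, np.inf)
    for d1 in (-1, 0, 1):                    # refine over neighboring coefficients
        for d2 in (-1, 0, 1):
            cand = (u0 + [d1, d2]) @ B
            dmin = np.minimum(dmin, np.sum((xs - cand)**2, axis=1))
    V = abs(np.linalg.det(B))
    return np.mean(dmin) / (2 * V)           # G = E / (n V^(2/n)) with n = 2

G0 = nsm_2d(np.eye(2))                           # product lattice Z x Z: G = 1/12
Gh = nsm_2d(np.array([[1.0, 0.0], [0.5, 1.0]]))  # perturbed below-diagonal entry
print(G0, Gh)   # Gh < G0: the off-diagonal perturbation lowers the NSM
```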
One way to construct a lattice of the form (41) is by lamination of a lower-dimensional lattice. To build an n-dimensional laminated lattice Λ, take a generator matrix B_1 for an (n − 1)-dimensional lattice Λ_1 and an arbitrary (n − 1)-dimensional vector h. Then construct the n × n generator matrix

B = [ B_1  0
      h    a ]. (52)

Here a > 0 is a real number, which is the distance between the shifted lattice copies in the direction orthogonal to their subspace, and the vector h is the stacking offset in the (n − 1)-plane. Fig. 1 illustrates the Voronoi region of a laminated lattice with n = 3 and h = 0.
In the classical lattice literature, "laminated lattices" are built recursively in this way, to maximize the packing density, starting from Z. So B_1 is itself the generator for a laminated lattice, and in each recursive iteration, h and a in (52) are selected to maximize the packing density. This construction gives rise to the well-studied Λ and K series in [22], [23], [1, Sec. 4 of Ch. 5 and Ch. 6].
In this paper, our focus is the NSM rather than the packing density, so we use "laminated lattice" more broadly for any lattice generated via (52). This broader meaning is consistent with [13], [24]. Note that to maximize the packing density, the optimal choice of h is a "deep hole" in Λ_1 (a vertex of the Voronoi region most distant from the origin). This is also (intuitively) a good choice to minimize the NSM, although it may not be optimal.
An upper bound on the quantization performance of a laminated lattice follows directly from the results of Secs. IV and V, as follows.
Corollary 8: An n-dimensional lattice Λ obtained by lamination of an (n − 1)-dimensional lattice Λ_1 satisfies, for an arbitrary offset h and the optimal layer separation a,

G(Q̂_Λ) ≤ (G_1^{n−1} / 12)^{1/n}. (53)

Proof: Setting n_2 = 1, Λ_2 = aZ, and H = h in Lemma 6 yields

G(Q̂_Λ) ≤ G(a), (54)

where G(a) is the NSM (32) of the product Λ_1 × aZ. By Theorem 4, the optimal a gives

min_a G(a) = (G_1^{n−1} G(Z))^{1/n} = (G_1^{n−1}/12)^{1/n}, (55)

since G(Z) = 1/12. Combining (54) and (55) completes the proof.
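Plugging in numbers reproduces the n = 13 example of Sec. VI. The exact NSM of K_12 is not given in this paper; the value below is the commonly cited approximation and should be treated as an assumption:

```python
# Corollary 8 bound: G <= (G1^(n-1) / 12)^(1/n) for a lattice laminated
# from an (n-1)-dimensional lattice with NSM G1.
G_K12 = 0.0701   # NSM of the Coxeter-Todd lattice K12 (assumed approximate value)
n = 13
bound = (G_K12**(n - 1) / 12)**(1 / n)
print(round(bound, 4))   # ~ 0.0710, below both the 0.0724 upper bound and D*_13's 0.0749
```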
As a curiosity, we note that (53) would have been more appealing if G had been defined a factor of 12 larger than the standard definition (9).With that alternative definition, Z n would have an NSM of 1 in any dimension and the denominator would disappear from expressions like (53) and (55).

VI. BEST KNOWN LATTICE QUANTIZERS
The upper bound in Corollary 8 has interesting implications. These call to mind an observation made by Cohn in the context of sphere packing [27]. Referring to a plot of sphere-packing density as a function of dimension, he comments that "Certain dimensions, most notably 24, have packings so good that they seem to pull the entire curve in their direction. The fact that this occurs is not so surprising, since one expects cross sections and stackings of great packings to be at least good, but the effect is surprisingly large." Indeed, the sphere-packing density in a given dimension n is bounded by a function of the sphere-packing density in dimension n − 1 according to Mordell's inequality [1, Eq. (19) of Ch. 6]. Here, in the context of lattice quantizers, Corollary 8 does precisely this for NSMs. A lattice with a particularly small NSM G in dimension n − 1 makes it possible to also obtain a small NSM in dimension n, and hence "pulls down the NSM curve" for larger dimensions. More generally, Theorem 7 can pull down the curve over intervals of more than one dimension.
We designed product lattices in dimension n by applying Theorem 4 with k = 2 to the best known lattices in dimensions n_1 and n_2 = n − n_1, for n_1 ranging from 1 to n − 1. There is no need to explicitly consider k > 2, although recursive application of Theorem 4 with k = 2 can generate product lattices with larger k. The minimal NSM obtained in each dimension provides a constructive upper bound on the optimal NSM. It follows from Theorem 7 that better lattice quantizers can be found among lattices of the form (51), where B_1, . . ., B_k are the best known lattices in their dimensions, for example by lamination if n_i = 1 for any i.
Tab. I summarizes the best known lattice quantizers in dimensions n ≤ 48. The first such list was compiled in 1979 for n = 1 to 5 [6]. It was extended to n ≤ 10 in 1982 [7], which also analytically calculated the NSMs of the classical lattices A_n, A*_n, D_n, and D*_n for any n. Since then, progress has been much slower. Better lattices were reported for n = 6 and 7 in [8] and for n = 9 and 10 in [11], although the NSMs of these lattices were only computed numerically. Reference [8] also gave the best known lattice quantizers for n = 12, 16, and 24 with numerically computed NSMs. Later, the corresponding exact NSMs were calculated for n = 6 [9], n = 7 [10], n = 9 [13], and n = 10 and 12 [12]. In the latter reference, a new best known lattice quantizer was identified for n = 11. NSM results for n ≤ 15 were summarized in [4].
In Tab. I, we also include the best known lower and upper bounds on the NSM. The lower bound is a conjecture by Conway and Sloane [25], which we evaluated numerically by high-resolution trapezoidal integration. The upper bound is by Torquato [26, Eq. (105)] and improves on a well-known bound by Zador [16, Lemma 5]. The Torquato upper bound is for arbitrary quantizers, which might not be lattices. We conjecture that there is always at least one lattice quantizer satisfying this bound. For the best known packing densities, which are needed to evaluate [26, Eq. (105)], we use [1, Tab. I.1]. Taken together, these two results provide conjectured lower and upper bounds on the NSM of optimal lattice quantizers. While intuitively plausible, these remain unproven.
For each dimension, the last four columns of Tab. I list the best product lattice that can be constructed from the lattices given on the preceding rows of the table. While Theorem 7 establishes that these lattices are not even locally optimal, they nevertheless improve significantly on the previously lowest reported NSMs. For brevity, we use ⊗ to denote the Cartesian product of lattices with the optimal choice of relative scale, as given by Theorem 4.
The first dimension in which this construction provides a better lattice quantizer than previously reported is n = 13, where our best product lattice is K_12 ⊗ Z. With an NSM of 0.0710, it is the first reported 13-dimensional lattice whose NSM falls below the generic upper bound 0.0724. It is significantly better than the previously best known 13-dimensional lattice quantizer D*_13, whose NSM is 0.0749. Our optimized product lattices are also the first reported lattices which lie below the upper bound in dimensions n = 14, 17, and 25.

Fig. 2: An example of Theorem 7 with n_1 = n_2 = 1. Each cell is the decision region of the lattice point it contains, and the shaded cells are the fundamental decision regions. Comparing (a) and (b) shows that the NSMs of Q̂_{Λ_p} and Q̃_Λ are equal, because their fundamental decision regions are identical. Comparing (b) and (c) shows that Q̂_Λ cannot have a larger NSM than Q̃_Λ, because the lattices are identical and Q̂_Λ(x) minimizes the quantization error for every x.

TABLE I: Best known lattice quantizers. Columns 2-3 list the smallest normalized second moments (NSM) G previously reported in dimension n, while columns 4-5 list the conjectured lower and upper bounds, respectively. Columns 6-7 list our best product lattices. Columns 8-9 indicate if the product lattice in Column 7 provides a smaller NSM than previously reported in Column 2 (<G) and/or is below the upper bound of Column 5 (<U). NSMs that are known exactly are listed with nine decimals, whereas NSMs with five decimals are derived from numerical estimates in [8].