A Concrete LIP-Based KEM With Simple Lattices

Recent developments have been made in the construction of cryptosystems whose security is based on the hardness of the lattice isomorphism problem (LIP). Based on current lattice conjectures, one may expect breaking such schemes to be computationally harder than breaking most current lattice-based cryptosystems. To the best of our knowledge, there have not been any attempts to concretely instantiate a key encapsulation mechanism (KEM) based on LIP. In this work, we propose the first instance of such a KEM, following the framework of Ducas and van Woerden (EUROCRYPT 2022), using simple lattices. We present a randomness extractor derived from a hash function based on the short integer solution (SIS) problem; define concrete parameter sets for instantiating the scheme; provide a rigorous security estimation against an attacker trying to decode an encapsulated key, through reductions to hard lattice problems; and use well-known methods to convert the IND-CPA-secure KEM into an IND-CCA2-secure KEM, comparing the latter with other modern lattice-based KEMs. The resulting security is estimated under the assumption that an adversary cannot efficiently solve related instances of LIP, an assumption supported by the current lack of cryptanalytic techniques for identifying isomorphisms between lattices.


I. INTRODUCTION
The application of lattices in the field of cryptography has been studied for several decades, especially in the context of quantum-safe cryptography, to build digital signature schemes, key encapsulation mechanisms (KEMs) and public-key encryption (PKE) schemes. A framework based on the lattice isomorphism problem (LIP) has recently been introduced by Ducas and van Woerden [18] to overcome performance issues in schemes whose security depends on classical lattice problems, such as learning with errors (LWE) and short integer solution (SIS), by instantiating cryptosystems with remarkably decodable lattices.
To the best of our knowledge, there have been only a few modern attempts to build concrete cryptosystems based on hard lattice problems other than LWE and SIS. Exceptions are based on other hard problems, such as the bounded distance decoding (BDD) problem on lattices other than q-ary lattices, and LIP. Boschini et al. [13] propose a PKE scheme whose security is based on a variant of LWE for a specific class of ''hybrid lattices'', as defined by the authors, but the accompanying security arguments are lacking. Li et al. [26] propose a PKE scheme based on the hardness of decoding and correcting errors on lattice vectors. The authors propose using a trapdoor function as a private key to solve BDD correctly; the resulting cryptosystem achieves IND-CCA2 security.
Bruin et al. [12] discuss an efficient selection of lattice pairs to be used with the LIP framework. However, such a choice is insufficient to achieve a concrete instantiation of a LIP-based cryptosystem. Bennett et al. [9] propose a PKE scheme whose security is based on identifying lattices isomorphic to Z^n. Nevertheless, the authors did not conjecture that the problem of recovering rotations of Z^n (ZSVP) is hard; consequently, there is no evidence of IND-CCA2 security. Finally, Ducas et al. propose a concrete signature scheme called Hawk [17] based on the module variant of LIP. When compared to Falcon [19], signature generation is about 4 times faster on the x86 architecture while producing signatures that are approximately 15% smaller. Consequently, to the best of our knowledge, there have not been any attempts to concretely instantiate a KEM based on LIP.
Our main contribution is the first instance of a KEM in accordance with the framework of [18], using simple lattices. To build our KEM, we (i) define a novel randomness extractor derived from an SIS-based hash function; (ii) provide a rigorous security estimation of an attacker trying to decode an encapsulated key through reductions to hard lattice problems; (iii) suggest multiple parameter sets described by functions of the lattice dimension; and (iv) use well-known methods to convert the IND-CPA-secure KEM into an IND-CCA2-secure KEM, and compare the latter with other modern lattice-based KEMs.
The structure of our work is as follows. In Section II, we define the necessary mathematical background to discuss our proposal. In Section III, we present a novel randomness extractor that is used to build our cryptosystem. In Section IV, we give the concrete definition of an IND-CPA-secure KEM whose security is based on LIP, estimate key sizes, and give several parameter sets. Finally, in Section V, we summarize our findings and give possible further research topics.

II. PRELIMINARIES
Our notation is given as follows. All vectors and bases of a lattice L are represented by bold letters. We define L(B) to be the lattice generated by its basis B. We use the symbol ≅ to denote isomorphisms between lattices. We write B̃ to represent a basis orthogonalized via the Gram-Schmidt method. We define B_Q ∈ R^{n×m} to represent a basis of a lattice obtained by the Cholesky decomposition of a quadratic form Q. Consider a basis B with Gram matrix Q; we set λ_i(B) = λ_i(Q) to be the i-th successive minimum of L(B). The Euclidean norm (ℓ_2) of a vector x is denoted as ∥x∥_2.
The symbol ←$ is read as ''chosen uniformly at random from''. We consider lg to be log_2. The soft-O notation Õ(f(x)) is the same as O(f(x) lg^k x), for a function f with input x and some constant k ∈ N. We define Pr_{Y∼X}[e] to be the probability of event e occurring when sampling a random variable Y from a distribution X. We define U(X) to be the uniform distribution over a set X. The general linear group of degree n is denoted as GL_n(R) for any ring R; the set U_n ⊂ GL_n(Z) contains all unimodular integer matrices; and the set O_n contains all orthogonal transformations of degree n, those that preserve the length and inner product of all vectors. The finite field of order 2 is denoted by F_2; the multivariate polynomial ring on n variables over F_2 is denoted by F_2[x_1, . . ., x_n].

A. DENSE DECODABLE LATTICES
There have been many proposed construction methods for obtaining highly dense and decodable lattices from a family of error-correcting codes. Applications of such lattices go beyond the field of cryptography and are well suited for error correction in real-world communication protocols. We hereafter restrict our definitions strictly to binary codes of length n, where codewords are a subset of F_2^n. Then, we define the generic Barnes-Wall lattice from a nested family of Reed-Muller codes.

Definition 1: [Error-correcting code parameters] A binary error-correcting code C with length n, dimension k, and minimum distance d is a subset of F_2^n such that |C| = 2^k. We denote this by saying that C has parameters [n, k, d].
The minimum distance of a code plays a role analogous to the length of the shortest vector of a lattice. Let C be a code with parameters [n, k, d] and L be an n-dimensional lattice. An error e ∈ F_2^n can be corrected if Dist(e) ≤ ⌊(d − 1)/2⌋, where Dist(•) denotes the Hamming distance from the zero codeword. Similarly, in lattices, an error e ∈ R^n can be corrected if ∥e∥ < λ_1(L)/2. An error-correcting code is said to be linear whenever it can be generated by a finite generator matrix, as defined below.
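The lattice decoding bound above can be checked on the simplest possible example. The sketch below assumes nothing beyond the standard integer lattice Z^n, where λ_1(Z^n) = 1, so any error of Euclidean norm strictly below 1/2 is corrected by coordinate-wise rounding:

```python
# Toy illustration of the bound ||e|| < lambda_1(L)/2 on the lattice Z^n,
# where lambda_1 = 1: rounding each coordinate recovers the lattice point.
import math

def decode_Zn(t):
    """Round a target t in R^n to the closest lattice point of Z^n."""
    return [round(ti) for ti in t]

v = [3, -1, 4, 1]                        # a lattice point of Z^4
e = [0.2, -0.1, 0.15, 0.05]              # error with norm < lambda_1 / 2
assert math.sqrt(sum(x * x for x in e)) < 0.5
t = [vi + ei for vi, ei in zip(v, e)]    # the perturbed target
assert decode_Zn(t) == v                 # the error is corrected
```

For errors of norm at least λ_1/2 the decoder may round to a different lattice point, which is exactly why the decoding radius of the schemes below is capped at λ_1(L)/2.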
Definition 2: [Evaluation vector] For a polynomial f ∈ F_2[x_1, . . ., x_m], we define Eval(f) to be the 2^m-dimensional evaluation vector whose coordinates contain the evaluation of f at every z ∈ F_2^m, following a specific order.
Definition 3: [Reed-Muller code] The Reed-Muller code of r-th order is the linear code RM^n_r = {Eval(f) : f ∈ F_2[x_1, . . ., x_m] and f has degree ≤ r}, with parameters [n, sum_{i=0}^{r} C(m, i), 2^{m−r}], where n = 2^m.
We refer to E^n_r as the ''evaluation matrix'' of RM^n_r, with its construction described as follows. The columns of E^n_r are the evaluation vectors obtained from Eval for all monomials of degree equal to or less than r. There are m_r = sum_{i=0}^{r} C(m, i) different monomials with such degrees. Therefore, E^n_r ∈ F_2^{n×m_r}. It is also easy to see that the columns of E^n_r appear among the columns of E^n_{r+1}, so the corresponding codes are nested. From these definitions, we obtain a well-defined method for constructing lattices from a nested family of linear error-correcting codes.
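The evaluation-matrix construction can be sketched concretely. The code below is a minimal sketch assuming the standard monomial-evaluation description of Reed-Muller codes (one column per monomial of degree ≤ r, one row per point of F_2^m); the function name `evaluation_matrix` is ours, and the minimum distance is brute-forced only because the toy code is tiny:

```python
# Build the evaluation matrix of RM^n_r: n = 2^m rows (one per z in F_2^m)
# and m_r = sum_{i<=r} C(m, i) columns (one per monomial of degree <= r).
from itertools import combinations, product
from math import comb

def evaluation_matrix(r, m):
    points = list(product((0, 1), repeat=m))             # all z in F_2^m
    monomials = [s for i in range(r + 1) for s in combinations(range(m), i)]
    # the monomial prod_{j in S} x_j evaluates to prod_{j in S} z_j at z
    return [[int(all(z[j] for z_j in () or (z[j],) for j in s) if s else 1)
             for s in monomials] for z in points]

def evaluation_matrix(r, m):             # cleaner equivalent definition
    points = list(product((0, 1), repeat=m))
    monomials = [s for i in range(r + 1) for s in combinations(range(m), i)]
    return [[int(all(z[j] for j in s)) for s in monomials] for z in points]

E = evaluation_matrix(1, 3)              # RM code of order r = 1, m = 3
n, k = len(E), len(E[0])                 # expected parameters [8, 4, 4]
assert k == comb(3, 0) + comb(3, 1)

# brute-force the minimum distance over all 2^k codewords
codewords = {tuple(sum(c * row[j] for j, c in enumerate(cs)) % 2 for row in E)
             for cs in product((0, 1), repeat=k)}
d_min = min(sum(w) for w in codewords if any(w))
```

For these parameters the result matches the classical [2^m, sum_{i≤r} C(m, i), 2^{m−r}] = [8, 4, 4] parameters of the first-order Reed-Muller code.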
Definition 4: [Construction D] Let a, γ ∈ N. Consider a nested family C = {C_0, . . ., C_a} of binary linear codes, where the first code C_0 is the universal code with parameters [n, n, 1]. Let [c_1, . . ., c_n] be a generator matrix of C_0 such that a permutation of its rows forms an upper triangular matrix, and, for 1 ≤ l ≤ a, let [c_1, . . ., c_{k_l}] be a generator matrix of C_l. The resulting lattice on the nested family C is generated by suitable scalings of these generator vectors across the levels of the family.
Finally, consider the following generic construction for the N-dimensional Barnes-Wall lattice, when N = 2^m for some positive integer m: it is the Construction D lattice obtained from the nested family of Reed-Muller codes of length N. The first efficient decoding algorithm for the generic Barnes-Wall lattice was presented in [29]. This decoding algorithm uses an equivalent representation of the Barnes-Wall lattice over the Gaussian integers.

B. EQUIVALENCES BETWEEN LATTICES
We assume the reader is familiar with fundamental lattice concepts. We recall that the rank of a lattice L is the number of linearly independent vectors in its basis B. When the rank of the lattice is the same as its dimension, the lattice is full-rank and is represented by a square matrix. We only consider full-rank lattices hereafter. When two different bases generate the same lattice, we say that both bases are equivalent. Any two equivalent bases of a full-rank lattice are related by a unimodular integer matrix.
Lemma 1: [Equivalent bases] Let B_1, B_2 ∈ R^{n×n} be bases of a lattice L, i.e., L(B_1) = L(B_2). Then, there exists a matrix U ∈ U_n such that B_1 = B_2 U.
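Lemma 1 can be verified numerically on a small example. The sketch below (a toy 2×2 instance of our own choosing) checks both directions: since det(U) = 1, the inverse U^{-1} is also an integer matrix, so every column of either basis is an integer combination of the other's columns:

```python
# Lemma 1 in action: B1 = B2*U for unimodular U, and B2 = B1*U_inv with
# integer U_inv, so the two bases generate the same lattice.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B2 = [[2, 1], [0, 3]]
U = [[1, 4], [1, 5]]                 # det(U) = 1*5 - 4*1 = 1, unimodular
B1 = matmul(B2, U)                   # an equivalent basis of the same lattice

U_inv = [[5, -4], [-1, 1]]           # integer inverse, exists since det = 1
assert matmul(U, U_inv) == [[1, 0], [0, 1]]
assert matmul(B1, U_inv) == B2       # integer relation in both directions
```

The same check fails for a non-unimodular change of basis: if det(U) were, say, 2, then U^{-1} would not be integral and B_2's columns would not all lie in L(B_1).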
Sometimes two bases do not generate the same lattice, but two isomorphic lattices. The isomorphism of lattices is defined using orthogonal transformations.
Definition 6: [Isomorphic lattices] Let B_1, B_2 ∈ R^{n×n} be bases of two n-dimensional lattices, respectively L(B_1) and L(B_2). We have L(B_1) ≅ L(B_2) if there exists an orthogonal transformation O ∈ O_n and a matrix U ∈ U_n such that B_1 = O B_2 U, which is the classic LIP definition.
The following lemma allows sampling equivalent lattice bases in polynomial time, and plays an important role in many lattice-based schemes. This sampling algorithm is used to properly define the distribution related to the average-case LIP variant (Definition 15; Section II-E).
Lemma 2: [From [28, Ch. 7]] Consider a rank-r lattice basis B = [b_1, . . ., b_r] ∈ R^{m×n} and a set S = {s_1, . . ., s_r} ⊂ L(B) of linearly independent vectors where ∥s_1∥ ≤ ∥s_2∥ ≤ · · · ≤ ∥s_r∥. There exists a polynomial-time algorithm that takes as input B and S, and outputs a sampled basis R = [r_1, . . ., r_r] equivalent to B such that, for every i, the length of r_i is bounded in terms of ∥s_i∥. The proof of Lemma 2 presented in [28, Ch. 7] describes the algorithm below, which can be implemented to run in polynomial time. We adapt the output of the algorithm so that it also outputs the unimodular matrix which defines the equivalence between both bases. In Step 3.3 of this algorithm, the name NearestPlane refers to Babai's nearest plane algorithm [28, Ch. 2].
Algorithm 1 An algorithm that samples equivalent bases, according to Lemma 2.
1) Compute the integer coefficients of each vector in S as Y = B^{−1} S.
2) Diagonalize Y with elementary matrix operations, and apply the same operations on B.
The Hermite normal form is helpful in the context of lattices: the Hermite normal form of a basis is unique to the lattice and does not depend on the individual basis.

Definition 7: [Hermite normal form, column version] Let H ∈ Z^{n×m} be a matrix. We have that H is in Hermite normal form if (i) H is lower triangular; (ii) all terms on the diagonal of H are positive; and (iii) for all elements h_{ij} with j < i, 0 ≤ h_{ij} < h_{ii}.
Let A be an integer matrix such that all vectors in the columns of A are linearly independent. Then, there exists a unique matrix H in Hermite normal form such that H = AU for some U ∈ U_n. When L(A) ⊆ Z^n, the pair (H, U) is unique. The diagonalization process in Step 2 of Algorithm 1 can be achieved by any algorithm that computes the unique Hermite normal form of an integer matrix.
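The uniqueness of the Hermite normal form can be demonstrated directly. The sketch below is a naive pure-Python implementation of the column version from Definition 7 (square nonsingular input only, no attention to coefficient growth), built from unimodular column operations: a Euclidean gcd step to clear each row right of the pivot, a sign flip on the diagonal, and a final reduction of the entries left of each pivot:

```python
# Naive column-version Hermite normal form H = A*U via unimodular column
# operations; a sketch for small square nonsingular integer matrices.
def column_hnf(A):
    n = len(A)
    H = [row[:] for row in A]
    for i in range(n):
        for j in range(i + 1, n):        # clear row i to the right of pivot
            while H[i][j] != 0:
                q = H[i][i] // H[i][j]
                for k in range(n):       # Euclid step on columns i and j
                    H[k][i], H[k][j] = H[k][j], H[k][i] - q * H[k][j]
        if H[i][i] < 0:                  # make the pivot positive
            for k in range(n):
                H[k][i] = -H[k][i]
        for j in range(i):               # reduce entries left of the pivot
            q = H[i][j] // H[i][i]
            for k in range(n):
                H[k][j] -= q * H[k][i]
    return H

A1 = [[2, 4], [3, 5]]
A2 = [[2, 6], [3, 8]]   # A2 = A1 * [[1, 1], [0, 1]], so the same lattice
assert column_hnf(A1) == column_hnf(A2) == [[2, 0], [0, 1]]
```

Two bases of the same integer lattice yield the identical HNF, which is exactly the invariance property the text uses.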

C. GRAM MATRICES AND QUADRATIC FORMS
We recall that a quadratic form Q′ is a polynomial on n variables with all terms of degree two. We may store the coefficients of Q′ in a symmetric matrix Q, so that Q′(x) = x^T Q x; conversely, the Gram matrix B^T B of any lattice basis B is such a quadratic form. We observe that the Gram matrix of a basis is invariant under any n-dimensional orthogonal transformation: for any lattice basis B ∈ R^{n×n}, we have that (OB)^T (OB) = B^T B, which is the Gram matrix of B. In this case, the problem of finding isomorphisms between lattices can be rewritten in the quadratic form scenario. We define S^{>0}_n as the set of all integer quadratic forms obtained from Gram matrices of lattice bases of dimension n.
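The invariance (OB)^T(OB) = B^T B is immediate from O^T O = I, and can be checked numerically on a toy rotation (the angle and basis below are arbitrary choices of ours):

```python
# The Gram matrix B^T B is invariant under any orthogonal O, since
# (O*B)^T (O*B) = B^T (O^T O) B = B^T B; this makes it a natural LIP object.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def gram(B):
    Bt = [list(col) for col in zip(*B)]      # transpose
    return matmul(Bt, B)

theta = 0.7                                  # arbitrary rotation angle
O = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
B = [[2.0, 1.0], [0.0, 3.0]]

G1, G2 = gram(B), gram(matmul(O, B))
assert all(abs(G1[i][j] - G2[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Because rotating the basis leaves the Gram matrix untouched, publishing Q = B^T B reveals only the lattice "shape", which is why the framework works over quadratic forms rather than bases.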

Definition 8: [Equivalence between quadratic forms] Let Q_1, Q_2 ∈ S^{>0}_n. We say that Q_1 and Q_2 are equivalent if there exists a matrix U ∈ U_n such that Q_2 = U^T Q_1 U.
In Definition 6, the matrix O can contain non-integer values. However, in Definition 8, only S^{>0}_n is considered, which restricts the quadratic forms to integer values, and so LIP may be defined entirely via integer matrices. There exist other definitions of LIP [9, Sec. 1.3], but we observe that they are equivalent; solving one is as hard as solving other instances of the problem. The following definition helps us to define LIP later, considering integer matrices.
Finally, we give some intuition on the relationship between coefficients and lattice vectors. Consider the vectors u = Bx ∈ R^n and v = By ∈ R^n, for some basis B ∈ R^{n×n}, a quadratic form Q = B^T B, and coefficients x, y ∈ Z^n. Let us define ⟨x, y⟩_Q = x^T Q y; then ⟨x, y⟩_Q = (Bx)^T (By) = ⟨u, v⟩, and in particular the induced norm satisfies ∥x∥_Q = ∥Bx∥_2. In other words, instead of working with lattice vectors, we manipulate the integer coefficients multiplied by the basis via quadratic forms.

D. GAUSSIAN FORM DISTRIBUTION AND SAMPLING
To construct a secure randomness extractor and assert the correctness of our proposal, several probability results are needed. The following definitions are mainly derived from [18]; we include them here for completeness.
Definition 10: [Gaussian mass] We set the Gaussian mass of a lattice L to be ρ_{s,c,Q}(L) = sum_{x∈L} ρ_{s,c,Q}(x). That allows us to define a corresponding discrete Gaussian distribution as follows.
Definition 11: [Discrete Gaussian dist. on lattice vectors] Consider a lattice L, vectors x, c ∈ R^n, and a Gaussian function ρ_{s,c,Q}. The probability of sampling x from the discrete Gaussian distribution D_{s,c,Q,L} centered on c with standard deviation s > 0 is defined as D_{s,c,Q,L}(x) = ρ_{s,c,Q}(x)/ρ_{s,c,Q}(L) for x ∈ L, and zero otherwise.
We hereafter consider the discrete Gaussian distribution over Z^n. Therefore, for simplicity, we hide the lattice from the notation and write D_{s,c,Q} to denote this distribution. From Lemma 4, we may efficiently sample vectors from the desired distribution. We now turn to the main results that allow our scheme to operate correctly.
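A one-dimensional sketch of such sampling, assuming the Gaussian function ρ_{s,c}(x) = exp(−π(x − c)²/s²) over Z (a naive tail-cut sampler of our own, not constant-time and not suitable for production use):

```python
# Naive tail-cut sampler for the 1-D discrete Gaussian D_{Z,s,c}: enumerate
# a window of tau standard deviations around c and sample proportionally to
# rho_{s,c}(x) = exp(-pi (x - c)^2 / s^2).
import math, random

def sample_dgauss(s, c=0.0, tau=10):
    lo, hi = math.floor(c - tau * s), math.ceil(c + tau * s)
    support = list(range(lo, hi + 1))
    weights = [math.exp(-math.pi * (x - c) ** 2 / s ** 2) for x in support]
    return random.choices(support, weights=weights)[0]

random.seed(0)
samples = [sample_dgauss(3.0) for _ in range(2000)]
assert all(isinstance(x, int) for x in samples)
mean = sum(samples) / len(samples)       # empirically concentrates near c = 0
```

Production samplers use rejection or CDT techniques with careful precision and timing analysis; the window-enumeration above only illustrates the shape of the distribution.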
Definition 12: [Smoothing parameter] Let Q ∈ S^{>0}_n and ϵ > 0 be a small constant. The smoothing parameter η_ϵ(Q) is the smallest s > 0 such that ρ_{1/s}(Q^{−1}) ≤ 1 + ϵ. We omit a second inequality from the original smoothing bound lemma in [18] since it is not needed for our purpose. For the following results, we consider any quadratic form Q ∈ S^{>0}_n. Finally, we define a new Gaussian distribution, which is directly used in the reduction from the worst case of LIP to the average case. We give an algorithmic definition similar to [18] with identical proof, but explicitly define the lattice basis with Gram matrix equal to the sampled quadratic form, since we believe it improves comprehension of the algorithm.
Definition 13: [Gaussian form dist. [18, Definition 3.3]] Consider Q as the Gram matrix of a full-rank lattice basis B_Q.

4) Repeat steps 2 and 3 until rank(Y) = n.
We sometimes refer to the generating set S used to sample the quadratic form and unimodular transformation pair via Algorithm 1. For convenience, we use (Q, U, S) to denote that the intermediate generating set S is also returned from the sampling algorithm.

E. HARD LATTICE PROBLEMS
As our proposal is based on the framework of Ducas and van Woerden [18], the resulting KEM is IND-CPA secure under the assumption that the average-case distinguishing lattice isomorphism problem (ac-LIP) is hard. The worst-case version of LIP is reducible to the average-case LIP [18, Lemma 3.9]. This essentially allows the scheme to have its security based on a worst-case lattice problem, which is a stronger security assurance.

III. DEFINITION OF A RANDOMNESS EXTRACTOR
In this section, we present our first contribution: the construction of a concrete randomness extractor to be used in the encapsulation and decapsulation algorithms of our KEM. The leftover hash lemma [8] is a known strategy to build randomness extractors, used to generate bits that look uniformly random from a non-uniformly distributed set. Therefore, we first build a suitable hash function family that fits the aforementioned lemma and then define the associated extractor. We use known techniques to build randomness extractors and prove in Theorem 1 that a randomness extractor with the necessary parameters for the framework exists.

A. LATTICE-BASED UNIVERSAL HASH FUNCTION
It is common among lattice-based KEMs to use SHA3-256, SHAKE-256 [35], or some other standardized hash primitive to extract a shared secret that is statistically close to uniformly random [6], [7]. However, due to the use of quadratic forms, it is not trivial to use these primitives within the framework of Ducas and van Woerden. The shared secret key is extracted from a small vector. For instance, to use SHAKE-256, it is required to first uniquely encode the corresponding vector in binary and then extract the shared secret through SHAKE-256. Since the framework manipulates the integer coefficients that are multiplied by the lattice basis via quadratic forms, a standard unique encoding of vectors is not well-defined, and two distinct vectors may end up with the same encoding. A straightforward approach to make the encoding unique is first to identify an upper bound for each term in the vector and then 'add enough zeros to the left' of each encoded term. However, note that ∥x∥_Q = ∥Bx∥_2 = 1 may hold while the norm ∥x∥_2 is much larger: having an upper bound on ∥x∥_Q does not impose an upper bound on ∥x∥_2, so the straightforward approach is not trivial. We use randomness extractors whose domain contains small vectors with respect to the quadratic form, avoiding the need for non-trivial encoding techniques. Manipulating these integer coefficients becomes a burden only in the implementation of the scheme and not in its correctness.
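The encoding ambiguity described above is easy to exhibit. In the sketch below (helper names are ours), the naive "concatenate the binary digits" encoding collides on two distinct vectors, while zero-padding to a fixed width disambiguates, but only when an upper bound per coordinate is known, which is exactly what an ∥x∥_Q bound fails to provide for ∥x∥_2:

```python
# Two distinct coefficient vectors with identical naive binary encodings.
def naive_encode(v):
    return "".join(format(x, "b") for x in v)

def padded_encode(v, width=8):          # needs a known per-coordinate bound
    return "".join(format(x, f"0{width}b") for x in v)

assert naive_encode([1, 2]) == naive_encode([3, 0]) == "110"   # collision
assert padded_encode([1, 2]) != padded_encode([3, 0])          # disambiguated
```

Feeding the colliding encoding to any hash, standardized or not, would map the two distinct vectors to the same shared secret.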
Ajtai proposed a one-way function based on the assumption that certain lattice problems are hard [2]. Later, Goldreich et al. [22] demonstrated that, for specific parameters, the hash function of Ajtai achieves collision resistance, and it became commonly used in many lattice-based schemes. In their work, they define a universal hash family where the functions have domain {0, 1}^n and are collision-resistant under the assumption that solving certain instances of SIS is hard. We adapt their hash family to a different domain, preserving the universal property. This general strategy is used to build trapdoor signatures from SIS-based collision-resistant functions [21]. However, the resulting randomness extractor, composed from the hash family defined below, is novel to the best of our knowledge.

Definition 16: [Hash function family] We define a family of functions H_{m,n,q,α} = {f_A : D_{n,α} → Z_q^m}, where each function f_A(x) = Ax (mod q) is indexed by a matrix A ∈ Z_q^{m×n}, and D_{n,α} ⊂ Z^n denotes the set of integer coefficient vectors of norm at most α.
The following definitions are used to state the leftover hash lemma, which allows us to obtain the desired extractor.
Definition 17: [Universal hash family] Let X and Y be, respectively, the domain and the image of H. The family H is universal if, for any distinct x_1, x_2 ∈ X, Pr_{h ←$ H}[h(x_1) = h(x_2)] ≤ 1/|Y|.
Now, it remains to show that our proposed hash function family is universal. The proof of the next proposition is simply an adaptation of the standard proof of the universality of an SIS-based hash family.
Proposition 1: For any n, m ∈ N, α ∈ R + and q prime, the hash function family H m,n,q,α is universal when 2α < q.
Proof: Let x_1, x_2 ∈ D_{n,α} be distinct arbitrary vectors, and f_A ←$ H_{m,n,q,α} be a function with image Z_q^m, defined from a matrix A = [a_1, . . ., a_n] ∈ Z_q^{m×n} with columns a_j. Consider x_i = (x_{i,1}, . . ., x_{i,n}) for i ∈ {1, 2} and note that f_A(x_1) = f_A(x_2) holds if and only if sum_{j=1}^{n} a_j (x_{1,j} − x_{2,j}) ≡ 0 (mod q). Because both vectors are distinct, they must differ in at least one term, x_{1,k} ≠ x_{2,k} for some index k ∈ {1, . . ., n}; since 2α < q, the difference x_{1,k} − x_{2,k} is invertible modulo the prime q. Observe that the equality above holds if and only if a_k ≡ −(x_{1,k} − x_{2,k})^{−1} sum_{j≠k} a_j (x_{1,j} − x_{2,j}) (mod q). Because the columns of matrix A are sampled independently and uniformly, the column a_k takes this exact value with probability q^{−m} = 1/|Z_q^m|, and the family H_{m,n,q,α} is a universal hash family. □
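The universality bound can also be checked empirically. The following Monte Carlo sketch (toy parameters of ours, far too small for security) fixes two distinct inputs and samples the matrix A uniformly; the observed collision rate concentrates around 1/q^m, exactly as the proof predicts:

```python
# Empirical universality of f_A(x) = A x mod q: over a uniformly random A,
# two fixed distinct inputs collide with probability q^{-m}.
import random

def sis_hash(A, x, q):
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

random.seed(1)
q, m, n = 5, 2, 8                       # toy parameters, not secure
x1 = [1, 0, 2, 1, 0, 1, 2, 0]
x2 = [0, 1, 2, 0, 1, 1, 2, 1]           # distinct from x1
trials, collisions = 20000, 0
for _ in range(trials):
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    collisions += sis_hash(A, x1, q) == sis_hash(A, x2, q)
rate = collisions / trials              # expected: 1 / q**m = 0.04
assert abs(rate - 1 / q ** m) < 0.01
```

Here x_1 − x_2 has entries ±1, all invertible modulo 5, so each of the m rows of A annihilates the difference with probability exactly 1/q, giving the q^{−m} collision probability.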

B. RANDOMNESS EXTRACTOR FROM THE LEFTOVER HASH LEMMA
Let D_X, D_Y be two probability distributions. Then, we denote by ∆(D_X, D_Y) the usual statistical distance between them, and by H_∞(X) = −lg(max_x Pr[X = x]) the min-entropy of a random variable X. A generic randomness extractor is defined below. It is followed by a simplified leftover hash lemma (Lemma 9), whose description is only slightly modified and assumes the independence of the inputs.

Lemma 9: [From [8, Lemma 2.1]] Let v ∈ N and consider a universal hash family H with domain X and image Y. Let X be a random variable over X with min-entropy H_∞(X) ≥ v, and h ←$ H. Then, the pair (h, h(X)) is ϵ-close to (h, U(Y)) for ϵ ≤ 1/2 · √(|Y|/2^v).
Now, we prove the existence of a concrete randomness extractor according to the framework of [18] whose parameters are compatible with our KEM proposal.
Theorem 1: Let m, n ∈ N, α ∈ R^+, q prime with 2α < q, a quadratic form Q ∈ S^{>0}_n, and x ∈ D_{n,α} sampled from the discrete Gaussian distribution D_{s,c,Q} with standard deviation s ≥ 2η_ε(Q) and centered on a vector c ∈ R^n. Assume that n ≥ 2m lg q and consider a universal hash function family H_{m,n,q,α} and a hash function h ←$ H_{m,n,q,α}. Then, there exists an (n, ϵ)-extractor with ϵ ≤ 1/2 · √(1/q^m).
Proof: We show that there exists a negligible ϵ according to Lemma 9. From Lemma 7, we obtain an upper bound γ on the guessing probability of D_{s,c,Q}. By the definition of min-entropy, H_∞(X) = −lg(γ), and using the bound above for small enough ε, we obtain H_∞(X) ≥ n. We recall that n ≥ 2m lg q, so 2^{−H_∞(X)} ≤ 2^{−2m lg q}. There are q^m elements in the range of h, and every element may be represented in m lg q bits. Finally, we observe that ϵ ≤ 1/2 · √(q^m · 2^{−H_∞(X)}) ≤ 1/2 · √(q^m/q^{2m}) = 1/2 · √(1/q^m), which is negligible for large m. Thus, by Lemma 9, the universal hash family H_{m,n,q,α} can be used to build an (n, ϵ)-extractor where ϵ ≤ 1/2 · √(1/q^m). □

IV. A CONCRETE KEY ENCAPSULATION MECHANISM
Here, we present our main contribution: the first concrete LIP-based KEM. We start by presenting the key generation, encapsulation, and decapsulation algorithms, followed by correctness and security discussions. Then, we discuss the parameter choices and key sizes.
We adapt the framework of Ducas and van Woerden according to our definitions of hash function (Proposition 1) and randomness extractor (Theorem 1). The system parameters of our KEM are defined as follows. Let m, n ∈ N be the security parameters, B_1 be the basis of an n-dimensional lattice L, q be a prime which is the modulus of the underlying SIS instance, ρ < λ_1(B_1)/2 be the maximum decoding distance of L, q̄ = s/ρ · √(ln(2n + 4)/π), Q_1 = B_1^T B_1 be common intermediate values of the algorithms, and s ∈ N be a standard deviation chosen according to Lemma 8. Consider the scheme to be λ-bit secure for some λ ∈ N^+.

Algorithm 2
The key generation algorithm of our KEM is given below.
1) Sample (Q_2, U, S) via Algorithm 1, following the Gaussian form distribution of Definition 13, starting from Q_1.
2) Return (Q_2, S) as the public and secret keys, respectively.

Algorithm 3 The encapsulation algorithm of our KEM is given below, taking as input a public key Q_2.
Return the pair (k, (c, ρ)) as the generated key, and the encapsulated key with the seed, respectively.

A. CORRECTNESS
The correctness of the underlying framework has already been discussed in [18]. Besides the standard correctness, we adapted the procedure to obtain a generated key k ∈ {0, 1}^{m lg q} from our (n, ϵ)-extractor, which requires further explanation.
Algorithm 4 The decapsulation algorithm of our KEM is given below, taking as input (i) an encapsulated key (c, ρ) from Algorithm 3 and (ii) a secret key S from Algorithm 2, and using a decoding algorithm Decode that efficiently corrects errors with norm up to ρ. Its final steps apply the randomness extractor, using the inverse of det Q_2 with respect to q, and: 8) Return k as the generated key.
Proposition 2: Let m, n, q, q̄, ρ be system parameters of our KEM. The computation (det Q_2) · Â_{Q_2} e′ (present in Step 6 of Algorithm 3, and in Step 7 of Algorithm 4) generates bits from an (n, 1/2 · √(1/q^m))-extractor.
Proof: Consider a basis B_2 ∈ R^{n×n} with Gram matrix Q_2, and a matrix A ←$ H_{m,n,q,q̄ρ}. For any invertible X ∈ Z_q^{n×n}, we have that AX ∈ Z_q^{m×n} is also uniformly random over Z_q^{m×n}. Hence, the matrix (det Q_2) Â_{Q_2} is also uniformly random over Z_q^{m×n}. Moreover, we have that ∥B_2 e′∥_2 ≤ q̄ρ, and consequently B_2 e′ ∈ D_{n,q̄ρ}. Hence, by Theorem 1, the computation A B_2 e′ generates a key k ∈ {0, 1}^{m lg q} from an (n, 1/2 · √(1/q^m))-extractor. □
We comment on the main methods for an adversary to attempt to retrieve the encapsulated key without firsthand knowledge of the private key. Let λ be the key length. The trivial brute-force approach of searching for the encapsulated key cannot be executed efficiently due to the 2^λ distinct possibilities. The randomness extractor guarantees that the resulting key is statistically close to a uniform distribution over the key space, and there is no clever process for retrieving it. Instead, given that the norm of error vectors is bounded by the maximum decoding distance, a better strategy is to attempt to find a small vector that extracts the key. However, since the standard deviation is large enough, no specific vector can be sampled with non-negligible probability (via Lemma 6). Since the randomness extractor guarantees a low number of collisions, blindly finding some vector e′ such that E(e′) = k is unlikely.
A common method for finding the encapsulated key is retrieving the secret key through the public key. For our KEM, this is analogous to solving the isomorphism problem. If the adversary retrieves a unimodular transformation U that defines an equivalence between the Gram matrix of B_1 and Q_2, the encapsulated key is obtained simply by executing Algorithm 4 from Step 2. Provided that trivial instances of LIP are excluded, the most efficient known methods for solving the problem require first finding short vectors in the lattice [18]. Consequently, the conjecture proposed by Ducas and van Woerden [18, Conjecture 7.2] exploits the Gaussian heuristic to estimate the hardness of standard search LIP and wc-LIP between two quadratic forms. Hence, certain non-trivial instances of these isomorphism problems are conjectured to be 2^{O(n)}-hard. We state the conjectures below for completeness; further technical information about the Gaussian heuristic and derived definitions may be found in the appendix.

Conjecture 2: [From [18, Conjecture 7.2]] For any two classes of quadratic forms …
Lastly, we discuss the estimated difficulty for an adversary to decode the error vector on the public rotated lattice. We note that the error vector is the difference between the input and output vectors of the corresponding BDD instance. Because it is difficult to concretely estimate the complexity of hard lattice problems, most modern lattice-based schemes only provide the computational cost of executing state-of-the-art attacks as their evidence of security. In our case, we estimate the complexity of solving the related BDD problem by reducing it to the unique shortest vector problem (uSVP), a well-known variant of SVP.
Lyubashevsky and Micciancio [27] present a simple reduction from BDD to uSVP. Let B ∈ R^{n×n} be the basis of an n-dimensional full-rank lattice L(B). Let t ∈ R^n be a vector not belonging to L(B). Consider v ∈ L(B) to be the vector closest to t, with distance ∥v − t∥_2 = µ. The reduction states that the unique shortest non-zero vector in the lattice spanned by the embedded basis B′ = [[B, t], [0, µ]] is (t − v, µ), up to sign. Therefore, finding the unique shortest non-zero vector in the embedded lattice is equivalent to finding the solution v that solves BDD. In practice, the reduction is slightly different; otherwise, building B′ would require knowing µ, which is not known to an adversary. However, because we are only estimating the hardness of the reduced uSVP, the basis B′ containing µ is sufficient.
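The embedding can be illustrated on a toy BDD instance (our own choice: the lattice 3Z^2, where λ_1 = 3, and a target within decoding distance). Brute-forcing small coefficient combinations of the embedded basis finds the unusually short vector (t − v, µ), from which the BDD solution is read off:

```python
# Kannan-style embedding B' = [[B, t], [0, mu]] on a toy instance: the
# shortest nonzero vector of L(B') is (t - v, mu) up to sign, far shorter
# than lambda_1(L(B)) = 3, and it reveals the BDD error e = t - v.
import itertools, math

cols = [[3.0, 0.0, 0.0],     # first basis vector of 3Z^2, embedded
        [0.0, 3.0, 0.0],     # second basis vector, embedded
        [3.4, -0.3, 0.5]]    # target t = (3.4, -0.3) with distance mu = 0.5
t = [3.4, -0.3]

best_c, best_v, best_norm = None, None, float("inf")
for c in itertools.product(range(-3, 4), repeat=3):   # brute-force search
    if c == (0, 0, 0):
        continue
    v = [sum(ci * col[k] for ci, col in zip(c, cols)) for k in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm < best_norm:
        best_c, best_v, best_norm = c, v, norm

sign = 1 if best_c[2] > 0 else -1
e = [sign * best_v[0], sign * best_v[1]]      # recovered BDD error t - v
closest = [t[k] - e[k] for k in range(2)]     # the BDD solution v = (3, 0)
assert best_norm < 1.0                        # much shorter than lambda_1 = 3
```

A real attack replaces the brute-force search with lattice reduction (BKZ), which is exactly the cost model estimated next.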
To estimate the hardness of the BDD instances, we now have to estimate the hardness of solving uSVP in the lattice generated by B′. In our analysis, we consider the classic BKZ algorithm, which finds reduced bases by calling an SVP oracle on lattices of dimension β, named the blocksize, polynomially many times. Our process consists of estimating a value β sufficient to solve uSVP through BKZ, and then estimating the number of operations required to successfully conclude the execution of BKZ. We estimate β according to the analysis of Dachman-Soled et al. [16], where β is the smallest positive integer between 50 and dim(B′), inclusive, such that √β ≤ (δ_β)^{2β−n−2} · det(B′)^{1/(n+1)} holds, where δ_β = ((β/(2πe)) · (πβ)^{1/β})^{1/(2(β−1))} and n = dim(L(B)). The inequality above is more easily satisfied when det(B′) is as large as possible. As a consequence of the Leibniz formula, we have that det(B′) = det(B) · det([µ]) = det(B) · µ. Therefore, our estimations consider the largest possible determinant, derived from the largest possible distance; in our case, the largest possible error norm (distance) is ρ. Due to the underlying geometric series assumption, the estimation does not work for β < 50; thus, we assume that there is no blocksize below 50 that allows solving the underlying uSVP instance through BKZ. That is not an uncommon assumption when the estimated blocksize is far greater than 50 and the lattices have large dimensions.
We estimated the hardness of BDD by iterating over all blocksizes β and testing the inequality with the determinant of B′ set to det(B) · ρ. Finally, we estimate the number of CPU cycles necessary to solve SVP similarly to other works [5], [10], [15]. This is a pessimistic estimation, meaning that it accounts for future advancements in techniques to solve SVP. We set the number of CPU cycles to solve a single instance of SVP as β · 2^{cβ}, where β is the blocksize and c = 0.292 [24] for classical security, or c = 0.265 [25] for quantum security.
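The estimation procedure described in the last two paragraphs can be sketched directly. The code below assumes the root-Hermite factor δ_β = ((β/(2πe))·(πβ)^{1/β})^{1/(2(β−1))} and the success condition √β ≤ δ_β^{2β−n−2}·det(B′)^{1/(n+1)}; it works in log_2 to avoid overflow, and the dimension and determinant in the example are hypothetical placeholders, not a parameter set from this work:

```python
# Smallest BKZ blocksize beta satisfying the uSVP success condition, plus
# the beta * 2^(c*beta) cycle estimate (c = 0.292 classical, 0.265 quantum).
import math

def log2_delta(beta):
    d = (beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1 / beta)
    return math.log2(d) / (2 * (beta - 1))

def estimate_blocksize(n, log2_det):
    """Smallest beta in [50, n + 1] with
    0.5*lg(beta) <= (2*beta - n - 2)*lg(delta_beta) + lg(det)/(n + 1)."""
    for beta in range(50, n + 2):
        lhs = 0.5 * math.log2(beta)
        rhs = (2 * beta - n - 2) * log2_delta(beta) + log2_det / (n + 1)
        if lhs <= rhs:
            return beta
    return None

def log2_svp_cycles(beta, c=0.292):
    """log2 of the cycle count beta * 2^(c * beta)."""
    return math.log2(beta) + c * beta

beta = estimate_blocksize(512, 3072.0)   # hypothetical n and lg(det(B'))
```

Iterating this over candidate parameter sets, with det(B′) = det(B)·ρ as described above, yields the security estimates summarized later.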
The proposed KEM algorithm is IND-CPA secure when instantiated with the right parameters due to the theorem below.The proof considers two regular CPA-security games, where the two games are only differentiated by the quadratic form that instantiates the game.The second game considers a different quadratic form and is indistinguishable from the first game under the assumption that ac-LIP is computationally hard.Due to the presence of a dense sublattice in the second game, winning it with non-negligible advantage is statistically impossible whenever the shared key is small enough.Therefore, the encapsulated key cannot be distinguished from another key sampled uniformly and the KEM scheme is IND-CPA secure.
Theorem 2: [From [18, Theorem 5.2]] Let Q_1 ∈ S^{>0}_n be a quadratic form with maximum decoding distance ρ and s > 0 be a standard deviation. Let Q_2 ∈ S^{>0}_n be a quadratic form with a dense rank-r = Θ(n) sublattice D·Z^r ⊂ Z^n. Let K = (Algorithm 2, Algorithm 3, Algorithm 4) be a KEM instantiated with Q_1 and sharing keys of length l ≤ r − lg 3. We have that η_{1/2}(D^T Q_2 D) ≤ ρ/(2√n) implies that K is IND-CPA secure under the assumption that ac-LIP is computationally hard.
The lattice pair chosen in our KEM respects Conjecture 2, since it is the pair suggested by the authors in the original LIP work [18, Sec. 8.1]. Therefore, the scheme is IND-CPA secure via Theorem 2 under the corresponding assumptions. Consequently, no adversary should be able to obtain the encapsulated key via common lattice reductions.

C. PARAMETERS
The main theorem [18, Theorem 5.2] and the conjecture about the hardness of LIP [18, Conjecture 7.2] impose certain restrictions on the lattice pair used to instantiate the scheme. Different approaches for obtaining a pair of quadratic forms have been discussed using q-ary lattices [12]. However, Ducas and van Woerden [18, Sec. 8.1] presented a simple method for constructing this lattice pair from a remarkable lattice L with decoding radius ρ. This construction results in two lattices L_S, L_Q that follow the format L_S := g·L ⊕ (g+1)·L and L_Q := L ⊕ g(g+1)·L, for some g ∈ N^+. The scheme is then instantiated with L_S, with decoding radius ρ′ = g·ρ. When compared to the original lattice L, the decoding radius of L_S is increased, analogously to the increase in the length of the shortest vector.
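The pair construction amounts to block-diagonal bases, which makes its basic invariants easy to check. The sketch below uses a toy choice of ours (L = Z^2, g = 2) and verifies that L_S and L_Q share the same determinant, a first requirement for the two forms to be hard to tell apart; λ_1(L_S) = g·λ_1(L), matching the claimed scaling of the decoding radius:

```python
# Block-diagonal bases for L_S = g*L (+) (g+1)*L and L_Q = L (+) g(g+1)*L,
# built from a basis B of L; both have determinant (g*(g+1))^n * det(B)^2.
def direct_sum_basis(B, a, b):
    n = len(B)
    top = [[a * x for x in row] + [0] * n for row in B]
    bot = [[0] * n + [b * x for x in row] for row in B]
    return top + bot

B = [[1, 0], [0, 1]]                       # toy choice: L = Z^2
g = 2
LS = direct_sum_basis(B, g, g + 1)         # basis of 2*Z^2 (+) 3*Z^2
LQ = direct_sum_basis(B, 1, g * (g + 1))   # basis of Z^2 (+) 6*Z^2

def det_diag(M):                           # both toy bases are diagonal
    d = 1
    for i in range(len(M)):
        d *= M[i][i]
    return d

assert det_diag(LS) == det_diag(LQ) == (g * (g + 1)) ** 2   # both equal 36
```

Matching determinants (and other easy invariants) are necessary for the distinguishing problem between the two forms to be non-trivial; the actual security rests on the conjectured hardness of telling the pair apart.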
The produced scheme could achieve IND-CPA security via Theorem 2, which requires the dense sublattice L to satisfy the smoothing condition of the theorem. Because computing the smoothing parameter and verifying this inequality is computationally complex, the choice of g imposes further assumptions on the security of the scheme. A lower value of g decreases the public and secret key sizes and has been shown to increase the estimated security against BDD attacks, which is very beneficial. However, it requires η_{1/2}(L) to be smaller, since it decreases the decoding distance ρ′ of L_S.
Our proposed parameter sets use the aforementioned lattice pair construction, with the underlying remarkable lattice being the 2N-dimensional Barnes-Wall lattice BW = BW_n, for N = 2^{n−1}. The lattice that instantiates the scheme can be efficiently decoded using the decoder proposed by Micciancio and Nicolosi [29], which has a maximum decoding distance ρ = √N/2. Moreover, the properties of the Barnes-Wall lattice are very well known; for instance, all of its successive minima are equal. Therefore, the second restriction can be easily checked and is expected to be met at larger dimensions, since q grows exponentially with respect to the square root of 4N, while q·g·ρ does not. Now, we focus on the length of the key shared securely through the encapsulation mechanism. We write the prime number q as (2^{4N/2^m}/3)·c′, where c′ = (1 − 3c/2^{4N/2^m}) ∈ (0, 1). Here, we assume the Adleman-McCurley conjecture [1] to be true, which states that the gap between consecutive primes is bounded by O(lg(q)^k) for any constant k > 2. Therefore, both c and the gap between primes are bounded by O(lg(q)^3). Finally, the hash family H_{⌈√(4N)/2⌉, 4N, (2^{4N/2^m}/3 − c), q·g·ρ} forms a suitable randomness extractor. Consequently, we summarize our recommendations in Table 1, where we present several parameter sets considering different lattice dimensions.
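As a hedged illustration of how a parameter row is derived, the sketch below computes the basic lattice quantities from the formulas in the text; it does not reproduce the exact entries of Table 1, and the function name is our own:

```python
# Illustrative derivation of the basic quantities behind one parameter
# row: BW = BW_n is 2N-dimensional with N = 2^(n-1), the scheme lattice
# L_S has dimension 4N, decoding radius rho' = g*sqrt(N)/2, and last
# minimum lambda_{4N}(L_S) = (g+1)*sqrt(N). Values follow the formulas in
# the text, not the exact entries of Table 1.
import math

def parameter_sketch(n, g):
    N = 2 ** (n - 1)                          # BW is 2N-dimensional
    return {
        "N": N,
        "dim": 4 * N,                         # dimension of L_S = g*BW ⊕ (g+1)*BW
        "rho'": g * math.sqrt(N) / 2,         # decoding radius of L_S
        "lambda_4N": (g + 1) * math.sqrt(N),  # largest minimum of L_S
    }

# e.g. n = 10, g = 2: N = 512, so L_S lives in dimension 2048.
print(parameter_sketch(10, 2))
```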

D. KEY SIZES
The necessary encoding size for the secret key is computed from the properties of discrete Gaussian sampling.
, which is negligible for large n and small ε > 0. Therefore, we assume that, for every 1 ≤ i ≤ n, with overwhelming probability, the norm ∥S_i∥ = ∥B_1 y_i∥ = ∥y_i∥_{Q_1} ≤ s·√n. Consequently, each of the n² entries of S can be encoded in ⌈lg(2·s√n)⌉ bits. □

Similar to other post-quantum schemes, instead of storing the secret key encoded as above, an alternative is to store the random seed used to sample the key pair and deterministically regenerate the secret key on every use.

Lemma 4: [From [11, Lemma 2.3]] Let Q ∈ S_n^{>0} be a quadratic form, c ∈ R^n a vector, and s ≥ ∥B*_Q∥_2 · √(ln(2n + 4)/π) a standard deviation. Then there exists a polynomial-time algorithm that samples a vector from D_{s,c,Q}.
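The encoding bound for the secret key translates directly into a key-size estimate. A minimal sketch, with placeholder values for n and s that are not a recommended parameter set:

```python
# Secret-key size sketch: S fits in n^2 * ceil(lg(2*s*sqrt(n))) bits,
# since every entry is bounded by s*sqrt(n) in absolute value with
# overwhelming probability. The n and s below are placeholders only.
import math

def secret_key_bytes(n, s):
    bits_per_entry = math.ceil(math.log2(2 * s * math.sqrt(n)))
    return (n * n * bits_per_entry + 7) // 8  # round up to whole bytes

print(secret_key_bytes(2048, 100))  # hypothetical n = 2048, s = 100
```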

Definition 14: [∆-LIP] Let Q_0, Q_1 ∈ S_n^{>0} be two quadratic forms and b ←$ {0, 1}. Given Q′ ∈ [Q_b] ⊂ S_n^{>0}, the delta lattice isomorphism problem consists of finding b.

Definition 15: [Average-case LIP (ac-LIP)] Let Q_0, Q_1 ∈ S_n^{>0} be two quadratic forms, b ←$ {0, 1}, and let s > 0 be a standard deviation. Given Q′ ∈ [Q_b] ⊂ S_n^{>0} taken from the Gaussian form distribution D_s([Q_b]), the average-case delta lattice isomorphism problem consists of finding b.
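These definitions can be illustrated with a toy challenger that hides a bit b and publishes Q′ = U^T·Q_b·U for a random unimodular U. This is only a structural sketch under simplifying assumptions: a real instantiation samples U via the Gaussian form distribution D_s([Q_b]) and uses forms with matching invariants, whereas the toy forms in the test below are trivially distinguishable by determinant.

```python
# Toy version of the distinguishing game in Definitions 14 and 15: the
# challenger hides b and publishes Q' = U^T * Q_b * U, a form equivalent
# to Q_b. Here U is just a product of elementary row operations standing
# in for the Gaussian form distribution, so this is not a hard instance.
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def random_unimodular(n, steps=50, rng=random):
    """Identity transformed by `steps` transvections; determinant stays 1."""
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        c = rng.choice([-1, 1])
        for k in range(n):               # row_i += c * row_j
            U[i][k] += c * U[j][k]
    return U

def challenge(Q0, Q1, rng=random):
    """Return the hidden bit b and the public form Q' ∈ [Q_b]."""
    b = rng.randrange(2)
    U = random_unimodular(len(Q0), rng=rng)
    Qp = matmul(transpose(U), matmul((Q0, Q1)[b], U))
    return b, Qp
```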

λ_1(BW) = λ_2(BW) = ⋯ = λ_{2N}(BW) = √N. From this, we may use the non-trivial upper bound for the smoothing parameter proposed by Micciancio and Regev [30] to derive an upper bound and a safe choice for g: it follows that η_{1/2}(BW) ≤ η_{2^{−dim(BW)}}(BW). In the construction with the lattice BW, the decoding distance of L_S becomes ρ′ = g·ρ = g·√N/2, and the standard deviation must satisfy s ≥ max{η_{1/2}(BW), √(ln(8N + 4)/π)·∥B*∥_2}, by Lemma 4 in dimension 4N. Consider the Gram matrix of L_S to be obtained from an ordered basis with norm λ_{4N}(L_S) = (g + 1)·√N. From the behaviour of the Cholesky decomposition and the Gram-Schmidt orthogonalization process, the value of s is in O(gN·√(ln(N))).
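The smoothing-related quantities above can be evaluated numerically. The sketch below computes the factor √(ln(8N+4)/π) from the dimension-4N sampling condition and the Theorem 2 threshold ρ′/(2√(4N)) for L_S; the function names are our own, and verifying η_{1/2} itself remains computationally hard.

```python
# Numeric sketch: the sampling factor sqrt(ln(8N+4)/pi) (Lemma 4 applied
# in dimension 4N) and the Theorem 2 threshold rho'/(2*sqrt(4N)) for L_S.
# Function names are illustrative.
import math

def lemma4_factor(N):
    return math.sqrt(math.log(8 * N + 4) / math.pi)

def theorem2_threshold(N, g):
    rho_prime = g * math.sqrt(N) / 2      # decoding radius of L_S
    return rho_prime / (2 * math.sqrt(4 * N))

# The threshold simplifies to g/8, independent of N, since rho' and
# 2*sqrt(4N) both scale with sqrt(N) -- one reason a larger g loosens
# the smoothing requirement.
print(lemma4_factor(512), theorem2_threshold(512, 2))
```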

Proposition 3: [Secret key length] Let n ∈ N, let B_1 ∈ Z^{n×n} be a full-rank basis such that L(B_1) ⊂ Z^n, and consider Q_1 ∈ S_n^{>0} as the Gram matrix of B_1. Let the standard deviation s ≥ 2η_ε(Q_1) and let S ∈ Z^{n×n} be the rank-n matrix sampled from D_{0,s,Q_1}, as in Definition 13. The secret key S = B_1·Y, created in Algorithm 2, is encoded in at most n²·⌈lg(2·s√n)⌉ bits.

Proof: The matrix Y = [y_1, . . ., y_n] is composed of n vectors sampled from D_{s,0,Q_1}. Proposition 6 states that Pr_{X∼D_{s,0,Q_1}}[∥X∥_{Q_1} > s√n] is negligible.

Definition 2: [Linear error-correcting codes] Let C ⊆ F_2^n be a code with parameters [n, k, d]. We say that C is a linear error-correcting code with dimension k and distance d if there exists a generator matrix G = [g_1, . . ., g_k] ∈ F_2^{k×n} such that C = {xG : x ∈ F_2^k}.

A linear error-correcting code with parameters [n, k, d] always has a generator matrix of dimensions k × n. A particular class of linear error-correcting codes are the Reed-Muller codes, defined using the evaluation of multivariate polynomials with coefficients in F_2. Consider, for now, f ∈ F_2[x_1, . . ., x_n].
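As a concrete toy instance of Definition 2, the following sketch builds the generator matrix of the first-order Reed-Muller code RM(1, 3), an [8, 4, 4] linear code, by evaluating affine polynomials on all points of F_2^3; the helper names are our own.

```python
# Toy instance of Definition 2: RM(1, m) evaluates the affine polynomials
# a0 + a1*x1 + ... + am*xm on all points of F_2^m, giving a linear
# [2^m, m+1, 2^(m-1)] code. Helper names are illustrative.
from itertools import product

def rm1_generator(m):
    """Generator matrix of RM(1, m): one row per monomial 1, x1, ..., xm."""
    points = list(product([0, 1], repeat=m))
    rows = [[1] * (2 ** m)]                       # evaluation of the constant 1
    rows += [[p[i] for p in points] for i in range(m)]
    return rows

def encode(G, msg):
    """Codeword xG over F_2, exactly as in Definition 2."""
    return [sum(m_i * row[j] for m_i, row in zip(msg, G)) % 2
            for j in range(len(G[0]))]

G = rm1_generator(3)   # an [8, 4, 4] code
# Exhaustive minimum-distance check over all nonzero messages:
weights = [sum(encode(G, msg)) for msg in product([0, 1], repeat=4) if any(msg)]
# min(weights) == 4: every nonzero codeword of RM(1, 3) has weight >= 4.
```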