Introduction
Modern biometric recognition technologies lean on deep neural networks (DNNs) to extract distinctive representations of biometric samples (e.g., facial images), called feature vectors. The biometric recognition task thus becomes the comparison of two feature vectors by calculating a (dis-)similarity score, usually via the cosine similarity or the squared Euclidean distance (SED). Studies have shown that from those feature vectors it is possible to infer personal information (e.g., gender, age, health condition, ethnicity, occupation) [2] and to reconstruct raw biometric samples (e.g., facial images) of individuals, known as model inversion attacks [3]. Plaintext access to those feature vectors enables, on the one hand, undesirable personal information inference that intensifies the severity of social issues, such as gender inequality and discrimination, due to biased decision-making models; such models violate the privacy of individuals by performing classification tasks other than recognition. On the other hand, it permits the reconstruction of the raw biometric sample from a given reference template or probe, which leads to security issues such as identity fraud and impersonation attacks. Therefore, feature vectors extracted from DNNs are extremely sensitive and require strong protection.
Biometric template protection schemes (BTPs) [4] aim to protect biometric information (e.g., biometric feature vectors) while maintaining recognition performance. BTPs come in different flavors, each approaching the biometric privacy challenges with distinct techniques (e.g., helper data and Bloom filters). Among existing BTPs, homomorphic encryption (HE) based BTPs [5], [6], [7], [8], [9] seem promising in tackling these issues since they carry both the biometric data and its processing over to the encrypted domain. Generally, HE-based BTPs compare two encrypted biometric feature vectors and distinguish between two decision modes, namely the cleartext decision mode and the encrypted decision mode, which indicate whether the decision occurs outside or within the encrypted domain. Hence, measuring a (dis-)similarity score under encryption is followed by a recognition decision delivered to the party of interest, who, in the cleartext decision mode, receives a final score and performs the comparison with the threshold in the clear to make a decision [5], [6], [7], [8], or, in the encrypted decision mode, receives the recognition decision encrypted, the comparison with the threshold having been performed under encryption [9]. Calculating such a (dis-)similarity score under encryption involves a number of homomorphic multiplications proportional to the feature vector's dimension. The inefficiency of HE in handling computations with many multiplications hurts those solutions' performance. For instance, calculating the SED under encryption using the CKKS encryption scheme [8], [10] takes 2.11 s for 128-dimensional encrypted feature vectors, while for 512-dimensional vectors [6] runs in 5 s and [7] in 3.39 s at a 128-bit security level; using another HE scheme, BFV [11], the runtimes improve to 0.85 s [6] and 0.61 s [7] but still leave room for improvement.
To reduce the number of homomorphic multiplications, the work in [5] adopts the single-instruction multiple-data (SIMD) and plaintext packing properties of fully HE to decrease this number to a single homomorphic multiplication for calculating the inner product (IP) over encrypted normalized feature vectors, which corresponds to the cosine similarity. In [5], the author considers only the cleartext decision mode, which has the downside of exposing the final score. Knowledge of the final score can lead to the reconstruction of the original biometric template, as shown in [12], [13], [14], [15]. In contrast, in the encrypted decision mode, the comparison with the threshold is performed inside the encrypted domain and reveals only one bit of information (match / no match). We compare our approach with [5] in Section VI. The first multiplication-free biometric recognition (MFBR) scheme is the homomorphically encrypted log-likelihood-ratio classifier (HELR) introduced in [9]. It pre-computes the log-likelihood ratio (LLR) and organizes it into lookup tables, reducing biometric recognition to three operations: selection of the individual scores from the tables, their addition to calculate a final score, and the comparison of the final score with the biometric threshold. However, to determine a score, this classifier requires prior knowledge about the statistics of the features, learned by training the LLR. In general, this prior knowledge is hard to acquire for large-scale applications. Hence, the HELR classifier requires training data, and homomorphic multiplications represent a burden for biometrics, motivating us to tackle both challenges.
In this paper, we extend the work conducted in [1] by improving the integration of the MFBR schemes with HE, as illustrated in Table I, where we denote by MFBRv1 [1] our initial version and by MFBRv2 the improved one proposed in the present paper. Our solution, as in [1], is built upon the HELR framework [9] but applied to the IP and SED measures, which do not require training. Assuming normalized feature vectors extracted from a well-trained DNN, we determine the probability density function (PDF) and cumulative distribution function (CDF) corresponding to the projection of a point on the unit d-ball, upon which we generate the lookup tables (that we call MFIP and MFSED) in an equiprobable manner to reinforce their security. We assess the biometric performance of our tables on synthetic and facial feature vectors and achieve a performance identical to their baseline measures, preserving biometric accuracy. Furthermore, MFBRv1 represents the encrypted reference template by a set of d ciphertexts to facilitate the selection of specific components under encryption using homomorphic rotations only; however, this comes at the cost of the encrypted reference template storage because each of those ciphertexts packs only a single row of the table, leaving the remaining plaintext slots unused.
In summary, we make the following contributions:
We propose two MFBRs implementing IP and SED comparison measures that do not require training.
We experimentally investigate the MFIP and MFSED tables’ parameters and evaluate their biometric performance w.r.t. the influence of their integration with HE, achieving a performance better than the baseline.
We improve the integration of MFBRs with HE presented in [1], rendering the encrypted reference template more compact, restricting it to one ciphertext instead of d ciphertexts.
We evaluate the effect of our improved integration MFBRv2 on speed and find that it eliminates the speed gap between the cleartext and encrypted decision modes, making them equally fast and faster than MFBRv1.
Background: HELR Framework
The HELR classifier assumes that the features are independent and follow the Gaussian distribution. The features' independence allows treating each feature separately and thus calculating the LLR per feature. Therefore, in the following, we describe the HELR framework process for a given feature; the same applies to all features.
A. Generation of HELR Lookup Tables
For a single feature, the LLR is a two-input-one-output function. Pre-computing it yields a lookup table where the rows' (resp. columns') indexes represent the possible values of the first (resp. second) input, the feature from the reference template (resp. probe), and the cells contain the output, the individual scores. The rows' and columns' indexes result from a feature quantization that maps the continuous domain of real numbers to a finite set of integers, which is needed to limit the possible feature values. In HELR, the same feature quantization applies to the reference and to the probe, so the quantized reference value selects a row and the quantized probe value selects a column.
B. Biometric Recognition Based on HELR
Once the lookup tables are generated, HELR uses the following conventions for the reference template and the probe. The d-dimensional feature vector corresponding to the reference template is mapped to its integer representation according to the feature quantization procedure (Algorithm 1 and Algorithm 2). Thus, the HELR reference template becomes a vector of rows, where each row is selected from the lookup table of the corresponding feature and its index is given by the quantized value of that feature. The same feature quantization is applied to the probe feature vector, but there a quantized feature value indicates the column's index within the row corresponding to that feature in the reference template. Hence, given an HELR reference template and an HELR probe, biometric recognition reduces to the row-wise selection of the individual scores from the reference template based on the probe. Their addition then produces a final score S, which is compared against a biometric threshold to make the recognition decision.
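To make the recognition data flow concrete, the following minimal sketch (ours, with a toy random table standing in for the pre-computed LLR tables; not the implementation of [9]) shows the three operations of row selection, column selection, and addition:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 8                                   # toy dimension and quantization level
tables = rng.integers(-5, 6, size=(d, N, N))  # one pre-computed score table per feature

ref_q = rng.integers(0, N, size=d)            # quantized reference feature vector
probe_q = rng.integers(0, N, size=d)          # quantized probe feature vector

# Reference template: one row per feature, indexed by the quantized value.
ref_template = np.stack([tables[i, ref_q[i]] for i in range(d)])

# Recognition: the probe's quantized values index the columns; sum the scores.
S = sum(int(ref_template[i, probe_q[i]]) for i in range(d))
threshold = 0                                 # illustrative biometric threshold
print(S, S >= threshold)
```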
Multiplication-Free Biometric Recognition
Our primary goal is to apply the HELR framework to common (dis-)similarity measures that do not require training to learn the features' statistics for determining a score, such as the cosine similarity and the squared Euclidean distance. To construct suitable lookup tables, we need to determine 1) the proper PDF, and its CDF, that represents a random observation of the features, 2) the table's cell borders, obtained by equiprobably dividing that PDF, and 3) the representative value of each cell. Finding the proper PDF and CDF of the feature vectors produced by a DNN might be tricky and dependent on the DNN's training and architecture, which may lead to a different distribution per architecture. To avoid this, we assume that the feature vectors resulting from a DNN are points spread uniformly over the surface of the unit d-ball, for which the distribution of a single coordinate can be derived analytically.
Remark 1 ((Dis-)Similarity Measure Equivalent to the Inner Product):
Note that the cosine similarity of normalized vectors equals their inner product, while the squared Euclidean distance of two normalized vectors is equivalent to their inner product via the monotonic function $\mathrm{SED}(x,y) = 2 - 2\,\mathrm{IP}(x,y)$.
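For completeness, the identity behind this equivalence follows from expanding the squared norm of the difference: \begin{equation*} \|x-y\|^{2} = \|x\|^{2} - 2\langle x,y\rangle + \|y\|^{2} = 2 - 2\langle x,y\rangle \quad \text{for } \|x\| = \|y\| = 1, \end{equation*} so a decision threshold for one measure translates directly into a threshold for the other.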
A. Point Projection on the Unit d-Ball
In this section, we describe how we derive the PDF and its CDF corresponding to the projection of a point on the unit d-ball. Note that this PDF is the PDF of every coordinate of a d-dimensional normalized feature vector.
1) PDF of the Point Projection on the Unit d-Ball:
Let X denote the random variable after projection on the x-axis, and let x denote a realization of that random variable. Denoting by $S_{d-1}(r)$ the surface area of the $(d-1)$-sphere of radius r, the PDF of X is \begin{align*} f_{X}(x) &= \frac{1}{S_{d-1}(1)}\, S_{d-2}\left(\sqrt{1-x^{2}}\right) \frac{1}{\sqrt{1-x^{2}}} \\ &= \frac{1}{B\left(\frac{1}{2}, \frac{d-1}{2}\right)} \left(\sqrt{1-x^{2}}\right)^{(d-3)}\end{align*} where $B(\cdot,\cdot)$ denotes the Beta function.
Writing $C = 1/B\left(\frac{1}{2}, \frac{d-1}{2}\right)$ for the normalization constant, we obtain \begin{equation*} f_{X}(x) = C \cdot \left(\sqrt{1-x^{2}}\right)^{(d-3)} \tag{1}\end{equation*}
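As a quick numerical sanity check (a sketch of ours assuming NumPy and SciPy, not part of the paper's code), one can verify that $C = 1/B(\frac{1}{2}, \frac{d-1}{2})$ indeed normalizes (1):

```python
import numpy as np
from scipy.special import beta as B

d = 512
C = 1.0 / B(0.5, (d - 1) / 2)           # normalization constant of Equation (1)

x = np.linspace(-1.0, 1.0, 200_001)
f = C * np.sqrt(1.0 - x**2) ** (d - 3)  # the PDF f_X of Equation (1)
print(np.sum(f) * (x[1] - x[0]))        # Riemann sum, ~1.0
```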
Illustration of the projection of a point belonging to the surface area of the unit d-ball onto the x-axis (left) and an example of an MFBR table (right).
2) CDF of the Point Projection on the Unit d-Ball:
The CDF $F_{X}$ of X follows by integrating (1): \begin{equation*} F_{X}(x)=\int_{-1}^{x} C \cdot \left(\sqrt{1-t^{2}}\right)^{(d-3)} dt \tag{2}\end{equation*}
\begin{equation*} \int \left ({{\sqrt {1-t^{2}} }}\right )^{(d-3)} dt = \int \left ({{\cos (u) }}\right )^{(d-2)} \ du \tag {3}\end{equation*}
In Equation (3), the passage from the left-hand side to the right-hand side is justified by the change of variable $t = \sin(u)$, $dt = \cos(u)\,du$. To integrate the resulting power of the cosine, we use the power-reduction formula, valid for even n: \begin{equation*} \cos^{n}(u) = \frac{1}{2^{n}} \binom{n}{\frac{n}{2}} + \frac{2}{2^{n}} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k} \cos((n-2k)u) \tag{4}\end{equation*}
Plugging (4) into (3), the integral becomes: \begin{align*} \int \cos^{n}(u)\, du &= \int \left[ \frac{1}{2^{n}} \binom{n}{\frac{n}{2}} + \frac{2}{2^{n}} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k} \cos\left((n-2k)u\right) \right] du \tag{5}\\ &= \frac{1}{2^{n}} \binom{n}{\frac{n}{2}}\, u + \frac{2}{2^{n}} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k}\, \frac{\sin\left((n-2k)u\right)}{(n-2k)} \tag{6}\\ &= \frac{1}{2^{n}} \binom{n}{\frac{n}{2}} \arcsin(t) + \frac{2}{2^{n}} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k}\, \frac{\sin\left((n-2k)\arcsin(t)\right)}{(n-2k)} \tag{7}\end{align*}
From (6) to (7), we go back to the original variable by replacing $u$ with $\arcsin(t)$. To express $\sin(r\arcsin(t))$ as a polynomial in t, we use the identity \begin{equation*} \sin(rx) = \sum_{j\,\text{odd}}^{r} (-1)^{\frac{j-1}{2}} \binom{r}{j} \cos^{r-j}(x) \sin^{j}(x) \tag{8}\end{equation*}
Substituting (8) with $x = \arcsin(t)$ into (7) and restoring the constant C, with $n = d-2$, yields the closed form \begin{align*} F_{X}(x) = C \cdot \left[ \frac{1}{2^{n}} \binom{n}{\frac{n}{2}} \arcsin(t) + \frac{1}{2^{n-1}} \sum_{k=0}^{\frac{n}{2}-1} \binom{n}{k} \cdot \frac{1}{(n-2k)} \cdot \left( \sum_{j\,\text{odd}}^{n-2k} (-1)^{\frac{j-1}{2}} \binom{n-2k}{j} \left(\sqrt{1-t^{2}}\right)^{n-2k-j} \cdot t^{j} \right) \right]_{-1}^{x} \tag{9}\end{align*}
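In practice, rather than evaluating (9) term by term, the same CDF can be computed and inverted through the regularized incomplete beta function, since $(X+1)/2 \sim \mathrm{Beta}(\frac{d-1}{2}, \frac{d-1}{2})$ when X is the projection of a uniform point on the unit d-ball. A minimal sketch of ours assuming SciPy (the names are illustrative), which also yields the equiprobable cell borders needed for the table construction below:

```python
import numpy as np
from scipy.stats import beta

d, N = 512, 8                       # dimension and number of cells per axis
a = (d - 1) / 2                     # (X + 1) / 2 ~ Beta(a, a)

def cdf_proj(x):
    """CDF F_X of Equation (9), via the regularized incomplete beta function."""
    return beta.cdf((np.asarray(x) + 1.0) / 2.0, a, a)

# Equiprobable cell borders: invert the CDF at k / N, k = 0, ..., N.
borders = 2.0 * beta.ppf(np.arange(N + 1) / N, a, a) - 1.0
print(cdf_proj(borders))            # [0, 1/N, 2/N, ..., 1]
```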
B. Construction of MFIP and MFSED Tables
Unlike the HELR framework, where there is a lookup table per feature, the pre-computed inner product and SED require the generation of a single table for all features because all features follow the same PDF. Figure 7(b) shows that the theoretical prediction under the uniformity assumption (the solid line) covers the empirical data (the bar graph), which empirically supports that the points on the unit d-ball are uniformly distributed. Moreover, the classifier's performance could be improved further whenever the training samples' feature vectors deviate from uniformity; however, if the training set is representative of the test data, we may assume that the features are uniformly spread over the ball. We call MFIP the lookup table that pre-computes the inner product and MFSED the one that pre-computes the SED. To generate those tables, we first specify the borders of a cell by equiprobably cutting the x-axis and y-axis according to the PDF of the projection of a point belonging to the unit d-ball onto an axis. Then, we define a table of $N \times N$ cells, where the representative value of a cell $\text{Bn} = \text{Bn}_{x} \times \text{Bn}_{y}$ is given, for the MFIP and the MFSED respectively, by \begin{align*} E_{X,Y}\big[x,y|\text{Bn}\big] &= \frac{E_{X}\big[x|\text{Bn}_{x}\big] \cdot E_{Y}\big[y|\text{Bn}_{y}\big]}{P\left(\text{Bn}\right)} \tag{10}\\ E_{X,Y}\big[x,y|\text{Bn}\big] &= \frac{\left(E_{X}\big[x|\text{Bn}_{x}\big] - E_{Y}\big[y|\text{Bn}_{y}\big]\right)^{2}}{P\left(\text{Bn}\right)} \tag{11}\end{align*}
where \begin{align*} E_{X}\big[x|\text{Bn}_{x}\big] &= \int_{\text{Bn}_{x}} x\, f_{X}(x)\, dx \tag{12}\\ P\left(\text{Bn}\right) &= \frac{1}{N \times N} \tag{13}\end{align*} the latter holding because the cell borders are chosen equiprobably.
Given that the cells’ representative values are real-valued, we apply another quantization mapping, that we call cell quantization, to render them to integers, making them suitable for homomorphic encryption (HE) schemes with an integer plaintext space. The cell quantization takes the cell’s representative value, divides it by a quantization step
Improved Integration of MFBR With HE
Similarly to the HELR framework described in Section II-B, MFBR-based biometric recognition follows the same reference template and probe convention. This convention facilitates the application of an HE layer over the recognition process when both the reference template and the probe are protected. The HELR classifier in [9] was implemented with an additively homomorphic encryption scheme (additive ElGamal encryption), where the encrypted reference template comprises the rows encrypted component-wise, making the number of ciphertexts constituting the encrypted reference template proportional to the sum of the rows' sizes, that is, $d \times N$ ciphertexts.
Figure 2 depicts our improved version of the integration of our MFBR lookup table with HE, that is, MFBRv2, for both the cleartext and encrypted decision modes. For the encrypted decision mode, we use the same procedure described in [9, Sec. V-A], which is suitable for our MFBR lookup table regardless of its integration with FHE. The setting of Figure 2 is a semi-honest three-party protocol comprising a client, a database server, and an authentication server, assuming no collusion between the two servers. The client is the biometric data owner possessing d secret permutations, one per feature, with which it permutes the rows before encrypting the reference template and permutes the probe's column indexes accordingly.
Improved integration of MFBRv2 with HE supporting the SIMD and packing properties, where the encrypted reference template packs all the rows into one ciphertext, unlike MFBRv1 [1], where each row is packed in a separate ciphertext.
Remark 2 (FHE Adaptation of the Comparison Procedure in [9]):
The comparison-with-the-threshold-under-encryption procedure described in [9] needs an adaptation that leverages the SIMD property. Assuming that the final score ciphertext has the final score replicated over all its plaintext slots, the procedure packs into one plaintext vector all the possible integer scores between the threshold and the maximum score. Then, it subtracts this packed plaintext from the final score ciphertext, blinds each slot of the result with a non-zero random value, and hands the blinded ciphertext over for decryption: a zero in one of the slots indicates that the final score reached the threshold (match), while the randomization hides everything else, so only one bit of information is revealed.
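To make the adaptation concrete, here is a plaintext NumPy simulation of ours (each array stands for the plaintext slots of a ciphertext, and homomorphic operations are only mimicked; the names and toy values are not from [9]):

```python
import numpy as np

rng = np.random.default_rng(1)
n_slots = 1024                          # plaintext slots of the BFV ciphertext
S, tau, S_max = 350, 300, 900           # toy final score, threshold, maximum score

enc_score = np.full(n_slots, S)         # final score replicated over all slots
candidates = np.zeros(n_slots, dtype=np.int64)
candidates[: S_max - tau + 1] = np.arange(tau, S_max + 1)  # packed candidate scores

diff = enc_score - candidates           # one homomorphic subtraction
r = rng.integers(1, 2**16, n_slots)     # non-zero random blinding values
blinded = diff * r                      # slot-wise blinding

# After decryption, the checking party only learns whether a slot is zero:
match = bool(np.any(blinded[: S_max - tau + 1] == 0))
print(match)                            # True iff tau <= S <= S_max
```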
Experiments and Evaluations
In this section, we experimentally study the MFIP and MFSED tables in terms of their parameter choices, their biometric performance on a facial dataset, the impact of their integration with HE on the recognition performance, and their runtime performance under encryption. We implement the experiments of Sections V-A and V-B in Python 3.9 and the experiments of Section V-C in C++ using OpenFHE [19] for the BFVrns homomorphic encryption scheme and OpenMP [20] for parallelization. We ran all experiments on a 64-bit Linux Ubuntu 20.04.3 LTS machine with an Intel Core i7-10750H CPU with 4 cores (8 logical processors) rated at 2.60 GHz and 16 GB of memory. We make our source code publicly available.4
A. Parameters Investigation
The MFIP (resp. MFSED) table is parametrized by the feature vector dimension d, a feature quantization level N that fixes the table size $N \times N$, and a cell quantization step that controls the integer range of the individual scores.
Comparison of MFIP and MFSED against non-quantized and non-precomputed IP and SED, expressed in terms of the Pearson correlation coefficient, for three different dimensions of feature vectors: IP vs. MFIP (first row) and SED vs. MFSED (second row). The x-axis indicates the number of quantization bits.
B. Biometric Evaluation
As in [1], our MFBR approach supports any biometric modality that can be encoded as a fixed-length real-valued feature vector, assuming that the feature extractor is well-trained to yield features uniformly spread over the d-ball. To demonstrate this, we evaluate the biometric performance of the MFIP and MFSED tables on facial feature vectors. We used the VGGFace2 dataset [21] to extract facial feature vectors of dimension 512 using ResNet-100 [22] trained by two different losses: one trained with ArcFace [23] and another one trained with CosFace [24]. In the following experiments, we perform 52500 mated comparisons and 623750 non-mated comparisons.
The integration of MFBR tables with HE described in Section IV involves the use of pseudo-random permutations to generate the reference and probe templates. In the mated case,5 the permuted probe selects exactly the scores selected by the non-permuted probe because the reference and the probe are permuted with the same permutation.
1) Fixed Permutation:
In this section, we fix the permutation used to generate the MFBR reference and probe templates, neutralizing the permutation effect as if no permutation were involved in the generation of the MFBR-based reference and probe templates. Figure 4 compares the biometric performance of the baseline IP (resp. SED), as a non-pre-computed and non-quantized approach, against the MFIP (resp. MFSED), as a pre-computed quantized approach, for the lookup table parameters (table size $N \times N$ and cell quantization step) selected in Section V-A.
Biometric performance of the baseline systems (IP and SED) and MFBR (MFIP and MFSED) over the VGGFace2 dataset with features extracted from ResNet-100-ArcFace and ResNet-100-CosFace.
2) Different Permutations:
In this section, we vary the permutations used to generate the MFBR reference and probe templates, analyzing their impact on recognition performance. We assume that each subject possesses a different permutation with which they generate their MFBR-based reference and probe templates. Since IP and SED are equivalent, we focus on the MFIP table and expect the MFSED table to yield similar observations. Figure 5 depicts the DET curves corresponding to the MFIP lookup table with fixed and different permutations, compared to the baseline IP that operates directly on the normalized vectors without pre-computation, quantization, or permutation. For both feature quantization levels considered, using different permutations improves the recognition performance over the fixed-permutation setting.
Biometric performance of the IP baseline system and MFIP over the VGGFace2 dataset with fixed permutation and different permutations.
To further investigate the permutation effect, in Figure 6 we depict the distribution of the mated and non-mated scores. The permutations do not affect the mated scores since these are calculated from probe-reference templates belonging to the same subject; with or without permutations, identical scores are selected because the same permutation is applied to both templates. Hence, the distribution of mated scores is depicted by a single histogram (in green). Conversely, the non-mated scores are substantially impacted by the permutations, as they result from probe-reference templates derived from different subjects, each using a different permutation. Hence, the probe template selects positions that do not correspond to the order in which the reference template was permuted. In each graph, we plot the non-mated score distributions for the fixed-permutation case (the red histogram) and the different-permutations case (the blue histogram). We notice that the permutations narrow the non-mated score distribution relative to the fixed-permutation case, causing a variation in the score threshold chosen at 0.1% FMR. This variation diminishes the overlap between the mated and non-mated score distributions, which explains the improved performance observed in Figure 5.
Score distributions of MFIP with fixed and different permutations.
C. Storage and Runtime Evaluation
In this section, we omit the runtime evaluation of the integration of MFSED with HE, since its runtime is similar to that of the MFIP in [1] for both decision modes, and focus on the runtime assessment of the MFIP. Hence, we consider the following HE-based BTPs: the IP baseline system,6 the integration of MFIP described in [1], and the improved integration of MFIP presented in Figure 2 in Section IV of this work. For both MFIPv1 and MFIPv2, we use the best parameters obtained in Section V-B (see Figure 7(a)) and measure their runtimes using the BFVrns encryption scheme. Our HE-based BTP for the IP baseline system consists of a) homomorphically multiplying two packed normalized feature vectors that were quantized similarly to MFIP, b) rotating the resulting ciphertext and summing so that the slot-wise products accumulate into a final score, and c) delivering the decision according to the chosen decision mode, in the clear or under encryption.
1) Probe-Reference Template Size:
For both MFIP integrations with HE, the protected probe template consists of an integer vector of dimension 512, making it of size 2.1 kB regardless of the security level. The encrypted reference template size is mainly influenced by the implementation of the HE scheme and its parameters, given in Table III, which vary w.r.t. the security level and decision mode. In general, we observe that the encrypted reference template size is identical for the 128- and 192-bit levels, while it doubles for 256 bits. Table IV presents the storage improvement per mode and security level. For the IP baseline, in the cleartext (resp. encrypted) decision mode, the reference template and the probe template consist of one ciphertext each, making them of size 394.2 kB (resp. 525.4 kB) for the 128- and 192-bit security levels and of size 788.4 kB (resp. 1 MB) for 256 bits. For the cleartext (resp. encrypted) decision mode, the MFIPv1 reference template is represented by a set of 512 ciphertexts, making it of size 67.3 MB (resp. 201.5 MB) for the 128- and 192-bit security levels and 134.4 MB (resp. 402.8 MB) for 256 bits. For the improved MFIPv2, we reduce the encrypted reference template to one ciphertext with a size of 263 kB for the 128- and 192-bit security levels and 525.1 kB for 256 bits in both decision modes. Therefore, our improved MFIPv2 significantly reduces the size of the encrypted reference template, making it two to three orders of magnitude smaller than for MFIPv1 across all security levels.
2) Cleartext Decision Mode:
Figure 8(a) compares the speed of the improved MFIPv2 against MFIPv1 [1] in the cleartext decision mode by measuring their runtimes using the BFVrns scheme configured as given in Table III over three security levels (128, 192, and 256 bits). In the cleartext decision mode, the runtime of the improved MFIPv2 is comparable to MFIPv1, with MFIPv2 being slightly faster, by 9 ms to 15 ms. The design difference between the improved MFIPv2 and MFIPv1 [1] resides in how the individual scores are selected and the final score is formed: either by the summation of rotated ciphertexts followed by first-plaintext-slot isolation of the resulting ciphertext (MFIPv1), or by applying a binary mask encoding the to-be-selected scores through a scalar multiplication over the encrypted reference template and summing over its plaintext slots, which does not require first-plaintext-slot isolation (MFIPv2). Hence, the design difference has a minor influence on the speed in the cleartext decision mode but reduces the encrypted reference template to one ciphertext, i.e., two to three orders of magnitude less storage. Table IV demonstrates that MFIPv2 is about 50 times (resp. 94 and 44 times) faster than the baseline IP for a 128-bit security level (resp. 192 and 256 bits) with a more compact encrypted reference template, saving from 65.7 kB to 262.3 kB. The same table shows that MFIPv2 is about 2 times faster than [5], with the same space gain as over the IP baseline.
MFIP (Figure 7(a)) is the lookup table used in Section V-C. Figure 7(b) plots the histogram of a feature coming from a 512-dimensional normalized facial feature vector against our derived PDF of its projection on the unit d-ball with d = 512.
3) Encrypted Decision Mode:
Unlike the cleartext decision mode, where the final score is revealed, the encrypted decision mode reveals only one bit of information. To realize this, in Remark 2, we adapt the procedure described in [9, Sec. V-A] to FHE schemes supporting SIMD, which compares the encrypted final score against all possible scores between the threshold and the maximum score. We consider the score range depicted in Figure 6 by the threshold for the MFIP with different permutations (the blue vertical line) and the maximum score (the black vertical line). This range is small (794 integer scores) and contributes to the speedup of the verification in the encrypted decision mode. Figure 8(b) shows the runtimes of MFIPv1 and MFIPv2 in the encrypted decision mode. In this mode, MFIPv2 outperforms MFIPv1 and makes the runtimes of both decision modes comparable, whereas for MFIPv1 the encrypted decision mode is 2 to 4 times slower than the cleartext decision mode. This improvement stems from how the final score ciphertext is formed after the selection of the individual scores. In MFIPv2, the final score ciphertext already contains the final score replicated over its plaintext slots, while in MFIPv1, replicating the final score over the plaintext slots requires isolating the first plaintext slot via a scalar multiplication with a binary mask, followed by additional rotations and additions, which explains the extra cost.
Discussions
Comparison with other work: The IP-based HE-based BTP in [5] differs from our baseline by cleverly summing the plaintext slots under encryption, reducing both the number of multiplications to one and the number of rotations to $\log_{2}(d)$.
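The slot-summation trick can be mimicked in plaintext as follows (np.roll stands in for a homomorphic rotation; this is our sketch of the rotate-and-add idea, not the code of [5]):

```python
import numpy as np

d = 512                                 # power-of-two dimension
x = np.random.randn(d); x /= np.linalg.norm(x)
y = np.random.randn(d); y /= np.linalg.norm(y)

prod = x * y                            # one slot-wise homomorphic multiplication
acc = prod.copy()
for i in range(int(np.log2(d))):        # log2(d) rotate-and-add steps
    acc = acc + np.roll(acc, -(2**i))   # np.roll mimics a slot rotation
score = acc[0]                          # inner product accumulated in slot 0
assert np.isclose(score, x @ y)
```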
Security of our MFBR lookup table: Our lookup table is generated independently from a dataset and does not contain any personally identifiable information. Its cells are generated in an equiprobable manner so that they have an identical probability for an arbitrary feature observation, reinforcing the table’s security. We assume that our MFBR lookup table is public knowledge. Because we encrypt the rows using a probabilistic encryption scheme, even encrypting the same row twice results in two completely different ciphertexts. As a result, the encrypted rows cannot be linked to the cleartext rows, whose index provides the quantized feature value. The client can make the server blindly select the specific columns without learning their real values by changing the probe’s encoding to the encryption of a vector of ones in the to-be-selected coordinates and zeros elsewhere and multiplying the encrypted probe with the encrypted reference template. However, this introduces one homomorphic multiplication that we avoid by making the client apply a secret permutation of the rows before encrypting the reference template and sending the indexes’ permutation for the selection. This enables a cheap blind selection on the server side since it transforms the protected permuted probe into a binary mask with which the server performs only a scalar multiplication.
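The following plaintext sketch of ours illustrates why the secret permutations make the blind selection cheap: after permuting, the server only performs a scalar multiplication with the binary mask and a slot summation, with no ciphertext-ciphertext multiplication (toy sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 4, 8                                     # toy number of features and columns
ref_rows = rng.integers(0, 100, size=(d, N))    # rows selected from the lookup table

# Client side: permute each row's columns with its own secret permutation,
# then pack everything into what would become a single ciphertext (MFBRv2).
perms = np.stack([rng.permutation(N) for _ in range(d)])
packed_ref = np.concatenate([ref_rows[i][perms[i]] for i in range(d)])

probe_cols = rng.integers(0, N, size=d)         # quantized probe values
mask = np.zeros(d * N, dtype=np.int64)          # binary mask standing in for the probe
for i in range(d):
    # position of the wanted column i after the secret permutation of row i
    mask[i * N + int(np.argmax(perms[i] == probe_cols[i]))] = 1

# Server side: blind selection = scalar multiplication + slot summation.
S = int(np.sum(packed_ref * mask))
assert S == sum(int(ref_rows[i, probe_cols[i]]) for i in range(d))
```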
Conclusion
In this work, we demonstrated that two common biometric comparison measures (the cosine similarity and the squared Euclidean distance) can be pre-computed and quantized without loss of biometric accuracy. Upon these findings, we succeeded in freeing these comparison measures from homomorphic multiplications, enabling a smooth application of an encryption layer. The results of our experiments show that our approach improves on the baseline's biometric performance when tested on facial features, speeds up the encrypted decision mode by a factor of 2 to 4, and shrinks the encrypted reference templates by two to three orders of magnitude. This makes our multiplication-free solution more compact, more accurate, and faster under encryption, in both the cleartext and encrypted decision modes, than its initial version and the baseline. Consequently, our improved integration enhances the storage of encrypted reference templates and effectively closes the runtime gap between the two decision modes.