Open-Source Finger Vein Acquisition Device for Biometric Applications

Vascular biometrics, including finger-vein recognition, are growing both in terms of academic research and industrial systems deployed in real life. In this article, we present an effective, portable, low-cost, and fully open-source finger-vein data acquisition system: the schematics, printed circuit board (PCB) Gerber and drill files, 3-D models for the 3-D printed case, device software and control, and data collection software for PC are made available. We have collected a test database consisting of 30 people/180 class finger-vein images and obtained preliminary recognition results with half-total error rate (HTER) = 2.8%, using state-of-the-art maximum curvature (MC)/Miura matching (MM) pipeline, which is a promising indicator for the viability of the proposed concept and further research.


I. INTRODUCTION
IN THE last decades, the demand for the design of affordable and deployable biometric recognition systems, for applications spanning from access control to consumer devices, border control, and surveillance, to name just a few examples, has been growing at an increasing pace. Mature modalities, already exploited in commercial devices to automatically recognize people, include face, fingerprint, iris, and voice biometrics. However, the aforementioned characteristics are exposed by their nature and, therefore, potentially prone to presentation attacks [1], that is, the presentation of surreptitiously acquired biometric traits to the biometric data capture subsystem, with the goal of an attacker to be recognized by the system.
Among the emerging approaches, the exploitation of vein patterns [2] is receiving increasing interest from academia and industry, mainly because of their advantages. Hand vein patterns, such as the ones in the fingers, palm, dorsum, and wrist, are more robust to presentation attacks than other biometric identifiers, since they are not exposed. In addition, their acquisition can be made using a contactless approach, thus increasing user convenience and usability.
In the late 1980s, vascular patterns were introduced as a potential biometric identifier [4], exploiting both the property of the near-infrared (NIR) light to penetrate the human skin and the absorption of NIR light by the blood hemoglobin. More in detail, visible light is not appropriate to acquire the vein structure under the skin, since human skin and subcutaneous tissue have a high light scattering coefficient, as shown in Fig. 1(a). When using longer wavelengths, i.e., in the NIR region, the scattering coefficients of human tissue drastically decrease, and light can thus penetrate the skin. In the NIR spectrum, light penetration into the human tissue is mainly determined by the light absorption of water (H2O) and hemoglobin. Hemoglobin in our blood can be classified as oxyhaemoglobin and deoxyhaemoglobin, which differ in their NIR absorption spectra, as shown in Fig. 1(b).

6) We collect a database including data from 30 subjects, six fingers for each subject (180 classes), and five samples per class, and make it publicly available, also reporting baseline recognition performance to testify to the effectiveness of the designed finger-vein acquisition device.

This article is organized as follows. An overview of the current research on finger-vein recognition and of the commercially available products is briefly provided in Section II. In Section III, the proposed data collection device and its software are described. Section IV covers the computer-side device control and data collection software developed in Python. Preliminary data analysis is described in Section V. Finally, Section VI draws future directions and conclusions.

The authors are with the Section of Applied Electronics, Department of Industrial, Electronic and Mechanical Engineering, Roma Tre University, 00146 Rome, Italy (e-mail: teodors.eglitis@uniroma3.it; emanuele.maiorana@uniroma3.it; patrizio.campisi@uniroma3.it).

Digital Object Identifier 10.1109/TIM.2023.3324681

Fig. 1. Optical characteristics influencing vein imaging. (a) Scattering coefficients of different biological tissues as functions of wavelengths in the 400-1700-nm region, which covers the visible, NIR-I, and NIR-II windows (blue, green, and red-shaded regions, respectively). They all show reduced scattering at longer wavelengths (image source [3]). (b) Absorption coefficient spectra of endogenous tissue chromophores at their typical concentrations in the human body (image source [3]).

II. STATE OF THE ART OF FINGER VEIN BIOMETRICS ACQUISITION DEVICES
A finger-vein-based biometric device usually comprises a NIR illumination module and an image sensor capable of acquiring data in the NIR spectrum. According to the relative position of the camera and the illuminator, the device can operate in two distinct modalities, namely, transmission or reflection. In the reflection modality, the illumination module is located on the same side of the finger as the imaging sensor; the portion of emitted light that is reflected from the finger is then sensed. With transmission illumination, the observed object lies between the light source and the image sensor: a portion of the emitted light penetrates the skin, emerges through it, and is captured by the camera.
Typically, NIR LEDs with wavelengths in the 810-950-nm range are used for illumination, with most systems using the 850-nm wavelength. It is worth mentioning that the only examples found in the literature of approaches that do not use infrared illumination are the one proposed by Hitachi research [7], which performs vascular biometric recognition using visible light and a mobile phone camera, and Hitachi's VeinID Five, which authenticates a person using a laptop camera [8]. However, no publicly available study has assessed how much of the vascular pattern is visible when compared with a baseline acquired in NIR light, nor which features are extracted by systems working in the visible-light scenario.
A recent and extensive review on the state of the art of finger-vein biometrics, which is out of the scope of this article, is given in [9].

A. Commercial Finger Vein Biometric Systems
Among the companies actively involved in finger-vein biometric research, we can mention Hitachi, which has been on the market since 2006 [10]. Recently, they have released a touchless finger-vein authentication unit [11]. NEC's finger-vein recognition technology combines finger-vein and fingerprint information to recognize both biometric traits simultaneously. SONY Corporation started a research project on finger-vein authentication in 2000, and in 2009, a spin-off, Mofiria Corporation [12], was founded, with the first product released on the market in 2011.

B. Open-Source Devices
Currently, a single open-source finger-vein acquisition device has been released [13]. It allows palmar and dorsal finger-vein image acquisition and has multiple illumination versions: 860-nm LED transmission, 808-nm laser transmission, and 850- and 950-nm reflection illumination. The main disadvantage of the device is its cost, specifically the high cost of both the UI-1240ML-NIR-GL 1280 × 1024 pixel image sensor [14] and the HF9HA-1B lens [15]. Another disadvantage is that the device housing consists of parts made from various materials: 3-D printed plastic parts and laser-cut wood parts. These factors can make the device construction costly and problematic.
Another device, the University of Twente Finger Vein Capture Device [27], could be rebuilt from the description provided, yet the device project is not open source. In addition, the software to control the device is not available.
Affordable alternatives rely on the use of a Raspberry Pi single-board computer with the compatible Raspberry Pi Camera Module 2 NoIR camera [29], as in [30] and [31]. In this case, data collection can be performed using a standalone system equipped with a touch screen or another input device. Such systems can be used in the wild, given that recognition algorithms can be implemented onboard and executed in real time. However, in these cases, the software employed to control the devices and to acquire the images is not open source, and details about the employed illumination are not properly given.

C. Finger Vein Databases
Even though the research community evidently lacks open-source devices, in terms of both hardware and software to control the device and collect data, several finger-vein databases are publicly available for research. A list of such public datasets is given in Table I, where the most important characteristics of the collected data are reported. In more detail, the following hold.
2) The Tsinghua University Finger Vein and Finger Dorsal Texture (THU-FVFDT) database [17] is the first one with data collected during two sessions for each subject.

3) The Shandong University Machine Learning and Applications-Homologous Multimodal Traits (SDUMLA-HMT) database [18] is the first one collecting data from six fingers of each subject.

4) The Hong Kong Polytechnic University Finger Image Database (HKPU-FID) [19] contains data from four fingers, collected during two sessions, for each subject.

5) The University of Twente Finger Vascular Pattern (UTFVP) database [20] is one of the most employed datasets, with high-quality images collected during two sessions.

6) The database collected by the Multimedia Laboratory, Chonbuk National University (MMCBNU_6000) [21], contains personal information regarding age, gender, and blood type of the involved users.

7) The Shandong University Finger Vein Database (CFVD) [22] is the only public dataset collected in reflection modality, instead of transmission like all the other ones.

8) The Finger Vein Universiti Sains Malaysia (FV-USM) database [23] provides information on both finger veins and finger geometry.

9) The Idiap Research Institute Finger Vein Database (VERA) [24] has been collected using the same device used for the UTFVP database [20], and it is the first one providing spoofing attack samples alongside bona fide images.

10) The Protect Multimodal Database (PMMDB) [25] contains samples from several biometric traits, including finger-vein patterns.

11) The South China University of Technology Spoofing Finger Vein Database (SCUT-SFVD) [26] contains spoofing samples, allowing the evaluation of presentation attack detection (PAD) solutions.

12) The publicly available finger-vein database of the University of Salzburg (PLUSVein) [32] is the first of three datasets released by the WaveLab group using different prototype devices.

13) The second one is the PLUSVein-FV3 database [13], collected using an acquisition device simultaneously acquiring three fingers of a hand, and containing images of both palmar and dorsal vein patterns.

14) The third one is the PLUSVein Finger Rotation (FR) database [28], containing images collected from 361 perspectives for each finger acquisition, using a system where the camera and the illumination module rotate around the finger.

As can be seen from Table I, most of the public databases have been collected using illumination at 850 nm, the same wavelength employed in the device here proposed, as reported in Section III-C.
The devices used to capture finger-vein samples, and whose realization details are public, are depicted in Fig. 4. Due to the experimental nature of the employed devices, most of them are rather bulky, with significant distances between the illumination and the acquisition camera, and without any onboard processing capabilities. Even though the devices used in [21] and [23] are smaller than the others, they are still significantly larger than the one proposed in this article, which is the first close-to-wearable prototype for finger-vein acquisition proposed in the literature. It is worth remarking that designing compact devices for finger-vein acquisition poses several additional challenges, with respect to approaches such as those depicted in Fig. 4, mainly related to illumination issues.
Furthermore, compactness, portability, and mobility are factors significantly affecting the usability of a device, which should be taken into account when designing a finger-vein acquisition system [10]. The proposed device has also been designed to incorporate wireless communication capabilities, another aspect positively influencing its usability [10].
All the components selected to build the proposed device are low cost, a choice made to facilitate the creation of replicas and to allow multiple copies to be made. The proposed device, therefore, represents an open-source project particularly suitable for distribution, given that its use should be acceptable to a large audience of users and under different conditions of usage, thus potentially allowing the creation of a vast shared finger-vein dataset through an incremental contribution process.

III. OPEN-SOURCE FINGER VEIN DEVICE: THE PROJECT
Our research aims to develop a fully open-source finger vascular data collection device with the following requirements.
1) Modular Design: Although, currently, we provide only finger-vein data acquisition, we intend that the different main modules of the device, namely, the acquisition and the imaging modules, could be assembled with a different geometry to also acquire wrist and annular vein patterns.

2) Wireless and Small Size: Enabling its usage as a wearable device.

3) Low Cost: Around 25 €/prototype, being available to virtually anyone interested.

4) Widely Available Electronic Components: To ease the manufacturing process.

Fig. 2 shows our proposed open-source finger-vein acquisition device. It uses the transmission modality, as depicted in Fig. 3. At the top of the device, six NIR LEDs are located. The finger is inserted into the device over the opening window, under which the image sensor is located, as shown in Fig. 3. The device allows wireless connection and is lightweight, so the users can move it around, tilt it, and hold it to make the data collection process more ergonomic.
The device's 3-D renderings from different angles, along with the exploded views useful to better understand the device's construction, can be seen in Fig. 5.

A. Development Board and Camera
We have chosen to base our development system on ESP32 family microcontrollers [33], since they have gained considerable popularity in recent years.
The selected development board, that is, the ESP32-CAM AI-Thinker (ESP32-CAM) [34], is one of the smallest-footprint development boards supporting a camera, and it is represented in Fig. 6. The ESP32-CAM is equipped with an ESP32-S system-on-a-chip (SoC) microcontroller with a dual-core, 32-bit, 240-MHz CPU, built-in WiFi, and security features (IEEE 802.11 support, secure boot, flash encryption, and cryptographic hardware acceleration). The ESP32-CAM has an external 8-MB pseudostatic random access memory (PSRAM) that supports the OmniVision OV2640 image sensor. Besides, the ESP32-CAM is equipped with a TransFlash (TF) card slot and two LEDs, one of which can be used as a flash for the camera. If the TF card is not used, five general-purpose input/output (GPIO) ports can be used, which can be extended to six by removing the built-in flash. More details are given in the assembly manual.

Fig. 4 (caption fragment): [20] and VERA [24], (e) PLUSVein-FV3 [13], and (f) PLUSVein-FR [28]. The reported images are adapted from the corresponding papers.
The OV2640, released in 2005, was the world's first fully integrated 1/4-in, 2-MP, 1600 × 1200 (UXGA) resolution image sensor. The sensor is purchasable in compact camera module form with an M7-format lens. The sensor itself is not equipped with a NIR filter and can be purchased with multiple lens options. In our experience, the best lens for the application is one with a 160° field of view, f/2.4 aperture, and without a NIR filter.
To make the device easily reproducible, we have developed two PCBs, the main board (MB) and LED illumination board (IB), as detailed in Sections III-B and III-C.

B. Main Board
The MB houses the electronic parts necessary to power and run the ESP32-CAM. It can be assembled in two configurations: battery powered and powered through the USB connector. It has the following functions:
1) housing of the optional battery charging module;
2) power management for the LEDs and programmer UC [1-A, 3.3-V low-dropout voltage regulator (LDO)], power switch, and power delivery to the ESP32-CAM;
3) ESP32 programmer (e.g., CH340C USB-to-UART chip);
4) USB port for optional charging, power delivery, programming, and serial communication (e.g., for debugging);
5) necessary electronic components for powering the six NIR LEDs.
The MB's layout is represented in Fig. 7, and the components are listed in Table II. The MB's principal schematic can be found in the assembly manual.

C. LED Illumination Board
The IB is located in the top part of the device, as shown in the device split view in Fig. 3. The 8.4-cm-long board layout is shown in Fig. 8, and the components are listed in Table III. We use six OSRAM 850-nm LEDs with a maximal I_F = 70 mA and a narrow illumination half-angle of 10°, with the possibility to set a different duty cycle and, thus, brightness for each of the LEDs. The IB's principal schematic can be found in the assembly manual.

D. Battery and Battery Control
As pointed out, our system can be powered using a rechargeable battery or a USB cable. However, we have noticed that noise from the USB cable connection can sometimes affect the attainable image quality. In addition, using the battery allows better usability of the device, since no wires need to be connected to it while collecting data. This can be handy for people with movement disorders, since the device can be freely moved around. We use the widely available HW-357 battery charging unit, equipped with the LTC4056 constant-current/constant-voltage Li-ion battery charger and the STI3508 current-mode step-up converter. The charging current is 2 A, while the maximum output current at 5 V is 1.4 A. We use a 503035 lithium polymer battery (3.7 V, 500 mAh) with a maximum discharge current of 1.5 A, which is the limiting power constraint of the system.

E. 3-D Printed Enclosure
All of the parts mentioned above are housed in a 3-D printed enclosure. A detailed view of the enclosure can be seen in the 3-D renderings given in Fig. 5. Altogether, there are seven parts, two of which are permanently attached together and are not meant to be disassembled. Two of the parts each host two brass nut inserts.
Detailed instructions for manufacturing the 3-D printed parts and the prototype assembly are described in the assembly manual.

F. ESP32 Software
The ESP32 runs C code. The device's software is based on the "camera with web server" example [35] by Espressif Systems. We use Visual Studio Code to adapt the example code to our needs. The example provides a web server hosted on the ESP32, from which one can control the camera. The device is connected to a WiFi network, and the WiFi router issues a local IP address to it. Alternatively, the device can work as an access point, where one connects the computer to the service set identifier (SSID) assigned to the device. In this case, the device IP address is set manually. From the web page, as shown in Fig. 9, it is possible to control the device, adjust camera settings, and stream photographs and videos. For our project, we have added controls for driving each LED. The device is controlled using the HTTP GET method: when adjusting any of the controls on the web page, a GET request is sent to the device, which processes it accordingly. The same communication can be established from other sources, e.g., from a Python script, as detailed in Section III-H. The software's inner modules are outlined in the diagram given in Fig. 10, where the overall device control mechanism is displayed. It has two main parts: the device control and the database manager.
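As a rough illustration of this GET-based control scheme, a single setting could be issued from a Python script as follows. Note that the endpoint path and parameter names (`/control`, `var`, `val`) and the LED channel name are assumptions modeled on Espressif's camera web server example, not confirmed details of this project.

```python
import urllib.request

def build_control_url(host, var, val):
    # Compose the GET request URL for one camera/LED setting
    # (endpoint and parameter names are assumed, not confirmed).
    return f"http://{host}/control?var={var}&val={val}"

def set_control(host, var, val, opener=urllib.request.urlopen):
    # Issue the request; the device applies the setting and replies 200 on success.
    with opener(build_control_url(host, var, val), timeout=5) as resp:
        return resp.status == 200

# Example: request half brightness on a hypothetical LED channel "led0".
url = build_control_url("192.168.4.1", "led0", 128)
```

The same helper can be reused by the data collection GUI, since every device action is just another GET request.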
We have developed data collection software in Python, with a graphical user interface (GUI) depicted in Fig. 11.

G. Device Control
The device control module is responsible for controlling every action of the device. When connected, default image sensor parameters are programmed, e.g., white balance settings, frame size, and so on. The module can send commands to the device, e.g., set the illumination or request an image, and receive data. As described in Section III-F, the communication is made over WiFi using GET requests. The main part of the device control is the automatic illumination module (AIM).

H. Automatic Illumination Module
Unfortunately, the OV2640 image sensor has limited automatic exposure control (AEC) options, i.e., it is impossible to set the image area used for the exposure calculation. This would be of paramount importance in the considered scenario to set the correct exposure of the ROI in our images. An example of a finger-vein image can be seen in the Python GUI depicted in Fig. 11. This has driven the need to operate the image sensor in manual exposure mode and to develop an AIM.
There are three adjustable parameters: illumination i, exposure e, and gain g. We obtain finger-vein images by varying these parameters until we find an image with acceptable quality. Ma et al. [36] propose using contrast to evaluate the finger-vein ROI quality. Following [36], the contrast c can be calculated as

c = sqrt((1/N) Σ_{i=1}^{N} (p_i − p̄)²)

where N is the number of pixels, p_i is the value of the generic pixel i, and p̄ is the average pixel value of ROI_a (a stands for acquisition). We calculate c in ROI_a, a temporary square window used for illumination evaluation, represented as a green rectangle over the finger-vein image in Fig. 11. Since the possible pixel values in an 8-bit image are in [0, 255], we assume that the image is overexposed if any ROI_a pixel value is over 235.
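As a sketch, the contrast measure and the overexposure test described above can be written as follows; the root-mean-square form of c is assumed from the variables defined in the text (N, p_i, and the ROI mean), not taken verbatim from [36].

```python
import numpy as np

def roi_contrast(roi):
    # Contrast c over ROI_a: root-mean-square deviation of pixel values
    # from the ROI mean (RMS form assumed from the variables in the text).
    roi = np.asarray(roi, dtype=np.float64)
    return float(np.sqrt(np.mean((roi - roi.mean()) ** 2)))

def overexposed(roi, limit=235):
    # The image is deemed overexposed if any 8-bit ROI_a pixel is over 235.
    return bool(np.asarray(roi).max() > limit)
```

For a uniform ROI the contrast is 0; the AIM accepts an image only when the ROI is not overexposed and its contrast exceeds the chosen threshold.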

IV. DEVICE CONTROL AND DATA COLLECTION SOFTWARE
In Fig. 12, we can observe the finger-vein image quality, expressed in terms of the ROI_a contrast c, versus the exposure parameterized with respect to illumination i and exposure e at the default g = 2. We can see that this relationship is linear before the image gets overexposed. That is why we have predefined 17 different (i, e) pairs. We start by using g = 2 and the last (i, e) combination, namely, the largest in value (i = 255 and e = 1200). If the obtained image is overexposed, we search for the optimal c and, thus, the best image for the current g; if c is over the threshold (c = 45), we use that image; otherwise, we increase g. This way, we determine the upper g value and look for the smallest (i, e) combination (minimum corresponding to i = 64 and e = 300). If the image is overexposed, we decrease g. Otherwise, we proceed to recursively search for the largest c over the (i, e) pairs. We iterate with the middle (ninth) (i, e) combination and, depending on c and overexposure values, continue from there.
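The search over the predefined (i, e) pairs for a fixed gain can be sketched roughly as follows. The 17 pairs are assumed to be linearly spaced between the stated extremes, `capture` is a hypothetical stand-in for the device call returning the ROI contrast and an overexposure flag, and the bisection-style walk is a simplification of the middle-out procedure described above.

```python
import numpy as np

# 17 predefined (i, e) pairs, assumed linearly spaced between the stated
# extremes (i = 64, e = 300) and (i = 255, e = 1200).
I = np.linspace(64, 255, 17).round().astype(int)
E = np.linspace(300, 1200, 17).round().astype(int)
PAIRS = list(zip(I.tolist(), E.tolist()))
C_THRESHOLD = 45  # contrast threshold from the text

def find_exposure(capture, g=2):
    # Walk the ordered (i, e) pairs for a fixed gain g: an overexposed image
    # pushes the search toward dimmer pairs; otherwise keep the pair with the
    # largest contrast seen so far and probe brighter pairs.
    lo, hi = 0, len(PAIRS) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        c, over = capture(*PAIRS[mid], g)
        if over:
            hi = mid - 1
        else:
            if best is None or c > best[0]:
                best = (c, PAIRS[mid])
            lo = mid + 1
    return best  # (contrast, (i, e)), or None if every probed pair overexposes
```

If the best contrast found is still below C_THRESHOLD, the outer AIM loop would adjust the gain g and repeat the search, as described in the text.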

A. Database Manager
The database manager uses the Python SQL toolkit SQLAlchemy and is loosely based on the Tksqla [37] example. An SQL database and a typical file-based database are used to keep a detailed record of each of the obtained finger-vein images. The GUI is designed to work with different acquisition modalities, that is, managing data collection for finger vein, and for wrist vein or palmar vein in future hardware implementations, and collecting multiple databases for each modality, as shown in Fig. 11. When collecting data, the data processor selects the device, the used database, and the subject whose data are being collected. The GUI also allows viewing the collected data, i.e., each attempt/finger/all subjects' data, to ensure that the data are collected correctly and to better represent the data collection to the user.
Each subject is registered with a unique ID; gender, age, and nationality information are also saved. The number of sessions and samples to be taken is hard-coded.
Currently, the database manager is used to collect five acquisitions per finger and six fingers for every subject, with three fingers on each hand. Data collection should be performed for at least two sessions. The software is designed so that the subsequent attempt acquisition starts only when the first set of acquisitions is completed, with one image collected per each of the six fingers.
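The project's database manager is built on SQLAlchemy; purely as an illustration of the two-part record keeping described above (SQL metadata plus file-based image storage), a minimal stdlib sqlite3 sketch of a comparable schema might look like this. Table and column names are illustrative assumptions, not the project's actual schema.

```python
import sqlite3

def init_db(path=":memory:"):
    # Subjects carry ID/gender/age/nationality; each acquisition records
    # which finger and attempt it belongs to and where the image file lives.
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE subject (
            id INTEGER PRIMARY KEY,
            gender TEXT, age INTEGER, nationality TEXT);
        CREATE TABLE acquisition (
            id INTEGER PRIMARY KEY,
            subject_id INTEGER REFERENCES subject(id),
            finger TEXT,       -- e.g. 'left-index'
            attempt INTEGER,   -- 1..5, i.e. A1..A5
            image_path TEXT);  -- location in the file-based image store
    """)
    return conn

conn = init_db()
conn.execute("INSERT INTO subject VALUES (1, 'F', 30, 'IT')")
conn.execute("INSERT INTO acquisition VALUES (NULL, 1, 'left-index', 1, 'img/s001_li_a1.png')")
```

Keeping image files on disk and only their paths in SQL mirrors the split the text describes between the SQL record and the file-based database.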
Detailed instructions about data collection can be found in the data collection manual.

V. DATA ANALYSIS

This section describes the data collected to evaluate the proposed finger-vein acquisition device and the tests performed to assess its effectiveness.

A. Database
Although data processing is not the primary goal of this article, we have collected an initial dataset consisting of finger-vein samples from 30 people, six fingers for each subject, for a total of 30 × 6 = 180 classes, to evaluate the project feasibility and perform an initial assessment of the system. The database is publicly available; see the repository. When performing data collection, we first explained the process to the involved subjects and had them sign a consent form. Then, we collected five samples from each finger, A = {A_1, ..., A_5}. Specifically, we obtained the A_1 image for each of the six fingers in a controlled manner, instructing the subject through the process. After that, we captured the A_2-A_5 images, allowing the subject to use the system freely while being guided through the data collection process by the data collection GUI (Fig. 11). This resulted in a challenging dataset, with finger-vein samples having different qualities, typically getting slightly worse with each additional sample.

B. Processing
The bob.bio.vein framework [6] is used to calculate the recognition results, expressed in terms of false match rate (FMR), false nonmatch rate (FNMR), and half-total error rate (HTER), that is, the average of FMR and FNMR computed at the threshold where their measured values are closest.
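As a sketch of how such an operating point can be computed from raw score lists, the following performs a simplified threshold sweep; this is an illustration of the definition above, not the bob.bio.vein implementation.

```python
import numpy as np

def fmr_fnmr(genuine, impostor, t):
    # Scores at or above threshold t are accepted.
    fmr = float(np.mean(np.asarray(impostor) >= t))   # impostors wrongly accepted
    fnmr = float(np.mean(np.asarray(genuine) < t))    # genuine wrongly rejected
    return fmr, fnmr

def hter(genuine, impostor):
    # Sweep every observed score as a candidate threshold; the HTER is the
    # mean of FMR and FNMR at the threshold where the two rates are closest.
    best_gap, best_hter = None, None
    for t in np.unique(np.concatenate([genuine, impostor])):
        fmr, fnmr = fmr_fnmr(genuine, impostor, t)
        gap = abs(fmr - fnmr)
        if best_gap is None or gap < best_gap:
            best_gap, best_hter = gap, (fmr + fnmr) / 2
    return best_hter
```

With perfectly separable score distributions the sweep finds a threshold where both rates are zero, yielding HTER = 0.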
For the experiments, we use the bob.bio.vein maximum curvature (MC) [38] feature extraction with the Miura matching (MM) pipeline [39], to enable result comparison with the baseline results of similar finger-vein databases.
Since the images in our database are acquired differently from those of other legacy databases, we have developed a dedicated ROI extraction strategy, by considering the fixed finger-holder opening of the device and adopting the ROI extraction technique proposed in [40]. Apart from that, we use the default MC pipeline, including HuangNormalization, with the MC parameter σ = 8 and the MiuraMatch parameters ch = 100 and cw = 220.

C. Data Protocols and Results
We analyze the collected data according to two distinct protocols.
1) The normal operation mode (Nom) protocol uses A_1 as the enrolment (reference) sample and A_2 as the probe sample. Results are calculated in an all-versus-all manner. Since we have n = 180 classes (fingers), we perform a total of n² = 32 400 comparisons, from which n = 180 genuine scores and n² − n = 32 220 zero-effort impostor scores are obtained.

2) The multi-image model (Mim) protocol uses the A_1 and A_2 images to construct the reference model, while the A_3-A_5 images are used as probe samples, thus utilizing the whole dataset. Fusion of the enrolment samples takes place in the scoring phase, using mean score fusion. This means that features are extracted from all images, and each probe's features are compared against the features of both reference model images. Then, the mean score value is produced for each probe. This resembles a situation where the user needs to provide two biometric samples in the enrolment stage. Since we use all three remaining attempt images as separate probes, there are 3 × n = 540 genuine comparisons and (n − 1) × 3 × n = 96 660 zero-effort impostor scores.

We achieve HTER = 2.8% with the Nom protocol and HTER = 2.6% with the Mim protocol, a promising indicator that the collected data are of reasonably good quality. As shown in Table IV, as further proof of the effectiveness of the proposed device in capturing finger-vein images with a quality suitable to properly perform biometric recognition, we also apply the processing pipeline outlined in Section V-B to two other public databases, namely, the [20] and [24] datasets. The obtained results testify that the performance achievable on our database is within the range covered by other publicly available datasets. It can also be observed that tests applying the MC [38] feature extraction with MM [39] were performed, for instance, in [41], where equal error rates (EERs) of 2.8% and 2.2% were, respectively, achieved for the PKU [16] and HKPU-FID [19] public datasets, while significantly worse results have been reported for the PMMDB-FV database in [25], where the same processing exploited here led to EER = 9.75%. All the obtained results testify that the quality of the finger-vein data collected with the proposed device is in line with, if not better than, that of several other public databases.
It is worth mentioning that a further recognition performance improvement can be expected with more sophisticated preprocessing and an adapted recognition pipeline, involving, for instance, deep learning approaches [42].

VI. CONCLUSION AND FUTURE DEVELOPMENTS
We have here presented a novel open-source finger-vein data acquisition device, providing all the necessary instructions, files, and software, so that other researchers can replicate our system. To verify the viability of the proposed research tool, we performed a preliminary data analysis based on samples acquired from 30 subjects (180 classes) and made the collected vascular image database public. Using a legacy MC/MM processing pipeline with a custom ROI extractor and adjusting the MC σ parameter and the MM search displacement parameters leads to a promising HTER = 2.8%, which suggests that the collected data are of reasonable quality. However, allowing the subjects to use the system freely introduces substantial variability to the data, opening the doors for research on more rigorous recognition algorithms and on device usability questions.
In summary, the main advantages of the device presented are listed hereafter.
1) Fully Open-Source Finger-Vein Data Acquisition System: All the components of the device are made public, including the schematics, the list of electronic components, the 3-D files for printing the enclosure, the software to control the acquisition process, and the image acquisition software.
2) Standalone Device: It does not need any physical connection with external peripherals.
3) Small Size: The device is small compared with other open-source devices for finger-vein acquisition.
4) Modular Design: Although, currently, we provide only finger-vein data acquisition, the different main modules of the device, namely, the acquisition and the imaging modules, could be assembled with a different geometry to also acquire wrist and annular vein patterns.
5) Low Cost: Around 25 €/prototype, therefore highly affordable.
6) Widely Available Electronic Components: Used to ease the manufacturing process.

On the other side, it has to be observed that, even though the device is small in size compared with other research prototypes employed so far, it cannot be considered wearable. Some possible enhancements that could be developed for the proposed prototype are synthetically described hereafter.
1) ROI dynamic detection and its use to automatically optimize both the illumination and the exposure parameters. This process needs a design compromise between the implemented algorithms and the limited onboard processing capabilities.
2) Miniaturization of the proposed device by further investigating the use of specialized optics to reduce the focal length needed for image acquisition. This improvement could lead to an actual wearable device.
3) Adjustment of the current modules and architectures for the collection of other hand vein patterns, such as those related to dorsal and wrist veins.
4) Collecting and making public a larger database, possibly joining the efforts of other research groups using the same open-source device.

APPENDIX
The single modules of this fully open-source project can be found at https://gitlab.com/biomedia4n6-public/open_finger_vein_device.
1) Hardware: The schematics of the control and illumination boards are given, along with the component list.
2) 3-D Enclosure Model: The .stl files of the different parts of the device housing are provided.
3) Software: The source files of the software to control the device and to manage the database are given.
4) Assembly Manual: Details on how to assemble the different parts of the device are provided.

Manuscript received 7 July 2023; accepted 11 September 2023. Date of publication 17 October 2023; date of current version 9 November 2023. The Associate Editor coordinating the review process was Dr. Zhenbing Zhao. (Corresponding author: Emanuele Maiorana.)

Fig. 3. Artistic representation of the device operation principle. Light from six LEDs (top) penetrates the skin, and the image sensor captures the vascular data through the opening window. The device utilizes transillumination: NIR light penetrates the skin and tissue.

Fig. 9. Web server running on the ESP32-CAM. Layout (left) modified by adding the six-channel illumination control. Example vascular image (manual illumination control).

Fig. 11. Device control and data collection software's GUI. Left, from the top: acquired finger-vein image (green rectangle: ROI_a); device controls; console output. Middle: manual illumination i, exposure e, and gain g override. Right: data collection controls (selection of device, database, and subject; attempt's data collection; visual guide of which finger's data are currently obtained).

TABLE IV: BASELINE MIURA MATCH/MAXIMUM CURVATURE PIPELINE RESULTS ON LEGACY DATABASES AND OUR DATABASE FOR ALL-VERSUS-ALL PROTOCOLS. IN BRACKETS: MISCLASSIFIED SAMPLES/TOTAL NUMBER OF COMPARISONS PER GIVEN RATE.