Impact of Reconstruction Algorithms on Diffuse Correlation Tomography Blood Flow Imaging

Near-infrared diffuse correlation tomography (DCT) is an emerging technology for non-invasive imaging of tissue blood flow. Flow imaging quality relies on the image reconstruction algorithm, which, however, has been little studied thus far. In this study, we conducted the first investigation of the impact of the reconstruction algorithm on DCT blood flow imaging. Two reconstruction algorithms were compared: the finite element method (FEM), representing the imaging framework based on a partial differential equation, and the Nth-order linear (NL) approach, representing the imaging framework based on an integral equation, which we recently proposed to incorporate tissue morphological information. Both computer simulations and phantom experiment outcomes show that the NL approach performs much better than the FEM in image accuracy and in homogeneity over the anomaly and background, given the same source-detector configuration and spatial resolution. This study demonstrates that DCT blood flow imaging is substantially influenced by the reconstruction algorithm, and these findings have great potential to guide future algorithm design and optimization.


I. INTRODUCTION
Many diseases are associated with abnormal tissue hemodynamics such as blood flow, blood oxygenation or oxygen metabolism [1]-[3]. Longitudinal monitoring of hemodynamics at the microvasculature level is important for early disease diagnosis and treatment evaluation. The technologies currently used in the clinic for probing hemodynamics are limited by invasiveness (e.g., partial oxygen pressure [4]), sensitivity only to large vessels (e.g., ultrasound Doppler [5]), or heavy and expensive instrumentation (functional MRI [6] or ASL-MRI [7]). Near-infrared diffuse optical spectroscopy (NIRS) is a noninvasive, portable and inexpensive modality that has been developed since the 1970s [8]. Conventional NIRS utilizes light intensity signals to detect the blood oxygen content of biological tissues, including the oxy- and deoxy-hemoglobin concentrations as well as tissue blood oxygen saturation [9], [10]. (The associate editor coordinating the review of this manuscript and approving it for publication was Ting Li.) In recent years, a relatively new NIRS technology, namely diffuse correlation spectroscopy (DCS), has gained rapid development [10], [11]. DCS measures microvasculature blood flow by using the temporal autocorrelation (g1(τ)) of the light electric field to quantify the movement of red blood cells, offering a direct approach to probing tissue blood flow [11]. Both NIRS and DCS have been translated to numerous physiological and clinical studies, providing a diagnostic basis for a variety of diseases such as cerebral ischemia, tumors, muscle ischemia and skin wounds [12]-[15]. In some situations, NIRS and DCS are integrated to form a hybrid optical instrument, allowing extraction of the tissue metabolic rate [16]. Moreover, both NIRS and DCS have been extended from spectroscopy to tomography, namely diffuse optical tomography (DOT) and diffuse correlation tomography (DCT), respectively, offering spatial contrast of oxygenation and blood flow at the microvasculature level [1], [3].
In contrast to NIRS and DCS, wherein only a few source-detector (S-D) pairs are needed, DOT and DCT require numerous S-D pairs for hemodynamic imaging, substantially elevating the instrumentation cost [1], [3]. Additionally, reconstruction of oxygenation or blood flow images from DOT or DCT data is a mathematically ill-posed problem [17], because the S-D pairs (a few hundred or fewer) are far fewer than the unknown voxels in the images (thousands). Therefore, the reconstruction approach is critical for hemodynamic image quality, and it is a key topic in the field of DOT and DCT. For oxygenation imaging with DOT, the primary reconstruction approach is the finite element method (FEM), owing to the fact that diffused light obeys a partial differential equation (PDE) [17]. Over the years, many FEM-based approaches have been established to solve the DOT problem, and open-source code packages are available, of which Nirfast and TOAST are the most widely adopted [17], [18]. For blood flow imaging with DCT, an analytical solution was utilized at an early stage and applied to rat brain studies [19], [20]. However, simple and idealized geometries (e.g., semi-infinite or slab shapes) of the target tissues must be assumed in order to derive the analytical solution from the PDE. In fact, biological tissues generally have irregular geometries with large curvature, so the assumption of a simple geometry causes errors in the blood flow images reconstructed from DCT data.
Over the years, the FEM has emerged as an alternative solution for DCT [21], [22], because the light temporal autocorrelation function also satisfies a PDE, termed the correlation diffusion equation [1], [22]. Moreover, there are similarities between DOT and DCT in PDE form, making it possible to translate DOT approaches into DCT. For example, the open-source FEM software Nirfast (www.nirfast.org) has been implemented for DCT to reconstruct blood flow images of breast tumors [21]. The FEM can reflect the tissue geometry, thereby overcoming the errors introduced by the simple-geometry assumption. Nevertheless, while the DCT data contain the autocorrelation function g1(τ) at hundreds of delay times (τ ranging from nanoseconds to seconds), only the g1(τ) value at a single delay time (e.g., τ = 3.2 × 10⁻⁶ s [22]) can be utilized for DCT if the FEM approach is adopted. Although signal denoising methods are available, the FEM remains sensitive to data noise, because in principle it is unable to fully utilize the DCT data.
In recent years, we have proposed a new approach, namely the Nth-order linear (NL) algorithm, to account for the irregular geometry and heterogeneity of tissue [23], [24]. Unlike the analytical solution or the FEM, the NL approach does not aim to find the solution to the PDE. Instead, it is based on the integral form of g1(τ), i.e., it treats the light temporal autocorrelation function as the weighted contribution from individual photons. By introducing the mathematical tool of the Taylor polynomial, the integral form of g1(τ) becomes computable (see details in Section II(C)). With the NL approach, the irregular geometry and tissue heterogeneity are well reflected through photon information such as path lengths and weight factors. More importantly, the NL approach enables full utilization of the DCT data via linear regression of g1(τ) at multiple delay times. The NL approach for extracting DCS blood flow information has been extensively validated through computer simulations, in vivo animal experiments and human experiments [23], [24]. Recently, we extended the NL approach to DCT, and computer simulations on human head images verified its capability for accurate and robust reconstruction of blood flow images [25].
As mentioned earlier, the FEM and NL approaches are based on different physical models of the light autocorrelation function (i.e., a PDE [22] and an integral equation [24], respectively), leading to different computation procedures for blood flow imaging. Both the physical model and the reconstruction algorithm (together, the ''imaging framework'') are vital for image quality. However, the impact of these factors on the DCT image has rarely been reported.
In this study, we extensively investigated the physical modeling of DCT and the derived reconstruction approaches, and we compared the imaging frameworks through computer simulations and phantom experiments. For this purpose, two popular categories of imaging frameworks, i.e., the FEM and the NL approach, were compared. The FEM was implemented with Nirfast, the most widely used open-source package, consisting of a rendering interface and computing codes. The NL approach was created by us and has been validated on a realistic human head for cerebral blood flow imaging. For fair comparison, both imaging frameworks were implemented on the same geometrical models, and the number of unknowns (image voxels or element nodes) as well as the spatial resolution were kept balanced. The image quality was evaluated by the mean error between the reconstructed flow values and the target ones in computer simulations. Besides, the image homogeneity over the anomaly and background from both computer simulations and liquid phantoms was investigated.

II. METHODS
In this section, we will briefly introduce the DCT theory and instrumentation. Then, the FEM and NL imaging frameworks are depicted, followed by their computational procedures. Additionally, we illustrate the setup configurations for computer simulations and phantom experiments. The evaluation criteria for both imaging frameworks are finally provided.

A. DCT PRINCIPLE
The DCT (as well as the DCS) evolved from the technology of dynamic light scattering, in which photons are collected after being scattered once from a thin sample containing moving scatterers. In biological tissues, however, photons are scattered multiple times before escaping from the tissue, and individual photons follow different transmission trajectories. Therefore, the light electric field temporal autocorrelation function g1(τ) is modeled as the integral of the autocorrelation contributions from all detected photons, in the following form [26]:

g1(τ) = <E(0)·E*(τ)> / <|E(0)|²> = ∫ P(s)·exp[−(1/3)·k0²·<r²(τ)>·s/l*]·ds (1)

where E(0) and E(τ) are the light electric field at time 0 and τ, respectively, and E*(τ) is the complex conjugate of E(τ). P(s) is the normalized distribution of the individual photon path length s, k0 is the light wave vector magnitude, l* is the photon random-walk step length, given by 1/μs′ (μs′ is the reduced scattering coefficient), and τ is the delay time of the autocorrelation function. <r²(τ)> = 6·D_B·τ is the mean-square displacement of the moving scatterers, where D_B is the effective diffusion coefficient. A factor α is added to <r²(τ)> (i.e., <r²(τ)> = 6·α·D_B·τ) to account for the fact that not all scatterers in the tissue are ''moving''; here α is the ratio of moving scatterers to the total number of scatterers. The combined term, αD_B, is referred to as the blood flow index (BFI) in biological tissues [27]. Since the light electric field E(τ) cannot be directly measured, we instead calculate the light intensity temporal autocorrelation function g2(τ); g1(τ) is then derived from g2(τ) via the Siegert relation [28]. Conventionally, by assuming a specific boundary condition (most often, semi-infinite), the BFI, which represents the movement of red blood cells, is extracted analytically from the correlation diffusion equation.
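As a minimal illustration of the Siegert step, the following Python sketch recovers |g1(τ)| from a synthetic g2(τ) curve. The single-exponential decay rate and the coherence factor β below are illustrative values, not taken from this study:

```python
import numpy as np

def g1_from_g2(g2, beta):
    """Recover |g1(tau)| from measured g2(tau) via the Siegert relation:
    g2(tau) = 1 + beta * |g1(tau)|**2."""
    # Clip small negatives that can arise from measurement noise before sqrt.
    return np.sqrt(np.clip((g2 - 1.0) / beta, 0.0, None))

# Synthetic check: a single-exponential field autocorrelation (illustrative).
tau = np.logspace(-7, -3, 50)   # delay times (s)
gamma = 1.0e4                   # decay rate (1/s), assumed for the example
beta = 0.5                      # coherence factor, assumed
g1_true = np.exp(-gamma * tau)
g2 = 1.0 + beta * g1_true**2    # forward Siegert relation
g1_rec = g1_from_g2(g2, beta)   # matches g1_true
```

In practice β is estimated from the measured g2 intercept at the smallest delay times before this inversion is applied.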
Both the analytical solution and the finite element method are derived from the following correlation diffusion equation [22], [26]:

∇·[D(r)·∇G1(r,τ)] − [v·μa(r) + (1/3)·v·μs′(r)·k0²·α·<r²(τ)>]·G1(r,τ) = −v·S(r) (2)

Here, G1(r,τ) is the unnormalized electric field temporal autocorrelation function, τ is the delay time, r is the position vector, v is the light speed in the medium, λ is the wavelength, μa is the medium absorption coefficient, D(r) ≈ v/(3·μs′(r)) is the medium photon diffusion coefficient, S(r) is the continuous-wave isotropic source term, and k0 is the wave number of the incident light field.

B. FINITE ELEMENT METHOD FOR DCT BLOOD FLOW IMAGING
The finite element method (FEM) for solving DOT oxygenation imaging has been widely adopted and reported in numerous studies [3], [17]. The basic principle and computing procedures are described in detail elsewhere [17]. As mentioned earlier, DCT has a PDE form similar to that of DOT, allowing DOT solutions to be directly translated to DCT. Specifically, the term (1/3)·μs′(r)·k0²·α·<r²(τ)> in Eq. (2) is defined as the ''dynamic absorption'' [22], which can be derived within the DOT software package. It is noted that the input variable in Eq. (2) is the unnormalized light electric field temporal autocorrelation function (i.e., G1(τ)), rather than the light intensity signal used in the DOT equation.

C. NL APPROACH FOR DCT BLOOD FLOW IMAGING
As mentioned earlier, the NL approach does not seek the solution to the correlation diffusion equation. Instead, this algorithm is based on the integral form of the temporal autocorrelation function g1(τ), which takes the following form for the (m, j)th S-D pair [24]:

g1(m,j,τ) = ∫···∫ P(m,j,s1,...,sn)·exp[−(1/3)·Σ_{i=1}^{n} k0²(i)·<ri²(τ)>·si/li*]·ds1···dsn (3)

Here, P(m,j,s1,...,sn) is the normalized distribution of the detected photon path lengths over the n elements, k0(i) is the light wave vector magnitude in the ith element, li* is the photon random-walk step length in the ith element, equal to 1/μs′(i) (μs′(i) is the reduced scattering coefficient in the ith element), and τ is the delay time of the autocorrelation function. <ri²(τ)> is the mean-square displacement of the moving scatterers in the ith element. Depending on the physical model, <ri²(τ)> takes different mathematical forms. According to outcomes derived from previous experiments, the Brownian motion model, i.e., <ri²(τ)> = 6·α·D_B(i)·τ, was found to fit real DCS/DCT data well. The variable αD_B(i) denotes the blood flow index (BFI) in the ith element.
On the other hand, the continuous function g1(m,j,τ) can be written in the following form of an Nth-order Taylor polynomial:

g1(m,j,τ) = Σ_{k=0}^{N} [∂^k g1(m,j,0)/∂τ^k]·τ^k/k! + o(τ^N) (4)

Combining Eq. (3) and Eq. (4) (i.e., substituting <ri²(τ)> = 6·α·D_B(i)·τ and li* = 1/μs′(i) into Eq. (3) and expanding) and performing a few mathematical derivations, we obtain the first-order (N = 1) and Nth-order (N > 1) approximations when τ is sufficiently small:

g1(m,j,τ) − 1 ≈ −2τ·Σ_{i=1}^{n} k0²(i)·μs′(i)·αD_B(i)·<si> (5)

g1(m,j,τ) − 1 ≈ Σ_{k=1}^{N} [(−2τ)^k/k!]·<[Σ_{i=1}^{n} k0²(i)·μs′(i)·αD_B(i)·si]^k> (6)

where <·> denotes the average over the path-length distribution P(m,j,s1,...,sn).
For computer implementation, the above formulas are discretized, leading to the following expressions for the first-order (N = 1) and Nth-order (N > 1) approximations:

g1(m,j,τ) − 1 ≈ −2τ·Σ_{q=1}^{Q} w(q,m,j)·Σ_{i=1}^{n} k0²(i)·μs′(i)·αD_B(i)·s(i,q,m,j) (7)

g1(m,j,τ) − 1 ≈ Σ_{k=1}^{N} [(−2τ)^k/k!]·Σ_{q=1}^{Q} w(q,m,j)·[Σ_{i=1}^{n} k0²(i)·μs′(i)·αD_B(i)·s(i,q,m,j)]^k (8)

Here, w(q,m,j), called the photon weight, is the discretized form of P(m,j,s1,...,sn), in which the qth photon packet (q = 1,...,Q) has path lengths (s1,...,sn) spread over the n elements. s(i,q,m,j) denotes the path length within the ith element of the qth photon packet at the (m, j)th S-D pair. Both w(q,m,j) and s(i,q,m,j) can be estimated from photon Monte Carlo simulations on heterogeneous tissue with arbitrary geometry.
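The discretized model (a weighted sum of exponential decays over detected photon packets) can be sketched in Python as below. The photon weights, path lengths and per-element properties here are synthetic stand-ins for Monte Carlo outputs, and the weights are normalized inside the function (an assumption of this sketch):

```python
import numpy as np

def g1_forward(tau, w, s, k0, mus_p, aDb):
    """Discrete integral-form g1 for one S-D pair: a weighted sum over Q
    detected photon packets with path lengths s[q, i] through n elements.
    w: (Q,) photon weights; s: (Q, n) path lengths (cm);
    k0, mus_p, aDb: (n,) per-element wave number, mu_s' and flow index."""
    # Exponent per packet: -2*tau * sum_i k0_i^2 * mus'_i * aDb_i * s_qi
    expo = -2.0 * tau * s @ (k0**2 * mus_p * aDb)
    return np.sum(w * np.exp(expo)) / np.sum(w)

# Illustrative inputs (stand-ins for Monte Carlo photon data).
rng = np.random.default_rng(0)
Q, n = 1000, 4
w = rng.random(Q)                            # photon weights
s = rng.random((Q, n)) * 5.0                 # path lengths per element, cm
k0 = np.full(n, 2 * np.pi * 1.33 / 785e-7)   # wave number in medium (1/cm)
mus_p = np.full(n, 8.0)                      # mu_s' = 8 cm^-1 (paper's value)
aDb = np.full(n, 1.0e-8)                     # BFI, cm^2/s (paper's background)
g1_at_0 = g1_forward(0.0, w, s, k0, mus_p, aDb)    # equals 1 at tau = 0
g1_at_tau = g1_forward(1e-5, w, s, k0, mus_p, aDb) # decays below 1
```

Linearizing `expo` in `tau` recovers exactly the first-order form of Eq. (7).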
For convenience of computer implementation, the two-dimensional S-D index (m, j) (m = 1,...,M sources; j = 1,...,J detectors) is flattened into a single channel index h. We then define a few variables [25]:

h = (m − 1)·J + j,  A(h,i) = −2·Σ_{q=1}^{Q} w(q,m,j)·k0²(i)·μs′(i)·s(i,q,m,j),  αD_B = [αD_B(1),...,αD_B(n)]^T (9)–(11)

As such, A is a matrix with MJ × n dimensions, and αD_B is a vector with n × 1 dimensions. When the order N = 1, the unknown variables αD_B appear only on the right-hand side of Eq. (7). Hence, Eq. (7) can be rewritten as follows:

g1(h,τ) − 1 = [A·αD_B^(1)]_h · τ (12)

The combined term A·αD_B^(1) is the slope of the linear regression between τ and g1(h,τ) − 1, which is rewritten as:

Sl = A·αD_B^(1) (13)

Once the slope (Sl) is determined via linear regression between the τ and g1(h,τ) − 1 data, the unknowns (αD_B^(1)) can be calculated from Eq. (13) by using a specific image reconstruction algorithm.
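The first-order slope extraction amounts to a per-channel linear regression of g1(h,τ) − 1 against τ through the origin; a short sketch follows, with illustrative channel counts and slope values (not taken from the paper's data):

```python
import numpy as np

def fit_slopes(tau, g1_minus_1):
    """Least-squares slope through the origin for every S-D channel h:
    Sl(h) = sum_k tau_k * y_hk / sum_k tau_k^2, fitted over all delay times.
    tau: (K,) delay times; g1_minus_1: (H, K) measured g1(h,tau) - 1."""
    return (g1_minus_1 @ tau) / (tau @ tau)

# Illustrative: 3 channels with known slopes in the small-tau linear regime.
tau = np.linspace(1e-7, 5e-6, 40)
Sl_true = np.array([-2.0e4, -5.0e4, -1.0e5])
y = Sl_true[:, None] * tau[None, :]   # noiseless g1 - 1 for the sketch
Sl_est = fit_slopes(tau, y)           # recovers Sl_true
```

Because all delay times in the fitted range contribute to each slope, this regression step is what lets the NL approach use the full multi-τ autocorrelation curve rather than a single delay time.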
When the order N > 1, the unknown variables αD_B appear on both sides of Eq. (8). Hence, the BFI calculation is implemented via an iterative procedure: the higher-order (k ≥ 2) terms in Eq. (8) are evaluated with the BFI estimate from the previous iteration and subtracted from the regression data, so that each iteration reduces to a linear regression of the form of Eq. (12), with a correspondingly corrected slope Sl^(N) (14). The final solution is given in the following form:

A·αD_B^(N) = Sl^(N) (15)

New variables X and B are defined as X = αD_B and B = Sl; thus equations (13) and (15) can be expressed by the following linear equation:

A·X = B (16)

Based on previous investigations [25], a hybrid algorithm, namely Bregman-TV, was used to solve this problem. The Bregman algorithm provides a solution for the non-differentiable objective, while the total variation (TV) term makes non-edge regions more uniform while preserving edges well.
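A full Bregman-TV solver is beyond a short sketch. The following stand-in solves the same underdetermined linear system A·X = B with simple Tikhonov (L2) regularization instead of total variation, purely to illustrate the reconstruction step; the matrix sizes are toy values, not the paper's 384-channel, ~8600-unknown system, and Bregman-TV would additionally promote piecewise-constant, edge-preserving images:

```python
import numpy as np

def solve_tikhonov(A, b, lam=1e-3):
    """Tikhonov-regularized least squares for an underdetermined system
    A x = b (channels << unknowns): solve (A^T A + lam*I) x = A^T b.
    This is a stand-in for the Bregman-TV solver, not that algorithm."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy problem: 40 measurement channels, 100 voxels, a block "anomaly".
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[45:55] = 1.0                  # elevated-flow region
b = A @ x_true                       # noiseless slope data
x_rec = solve_tikhonov(A, b)         # fits b closely despite n >> MJ
```

The L2 penalty picks a small-norm solution among the many that fit the data; swapping it for a TV penalty (as in Bregman-TV) trades smoothness for sharper anomaly edges.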

D. DCT MODEL AND S-D CONFIGURATION
For both computer simulations and phantom experiments, a rectangular DCT model of size 80 mm × 80 mm × 30 mm was established to evaluate the impact of the reconstruction framework on flow imaging (Figure 1(a)). To create a flow contrast, the rectangular model is embedded with a cylindrical tube (5.7 mm in diameter and 80 mm in length) centered at a depth of 5.7 mm below the surface (Figure 1(b)).
An optical probe with a specific S-D configuration was placed on the surface of the DCT model to emit and collect photon signals from the tissue (Figure 1(b)). The optical probe consists of 8 source fibers and 48 detector fibers, covering an area of 80 × 80 mm² on the tissue model. The source and detector fibers are cross-distributed in the array (Figure 2), and each source is associated with 6 detector fibers at varied S-D separations. This S-D array permits a maximum of 384 S-D pairs (i.e., 8 × 48) and a penetration depth of 1.5 cm.
Two image reconstruction frameworks, i.e., the FEM and the NL approach, were applied to the same DCT model with the above S-D configuration. Figure 3 shows the meshing of the FEM (a) and the NL approach (b) on the rectangular DCT model, respectively. For a fair comparison, we matched the number of unknowns in both models; the node number for the FEM and the voxel number for the NL approach reach 8315 and 8624, respectively.

E. COMPUTER SIMULATIONS
For a specific S-D pair (m, j) as depicted in Section II(D), the g1(τ) curve is generated according to the following equation [24]:

g1(m,j,τ) = Σ_{q=1}^{Q} w(q,m,j)·exp[−2τ·Σ_{i=1}^{n} k0²(i)·μs′(i)·αD_B(i)·s(i,q,m,j)] (17)

Eq. (17) is the discrete form of the temporal autocorrelation of the light electric field. Here, w(q,m,j) and s(i,q,m,j) are the weight factor and the path length of the qth detected photon packet within the ith tissue element, estimated from Monte Carlo (MC) simulation with the known optical properties (μa, μs, g and n) by using an open-source software package (MCVM) [29]. The BFI value (i.e., αD_B) of the anomaly is set to 2 and 6 times the background BFI of the rectangular geometry (1.0 × 10⁻⁸ cm²/s), so as to create a minor and a major flow contrast, respectively. The optical properties are set to be homogeneous (μa = 0.05 cm⁻¹ and μs′ = 8.0 cm⁻¹).
To mimic realistic DCT data, noise with standard deviation σ(τ) was added to the autocorrelation function g1(τ) at each delay time τ, according to the correlator noise model of [30], [31], in which T is the correlator bin time, Γ is the decay rate, and I is the detected photon count rate (i.e., light intensity). Figure 4 exhibits the computer-simulated temporal light electric field autocorrelation functions (i.e., the g1 curves) at two different S-D separations. As clearly seen, the light signal collected at the larger S-D separation attenuates more, resulting in a larger noise level in the g1 curve. These simulation outcomes agree with real DCT data collected from phantom or in vivo experiments.
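The noise-injection step can be illustrated with the simplified sketch below. Note that the σ(τ) used here is an assumed stand-in (noise growing with delay time and shrinking with count rate and averaging time), NOT the exact correlator noise model of the cited references:

```python
import numpy as np

def add_noise(g1, tau, intensity_kcps, T=1e-6, t_avg=1.0, rng=None):
    """Add zero-mean Gaussian noise to a g1 curve with a SIMPLIFIED,
    illustrative sigma(tau): more photons per bin and longer averaging
    reduce noise; later delay times are noisier. This functional form is
    an assumption, not the published correlator noise model."""
    rng = rng or np.random.default_rng()
    counts_per_bin = intensity_kcps * 1e3 * T   # mean photons per bin
    sigma = (np.sqrt(T / t_avg) * (1.0 + tau / tau[0]) ** 0.25
             / max(counts_per_bin, 1e-12))
    return g1 + rng.normal(0.0, sigma)

# Illustrative clean curve and its noisy counterpart.
tau = np.logspace(-7, -3, 30)
g1_clean = np.exp(-1.0e4 * tau)   # assumed single-exponential decay
g1_noisy = add_noise(g1_clean, tau, intensity_kcps=100.0,
                     rng=np.random.default_rng(0))
```

Consistent with the behavior described above, lowering `intensity_kcps` (as happens at larger S-D separations) inflates σ and produces visibly noisier curves.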

F. PHANTOM EXPERIMENT
The setup of the phantom experiment was consistent with that of the computer simulations, and the procedures are elaborated elsewhere [28], [31]. Briefly, the liquid phantom, composed of distilled water, India ink and Intralipid solution, was contained in a rectangular aquarium. The India ink solution (Chenguang Inc., China) provides light absorption and the Intralipid provides light scattering, both at the target values (μa = 0.05 cm⁻¹, μs′ = 8.0 cm⁻¹). The Intralipid particles also mimic the motion of red blood cells, i.e., the microvasculature blood flow.
As with the DCT model depicted in Section II(D), a cylindrical tube made of thin glass represents the anomaly in the phantom experiments. The tube was filled with the same liquid solution as the background in the rectangular aquarium. To manipulate the flow, the glass tube is connected at both ends to a syringe pump via rubber hoses. The pump speed is set to 200 ml/h and 600 ml/h to create a minor and a major flow contrast, respectively, relative to the background flow. For cross validation, the anomaly position and the S-D configuration match the setup of the computer simulations; both are depicted in Section II(D).
A DCT instrument constructed in our laboratory (Figure 5(a)) was used to collect the light autocorrelation data (i.e., g1 curves) from the phantom experiments. This system is similar to those reported in previous studies [22]. Briefly, the instrument consists of a continuous-wave laser with long coherence length (>5 m) (785 nm, DL-785-120-SO, Crystalaser, Inc.), eight single-photon-counting avalanche photodiodes (APDs) (SPCM-780-13-FC, Excelitas Inc., Canada), and an eight-channel correlator (flex05-8ch, Correlator.com, USA). The optical probe is confined in black foam, with the source (S) and detector (D) fibers forming an array (depicted in Figure 2).
The g1 curves at varied S-D separations are illustrated in Figure 5(b). The g1 curve decays more slowly at the smaller S-D separation; at larger separations, the curves decay faster and contain more noise. These features coincide with those of the computer simulations.

G. EVALUATION CRITERIA
For computer simulations with known BFI values, the mean error of the flow image is quantified by the following equation:

Mean Error = (1/n)·Σ_{i=1}^{n} |αD_B(i) − αD_B,0(i)| / αD_B,0(i) × 100% (19)

where αD_B(i) and αD_B,0(i) are the reconstructed and true BFI values at the ith unknown (element node or voxel), respectively, and n is the number of samples. A smaller mean error value indicates that the reconstructed BFI values are more accurate.
Besides accuracy, the homogeneity of the flow image is quantified by the coefficient of variation:

CV = σ / mean(αD_B) (20)

where σ represents the standard deviation of the BFI values over a specific volume. A smaller CV value indicates that the reconstructed flow is more homogeneous over that volume (background or anomaly).
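Both evaluation criteria are straightforward to compute; a minimal sketch (the array values below are illustrative, not the study's data):

```python
import numpy as np

def mean_error(aDb_rec, aDb_true):
    """Mean relative error (%) between reconstructed and true BFI values."""
    return 100.0 * np.mean(np.abs(aDb_rec - aDb_true) / aDb_true)

def cv(aDb_region):
    """Coefficient of variation over a region (anomaly or background):
    standard deviation divided by mean -- smaller means more homogeneous."""
    return np.std(aDb_region) / np.mean(aDb_region)

# Illustrative check: +/-10% errors around a true BFI of 1e-8 cm^2/s.
rec = np.array([1.1e-8, 0.9e-8])
true = np.array([1.0e-8, 1.0e-8])
err = mean_error(rec, true)        # 10.0 (%)
homog = cv(np.full(5, 2.0e-8))     # 0.0 for a perfectly uniform region
```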

III. RESULTS
In this section, the outcomes of computer simulations and phantom experiments are presented, through which the impacts of the FEM and NL approaches on imaging quality are quantitatively evaluated and compared. All calculations were performed on a desktop PC (Intel i7-6700 3.4 GHz CPU and 16 GB memory).
A. COMPUTER SIMULATIONS
Figure 6 shows the cross-section of the reconstructed flow image at a relatively deep layer (5 mm in depth). When the FEM is adopted, the tubular anomaly appears as broken segments in the image. By contrast, with the NL approach the anomaly appears more continuous and its edge is smoother. Besides, the NL approach generates smaller mean error values than the FEM (Figure 7(a)), and the flow image is much more homogeneous (i.e., smaller CV) over both the anomaly and the background volume (Figure 7(b)). When the anomaly flow was increased to 6 times the background flow, the reconstructed image improved with both the FEM and the NL approach, featuring enhanced contrast and alleviated discontinuity (Figure 8). Nevertheless, a minor gap still appears in the anomaly reconstructed by the FEM (Figure 8(a)). Likewise, the NL approach generates more accurate and homogeneous flow images in both the anomaly and the background, as evidenced by smaller mean error and CV values (Figure 9). The computation times for the FEM and the NL approach are 385 seconds and 186 seconds, respectively.

B. PHANTOM EXPERIMENTS
For better comparison, the cross-section selected from the phantom experiment images is the same as in the computer simulations (i.e., 5 mm below the surface). At the low pump speed (200 ml/h), it is hard to identify the anomaly reconstructed by the FEM because the target appears scattered in the image (Figure 10(a)). With the NL approach, the reconstructed anomaly is much more visible (Figure 10(b)), apart from slight surrounding artifacts. The high pump speed (600 ml/h) improves both the image quality and the flow contrast (Figure 11), making it easy to identify the anomaly. However, a discontinuity in the anomaly (i.e., a gap) was still observed in the flow images reconstructed by the FEM algorithm.
As shown in Figure 12, the CV values also indicate that the NL approach produces more homogeneous flow images than the FEM, especially for the anomaly. Moreover, the image homogeneity is found to be negatively affected by the pump speed. Taking the anomaly as an example, the CV values are 0.231 and 0.125 with the FEM and NL approach, respectively, at the 200 ml/h pump speed (Figure 12(a)). These values increase to 0.360 and 0.263 when the pump speed is elevated to 600 ml/h (Figure 12(b)). Overall, the outcomes derived from the phantom experiments agree well with those from the computer simulations.

IV. DISCUSSION AND CONCLUSION
Noninvasive imaging of tissue blood flow provides essential information for understanding physiological functions and for diagnosis of various diseases. Currently, ASL-MRI is a primary modality for imaging deep tissue hemodynamics in the clinic, particularly for assessment of cerebral perfusion. However, the routine use of MRI is restricted by its high cost and low mobility. Moreover, MRI is not well suited to dynamic measurements, which are critical for evaluating physiological functions such as cerebral autoregulation capability. DCT is a technology newly emerged in recent years for blood flow imaging in deep tissues, up to several centimeters. Because of its low cost, noninvasiveness, portability and readiness for dynamic measurements, DCT has developed rapidly and has been applied to a variety of tissues such as the rat brain and human breast [19]-[21]. In the past, the image reconstruction algorithm was established on the analytical solution to the PDE, which requires a regular geometry of the target tissue (e.g., slab) that does not match real tissues with small volume or large curvature (e.g., small animal brain, human breast). Hence, a FEM-based image reconstruction algorithm was proposed to overcome this limit. The FEM is a well-established numerical approach through which the tissue geometry is taken into account. Additionally, the FEM framework enables adaptation of open-source software packages (e.g., Nirfast) for DCT blood flow imaging. Despite these merits, only the autocorrelation data (i.e., g1(τ)) at a single delay time can be used for image reconstruction with the FEM approach, making it susceptible to data noise.
Recently, we proposed a novel NL algorithm, based on the integral form of the light electric field autocorrelation, to extract blood flow information by incorporating tissue heterogeneity and geometry. Through iterative linear regression (Eq. (14)), the autocorrelation data are fully utilized, enhancing the accuracy and robustness of blood flow measurements. The quality of DCT blood flow images relies on both the hardware configuration (e.g., the S-D array) and the image reconstruction algorithm, which in turn is determined by the physical modelling of the light autocorrelation functions. Hence, two imaging frameworks, i.e., the FEM, representing the PDE form of the autocorrelation functions, and the NL algorithm, representing the integral form, were selected in this study to investigate the impact of reconstruction algorithms on DCT blood flow imaging. These two imaging frameworks were then extensively evaluated through computer simulations and phantom experiments. Computer simulations provide a quantitative evaluation criterion by calculating the error between the reconstructed image and the target image with known values. The phantom experiments allow quality assessment (e.g., homogeneity, contrast) of the flow images derived from real DCT data. For fair comparison, the setups for computer simulations and phantom experiments were identical for both imaging frameworks. The two frameworks have different basic units, i.e., element nodes for the FEM and voxels for the NL approach; we kept the number of unknowns balanced (8315 for the FEM and 8624 for the NL approach) as well as the spatial resolution.
The outcomes from both the computer simulations and the phantom experiments exhibit the advantage of the NL approach over the FEM algorithm. Specifically, the NL approach generates more accurate and homogeneous flow images, and the contrast between the anomaly and the background is more evident and easier to identify. In some cases, the NL approach and the FEM generate similar errors (e.g., the mean error values of the anomaly in Figure 7(a) and Figure 9(a)); visually, however, the anomaly images differ greatly. For example, there are large gaps in the anomaly image reconstructed by the FEM algorithm, whereas the anomaly image generated by the NL approach is more continuous. These observations are consistent with the theory and principles of both imaging frameworks. As mentioned earlier, with the FEM only the g1(τ) data at a single delay time can be used for image reconstruction; hence, the reconstructed images are susceptible to data noise, leading to unstable outcomes. The NL approach fully utilizes the DCT data, thus generating more reliable flow images. Moreover, the higher contrast between anomaly and background leads to better images for anomaly identification (see Figure 6 and Figure 8), which is reasonable and anticipated. Additionally, the NL approach greatly shortens the imaging time compared with the FEM algorithm.
To conclude, we investigated the impact of the reconstruction algorithm on DCT blood flow imaging. For this purpose, computer simulations and phantom experiments were comprehensively set up with an embedded anomaly. The outcomes demonstrate that the integral-equation-based imaging framework (i.e., the NL approach) performs better than the PDE-based one (i.e., the FEM). The underlying reason is due primarily to their distinct abilities to denoise the DCT signals. Hence, this study should assist in the design of DCT reconstruction algorithms through optimal use of the optical data. Nevertheless, other factors (such as tissue absorption and scattering) also contribute to the quality of flow imaging, which will be the subject of future studies.
VOLUME 8, 2020