Smooth Polynomial Approach for Microwave Imaging in Sparse Processing Framework

We developed a novel qualitative imaging algorithm based on a polynomial approximation of the unknown contrast and sparse ($l_{1}$) regularization. In contrast to previously published approaches, we defined the polynomial basis functions on subdomains that divide the investigation domain. Moreover, we formulated constraints that ensure the continuity of the contrast on the subdomain borders. We showed that the proposed algorithm improved the imaging resolution, particularly in multiple-target scenarios. We demonstrated that partitioning the investigation domain, together with the contrast-continuity formulation, enhanced the numerical stability and reduced the computation time. The obtained results were significantly less sensitive to the regularization parameter values than those obtained using the standard polynomial approximation. Namely, smaller domains allow lower polynomial orders, which are numerically more favorable, while the continuity constraints reduce the search space and mitigate the occurrence of false solutions. Another contribution of this study is a novel strategy for regularization parameter selection. We considered different figures of merit and numerical scenarios to study the influence of the various parameters involved in the imaging process, such as the polynomial order and the number of subdomains. An extensive analysis proved the robustness of the approach against noise. The proposed algorithm was designed for two-dimensional geometry; however, generalization to three-dimensional space is straightforward. The algorithm can also be used with other types of regularization, such as the $l_{2}$ regularization. Potential applications include medical microwave imaging, in which high resolution and noise immunity are vital features.

Irrespective of the applied regularization type, expanding the contrast or field sources into a series of basis functions reduces the search space and mitigates the solution's nonuniqueness [10]. Different basis functions have been utilized in the literature to reduce the number of unknowns. In [26], the unknown permittivity distribution inside a cylindrical scanner was modeled using the Zernike polynomials; the expansion coefficients were obtained through nonlinear optimization. In [27], the Fourier-Jacobi (FJ) basis functions were incorporated into particle swarm optimization (PSO) in order to retrieve the unknown permittivity profile. In [28], the contrast source inversion (CSI) algorithm was improved by utilizing the wavelet expansion for the unknown sources and contrast. The Haar wavelets have been integrated into a microwave imaging algorithm based on the Born approximation [25]. Wavelets have been applied to model the inhomogeneous breast tissue in a nonlinear inverse scattering problem [29], [30]. Another application of wavelets in microwave medical imaging was found in [31], where the Haar wavelet expansions have been combined with the Born iterative method (BIM). Gaussian basis functions were also considered for medical MWI applications [32], [33]. B-spline modeling was proposed in [34] to reduce the problem complexity. Truncated cosine basis functions and B-splines were utilized in [35]. Spherical harmonics expansion, together with sparse processing, was utilized for complex-shaped target imaging in [17], [21], and [24].
Here, we proposed a novel microwave imaging algorithm that combines sparse processing with the polynomial approximation of the contrast. Unlike in [26], [27], [28], [29], [30], [31], [32], [33], [34], and [35], where the support of the basis functions covered the whole search space, we defined the polynomials on partitions of the search space. In this way, we achieved better accuracy when imaging multiple targets of different sizes or contrast. Namely, if the polynomials are defined on a single domain, their order has to be very high, which causes numerical instability and strong dependence on the regularization parameter value. To ensure contrast continuity on the domain borders, we formulated additional constraints, which were included in the optimization algorithm.
The algorithm was defined for the two-dimensional (2D) geometry, but the extension to the three-dimensional (3D) geometry would be straightforward. To study the effect of the polynomial order and number of domains on the image reconstruction, we considered several figures of merit, such as the Dice similarity coefficient (DSC) and Matthews correlation coefficient (MC) [36], [37].
Sparse processing improves the proposed algorithm by suppressing artifacts and noise, while representing the contrast with polynomials overcomes the tendency of sparse processing to produce pixelated images. We showed that the latter feature may be significant for target recognition in scenarios with pronounced multiple scattering, as it increases the reconstructed portion of the target.
In the proposed method, all data are processed jointly, which is important when working with noisy measurements.
In contrast, many sparse processing algorithms dealing with electrically large targets utilize separate groups of measurements associated with close transmitters. Thus, they obtain the final image by combining the partial results [17], [21], [24], [38].
We have also compared the performance of the presented algorithm to that of the truncated singular value decomposition (TSVD), considered the gold standard in qualitative imaging [39], [40], [41], [42]. The advantages of the proposed approach compared to the TSVD are high resolution, robustness against noise, and better performance in scenarios with pronounced multiple scattering and a limited number of measurements. In addition, the polynomial approximation of the contrast allows presenting images with arbitrary pixel density without increasing the number of unknowns. The downside is the longer computation time compared to that of the TSVD. The linear sampling method (LSM) [43], [44], [45], also widely used for qualitative imaging, yielded poor results for the considered scenarios due to the insufficient number of transmitters (the rank of the multistatic matrix was smaller than the number of significant singular values); hence, we have not included those results.
Another contribution of this paper is a novel procedure for selecting the regularization parameter. Typically, the L-curve is utilized for this purpose [46]. In [47], a more precise but more demanding method has been proposed, which uses the derivative of the L-curve. Here, we described a simple method that infers the optimal value of the regularization parameter based on the function $f(\gamma_i) = s_i$, where $s_i$ is the length of the segment of the L-curve between two adjacent values of the regularization parameter, $(\gamma_i, \gamma_{i+1})$.
The paper is organized as follows. The theoretical background of the inverse scattering was given in Section II. The polynomial expansion was described in Section III. In Section IV, we described the optimization function. In Section V, we gave a brief description of the methods used for comparison. The results obtained for various scenarios were reported in Section VI. Performance analyses, including the investigation of the robustness against noise and a computation-time study, were explained in Section VII. Some final remarks were given in the Conclusion section.

II. STANDARD MICROWAVE IMAGING FORMULATION

Fig. 1 illustrates an imaging scenario consisting of elongated objects with constant cross-sections along the z-axis and a circular array of sensors. In this quasi-2D scenario (we consider real antennas instead of infinitely long current sources, and targets with finite lengths), the approximate expression for the differential transmission coefficient between the ith and the jth antenna due to the presence of scatterers is given by [48]:
where $s_{ij}$ and $s^0_{ij}$ are the transmission coefficients computed when the scatterer is present and absent, respectively, $\varepsilon$ is the permittivity of the scatterer, $\varepsilon_b$ is the permittivity of the background medium (a vacuum in our case), $\Delta\varepsilon = \varepsilon - \varepsilon_b$ is the contrast, $E_i$ is the incident electric field vector, $r$ is the position vector of a point inside the scatterer, $v$ is the domain occupied by the scatterers, $a_i$ is the complex amplitude of the incident power wave at the ith port, $r_i$ is the position vector of the ith antenna, $M$ is the size of the array, and $\omega$ is the angular frequency. When the reconstruction is performed in only one cut, we have (2), where $S$ is the union of the objects' cross-sections. For finite-length targets, we cannot determine the exact permittivity distribution due to the lack of measurement data in other cuts; we can, however, estimate the target image, with pixel intensities approximately proportional to the unknown contrast. After discretizing (2), the measurement model becomes (3), where $L$ is the system matrix (4), as shown at the bottom of the page, $t_l$ is the location of the center of the lth pixel, $l = 1, \dots, L$, $s$ is the known measurement vector, and $k$ is the unknown vector whose elements are proportional to the contrast of the related pixels.

III. CONTRAST APPROXIMATION

A. POLYNOMIAL APPROXIMATIONS
Without loss of generality, we assume a rectangular imaging space, divided into $N_x \times N_y$ smaller domains, as illustrated in Fig. 2. In each domain, the unknown dielectric contrast is approximated by 2D polynomial basis functions of the generic form $\sum_{k=0}^{Q}\sum_{l=0}^{Q} c_{kl}\, x^k y^l$ (7), where $Q$ is the polynomial order, $c_{kl}$, $k, l = 0, \dots, Q$, are the polynomial coefficients, and $x$ and $y$ are the local coordinates. To apply (7), we consider a uniform grid of $n_x \times n_y$ matching points, where $n_x$ and $n_y$ are the numbers of points along the x-axis and y-axis, respectively (Fig. 3).
In the domain $(m, n)$, $m = 1, \dots, N_y$, $n = 1, \dots, N_x$, the dielectric contrast at the matching points relates to the unknown polynomial coefficients through the matrix relation (8), where
• $k_{m,n}$ is an $(n_x \cdot n_y) \times 1$ vector containing the unknown contrast values in the domain $(m, n)$,
• $c_{m,n}$ is a $(Q + 1)^2 \times 1$ vector containing the unknown polynomial coefficients in the considered domain, and
• $H$ is the $(n_x \cdot n_y) \times (Q + 1)^2$ transformation matrix.
The order of the matching points in $k_{m,n}$ is defined by the red curve in Fig. 3. Thus, the Cartesian coordinates of the ith matching point follow from (9) and (10), where $\mathrm{mod}(a, n)$ denotes $a$ modulo $n$, and $\lfloor \cdot \rfloor$ denotes the floor function. According to (7)-(10), the elements of the transformation matrix $H$ are calculated as in (11), where $i = 1, \dots, n_x \cdot n_y$ and $j = 1, \dots, (Q + 1)^2$.
In the whole search space, the unknown contrast is related to all polynomial coefficients through (12), where
• $k$ is an $N_x \cdot N_y \cdot n_x \cdot n_y \times 1$ vector containing the unknown contrast in the whole search space, and
• $c$ is an $N_x \cdot N_y \cdot (Q + 1)^2 \times 1$ vector containing all the unknown polynomial coefficients.
The vectors $k$ and $c$ are obtained by concatenating the vectors $k_{m,n}$ and $c_{m,n}$, $m = 1, \dots, N_y$, $n = 1, \dots, N_x$, respectively, in the order defined by the red curve in Fig. 2. The total transformation matrix is given by (13). When (12) is inserted into (3), we obtain the final system of equations (14), relating the measurements and the polynomial coefficients.
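As an illustration, the per-domain transformation matrix $H$, which maps polynomial coefficients to contrast values at the matching points, can be assembled as below. The $[-1, 1]$ local-coordinate range, the row-major ordering of the matching points, and the mapping of column $j$ to the monomial $x^k y^l$ are illustrative assumptions (the paper orders the points along the red curve of Fig. 3):

```python
import numpy as np

def transformation_matrix(Q, nx, ny):
    """Build the (nx*ny) x (Q+1)^2 matrix H mapping polynomial
    coefficients to contrast values at the matching points.

    Assumptions (illustrative, not fixed by the text): local
    coordinates span [-1, 1] in each domain, matching points are
    ordered row by row, and column j corresponds to the monomial
    x^k * y^l with k = j // (Q+1), l = j % (Q+1)."""
    xs = np.linspace(-1.0, 1.0, nx)
    ys = np.linspace(-1.0, 1.0, ny)
    H = np.zeros((nx * ny, (Q + 1) ** 2))
    for i in range(nx * ny):
        x, y = xs[i % nx], ys[i // nx]   # row-major ordering (assumed)
        for j in range((Q + 1) ** 2):
            k, l = divmod(j, Q + 1)
            H[i, j] = x ** k * y ** l
    return H
```

Stacking one such block per domain along the diagonal yields the total transformation matrix of (13).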

B. IMPOSING CONTRAST CONTINUITY
To improve the reconstruction quality, we defined constraints that enforce the continuity of the contrast across the domain borders. Fig. 4 shows the edge common to domains $(m, n)$ and $(m, n+1)$ and the corresponding matching points on the edge. The edge is parallel to the y-axis, so the contrast values approaching the edge from the two neighboring domains must coincide at each matching point, where $y_i$ is the y-coordinate of the ith matching point. In matrix form, this condition reads $H^+_y c_{m,n} = H^-_y c_{m,n+1}$, where $H^+_y$ and $H^-_y$ are $n_y \times (Q + 1)^2$ matrices and the vectors $c_{m,n}$ and $c_{m,n+1}$ are defined in (8). To impose continuity along all domain boundaries parallel to the y-axis, we set the condition (20), in which the $N_x \cdot N_y \cdot n_y \times N_x \cdot N_y \cdot (Q + 1)^2$ matrix $H_y$ collects the per-edge blocks. Similarly, we want to ensure the continuity of the contrast along the x-axis, where $x_i$ is the x-coordinate of the ith matching point on the boundary between domains $(m, n)$ and $(m + 1, n)$. To impose the continuity along all boundaries parallel to the x-axis, we set the condition (28).
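To make the continuity condition concrete, the sketch below evaluates the polynomial basis on a shared vertical edge from both neighboring domains; the $[-1, 1]$ local coordinates and the $x^k y^l$ monomial ordering are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def edge_matrix(Q, n_y, x_edge):
    """Evaluate the 2D monomial basis on a vertical edge (fixed local
    x = x_edge) at n_y matching points along y. Row i holds the basis
    values at the i-th edge point; column j corresponds to the
    monomial x^k * y^l with k = j // (Q+1), l = j % (Q+1).
    Local coordinates in [-1, 1] are an illustrative assumption."""
    ys = np.linspace(-1.0, 1.0, n_y)
    E = np.zeros((n_y, (Q + 1) ** 2))
    for i, y in enumerate(ys):
        for j in range((Q + 1) ** 2):
            k, l = divmod(j, Q + 1)
            E[i, j] = x_edge ** k * y ** l
    return E

# Continuity across a vertical border: H_plus @ c_left == H_minus @ c_right,
# i.e. the rows of [H_plus, -H_minus], stacked over all edges, form H_y
# and the constraint becomes H_y @ c == 0.
Q, n_y = 3, 8
H_plus = edge_matrix(Q, n_y, +1.0)   # right edge of domain (m, n)
H_minus = edge_matrix(Q, n_y, -1.0)  # left edge of domain (m, n+1)
```

A quick check of the construction: a constant contrast satisfies the constraint automatically, while an x-dependent monomial violates it unless the neighboring coefficients compensate.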

IV. OPTIMIZATION
We find the solution of the system of equations (14), with constraints (20) and (28), by solving the optimization problem (31), where $\gamma$ is the regularization parameter that balances between the $l_2$ norm of the error and the $l_1$ norm of the solution. In addition, we normalize the system matrix $L$ by dividing its columns by their square norms. In this way, the range of useful values of the regularization parameter is set to $0 \leq \gamma \leq 1$ [24]. We note that the $l_2$ norm can also be utilized instead of the $l_1$ norm in (31); in this way, the applicability of the method can be further extended to arbitrary targets.
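The paper solves (31) with a convex programming package; as a minimal, hypothetical stand-in, the sketch below applies the ISTA proximal-gradient iteration to the unconstrained $l_1$-regularized least-squares problem (the continuity constraints (20) and (28) are omitted for brevity):

```python
import numpy as np

def ista(L, s, gamma, n_iter=500):
    """Minimal ISTA sketch for  min_k ||L k - s||_2^2 + gamma * ||k||_1.
    This is an illustrative stand-in for the constrained problem (31),
    which the paper solves with a convex programming package."""
    step = 1.0 / (2 * np.linalg.norm(L, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    k = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = 2 * L.T @ (L @ k - s)
        z = k - step * grad
        k = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)  # soft threshold
    return k
```

For an identity system matrix, the iteration reduces to soft thresholding of the data, which is the closed-form minimizer.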
To select the optimal value of $\gamma$, the L-curve [46] is typically utilized. The L-curve is obtained by plotting the solution norm versus the error norm, $y_L(x_L)$, for a discrete set of values of the regularization parameter. The optimal value of the regularization parameter is usually associated with the knee of the L-curve. Instead of the L-curve, we propose the following approach (S-curve). First, we define the variable $s_i = \sqrt{(x_{L,i+1} - x_{L,i})^2 + (y_{L,i+1} - y_{L,i})^2}$, the length of the L-curve segment between two adjacent values of the regularization parameter, $\gamma_i$ and $\gamma_{i+1} = \gamma_i + \Delta\gamma$, where $\Delta\gamma$ is the increment of $\gamma$. To infer the optimal value of the regularization parameter, we used the plot $s(\gamma)$, as detailed in Section VI. For the optimization (31), we utilized the convex programming package CVX [49]. Once the polynomial coefficients have been found, the unknown vector, proportional to the contrast, is determined as $\hat{k} = H\hat{c}$. To assess the quality of the approximation, we considered different quality measures: the accuracy parameter (ACC), the Dice similarity coefficient (DSC), and the Matthews correlation coefficient (MC), all described in [36] and [37]. Those metrics are defined for binarized images, which are obtained by setting the pixels whose value is above the adopted threshold to one, and setting the remaining pixels to zero. The ACC, DSC, and MC are defined, respectively, as $ACC = (TP + TN)/(TP + TN + FP + FN)$, $DSC = 2\,TP/(2\,TP + FP + FN)$, and $MC = (TP \cdot TN - FP \cdot FN)/\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}$, where TP stands for true positive, TN for true negative, FP for false positive, and FN for false negative.
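The S-curve itself is straightforward to compute from the sampled L-curve points:

```python
import numpy as np

def s_curve(err_norms, sol_norms):
    """Segment lengths of the L-curve between consecutive values of
    the regularization parameter: s_i is the Euclidean distance
    between the L-curve points for gamma_i and gamma_{i+1}."""
    x = np.asarray(err_norms, dtype=float)
    y = np.asarray(sol_norms, dtype=float)
    return np.hypot(np.diff(x), np.diff(y))
```

Plotting the returned values against the corresponding $\gamma_i$ gives the $s(\gamma)$ curve used for parameter selection.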
The selection of the threshold primarily influences the maximal value of the metrics and, to a smaller extent, their dependence on the regularization coefficient. To obtain fair comparisons, we utilized the threshold values that maximized the metrics. (We were not interested in the absolute value of the metrics; the goal was to compare the results obtained with different polynomial orders and different values of the regularization parameter.) We have also considered the root-mean-square error between the estimated (normalized) contrast vector and the (normalized) true contrast.
The root-mean-square error is defined as $E = \sqrt{\frac{1}{N_{tot}} \sum_{i=1}^{N_{tot}} |\hat{k}_{n,i} - k_{n,i}|^2}$, where $N_{tot} = N_x \cdot N_y \cdot n_x \cdot n_y$, $\hat{k}_n = \hat{k}/\max(\hat{k})$, and $k_n = k/\max(k)$. To make this error metric compatible with the previous quality metrics, we defined the variable $C_\varepsilon = 1 - E$.
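A direct implementation of the binarization step and the three metrics, using the standard confusion-matrix definitions:

```python
import numpy as np

def binarize(img, threshold):
    """Binarize an image: pixels above the threshold become 1, the rest 0."""
    return (np.asarray(img) > threshold).astype(int)

def acc_dsc_mcc(pred, truth):
    """ACC, DSC, and Matthews correlation coefficient for a binarized
    reconstruction `pred` against the binarized ground truth `truth`."""
    pred, truth = np.ravel(pred), np.ravel(truth)
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    acc = (tp + tn) / (tp + tn + fp + fn)
    dsc = 2 * tp / (2 * tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, dsc, mcc
```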

V. COMPARISON METHODS
For comparison purposes, the TSVD and standard sparse processing were used. For the sake of completeness, we will briefly describe the implemented algorithms, defined in the same way as in [24].

A. TRUNCATED SINGULAR VALUE DECOMPOSITION
In the TSVD, the target image is obtained from (42) and (43), as shown at the bottom of the page, where $L$ is the system matrix, $s$ is the known measurement vector (defined in (5)), and $f$ is the unknown vector (target image). The TSVD solution of the system is $\hat{f} = \sum_{i=1}^{i_{max}} \frac{u_i^H s}{\sigma_i} v_i$, where $u_i$, $v_i$ are the singular vectors of the matrix $L$, $\sigma_i$ are the corresponding singular values, and $i_{max}$ is the truncation index, obtained from the condition $10 \log_{10}(\sigma_{i_{max}}/\sigma_1) < -20$ dB.
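The TSVD reconstruction with the above truncation rule can be sketched compactly in numpy:

```python
import numpy as np

def tsvd_solve(L, s, threshold_db=-20.0):
    """TSVD image reconstruction: keep the singular values whose level
    relative to the largest one is above threshold_db, and invert the
    truncated system."""
    U, sigma, Vt = np.linalg.svd(L, full_matrices=False)
    keep = 10 * np.log10(sigma / sigma[0]) >= threshold_db  # truncation rule
    coeff = (U[:, keep].conj().T @ s) / sigma[keep]
    return Vt[keep].conj().T @ coeff
```

For a well-conditioned system (no singular value below the threshold), this reduces to the ordinary pseudoinverse solution.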

B. STANDARD SPARSE PROCESSING (SSP)
In contrast to the TSVD, partial systems of equations are formed for each transmitter. When the ith antenna is transmitting, the system of equations reads (45)-(47), as shown at the bottom of the page, where $L^{(i)}$ is the partial system matrix, $e^{(i)}$ is the corresponding measurement vector, and $f^{(i)}$ is the unknown vector. The data obtained from $P$ transmitting antennas are processed together; when $P = 2$, the resulting system of equations is obtained by stacking two consecutive partial systems. The minimization function has the same form as (31), and the final image is obtained as the superposition of the partial results $\hat{f}^{(i)}$. For the regularization parameter, we used a value slightly smaller than the knee value of the L-curve.
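The grouping-and-superposition logic of the SSP can be sketched as follows; combining the partial images by summing magnitudes and using a least-squares stand-in solver are assumptions for illustration, since the paper applies $l_1$-regularized inversion to each stacked system:

```python
import numpy as np

def ssp_image(L_parts, e_parts, P, solver):
    """Standard sparse processing sketch: stack the systems of P
    adjacent transmitters, solve each stacked system with `solver`,
    and superpose the magnitudes of the partial images.
    `solver(L, e)` inverts one linear system; the paper uses
    l1-regularized inversion, a least-squares routine works as a
    stand-in."""
    n_groups = len(L_parts) // P
    partial = []
    for g in range(n_groups):
        L = np.vstack(L_parts[g * P:(g + 1) * P])        # stacked partial matrices
        e = np.concatenate(e_parts[g * P:(g + 1) * P])   # stacked measurements
        partial.append(np.abs(solver(L, e)))
    return np.sum(partial, axis=0)  # superposition of the partial results
```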

VI. NUMERICAL EXAMPLES
In this section, we presented imaging results obtained for several numerical scenarios. In all cases, the array response was computed using the full-wave electromagnetic solver WIPL-D Pro [50].

A. IMAGING CONFIGURATION 1
In the first example, we investigated the effect of the polynomial order on the reconstruction quality with a fixed number of subdomains. An additional point of interest was the selection of the regularization parameter. We compared two approaches for regularization parameter selection: the established L-curve and the proposed S-curve method. The imaged objects were two identical dielectric cylinders made of a dielectric with the relative permittivity $\varepsilon_r = 5$, as shown in Fig. 5. The cylinders had elliptical cross-sections, where the lengths of the semi-minor and semi-major axes were $a = 1$ cm and $b = 2$ cm, respectively. The centers of the cylinders were at (−3 cm, 0, 0) and (3 cm, 0, 0). The antenna array consisted of 18 half-wavelength dipoles, uniformly distributed on a circle of radius $R = 0.6$ m. The operating frequency of the array was $f = 3$ GHz. We assumed that one antenna transmitted at a time, while all antennas were receiving signals. The obtained data were corrupted by additive white Gaussian noise (AWGN), with the signal-to-noise ratio SNR = 10 dB.
The search space was a square area of the size −0.2 m ≤ x, y ≤ 0.2 m, divided into $N_x = N_y = 5$ domains. In each domain there were $n_x \times n_y = 20^2$ matching points (i.e., $100^2$ matching points in the whole search space).
To study the influence of the polynomial order on the reconstruction accuracy, we utilized the ACC, DSC, MC, and $C_\varepsilon$ metrics. We computed those metrics for polynomials of the order $2 \leq Q \leq 6$, utilizing regularization parameter values from the range $0.01 \leq \gamma \leq 1$, with a step $\Delta\gamma = 0.01$. The obtained data indicated that the accuracy increased monotonically with the polynomial degree up to $Q = 4$. Higher-order polynomials, $Q = 5, 6$, yielded a better solution only in a narrow range of values of the regularization parameter; otherwise, overfitting occurred. Table 1 gives the statistics for $C_\varepsilon$ and Table 2 gives the statistics for the MC coefficient. The latter was computed using a 40% binarization threshold. The regularization parameter value has a significant influence on the imaging results. Typically, the optimal value of the regularization parameter is associated with the knee of the L-curve [46]. Fig. 6a shows the L-curve and the knee point (red circle), computed for $Q = 4$. In Section IV, we defined the S-curve, which is more sensitive to the values of the regularization parameter. Fig. 6b shows the corresponding S-curve, denoted as $s$, in which we distinguish three intervals: $\gamma < 0.1$, characterized by a steep slope; $0.1 < \gamma < \gamma_{knee}$, characterized by a small ripple; and a flat part for $\gamma > \gamma_{knee}$. (The knee point of the L-curve is easily identified on the S-curve as the last point before $s$ drops down to zero.) Fig. 6b also shows all considered quality metrics. Fig. 7 shows the reconstruction results computed for $Q = 4$ for several characteristic values of the regularization parameter. Fig. 7a shows the image obtained for $\gamma_1 = 0.1$, which was the regularization parameter value at the end of the first interval on the S-curve. The targets' footprints were distorted by artifacts due to the insufficiently large regularization parameter. Fig. 7b shows the image obtained for $\gamma_2 = 0.35$, which maximized the quality metrics.
The targets were clearly visible and the artifacts were suppressed. Visually similar results were obtained for $\gamma_3 = \gamma_{knee} = 0.43$, as shown in Fig. 7c. (The small difference in the values of the quality metrics computed for $\gamma_2$ and $\gamma_3$ was due to the binarization.) Finally, Fig. 7d shows the image obtained for $\gamma_4 = 0.6$, related to the vertical part of the L-curve (or the horizontal part of the S-curve). The targets were easily identified, but the image was not sharp due to the large value of the regularization parameter. Fig. 8 shows the S-curve (together with the quality metrics) computed for $Q = 6$. Fig. 9 shows the associated reconstruction results. The images were obtained for $\gamma_1 = 0.25$ (the end of the steep interval of the S-curve), $\gamma_2 = 0.43$ (maximizes the quality metrics), $\gamma_3 = 0.5$ (knee value), and $\gamma_4 = 0.6$ (horizontal part of the S-curve).
The quality of the images obtained for $Q = 4$ and $Q = 6$ was very similar, confirming the findings from Table 1 and Table 2.
Finally, Fig. 10a shows the results obtained using the standard sparse processing. The regularization parameter value, $\gamma = 0.45$, was chosen to be slightly less than the knee value [24]. The number of jointly processed signals was set to $P = 5$. (This choice was a result of a compromise: large values of $P$ tend to produce an overly sparse image, while small values make the solution more sensitive to noise.) The targets' contours were fairly well estimated, but the image was pixelated. Fig. 10b shows the results obtained using the TSVD with a −20 dB threshold. Compared to the standard sparse processing, the TSVD reconstructed whole targets, but the image was blurred and full of artifacts. In this subsection, we studied the role of the polynomial order in the quality of imaging of closely spaced sparse targets. Numerical simulations showed that increasing the degree of the polynomial approximation improves the resolution, but only up to a certain order, after which overfitting may occur.

B. IMAGING CONFIGURATION 2
In the following two examples, we considered the array comprising 240 half-wavelength dipoles, uniformly distributed on a circle of radius $R = 1.76$ m, as shown in Fig. 11. Out of the 240 antennas, only 8 operated as transmitters (denoted by red circles in Fig. 11).
The angular distance between adjacent transmitters was $\phi_{Tx} = 45^\circ$ and the angular distance between adjacent receivers was $\phi_{Rx} = 1.5^\circ$. The operating frequency was $f = 6$ GHz. The unknown targets were dielectric cylinders with the circular cross-sections shown in Fig. 12.
As the second example, we considered the cylinders from Fig. 12a. The focus of this study was the influence of the number of subdomains on the performance of the algorithm. The centers of the cylinders were at (−3.12 cm, 2.34 cm, 0) and (−1.56 cm, −1.56 cm, 0). Their radii were $R_1 = 1.17$ cm and $R_2 = 2.34$ cm, and their relative permittivities were $\varepsilon_{r1} = 2.32$ and $\varepsilon_{r2} = 1.47$.
The search space was a square of the size 25 cm × 25 cm with $64^2$ matching points in the whole investigation domain. We considered subdivisions of the search space into $N_x = N_y = 4$, $N_x = N_y = 2$, and $N_x = N_y = 1$ domains.
The experiments showed that for $N_x = N_y = 4$, $n_x \times n_y = 16^2$, polynomials of the order $Q \leq 2$ could not resolve the two targets. Also, increasing the polynomial order beyond $Q = 4$ did not improve the accuracy. Table 3 and Table 4 show the corresponding statistics for the MC and $C_\varepsilon$ coefficients. Fig. 13a and Fig. 13b show the images computed for $\gamma_1 = 0.15$ (the small-ripple region of the S-curve) and $\gamma_2 = 0.5$ (the horizontal part of the S-curve), respectively. The cylinders were easily identified, and the more intense color corresponds to the cylinder with the larger contrast. Finally, if there was only one domain, $N_x = N_y = 1$, $n_x \times n_y = 64^2$, the polynomial order had to be at least $Q = 16$. This caused severe instabilities, which manifested as a strong dependence on the regularization parameter value. As an illustration, Fig. 15 shows the images computed for two consecutive values of the regularization parameter. The numerical instability is also observed on the L-curve (Fig. 16a) and the S-curve (Fig. 16b). To conclude this part of the study, Fig. 17 shows the results obtained using the standard sparse processing and the TSVD method. As before, the sparse processing produced a pixelated image, while the TSVD image was characterized by artifacts. In the last example, we analyzed the scenario with pronounced multiple scattering (Fig. 12b). The Cartesian coordinates of the centers of the cylinders were (2.54 cm, 1.36 cm, 0) and (−2.35 cm, 1.56 cm, 0), respectively. The radii of the cylinders were $R_1 = 1.95$ cm and $R_2 = 1.76$ cm. The relative permittivities were $\varepsilon_{r1} = 3.44$ and $\varepsilon_{r2} = 2.99$, respectively. The search space was divided into $N_x = N_y = 4$ domains, with $n_x \times n_y = 16^2$ matching points in each domain.
As in the previous examples, we computed the quality metrics for different polynomial orders over the whole range of regularization parameter values. The coefficient $C_\varepsilon$ was almost independent of the polynomial order, while the best results for the ACC, MC, and DSC metrics were obtained for $Q = 2$. Table 5 and Table 6 show the results for the selected quality metrics. Fig. 18 shows the S-curve and the quality metrics computed for $Q = 2$. Fig. 19 shows the images computed for the characteristic values of the regularization parameter. Fig. 21 shows the results obtained using the sparse processing and the TSVD, in which the actual target shape is hardly recognizable.
Compared to the previous example (Fig. 12a), this reconstruction was more difficult due to the larger dimensions of the cylinders and the larger contrast. By keeping a low-order approximation, we estimated the number and shape of the targets, which was impossible with the TSVD and the standard sparse processing.

VII. PERFORMANCE ANALYSIS

A. ROBUSTNESS AGAINST NOISE
To study the influence of noise on the performance of the algorithm, we computed images of the targets from Fig. 5 using different signal-to-noise ratios (SNR). We calculated the SNR with respect to the power of the differential signal. Fig. 22 shows the obtained results. The two targets were discernible even in the case of severe noise; for SNR < −5 dB, the image quality started to deteriorate significantly. Fig. 23 shows the results obtained using the TSVD, which were much more sensitive to noise. (The truncation threshold used to regularize the TSVD was set to −15 dB.) We have also calculated the statistical parameters of the MC coefficient, using regularization parameter values $0.1 \leq \gamma \leq 0.5$, with a step $\Delta\gamma = 0.01$. The results shown in Table 7 supported the previous discussion: the calculated MC coefficients were stable for SNR ≥ −5 dB.
We observed similar behavior in the other two examples. Thus, we omitted those results.
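The noise model used above can be reproduced as follows; scaling complex AWGN to the power of the differential signal matches the SNR definition stated in this section:

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Corrupt a complex (differential) signal with AWGN so that the
    SNR, defined with respect to the power of the signal itself,
    equals snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    p_sig = np.mean(np.abs(signal) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)          # target noise power
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(signal.shape)
                                    + 1j * rng.standard_normal(signal.shape))
    return signal + noise
```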

B. COMPUTATION TIME
At a given frequency, the image resolution primarily depends on the number of unknown coefficients, $N_x N_y (Q + 1)^2$. We showed that when the number of subdomains is decreased, the order of the polynomial approximation has to be increased to keep a similar resolution. Moreover, our analysis indicated that using large polynomial orders on a single domain causes numerical instability, which was particularly pronounced for sparse targets imaged with a small number of measurements.
In this section, we studied the influence of the number of unknowns on the computation time. To that end, we considered the reconstruction of the targets from Fig. 5 using three different divisions of the investigation domain: $N_x = N_y = 1$, $N_x = N_y = 2$, and $N_x = N_y = 5$. Fig. 24 shows the results of the analysis. The shortest computation time was observed for the largest number of subdomains, $N_x = N_y = 5$. Fig. 25 shows the representative images for all three cases. The respective computation times were $t = 24$ s (625 unknowns), $t = 125$ s (484 unknowns), and $t = 255$ s (289 unknowns). In the last case ($N_x = N_y = 1$), the results diverged. Fig. 26 shows the computation times for the targets from Fig. 12a. Again, the shortest time was observed for the largest number of domains, $N_x = N_y = 4$. However, the difference among the three cases was smaller compared to the previous example. The computation times for 256 unknowns were $t = 46$ s ($N_x = N_y = 4$), $t = 59$ s ($N_x = N_y = 2$), and $t = 120$ s ($N_x = N_y = 1$).
Generally, the boundary constraints on the subdomain borders effectively reduce the number of unknowns and, consequently, lessen the computation time and improve the stability. The computation times of the TSVD were on the order of seconds on the same computer. Therefore, the higher resolution and immunity to noise came at the cost of longer execution times.

VIII. CONCLUSION
We have developed an algorithm that combines the polynomial approximation of the contrast and sparse processing. Unlike in previously published approaches, the support of the basis functions stretches over subdomains that divide the search space. To preserve the continuity of the contrast across the subdomain borders, we formulated constraints, which were included in the optimization function. The algorithm's performance was tested in several numerical scenarios using different quality metrics.
Another contribution of this work was a novel procedure for selecting the regularization parameter. The regularization parameter value associated with the knee of the L-curve, typically used as the optimal one, suppresses weak scatterers and leaves only the strongest scatterer in the image. Instead, we defined the S-curve, which provides information on the lengths of the L-curve segments associated with an increment of the regularization parameter value. We showed that selecting the regularization parameter from the S-curve region characterized by small ripples improves the image accuracy. Moreover, the shape of the S-curve indicated the sensitivity of the solution to the regularization parameter value: the larger the region with small ripples, the more stable the solution of the inverse problem.
Extensive testing demonstrated the robustness of the proposed method against noise. Accurate images were obtained at SNR values as low as −5 dB for the array with the smaller number of sensors. Noise suppression and high resolution are qualities stemming from the $l_1$ regularization, whose application was made possible by partitioning the investigation space. At the same time, the polynomial approximation of the contrast eliminated the pixelated nature of the images and thus extended the range of application of sparse processing.