Diffraction From Metallic Objects in Radar Imaging—Theory, Simulation, and Experimental Verification

The paper describes the modelling of diffraction effects in radar imaging scans at and near sharp-edged metallic objects when the aperture dimension is close to the dimension of the object to be scanned. We evaluate the Fresnel-Kirchhoff integral within the radar imaging process for a 1-D synthetic aperture radar. The resulting compact imaging model describes the diffraction effects in a single convolution-based formula. This is the basis for the simulative analysis, in which the derived model is compared to a numerical radar imaging model. Further, we test our derived diffraction model experimentally. The experiments confirm the developed theory and open up additional areas of application for diffraction-affected radar imaging, such as: high-precision cross-range measurement with micrometer accuracy; highly precise surface reconstruction unveiling surface uncertainties in the low double-digit micrometer range; and model-based radar image enhancement.


I. INTRODUCTION
Starting from airport scanners, millimeter wave imaging has become increasingly interesting for object scanning and nondestructive testing [1]. Besides the dynamic range of each channel, image quality is determined by wavelength and aperture size. The closer the desired resolution gets to the used wavelength, or the dimension of the aperture to the dimension of the object, the more coherent imaging effects such as diffraction have to be taken into account. The scope of this work is the modelling and compensation of diffraction effects, with an emphasis on their influence on the phase information of an FMCW radar. This work is thematically located at the intersection of the fields of radar imaging, phase-based radar measurement, and diffraction theory. Therefore, unusually, a brief outline of the prior art is given below for each subfield. The introduction concludes with the objective of the present work.

A. INTEGRATED RADAR AND RADAR IMAGING
Supported by an increasing level of integration, the use of radar imaging for industrial sensing becomes more and more feasible. Especially for nondestructive testing and contactless measurements, small and cheap radar sensors are currently starting to compete with other technologies such as optics or ultrasound. Depending on the used frequency, bandwidth, and aperture size, radar is capable of imaging within the millimeter range. Wherever high-precision measurements for screening, monitoring, and detecting are needed in combination with a harsh measuring environment, radar serves as a robust and reliable alternative to common optical imaging. For height measurements, space scanning, and environment surveillance in 3D, radar sensors have proven to be suitable for compact and effective measurements [2]. Furthermore, the increasing integration of radar sets a path towards system-on-chip approaches [3], [4], enabling smart solutions integrated on consumer electronic platforms to provide images based on radar scanning technology [5], [6], [7].

B. USE OF THE PHASE INFORMATION
However, the evolution on the hardware side is only one part of the story. Making effective use of the specific data, innovative radar image processing algorithms provide new quality levels for sensing and imaging. So far, radar images are typically visualized based on the amplitude information only. Using the phase information available for each point of the radar image enables sub-wavelength accuracy reaching down to several micrometers. Distance measurements based on phase information achieved a distance error of less than 4 µm using an 80 GHz radar system [8], [9], [10], [11]. In [12] a highly accurate radar-based vibrometer achieved a detection of audio frequencies up to 16 kHz using phase information. In [13] and [14] the potential of 2D phase evaluation for precise 3D surface imaging was demonstrated. Further, the phase is used for tomographic imaging of composite materials [15] or to detect the smallest defects of metallic surfaces in near-field radar scans [16]. In consequence, phase-based measurements surpass the precision limits imposed by the radar bandwidth to a large extent.

C. DIFFRACTION IN OPTICS AND FIRST APPEARANCE IN RADAR
However, phase and amplitude information are affected by diffraction effects in a synthetic aperture radar (SAR) scan. Especially for imaging purposes, diffraction effects blur the final image and disguise the ground truth information. At the same time, diffraction patterns depend heavily on the properties of the illuminated object [17]. In optics, this is used to determine microscopic displacements of a few nanometers [18], [19], the thickness or refraction index of thin films under predefined phase steps [20], [21], [22], the coherence length or the spectral profile of a given illuminating source [23], [24], and parameters of dielectric objects [25]. What those applications in optics have in common is a mathematical theory that enables a deeper understanding of diffraction effects and describes physical parameter dependencies [26], [27], [28]. The first study of SAR imaging involving diffraction effects was presented in [29]. The authors presented a vector network analyzer (VNA) based SAR setup to study surface scattering effects and isolate them from specular reflection via spatial-spectral filtering. In [30] the authors recovered the deformation of large reflector antennas via near-field amplitude data of an unmanned aerial vehicle (UAV) mounted radar and a physical optics model. Typical SAR applications do not deal with edge diffraction, as the aperture is sufficiently larger than the object and therefore no edge diffraction occurs. However, edge diffraction effects are of great interest, as they are influenced inter alia by the object properties and therefore contain information about those objects; or they disguise phase-based radar images near the object edges, which may be compensated.

D. OBJECTIVE AND OUTLINE OF THE PRESENT WORK
The objective of the present work is an analytical model of diffraction effects at and near object edges in the final radar image, with focus on the phase data, and an understanding of the dependence of these diffraction effects on measurement, image, and object properties. It is, to the best of the authors' knowledge, the first approach that theoretically models diffraction in radar imaging and that is verified by data from simulations and experiments. The physical optics model is based on the Huygens-Fresnel principle and the corresponding Fresnel-Kirchhoff diffraction integral [31], [32]. Finally, the diffraction integral is simplified to obtain a convolution-based formula, which may be implemented efficiently. Only this simplified representation makes the formula practical to apply.
As a first step we recall concepts from SAR imaging and calculate the diffraction effects in the final image. In the simulation section, the derived theory is compared to a full radar imaging model. Finally, the experimental verification is given, presenting the potential of the developed theory in different application areas.

II. THEORY
In what follows, we consider a monostatic frequency modulated continuous wave (FMCW) radar in a one-dimensional SAR imaging setup, i.e., the radar is moved along a one-dimensional trajectory to create a corresponding radar aperture. This is done intentionally: we will later consider diffraction at the edges, and for this only the imaging properties in this dimension are of importance. Furthermore, the notation is considerably simplified. We will then investigate the diffraction pattern of a one-dimensional metallic object in the final radar image generated with a modified reconstruction algorithm. While for a single antenna element the far-field approximation is still valid in the analyzed case, the object is in the near-field of the imaging array. Hence we will use a spherical wave approximation instead of a plane wave approximation.
Our starting point for the full radar imaging process is illustrated in Fig. 1. We can see the 1-D antenna array at r_a with a single transmitter-receiver element highlighted in red, the scatter area, idealized by Huygens sources at r_s, and the reconstruction grid at r_b. For a single antenna element, we approximate the transmitted wave by a spherical wave, as the distance to the object is sufficiently large and higher multipole moments may be discarded. In this case we obtain for a monostatic radar and a single scatterer the commonly used relation

s(t) = exp(j (4π/c) f(t) R). (1)

Here R is the distance between the scatterer and the antenna, f(t) = f_0 − (B/T) t describes the linear frequency chirp of the radar (with start frequency f_0, bandwidth B, and chirp time T), and c denotes the speed of light in vacuum. Note that we have neglected the amplitude factor R^(−2), as only the phase term will be of interest. In the case of a metallic object with front surface F, the signal may be described by the superposition of the signals corresponding to each scatterer of the front surface:

s(t, r_a) = ∫_F O(r_s) exp(j (4π/c) f(t) |r_a − r_s|) dr_s. (2)

This corresponds exactly to the physical optics (PO) model or Huygens model of scattering. Here r_a denotes the antenna position, r_s the corresponding scatterer position, and O(r_s) the reflectivity of the scatterer. We note that in optics this setting is comparable to near-field diffraction at an aperture with subsequent focusing by a lens [33]. In the present case the focusing is achieved digitally at arbitrary imaging points by a corresponding reconstruction algorithm. There exist various reconstruction techniques (range migration, etc.); in the present case we focus on the well-known back-projection algorithm. The algorithm acts as a sort of deconvolution for the reflectivity function O(r_s), i.e., we correct the phase of each signal and sum over all antenna positions.
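The signal model above can be sketched numerically. The following Python snippet is an illustrative sketch, not the authors' code: the sign convention of the phase and the default chirp time `T` are assumptions, and the reflectivity is taken as O ≡ 1, as later in the text.

```python
import numpy as np

def fmcw_signal(t, R, f0=80e9, B=24e9, T=1e-3, c=3e8):
    """Phase-only monostatic FMCW signal for a single scatterer at
    distance R (the amplitude factor R**-2 is neglected, as in the text).
    Assumes the relation s(t) = exp(j * 4*pi/c * f(t) * R)."""
    f = f0 - (B / T) * t                    # linear frequency chirp f(t)
    return np.exp(1j * 4 * np.pi * f * R / c)

def surface_signal(t, xa, xs, psi, c=3e8, **kw):
    """Superposition over Huygens sources at cross-range positions xs with
    range profile psi(xs), seen from an antenna at cross-range xa."""
    R = np.sqrt((xa - xs) ** 2 + psi ** 2)  # antenna-scatterer distances
    return sum(fmcw_signal(t, r, c=c, **kw) for r in R)
```

For a flat plate, `psi` is simply a constant array at the plate distance d; a rough surface enters through `psi = d + phi(xs)`.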
Hence, we obtain the image

I(r_b) = ∫_A ∫ s(t, r_a) exp(−j (4π/c) f(t) |r_a − r_b|) dt dr_a. (3)

Here A denotes the corresponding antenna aperture. As we consider a linear frequency chirp f(t), the inner integral may be simplified using the Fourier transform S(f, r_a) of the signal s(t, r_a) with respect to t. This yields expression (4), which corresponds to the standard implementation of the algorithm. Calculating the Fourier transform of (2) (which leads to the corresponding range profiles) and inserting the resulting term in (4) finally gives (5), where f_c = f_0 − B/2 is the center frequency of the radar. In what follows we assume O ≡ 1 on F and, due to the specific interest mentioned in the introduction, consider only the phase, resulting in expression (6) with k_c = (2π f_c)/c. The integral is not analytically solvable in the given form; we have to impose assumptions and use further approximations. As we consider only a 1-D setting, we define the aperture A and the scatterer surface F in Cartesian coordinates. This approach extends the usual back-projection approach by reconstructing on the scattering surface rather than in a plane. The surface is written as ψ(x_s) = d + φ(x_s), in which d denotes the approximate distance to the object and φ(·) describes a rough and bumpy structure. Using a similar approach as for Fresnel diffraction, i.e., a corresponding Taylor approximation of the distance terms, and reconstructing on the scatterer surface, we apply an analogous approximation to obtain the term necessary to model the diffraction effects. Inserting this expression into the double integral (6) and calculating the integral with respect to x_a allows the result to be reformulated as a convolution

I(x_b) = (f ∗ g)(x_b), (14)

where rect(α, β) is the rectangular function of the metallic plate with left and right borders α and β, and L is the length of the scanning aperture. If the radar parameters are fixed, the function f depends only on the plate parameters, whereas g includes the aperture size and the distance between object and aperture.
The formula in (14) describes the diffraction effects and their dependence on the introduced parameters. The diffraction arises mainly from the dimension-limited aperture, whose size is similar to the object size.
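The structure of the convolution model can be sketched as follows. This is only an illustration of the rect-times-kernel form, with a Fresnel-type quadratic-phase kernel as an assumption; the exact functions f and g follow (14) in the paper and may differ in detail.

```python
import numpy as np

def diffraction_profile(xb, alpha=-0.07, beta=0.07, L=0.19, d=0.30,
                        fc=80e9, c=3e8):
    """Sketch of the convolution model (14): a rect over the plate
    [alpha, beta] convolved with a Fresnel-type quadratic-phase kernel
    whose support is set by the aperture length L (kernel form assumed)."""
    kc = 2 * np.pi * fc / c
    dx = xb[1] - xb[0]
    f = np.where((xb >= alpha) & (xb <= beta), 1.0, 0.0)  # plate rect
    u = xb[np.abs(xb) <= L / 2]                           # aperture support
    g = np.exp(1j * kc * u ** 2 / d)                      # Fresnel kernel (assumed)
    return np.convolve(f, g, mode="same") * dx
```

Because the model is a single convolution, it can be evaluated in one pass over the image grid, which is what makes the later fitting loop fast.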

A. LIMITS OF THE APPROXIMATION
We briefly want to discuss the validity of the approximation.
To this end, we use the method of stationary phase, which asserts that the main contribution of a rapidly oscillating integral such as (5) arises from the stationary points of the phase function [31]. A short calculation shows that the stationary points satisfy the specular reflection condition, which corresponds exactly to the geometrical optics limit. Hence, for surface profiles ψ we may assume that x_s ∼ x_a ∼ x_b, which justifies the use of the Fresnel approximation in the case that the aperture size is not significantly larger than the object size. In contrast to ray tracing approaches such as geometrical optics (GO) or the geometrical theory of diffraction (GTD), PO and the physical theory of diffraction (PTD) use the Kirchhoff formula, which gives us a closed integral representation of the reflected signal for multiple antenna positions (2). The advantage of the PO approach is that it may easily be developed further to obtain the simple convolution integral (14). For a more precise approximation of the Kirchhoff formula, the PTD can be used. It makes use of an additional term [32], which has a significant contribution only if the antenna is positioned near the edges of the object. As we consider only small apertures, and as mainly the phase of the final signal is of interest for the imaging algorithm, we use the PO ansatz. To summarize, we are mainly considering the diffraction effects caused by the imaging algorithm for small apertures rather than those of the object itself. We note that our derived model covers the near-field and the far-field. However, imaging is only suitable in the near-field, because only near-field images conserve the geometrical and surface properties. Hence, we derived an analytic solution by solving one integral analytically and transforming the other one into a computationally cost-efficient convolution expression, which is advantageous for the upcoming simulation and measurement sections.

III. SIMULATION
In the following section, we compare the mathematical model of (4) with our derived approximation in (14). Simulating a diffraction profile using (4) takes 1.37 s, in contrast to our derived model, which takes 0.031 s in the same hardware environment (Intel(R) Xeon(R) CPU E3-1505 v5 2.80 GHz, 48 GB RAM, MATLAB 2020a). This will be advantageous in terms of efficiency in the later evaluation section.
In the simulation section, we focus on the degree of agreement between simulation results of (4) and (14). Furthermore, the influences of the measurement properties on the diffraction patterns are investigated. The following simulation parameters are assumed: a radar center frequency of 80 GHz with a bandwidth of 24 GHz, a scanning aperture L of 0.190 m, and an ideal metal plate at a distance d of 30 cm with edge positions α, β at ±7 cm.
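For orientation, the resolution scales implied by these parameters can be checked with straightforward arithmetic (using c ≈ 3·10^8 m/s):

```python
c = 3e8            # speed of light [m/s]
fc = 80e9          # center frequency [Hz]
B = 24e9           # bandwidth [Hz]

range_res = c / (2 * B)   # bandwidth-limited range resolution: 6.25 mm
lam_c = c / fc            # center wavelength: 3.75 mm
```

The sub-wavelength accuracies reported later thus lie far below the 6.25 mm bandwidth-limited range resolution, which is exactly what the phase evaluation enables.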

A. PERFECTLY EVEN SHARP-EDGED OBJECT
An ideal sharp-edged object results in the diffraction pattern depicted in Fig. 2 and Fig. 3. The amplitude in Fig. 2 is normalized to the values of the mid-segment of the object, since this area is least influenced by diffraction. For material characterization, the phase information is of particular importance, as it contains the height information of the surface. From here on we will focus on this property. In Fig. 3 we can see the influences of diffraction on the ground truth object. For visualization purposes, only the leftmost edge is shown. The diffraction patterns along the plate can be seen clearly. The black graph represents the ground truth. As can be seen, the diffraction disguises the ground truth and therefore restricts high-quality surface monitoring. Our approximation matches (4) well, especially at the edges of the object, as originally intended. In the mid-segment of the object (Fig. 4), the derived approximation matches less well. Considering the simplification steps and the order of magnitude of a few single-digit µm, the derived approximation fits nicely at the edges of the metallic plate.

B. INCLINED PERFECTLY EVEN PLATE
In an experimental setup the object is typically never perfectly aligned with the scanning synthetic aperture. This causes a different diffraction pattern, because the edges of the object are closer to, or further away from, the synthetic aperture. To visualize this effect, an inclined ideal metallic plate is simulated. Here we assume a position and inclination of the plate with ψ(x_s) = d − 0.05 x_s. This results in Fig. 5. We observe that the diffraction pattern clearly differs from the previous simulation. Recall that in the back-projection algorithm we assumed a reconstruction on those voxels which correspond to the location of the plate. That is why we do not notice any overall inclination. The inclination becomes visible in the asymmetry of the graph at the edges of the image in Fig. 5. The left edge is closer, resulting in a stretching of the graph. In contrast, the segment on the right is more compressed because it is further away from the scanning aperture. Still, the derived approximation matches (4) well.

C. NON PERFECTLY MATCHING RECONSTRUCTION AREA
In a controlled simulation environment, the reconstruction area for the back-projection (BP) algorithm matches the location of the object exactly. However, in the experiment the reconstruction area does not match the location of the object perfectly. In this section, we analyze the influences of non-matching reconstruction areas. The mismatch was applied only to the full model simulation. In Fig. 6 we can see the result if the distance of the reconstruction area differs by 50 μm from the set distance of 30 cm. This implies an overall offset, but the diffraction effects remain unchanged. In the upcoming technical approach, this offset must be removed. A similar effect can be seen if an inclined object is reconstructed with a slightly different slope of the reconstruction area. A difference of the reconstruction slope of just m_x = 0.001 results in Fig. 7. We notice an inclination of the image due to the mismatch, which can easily be removed via detrending. The slope information of the object is still preserved in the shape of the diffraction. As in the previous section, the stretching and compressing at the edges is still recognizable.
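The detrending step mentioned above can be sketched in a few lines; this is a minimal polynomial detrend, not the authors' implementation, and the polynomial order is an illustrative choice.

```python
import numpy as np

def detrend(profile, x, order=1):
    """Remove a polynomial trend (offset or inclination caused by a
    mismatched reconstruction area) while preserving the diffraction
    pattern riding on top of it."""
    coeffs = np.polyfit(x, profile, order)
    return profile - np.polyval(coeffs, x)
```

Because least-squares fitting is linear, subtracting the fitted polynomial removes the offset and slope while leaving the oscillating diffraction contribution essentially untouched.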

IV. EXPERIMENTAL VERIFICATION
To validate the derived theory, we scan a rectangular plate and determine all parameters defined in the theory. The plate dimensions are good parameters for the verification because they can be measured in advance with a caliper. As mentioned in the introduction, diffraction effects disguise the object structure characteristics. Therefore, we compensate diffraction effects and analyze the profile structures at the edges of the object. Furthermore, to prove our model we re-sharpen a phase-based radar image, which is affected by diffraction effects as well. At first, the experimental setup is explained, followed by the method description. The experimental verification closes with the discussion of the experimental results.

A. EXPERIMENTAL SETUP
For verification purposes the test environment in Fig. 9 was used. The corresponding sketch of the experimental setup can be seen in Fig. 8. The radar is attached to a sledge, which moves automatically in the x and y directions via a linear motor, as can be seen in Fig. 9. In this particular case, the step size of the movement was set to 1.6 mm. The radar has a center frequency of 80 GHz and a bandwidth of 24 GHz with a mounted 120° open waveguide antenna. The radar is based on the work of [10]. The collected signal s(t, x, y) for each scanning position, as illustrated in Fig. 8, is saved for further processing steps. As measurement object a metallic, rectangular plate was used, whose dimensions are 12.675 cm in length and 12.71 cm in height. The metal plate has a reflectance value of −1 and an absorption constant of 0. The white color is a thin layer which is transparent for the given radar frequency. The whole object setup was placed 30.37 cm away from the radar and as parallel as possible. The synthetic aperture was scanned with an array size of 0.19 m centered at the midpoint of the plate. Fig. 10 shows the flow chart of the algorithm, which we will discuss in more detail below. We start with an initial BP, where the collected s(t, x, y) is used to reconstruct the image data set I(x, y, z) in Cartesian coordinates. Using the image data, a first approximation of the location of the measurement object is calculated. To this end, the following approach is used: In a first step, the object is projected onto the respective two-dimensional planes, resulting in three different contour images. The smallest rectangles encompassing each of the contour images are determined. The rectangles are used to refine the location of the measurement object and crop the original image data set I(x, y, z) to the desired size. After an interpolation in the range direction (based on zero-padding) we may calculate a first approximation of the surface ψ̃(x, y) = argmax_z I(x, y, z).
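The two preprocessing steps described above, the bounding rectangle of a contour image and the argmax surface estimate, can be sketched as follows. This is an illustrative sketch; the cropping and zero-padding interpolation steps of the flow chart are omitted.

```python
import numpy as np

def bounding_rect(mask):
    """Smallest axis-aligned rectangle enclosing a binary contour image,
    used to crop I(x, y, z) to the object region.
    Returns (row_min, row_max, col_min, col_max)."""
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

def first_surface_estimate(I):
    """First surface approximation: for every (x, y) pixel take the range
    bin with maximum magnitude, i.e. argmax_z |I(x, y, z)|."""
    return np.argmax(np.abs(I), axis=2)
```

With interpolation in the range direction beforehand, the argmax grid becomes fine enough to serve as the starting surface for the plane fit in the next step.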

B. EVALUATION ALGORITHM
In a next step we approximate the calculated surface by a plane a_0 + a_1 x + a_2 y for suitably chosen parameters a_0, a_1, a_2 ∈ R. Then we apply the back-projection algorithm to reconstruct on the plane, resulting in a data set I(x, y). Hence, we may easily detect the edges of the object, α_{x,y} and β_{x,y}, and the corresponding width and length via |I(x, y)|. Estimated width, length, corresponding edge positions, the data set I(x, y), and the estimated surface location are collected in a very first step after the preprocessing. To calculate the height, width, and location of the object we use the one-dimensional theory developed in Section II. To this end we fix y_0 ∈ R (the case for fixed x_0 ∈ R follows likewise) and set ψ(x) = ψ̃(x, y_0). The parameter d is estimated by the mean distance of ψ(x). The loop calculation contains a fitting step, a parameter-definition step, and a detrend step. Since the profiles were extracted from a two-dimensional data set, the profiles I(x, y_0) or I(x_0, y) contain diffraction contributions not only from the opposite edges, but also from the perpendicular edges. This results in a kind of beat on the profile, which can be compensated by a polynomial detrend. At the same time, the tolerances and weightings for the final fitting are defined with the assistance of the previously estimated parameters. Here the tolerances are ±1 cm for the edges α and β, ±3 mm for the distance d, ±0.2 for the inclination m of the object, and ±2 mm for the antenna aperture L. Since the approximation model applies at the edges, the measured values near the edges are weighted higher. Especially the definition of the tolerances is of utmost importance. The function to be minimized has a large number of zero crossings. These are returned as undesired global minima if the fit function is not properly weighted and the tolerances are not well defined. We assume that the positions of the edges have the greatest impact on the diffraction.
Thus, we set a greater tolerance for the edge values and ensure that these values have the largest margin within the fitting process. The parameters distance and antenna aperture are within the technical capabilities of the radar imaging setup. Finally, the derived theoretical model (14), I_mod, is fitted to the prepared and extracted profiles I_Mes (I(x, y_0) or I(x_0, y)) by minimizing the least-squares error [34]. Within the above analysis algorithm, the limiting factor is the sampling distance of I_mod and I_Mes.
For example, if the true edge value to be determined lies in between two sampling values, the fitting is limited by the sampling distance. To ensure that the accuracy is limited by the method rather than the sampling, the corresponding profiles I_Mes and I_mod were interpolated, ideally to λ_c/4, which corresponds to the physical resolution limit. The fitting process is the most time-consuming step. Here, the convolution expression of (14) comes in quite handy. Assuming the worst case, the fitting process for one profile is cancelled automatically after 500 iterations, resulting in a total fitting time of 15.5 s using (14), in contrast to 11 min 25 s using (4).
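Two helper steps of the evaluation algorithm, the plane approximation of the surface and the λ_c/4 resampling of the profiles, can be sketched as follows. This is an illustrative sketch, not the authors' code; the linear-interpolation choice in the resampling is an assumption.

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares plane z ~ a0 + a1*x + a2*y through the surface
    estimate, giving the parameters a0, a1, a2 used for reconstruction."""
    A = np.column_stack([np.ones_like(x), x, y])
    (a0, a1, a2), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a0, a1, a2

def resample_quarter_wavelength(x, profile, fc=80e9, c=3e8):
    """Interpolate an extracted profile onto a lambda_c/4 grid so the
    fitting is not limited by the original sampling distance."""
    step = c / fc / 4                       # lambda_c / 4, about 0.94 mm
    x_new = np.arange(x[0], x[-1], step)
    return x_new, np.interp(x_new, x, profile)
```

After resampling, both I_Mes and I_mod live on the same fine grid, so the edge positions found by the least-squares fit are no longer quantized to the 1.6 mm scan step.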

1) HIGHLY PRECISE CROSS-RANGE MEASUREMENT
In Fig. 11 the final image is depicted. It represents the front view of the scanned object as amplitude values along the x- and y-axis. The characteristic diffraction pattern propagates along both dimensions. At the top, the diffraction pattern is more stretched, indicating that the plate was leaning towards the radar, in contrast to the x-dimension, where a nearly parallel alignment was achieved. To obtain the following results, the data in Fig. 11 are processed for each column (x-position) and row (y-position) separately. In Fig. 12 we can see the result for the width parameter at each y-position. It shows a nearly constant width distribution except at the borders, which correspond to the ends of the metallic plate. Excluding these border samples, we obtain an average width of 12.605 cm with a standard deviation of 138.11 μm. This agrees well with the measured width of 12.675 cm. For comparison, an edge detection (by means of a first derivative) obtained a width of 13.28 cm and a height of 9.94 cm, since it searches for a local maximum of the image, which does not necessarily match the true edge position. The slope in the x-direction is determined as −0.0012, which equals −0.072°, with a standard deviation of 0.170°, as shown in Fig. 13. Outliers at positions 0.39 m and 0.42 m are assumed to be optimization artifacts in which the algorithm found another local minimum. Good results could be achieved in the height direction as well. Those results can be seen in Fig. 14 and 15.
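The statistics above (mean and standard deviation with the border samples excluded) reduce to a short helper; the number of excluded border samples here is an illustrative choice, not a value from the paper.

```python
import numpy as np

def width_statistics(widths, n_border=5):
    """Mean and standard deviation of the fitted width per row, excluding
    the border samples that correspond to the ends of the plate."""
    core = np.asarray(widths, float)[n_border:-n_border]
    return core.mean(), core.std()
```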
With an average of 12.713 cm and a standard deviation of 246.67 μm, the determined height is in very good agreement with the measured value. Further, the described inclination is determined with an average slope of −0.0789.

2) DIFFRACTION COMPENSATION FOR SURFACE MONITORING
Another technical application is the compensation of diffraction in the resulting image. As mentioned in the theory section, diffraction effects disguise the underlying ground truth, e.g., the original surface roughness. In Fig. 16 we can see the phase information of a profile of the first metallic plate. Especially at the edges of the object, diffraction distorts the ground truth. To enhance the image and get closer to the ground truth, our model can be subtracted from the measurement, resulting in the red graph. Before doing so, the signal is cropped according to the determined edge positions. Additionally, the signal is processed with a moving average filter, reintroducing the smoothing effect that was lost by neglecting the sinc in (5). As a result, the image is compensated, revealing further diffraction effects caused by the true surface roughness.
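The compensation step (subtract the model, then smooth) can be sketched as follows; the window length of the moving average is an illustrative assumption.

```python
import numpy as np

def compensate(measured, model, window=5):
    """Subtract the fitted diffraction model from the measured phase
    profile and apply a moving-average filter, which reintroduces the
    smoothing lost by neglecting the sinc term."""
    residual = measured - model
    kernel = np.ones(window) / window
    return np.convolve(residual, kernel, mode="same")
```

What remains after compensation is the residual phase, i.e. the contribution of the true surface roughness rather than the edge diffraction.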

3) DIFFRACTION COMPENSATION FOR IMAGE ENHANCEMENT
The compensation can be used to enhance images. Fig. 17 shows a metal plate with a 40 µm deep inscription, and Fig. 18 shows the resulting phase image. For the resulting image, the aperture was chosen large enough to enforce diffraction only at the top of the object. As a visualization, we used the relief image approach. Originally used for the amplitude data of lidar in topography, it shows great potential for the visualization of SAR phase data [35]. In Fig. 18, diffraction disguises the inscription, especially the logo and the middle section of each letter. Using our derived theory, we subtract the model from the original data, which leads to Fig. 19. Both images are processed with the same image algorithm and corresponding parameters. However, the inscription in the second image is much clearer and richer in contrast. Our model matched the upper diffraction, and subtracting it reveals the originally disguised inscription.

V. CONCLUSION
In this paper, we derived an approximation of the radar imaging process to describe diffractive phenomena. Our approximated model describes the process from mechanical scanning to image reconstruction. We compared our derived model with (4) and discussed the influences of several important parameters. To verify our model, we scanned an almost perfectly symmetric plate and determined its size and location. Considering a cross-range sampling of λ_c/4, we determined the width and height of the object to averages of 12.605 cm and 12.713 cm with standard deviations of 138.11 µm and 246.67 µm, which agree well with the measured values of 12.675 cm and 12.71 cm. Compared to a derivative-based edge detection algorithm, our method achieved better cross-range accuracy, which could be used in the quality assurance of metalworking industries. Furthermore, we could measure the inclination in the horizontal and vertical directions, proving a parallel alignment in the width direction and a negative inclination in the height direction. Furthermore, we showed the possibility of phase image enhancement via diffraction compensation, which shows good potential for technical applications in terms of high-precision surface monitoring. So far, the 1D model is applied to a 2D data set. Future work will focus on the extension of the theory into the second dimension. This may improve the fitting results at the corners of the object, where each edge contributes equally to a given profile. To determine the edge positions of objects with more complex shapes and/or apertures, the theory can easily be reduced to a single-side fitting approach. At the moment, the model fits the diffraction effects at the edges of the object, as mentioned at the beginning. Future work will focus on an analytical approach to enable the model to describe diffraction effects far away from the edges. A possible extension may also include dielectric objects by using a complex reflection function.
Using this approach, diffraction effects in SAR imaging, experimentally investigated in [36], might be corrected similarly to applications in microwave ellipsometry [37].