Introduction
Radio frequency (RF)-based sensing systems have long been used for aircraft monitoring, meteorological radar, and synthetic aperture radar (SAR) imaging [1], [2], [3]. With the use of millimeter wave (mmWave) and terahertz (THz) frequency bands, radar systems have started to receive significant attention in indoor applications, such as in-cabin monitoring, occupancy sensing, gesture recognition, smart factories, healthcare, and home robotics [4], [5], [6], [7], [8]. The reasons are twofold. First, wide bandwidth can provide sufficient resolution for applications in indoor scenarios [9], [10], [11]. Second, radio-based sensors are believed to outperform camera-based ones in terms of privacy protection and robustness [12].
To design and optimize wide-band indoor RF systems, knowledge of channel models is essential [13]. However, the channel characterization of indoor radar systems differs from that of outdoor systems for two main reasons. First, because of the dense multipath in indoor environments, the influence of clutter is more severe than outdoors [14]. Second, with the rise of data-driven applications, e.g., human motion detection and gesture classification, the modeling of Doppler and micro-Doppler in dynamic scenarios is becoming a new requirement [15]. Furthermore, the emerging integrated sensing and communication (ISAC) technology will be a key vertical in the coming B5G communications era. As a result, an understanding of radar channel features, although not limited to indoor scenarios, is being sought by a wider audience [16].
Conventionally, the channel characteristics can be obtained by field measurements and simulations. However, measurements can be time-consuming and expensive, especially in high-frequency bands. Consequently, efficient channel simulation techniques offer a compromise [17]. Geometry-based simulations are popular for their balance of accuracy and computation time. Some of them include deterministic ray tracing (RT) [18], [19], stochastic propagation graphs [20], [21], and hybrid semi-deterministic methods [22], [23]. They make use of optical-ray-based geometrical principles to mimic electromagnetic wave propagation. Such channel simulation methods have long been used and validated in wireless communications [24], [25] and source localization [26], [27]. However, conventional simulators concentrate on wireless parameters and lack the emulation of radar characteristics like Doppler and micro-Doppler. Because communication systems preferentially focus on large-scale coverage and channel coherence time, wireless communication channel simulators usually simulate discrete positions along the users' tracks [28], [29], [30] and cannot satisfy the target-centric radar requirements of some emerging research trends, e.g., the abovementioned ISAC and radio-based AI applications.
In this article, we aim to provide a radar simulator capable of generating both the multipath of indoor environments and the dynamic Doppler and micro-Doppler due to non-point target motion.
A. Related Works of Radar Simulation
To achieve Doppler changes due to realistic motion, Krebs et al. [34] and Pavlakos et al. [35] utilize field measurement and computer vision, respectively, to record skeleton velocities of typical human motions and set up motion-capturing (MoCap) databases. These MoCap databases can be used to generate synthetic radar signals. However, MoCap databases lack consideration of the environment, whereas the multipath due to the environment cannot be omitted for indoor applications, especially considering realistic radio propagation.
To address this, some recent studies try to utilize animation software, e.g., Blender [32], [36], [37] and Optix [31], to model the environment. As a follow-up, Costa et al. [33] enhance the Blender-based models by including multibounce rays in the simulation, i.e., multipaths via more than one reflection/scattering by the target and environment. However, gaps still exist in the state of the art. Many works are motivated by the automatic driving industry and focus on outdoor scenes, e.g., the OptiX-based ones [31] and FEKO [38]. Thus, they simulate fewer multipaths. Further, most of them fall short of generating micro-Doppler due to gesture motions of the targets [31], [32], [33], [36].
B. Contributions of This Article
In this article, an image rendering-based mmWave multiple-input multiple-output (MIMO) radar simulator built on Blender is developed for indoor applications. By importing the MoCap database and using the animation tools, we can capture even the slight changes in propagation paths due to the gestures and motions of the target as well as the interactions with the environment. The differences between the existing literature and the proposed work are summarized in Table I. The detailed contributions of this work are as follows.
Using a Blender extension to import the realistic human motions in AMASS [39] and rendering propagation paths containing both velocity and environment effects.
Identification of appropriate Blender outputs and subsequent mapping of the frame rates of Blender animation with the fast time and slow time of the frequency modulation continuous wave (FMCW) waveform. The sampled beat-signal models of the commonly used orthogonal MIMO modes, e.g., time-division multiplexing (TDM), code-division multiplexing (CDM), and frequency-division multiplexing (FDM), are derived based on the RT outputs of each frame.
Applying a virtual array generation method to accelerate MIMO antenna simulation based on the array geometry, the field-of-view (FoV) range settings, and the FoV resolution.
Validating via field measurements of pedestrian scenarios in both an anechoic chamber and an office corridor using mmWave-band TDM-MIMO sensors. The important radar images, i.e., the range-angle map (RAM), the range-Doppler map (RDM), and the continuous time-Doppler velocity, are measured, simulated, and compared. Besides, a clear observation of the micro-Doppler phenomenon due to the swings of arms and legs is presented in the simulation.
This study can serve as a guide for generating dynamic digital maps for wireless channel simulations. For RT simulation frameworks like [36], the number of propagation paths may change between frames; therefore, it is challenging to identify the velocity of each path. The proposed method exploits the fixed number of pixels in each image to generate the same number of paths in each frame and to calculate the velocity due to target motions, i.e., the velocities are obtained by differentiating the distances of each pixel across frames. Since a pixel does not always correspond to the same scatterer, the calculated velocity is perturbed by pixel migration. We therefore use a velocity filtering method to reduce the noise due to pixel migration.
The rest of this article is organized as follows. Section II introduces the simulation chain and the outputs of Blender. Section III derives the sampled beat signal models and maps with Blender outputs. Section IV gives the measurement-based validation and Section V concludes this article.
The following notations are used throughout this article: lower-case (upper-case) bold characters denote vectors (matrices).
Image Rendering-Based Radar Simulation Chain
Fig. 1 illustrates the process considered in this article to generate FMCW signals, which will be detailed in the sequel. A cursory look indicates the two-step methodology wherein an effective depiction of the scenario using the critical target parameters, i.e., range, angle, Doppler, and reflectivity, is first carried out using image rendering, and these parameters are subsequently used to generate FMCW signals using a model-based approach. The different steps of image rendering are elaborated below; Section III deals with the generation of radar signals.
A. Rendering Methodology
1) Scenario Modeling Using Blender:
The first step of scenario modeling requires emulating the objects in the real scene in software by appropriately defining the material properties, their geometric size, orientation, and motion [40]. Blender is an open-source animation tool [41] offering flexibility in incorporating such scenes using native animation tools or through well-defined interfaces. In addition to the pre-set scene, the view settings define the perspective. The light source and camera of Blender are regarded as the transmitter (Tx) and receiver (Rx), respectively, where the FoV is regarded as the ideal Rx antenna pattern. Here, the Rx is used as an idealized directive antenna with unit gain throughout the FoV and no reception outside the FoV (i.e., without sidelobes outside the FoV). In this article, we deal with monostatic radars, and hence the light source and camera are colocated. However, if the positions of the light source and camera are separated, the method can be extended to simulate bistatic systems.
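To make this setup concrete, the following is a minimal Blender-Python sketch of such a monostatic configuration, where a spot light (Tx) is colocated with the camera (Rx). All numeric values (positions, FoV, resolution, frame rate) are illustrative assumptions, not the settings used in this article.

```python
import math
import bpy  # Blender's Python API; run inside Blender

scene = bpy.context.scene

# Camera acts as the Rx with an idealized pattern over its FoV.
bpy.ops.object.camera_add(location=(0.0, -4.0, 1.0),
                          rotation=(math.radians(90), 0.0, 0.0))
cam = bpy.context.object
cam.data.angle = math.radians(90)          # horizontal FoV (assumed value)

# Spot light acts as the Tx, colocated with the camera (monostatic radar).
bpy.ops.object.light_add(type='SPOT', location=tuple(cam.location),
                         rotation=tuple(cam.rotation_euler))
tx = bpy.context.object
tx.data.spot_size = cam.data.angle         # match Tx illumination cone to Rx FoV

scene.camera = cam
scene.render.resolution_x = 256            # N_az pixels (assumed)
scene.render.resolution_y = 128            # N_el pixels (assumed)
scene.render.fps = 30                      # frame rate R_frame (assumed)
```

Separating the light and camera locations in this script is exactly the extension to bistatic systems mentioned above.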
2) Image Rendering Using RT:
The operation of the second step is image rendering, where the dynamic 3-D scene is converted to sequential 2-D images. Each image consists of a certain number of pixels and each pixel can be regarded as a point object. The RT embedded in Blender is utilized to trace the propagation paths among Tx, Rx, and each point object, from which the desired parameters like the spatial, angular, and Doppler information can be extracted. Henceforth, in this article, a rendered image is also called a frame; this forms the basic processing unit in Blender.
Each frame is rendered to an image of $N_{\text{el}} \times N_{\text{az}}$ pixels, indexed by the elevation index $n_{\text{el}}$ and the azimuth index $n_{\text{az}}$.
B. Image Rendering: Parameters, Their Derivation, and Visualization
1) Distance and Strength Matrices:
In a dynamic scenario, the distance and strength of each pixel vary with the frames. To obtain these parameters, we begin by denoting the 3-D location of the point object seen by pixel $(n_{\text{el}},n_{\text{az}})$ in frame $n_{\text{frame}}$ as \begin{equation*} \mathbf{l}_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}=\left[x_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}\ y_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}\ z_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}\right]^{\text{T}}. \tag{1}\end{equation*}
With $\mathbf{l}_{\text{tx}}$ and $\mathbf{l}_{\text{rx}}$ denoting the Tx and Rx locations, the equivalent one-way distance of each pixel is \begin{align*} R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}} = \frac{\|\mathbf{l}_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}-\mathbf{l}_{\text{tx}}\|_{2}}{2} + \frac{\|\mathbf{l}_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}-\mathbf{l}_{\text{rx}}\|_{2}}{2}. \tag{2}\end{align*}
With $P_{t}$ the transmit power, $G_{t}$ the Tx antenna gain, $A_{\text{eff}}$ the effective aperture of the Rx, and $\sigma_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}$ the reflectivity of the pixel, the received strength of each pixel follows the radar equation \begin{equation*} P_{r_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}} = \frac{P_{t}\, G_{t}\, \sigma_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}\, A_{\text{eff}}}{\left(4\pi\right)^{2} R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}^{4}}. \tag{3}\end{equation*}
2) AoA of Each Pixel:
With $\theta_{\text{bw}}$ and $\phi_{\text{bw}}$ denoting the azimuth and elevation FoV widths of the camera, the azimuth and elevation AoAs of pixel $(n_{\text{el}},n_{\text{az}})$ are \begin{align*} \Theta_{n_{\text{el}},n_{\text{az}}} &= \theta_{\text{bw}}\left(-\frac{1}{2}+\frac{n_{\text{az}}-1}{N_{\text{az}}-1}\right), \quad N_{\text{az}} > 1 \\ \Phi_{n_{\text{el}},n_{\text{az}}} &= \phi_{\text{bw}}\left(-\frac{1}{2}+\frac{n_{\text{el}}-1}{N_{\text{el}}-1}\right), \quad N_{\text{el}} > 1. \tag{4}\end{align*}
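As an illustration, a short NumPy sketch of the pixel-to-AoA mapping in (4) might look as follows; the FoV widths and pixel counts are assumed values.

```python
import numpy as np

def aoa_grid(n_az=256, n_el=128,
             theta_bw=np.deg2rad(90), phi_bw=np.deg2rad(45)):
    """Map pixel indices to azimuth/elevation AoAs per (4)."""
    az = theta_bw * (-0.5 + np.arange(n_az) / (n_az - 1))  # Theta_{n_el,n_az}
    el = phi_bw * (-0.5 + np.arange(n_el) / (n_el - 1))    # Phi_{n_el,n_az}
    # Broadcast to (N_el, N_az) matrices: one (azimuth, elevation) pair per pixel.
    return np.meshgrid(az, el)

Theta, Phi = aoa_grid()
```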
3) Visualization of Image Rendering:
The rendering results of the distance and strength matrices can be mapped to the AoA matrices using (2)–(4), respectively. For example, Fig. 2 represents the heat map of the distance matrix1 as a function of elevation and azimuth angles. Fig. 2(a), (c), and (e) show the Blender models of a pedestrian walking, turning, and an office scenario, respectively. Fig. 2(b), (d), and (f) illustrate the heat maps of the angle-distance matrices of each scenario, respectively, where the x- and y-axes denote the azimuth and elevation angles, and the color denotes the distance of each pixel. Furthermore, multiple bounces can also be rendered, and the user can define their maximum order in the rendering engine; e.g., in the indoor office scenario shown in Fig. 2(f), the mirror images of the human and furniture due to multiple scattering can be observed clearly on the left and back walls. It is also worth mentioning that the one-bounce simulation is constrained by the FoV of the camera. However, the multiple bounces are not necessarily constrained by the FoV, and the simulator is applicable in multiple-target scenarios. This rendering offers a reference for the corresponding images subsequently created using radar signals.
Fig. 2. Modeling in Blender and rendering outputs, represented by heat maps as a function of elevation and azimuth angles, where the color bar denotes the distance in meters. (a) Walking toward the camera. (b) Heat map of the distance matrix. (c) Turning direction. (d) Heat map of the distance matrix. (e) Human walking in an office scenario. (f) Heat map of the rendered distance matrix after multiple bounces.
In summary, we first define the FoV of the camera (i.e., the image view as seen by the camera) and the number of pixels in this image. This allows us to calculate the AoA from each pixel to the camera using (4). The AoA of the multipath is obtained similarly using the pixel that is involved in that multipath.
C. Velocity Calculation and Mitigation of Pixel Migration
The frame-to-frame difference in distance can be used for velocity calculation. With $T_{\text{frame}}$ denoting the frame interval, the Doppler velocity of each pixel is \begin{equation*} V_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}} = \frac{-2\left(R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}} - R_{n_{\text{frame}}-1,n_{\text{el}},n_{\text{az}}}\right)}{T_{\text{frame}}} \tag{5}\end{equation*}
Fig. 3. Velocity and strength heat maps.
As objects move, the same pixel in subsequent frames tends to represent different scatterers; we refer to this as pixel migration. By setting short frame intervals, we assume that a particular pixel in a frame continues to represent the same scatterer in the subsequent frame (i.e., the motion is confined within the pixel). Then the difference in distance for the same pixel in subsequent frames (i.e., pixels with identical AoA indices), obtained from Blender, can be used for velocity calculation. To limit pixel migration on the basis of this discussion, several settings in Blender need to be chosen carefully. As a case in point, a higher number of pixels for a given frame size leads to a smaller size of each pixel, thereby aggravating pixel migration; conversely, faster frame rates reduce the motion between consecutive frames and thus mitigate it.
Nevertheless, pixel migration is inevitable and results in large velocity estimates, especially at the edges of the target. Taking the human walking scenario in Fig. 3(a) as an example, the velocities of some edge pixels are much greater than those of a walking human, which are typically less than 2 m/s. Considering that the micro-Doppler velocity may be greater than the main body's velocity, a threshold of 6 m/s is applied. The velocity image after filtering is shown in Fig. 3(b), where the extremely large velocities due to pixel migration are filtered out. It is also worth noting the signal strength image in Fig. 3(c): the signal strength of edge pixels is rather weak compared to the trunk, which prevents severe noise even if the pixel migration is not totally filtered out.
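A minimal sketch of the velocity computation in (5) together with this threshold filter, assuming the rendered per-pixel distances are stacked in a single NumPy array:

```python
import numpy as np

def doppler_velocity(R, frame_rate=30.0, v_max=6.0):
    """R: (N_frame, N_el, N_az) stack of rendered per-pixel distances.

    Returns per-pixel Doppler velocities per (5), with velocities above
    the 6 m/s threshold (attributed to pixel migration) zeroed out.
    """
    # (5): V = -2 * (R[n] - R[n-1]) / T_frame, with T_frame = 1 / frame_rate.
    V = -2.0 * np.diff(R, axis=0) * frame_rate
    V[np.abs(V) > v_max] = 0.0   # discard migrated (mostly edge) pixels
    return V
```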
Radar Signal Generation
Having obtained the key parameters from Blender, this section elaborates on the methodology to generate radar signals from the derived parameters. It begins by discussing the applicability of optical source-based results from Blender into RF-based radar systems, identifies the key missing component of MIMO, and provides a mechanism to generate MIMO radar signals based on FMCW waveforms.
A. Applicability of Parameters
The RT renders the scenarios based on optical-ray principles, which can result in inaccurate powers of the simulated paths. For purposes requiring accurate path-loss prediction, e.g., wireless communication base station deployment, elaborate measurements are needed to calibrate the RT parameters. However, when RT is used to identify the geometry parameters of propagation paths, e.g., for sensing and environment mapping, accurate strength models are not necessary [45]. In this radar simulation, we are concerned with the radio path trajectories from RT, i.e., the geometry parameters such as range and AoA, and the Doppler velocity due to target motions. Since the RT is based on optical settings, the absolute reflected power differs from that of mmWave propagation. However, users can define parameters like material and additional scattering loss in Blender. The simulation can then obtain the relative power strengths of targets and clutter based on the material settings and bounce orders.
B. Virtual MIMO Generation Based on Steering Vector
The Blender-based rendering offers a single-input single-output (SISO) perspective. However, this article is interested in generating radar signals beyond the SISO architecture. Toward this, and to enable MIMO, the rendering must be repeated for the position of each transmit–receive antenna pair. Such an exercise is resource-consuming and a simplified alternative is provided below.
Consider the colocated uniform linear array (ULA) radar system with M Txs and N Rxs shown in Fig. 4, where the Rx elements are spaced $\lambda/2$ apart and the Tx elements $N\lambda/2$ apart, forming a virtual ULA of $MN$ elements.
1) Distance Matrix of Virtual Channel:
With $\mathbf{R}_{n_{\text{frame}}}$ and $\boldsymbol{\Theta}_{n_{\text{frame}}}$ denoting the distance and azimuth AoA matrices rendered for the reference antenna pair, the distance matrix of the virtual channel between the mth Tx and the nth Rx is \begin{align*} {\mathbf{R}_{n_{\text{frame}}}}_{m,n} &= \mathbf{R}_{n_{\text{frame}}} + (m-1)N \frac{\lambda}{2} \sin\boldsymbol{\Theta}_{n_{\text{frame}}} \\ &\quad + (n-1) \frac{\lambda}{2} \sin\boldsymbol{\Theta}_{n_{\text{frame}}}. \tag{6}\end{align*}
2) Signal Strength Matrix of Virtual Channels:
Considering the short spacing of the MIMO array in mmWave, the signal strengths of reference antennas are used for all the pairs of MIMO.
3) Doppler Velocity Matrix of Virtual Channels:
With $R_{\text{frame}}$ denoting the Blender frame rate (so that $T_{\text{frame}} = 1/R_{\text{frame}}$), the Doppler velocity matrix of the virtual channel between the mth Tx and the nth Rx is \begin{equation*} \mathbf{V}_{n_{\text{frame}},m,n} = \frac{-2\left(\mathbf{R}_{n_{\text{frame}},m,n} - \mathbf{R}_{n_{\text{frame}}-1,m,n}\right)}{1/R_{\text{frame}}}. \tag{7}\end{equation*}
Since the array offsets in (6) change negligibly between consecutive frames, (7) is well approximated by the velocity matrix of the reference pair \begin{equation*} \mathbf{V}_{n_{\text{frame}}} = \frac{-2\left(\mathbf{R}_{n_{\text{frame}}} - \mathbf{R}_{n_{\text{frame}}-1}\right)}{1/R_{\text{frame}}}. \tag{8}\end{equation*}
Finally, generating virtual MIMO channels in the aforementioned manner avoids the significant processing delay caused by rendering all the paths of a multiantenna system one antenna pair at a time.
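A sketch of the virtual-array extension in (6); the 77 GHz carrier and array sizes are assumptions for illustration.

```python
import numpy as np

def virtual_distances(R_ref, Theta, M=2, N=4, lam=3e8 / 77e9):
    """Extend the rendered SISO distance matrix to an M-Tx, N-Rx virtual ULA.

    R_ref, Theta: (N_el, N_az) reference distance and azimuth-AoA matrices
    of one frame. Returns an (M, N, N_el, N_az) array per (6).
    """
    R_mn = np.empty((M, N) + R_ref.shape)
    for m in range(M):
        for n in range(N):
            # (6): offset of ((m-1)N + (n-1)) half-wavelengths, 0-indexed here.
            offset = (m * N + n) * (lam / 2) * np.sin(Theta)
            R_mn[m, n] = R_ref + offset
    return R_mn
```

Only the reference pair is rendered; the remaining $MN-1$ channels are obtained by these steering offsets, which is what saves the rendering time.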
Having obtained the relevant parameters, the sequel now discusses the generation of appropriate radar signals based on FMCW. Extensions to other waveforms are provided in Appendixes B and C.
C. TDM-MIMO FMCW Radar Transmissions
FMCW is widely used in radar systems due to low-cost and efficient de-chirp techniques [8]. Fig. 5 shows the typical FMCW signal: the Tx transmits linear frequency-modulated chirps sequentially. Each coherent processing interval (CPI) consists of L chirps, with chirp duration $T_{p}$ and chirp-block duration $T_{b}$.
The signal transmitted by the radar is a function of time t and chirp index l, and is obtained by the superposition of the transmitted signals of all the antennas as \begin{equation*} s\left ({{t;l}}\right) = \sum _{m=1}^{M} s_{m} \left ({{t -\left ({{l-1}}\right)T_{b} -\left ({{m-1}}\right)T_{p}}}\right) \tag {9}\end{equation*}
where the signal of the mth Tx is \begin{equation*} s_{m}(t;l) = \sqrt{\frac{P_{0}}{2}}\exp\left(j\phi\left(\tilde{t}\right)\right) \tag{10}\end{equation*}
with the effective time \begin{equation*} \tilde{t} = t - (l-1)T_{b} - (m-1)T_{p} \tag{11}\end{equation*}
and the chirp phase \begin{equation*} \phi\left(\tilde{t}\right) = 2\pi\left(f_{l}\tilde{t} + \frac{1}{2}\mu\tilde{t}^{2}\right) - \phi_{0} \tag{12}\end{equation*}
where $P_{0}$ denotes the transmit power, $f_{l}$ the chirp start frequency, $\mu$ the chirp slope, and $\phi_{0}$ the initial phase.
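For concreteness, a sketch of one transmitted chirp per (10)–(12); the waveform parameters are placeholders, not the Table II configuration.

```python
import numpy as np

def tx_chirp(t, l, m, f_l=77e9, mu=30e12, T_b=40e-6, T_p=10e-6,
             P0=1.0, phi0=0.0):
    """One FMCW chirp of the m-th Tx in the l-th block (1-indexed l, m)."""
    t_eff = t - (l - 1) * T_b - (m - 1) * T_p                       # (11)
    phase = 2 * np.pi * (f_l * t_eff + 0.5 * mu * t_eff**2) - phi0  # (12)
    return np.sqrt(P0 / 2) * np.exp(1j * phase)                     # (10)
```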
D. Beat Signal Model for TDM-MIMO FMCW Radar
The received signal at the nth antenna is \begin{equation*} r_{n}(t;l) = \sum_{m=1}^{M} \sigma_{m,n,l}\, s_{m}\left(\tilde{t} - \tau_{m,n,l}\right) \tag{13}\end{equation*}
where $\sigma_{m,n,l}$ is the reflectivity and the round-trip delay is \begin{equation*} \tau_{m,n,l} = 2\,\frac{R_{m,n,l} + v_{r}\tilde{t}}{c} \tag{14}\end{equation*}
with $v_{r}$ the radial velocity of the target and \begin{equation*} R_{m,n,l} = R_{0,l} + (m-1)Nd\sin\theta + (n-1)d\sin\theta \tag{15}\end{equation*}
the range between the target and the virtual element, where $d$ is the element spacing and $\theta$ the AoA.
After the dechirp process on the receive side, the phase of the IF signal is \begin{equation*} \Delta\psi_{n}(t;l) = \sum_{m=1}^{M} 2\pi\left(f_{l}\tau_{m,n,l} + \mu\tau_{m,n,l}\tilde{t} - \frac{1}{2}\mu\tau_{m,n,l}^{2}\right). \tag{16}\end{equation*}
Neglecting the quadratic terms (see Appendix A), this reduces to \begin{equation*} \Delta\psi_{n}(t;l) = \sum_{m=1}^{M} 2\pi\left(\left(\frac{2\mu R_{m,n,l}}{c} - f_{D}\right)\tilde{t} + \phi_{1}\right) \tag{17}\end{equation*}
where the Doppler frequency is \begin{equation*} f_{D} = -2\,\frac{v_{r}}{c}\,f_{l} \tag{18}\end{equation*}
and the constant phase is \begin{equation*} \phi_{1} = \frac{2 R_{m,n,l} f_{l}}{c}. \tag{19}\end{equation*}
Every chirp block of the de-chirped signal is then sampled with the sampling frequency $F_{s}$, yielding the sampled beat signal \begin{align*} \mathbf{Z}_{n}(n_{s},l) = \sum_{m=1}^{M} \sigma_{m,n,l}\exp\left(j2\pi\left(\frac{2\mu R_{m,n,l}}{c}\,\frac{n_{s}-1}{F_{s}} - f_{D}(l-1)T_{b}\right)\right). \tag{20}\end{align*}
E. Beat Signal Simulation for TDM-MIMO FMCW Radar
In Blender-based radar simulation, the following assumptions are utilized.
1) Each frame in Blender is regarded as one CPI containing L chirps; hence the duration of one CPI is $T_{\text{frame}} = 1/R_{\text{frame}}$ and the chirp duration is $T_{p} = T_{\text{frame}}/L$. An illustration of how Blender frames correspond to FMCW CPIs is provided in Fig. 7. The virtual slow time is generated by slicing one Blender frame into L chirps.
2) In each chirp, the beat signal is obtained by sampling the IF signal with a sampling frequency $F_{s}$. Since $F_{s}$ is much larger than $f_{D}$, the variations in the latter are ignored within sampling intervals.
3) Each pixel of the Blender output image can be regarded as a target in radar detection; hence the received signal model used for simulation is the superposition of all $N_{\text{az}} N_{\text{el}}$ pixels.
Recalling the rendered distance, velocity, strength, and AoA matrices of frame $n_{\text{frame}}$, the simulated beat signal at the nth Rx is \begin{align*} \mathbf{Z}_{n}(n_{s},l) &= \sum_{n_{\text{az}}=1}^{N_{\text{az}}}\sum_{n_{\text{el}}=1}^{N_{\text{el}}}\sum_{m=1}^{M} P_{r_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}} \exp\left(j2\pi\left(\frac{2\mu R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,\frac{n_{s}-1}{F_{s}}\right.\right. \\ &\quad \left.\left. -\,2\,\frac{f_{l} V_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,(l-1)\,\frac{T_{\text{frame}}}{L} + \frac{\left((m-1)N+n-1\right)\Delta d}{\lambda}\sin\Theta_{n_{\text{el}},n_{\text{az}}}\right)\right) \tag{21}\end{align*}
where $\Delta d$ is the element spacing of the virtual array.
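A direct, loop-based sketch of (21), assuming the TDM convention (consistent with the decoding in (22) below) that only the $((l-1) \bmod M + 1)$-th Tx is active in chirp $l$; all array shapes and parameter values are illustrative.

```python
import numpy as np

def beat_signal(P, R, V, theta, M, N, n, L=128, Ns=256,
                f_l=77e9, mu=30e12, Fs=10e6, T_frame=1/30, lam=3e8/77e9):
    """P, R, V, theta: flattened per-pixel strength, distance, velocity,
    and azimuth-AoA arrays of one frame; n is the 1-indexed Rx antenna.

    Returns the (Ns, L) sampled beat signal Z_n per (21)."""
    c, dd = 3e8, lam / 2                       # element spacing Delta_d
    Z = np.zeros((Ns, L), dtype=complex)
    for l in range(L):                         # slow time (chirp index, 0-based)
        m = l % M                              # active Tx in TDM (0-based)
        for ns in range(Ns):                   # fast time (sample index, 0-based)
            phase = (2 * mu * R / c) * ns / Fs \
                  - (2 * f_l * V / c) * l * T_frame / L \
                  + ((m * N + (n - 1)) * dd / lam) * np.sin(theta)
            Z[ns, l] = np.sum(P * np.exp(1j * 2 * np.pi * phase))  # sum pixels
    return Z
```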
In TDM-MIMO, the radar signals of different Txs can be separated at each Rx based on the chirp index l, i.e., one TDM block contains M time-orthogonal chirps. The signal from the mth Tx to the nth Rx is extracted as \begin{equation*} \mathbf{Z}_{m,n}\left[\hat{l},:\right] = \mathbf{Z}_{n}\left[m+\left(\hat{l}-1\right)M,:\right] \tag{22}\end{equation*}
where \begin{equation*} \hat{l} = 1, 2, \ldots, \frac{L}{M}. \tag{23}\end{equation*}
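The TDM separation in (22)–(23) amounts to de-interleaving the chirps, e.g.:

```python
import numpy as np

def tdm_decode(Z_n, M):
    """Z_n: (L, Ns) beat signal with chirps along the first axis.

    Returns a list of M arrays of shape (L/M, Ns), one per Tx, per (22)-(23)."""
    L = Z_n.shape[0]
    assert L % M == 0, "L must be a multiple of M in TDM"
    # Channel of Tx m: every M-th chirp, starting from chirp m (0-indexed).
    return [Z_n[m::M, :] for m in range(M)]
```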
In the preceding development, the beat signal for TDM-mode radar with FMCW transmissions, denoted by $\mathbf{Z}_{m,n}$, has been generated directly from the Blender outputs.
Validation
The developed radar simulator is validated by a human walking in both anechoic chamber and office corridor scenarios, where field measurements with mmWave Texas Instruments (TI) sensors are conducted. The validation procedure is outlined in Fig. 8. First, a field measurement is carried out to collect raw data with the TI mmWave TDM-MIMO sensor. The important sensor configurations and measurement descriptions can be found in Table II; these sensor configurations apply to all subsequent measurements and simulations except where specifically mentioned. Subsequently, the environment and human motions are mimicked in Blender to obtain simulated data. Finally, we apply the same estimation algorithms to both the measured and simulated data to compare the estimated range, angle, Doppler, and micro-Doppler results, and discuss the performance of the proposed simulator.
A. Measurement Campaign of Anechoic Chamber
1) Measurement Description:
A picture of the measurement environment and sensor is shown in Fig. 9. A human walks along the predefined routes illustrated in Fig. 10: from Site d via Site a to Site c, and from Site a straight to Site b and back to Site a.
Fig. 10. Description of environment, sensor deployment, and walking routes in the anechoic chamber.
2) Calibration in Measurement:
A static measurement is conducted to calibrate the power level, range, and angle offsets of the measurement, where a human stands still at Site a in Fig. 10. The RAM is then obtained, where the estimated range and angle of the target are around 3.38 m and -2.5°. In the ground truth shown in Fig. 10, the range and angle should be 3.3 m and 0°; hence the systematic errors of the measurement in range and angle are within 0.1 m and 3°, respectively. To keep identical simulation and measurement power levels for a fair comparison, each radar image is normalized by its highest power; hence the highest power is 0 dB in all the figures shown later.
3) Angular Resolution of Virtual MIMO in Simulation:
We simulate a human standing at Site d using different numbers of MIMO antennas to compare the simulator's angular resolutions. Fig. 11 shows the RAMs obtained via fast Fourier transform (FFT) for 1 by 4, 2 by 4, and 4 by 4 antennas in a ULA, respectively. We can observe that the angular resolution of the 1 by 4, 2 by 4, and 4 by 4 ULAs is around 40°, 20°, and 10°, respectively. As the number of antennas in the virtual array increases, the angular resolution improves and the sidelobes are suppressed. This validates the simulator's capability of generating MIMO radar signals: the virtual MIMO array generation method preserves the properties of MIMO radars.
Fig. 11. Simulated angular resolution using different numbers of MIMO antennas: the RAMs obtained via FFT. (a) Simulation of 1 by 4 antennas. (b) Simulation of 2 by 4 antennas. (c) Simulation of 4 by 4 antennas.
B. Comparison of the Anechoic Chamber Scenario
1) Comparison of the RAMs:
Considering that the angular resolution of the 2 by 4 ULA obtained by FFT is only around 20° in Fig. 11, we apply the Capon beamformer [52] to both the simulated and measured data. The normalized RAMs at Site d, Site a, and Site c are shown in the first, second, and third columns of Fig. 12, respectively, with the simulated and measured results in the first and second rows. In each sub-figure, the point with the highest power is labeled with a red dot.
Fig. 12. Range-azimuth angle comparison of simulation and measurement via the TDM-MIMO sensor. (a) Site d: Simulated RAM obtained via Capon. (b) Site a: Simulated RAM obtained via Capon. (c) Site c: Simulated RAM obtained via Capon. (d) Site d: Measured RAM obtained via Capon. (e) Site a: Measured RAM obtained via Capon. (f) Site c: Measured RAM obtained via Capon.
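As a reference for how the angle cuts of the RAMs can be formed, the following is a minimal Capon spectrum sketch; the snapshot construction per range bin and the diagonal loading are implementation assumptions not specified in the text.

```python
import numpy as np

def capon_spectrum(X, scan_deg=np.arange(-60, 61), dd_over_lam=0.5):
    """X: (MN, K) snapshots of the MN-element virtual ULA for one range bin.

    Returns the Capon spectrum P(theta) = 1 / (a^H R^-1 a) over the scan grid."""
    MN = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(MN))     # diagonal loading
    p = []
    for th in np.deg2rad(scan_deg):
        a = np.exp(1j * 2 * np.pi * dd_over_lam * np.arange(MN) * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(p)
```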
Generally, the simulation is in line with the measurement data: both the simulated and measured angles at Site d, Site a, and Site c are around 15°, 0°, and -15°, respectively. The theoretical absolute values of the AoAs at Site c and Site d, calculated from the geometry in Fig. 10 via the trigonometric ranging formula, are consistent with these estimates.
2) Comparison of RDMs:
The walk from Site a to Site b and back to Site a is used to compare the Doppler velocity. For convenience of description, the movement from Site a to Site b is called Site ab in the following discussion, and the movement from Site b to Site a is called Site ba. Examples of the estimated normalized RDMs of Site ab for the measurement, the simulation with clutter, and the simulation without clutter are shown with a 30 dB dynamic range in Fig. 13(a)–(c), respectively.
Fig. 13. Comparison of RDMs obtained via 2-D FFT: from Site a to Site b. (a) Measurement via the TDM-MIMO sensor. (b) Simulation considering clutter. (c) Simulation without clutter.
Typically in radar, we define the velocity of a target moving toward the radar as negative and moving away as positive [53]. The estimated velocity of the simulation matches the measurement data of the movement Site ab, where the detected speed of the main body is around -1 m/s. Besides, both the measurement and simulation results show dispersion in range and micro-Doppler due to the walking motions, which gives a clearer observation of the motion effects. Furthermore, both the measurement and the simulation considering clutter contain some zero-Doppler returns, which are multipaths due to clutter in the measurement environment.
3) Comparison of Continuous Time-Doppler and Micro-Doppler Results:
For a further analysis of the Doppler and micro-Doppler results, we plot the continuous time-Doppler results based on each RDM, i.e., by choosing the velocity bins of the range index with the maximum RDM value and its neighbors. This is a commonly used way to observe micro-Doppler [54]. The continuous time-Doppler plots of the measurement, the simulation with clutter, and the simulation without clutter are shown in Fig. 14(a)–(c), respectively.
Fig. 14. Comparison of continuous time-Doppler plots: a Hamming window is applied to each RDM to suppress sidelobes. (a) Field measurement. (b) Simulation considering clutter. (c) Simulation without clutter.
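A sketch of the described extraction, assuming the per-frame RDMs are stacked in a single array:

```python
import numpy as np

def time_doppler(rdms, n_neighbors=2):
    """rdms: (N_frame, N_range, N_doppler) stack of RDM magnitudes.

    For each frame, keep the Doppler bins at the strongest range index and
    its neighbors, producing a (N_frame, N_doppler) time-Doppler plot."""
    out = []
    for rdm in np.abs(rdms):
        r0 = np.unravel_index(np.argmax(rdm), rdm.shape)[0]  # strongest range bin
        lo, hi = max(r0 - n_neighbors, 0), r0 + n_neighbors + 1
        out.append(rdm[lo:hi].sum(axis=0))                   # collapse range axis
    return np.stack(out)
```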
For both the measurement and the simulation: 1) when the human walks toward the sensor, the Doppler velocity is around -1 m/s, and when the human walks away from the sensor, it is around 1 m/s and 2) the quasi-periodic micro-Doppler due to the periodic arm and leg swings during walking is observed. The simulation shows clearer micro-Doppler results than the measurement, since Blender can capture even slight changes in motion, especially in the clutter-free case. Meanwhile, the hardware introduces some limitations into the measurement result, e.g., noise, whereas the simulation approach shows good anti-interference performance.
C. Measurement Campaign of the Office Corridor Scenario
1) Measurement Description:
The measurement environment of the office corridor is shown in Fig. 15(a). The width of the corridor is 2 m, and the sensor is at the center, facing a terminal wall located more than 5 m away. A human walks along the predefined yellow line shown in Fig. 15(a) from Site A to Site B and returns. The sensor continuously measures the multipath signals from the moving human and the interaction with the environment. The detailed environment geometry and distance information are illustrated in Fig. 15(b), and the sensor configuration is provided in Table II.
Fig. 15. One-frame description of the office corridor scenario in both measurement and simulation. (a) Picture of the measurement. (b) Top-view illustration. (c) 3-D model of the office corridor scenario used in simulation. (d) Image rendering considering two-bounce reflections: white pixels denote reflecting one-bounce or multibounce paths; black pixels denote no multipath in that direction. (e) Rendering outputs: each pixel is featured by distance, azimuth, and elevation angles, where the color bar shows the total tracing distance. (f) Recovered point clouds of up-to-two-bounce paths, where one point corresponds to one pixel denoting one reflecting path; the color shows the normalized scattering strength of that path.
2) Simulation Settings:
The 3-D environment is built in Blender according to the geometrical size of the measurement scenario, as shown in Fig. 15(c). The left and right walls are set to be reflectors, and the terminal wall is set to be a diffuse scatterer. Up to two-bounce paths are traced in this simulation setting. The human's walking velocity is around 1 m/s, and the whole motion process consists of a 300-frame video with a frame rate of 30 Hz, i.e., the time difference between two images is 1/30 s.
3) Rendering Results:
An example of the image rendering result is shown in Fig. 15(d), where both the one-bounce and two-bounce paths are presented. Based on the strength of each pixel, the propagated distance corresponding to each pixel can be calculated in Blender as elaborated in [33], i.e., 1) the distance of a one-bounce path: light source → scatterer → camera and 2) the distance of a two-bounce path, which additionally includes the segment between the two scattering points.
D. Comparison of the Office Corridor Scenario
1) Comparison of Multi-Bounce Effects in RAMs:
As the human walks from Site A to Site B, the distance between the human and the sensor changes from around 4.5 to 2 m. We choose the measured RAMs at a distance of every 0.5 m in the motion from Site A to Site B, i.e., the human at the ranges of 4, 3.5, 3, and 2.5 m, to compare with the simulations. Those four measured RAMs are shown in Fig. 16(a)–(d), respectively. The return walks from Site B back to Site A at the range of 2.5, 3, 3.5, and 4 m are shown in Fig. 16(i)–(l), respectively. The simulated RAMs of walking from Site A to Site B at the range of 4, 3.5, 3, and 2.5 m are shown in Fig. 16(e)–(h), respectively, and the return walks at the range of 2.5, 3, 3.5, and 4 m are shown in Fig. 16(m)–(p), respectively.
Fig. 16. Comparison of RAMs between simulations and measurements; panels (a)–(p) correspond to the walks and ranges described in the text.
Some RAMs are not identical, and the following reasons can be ascribed for the differences. The measurement and simulation are dynamic. The simulated human motions are imported from the MoCap database, which is recorded from real human activities; hence the motion is not strictly symmetrical and contains random movements. Those dynamic factors also exist in the measurements. Therefore, at a given distance, the gesture, the orientation toward the radar, and the movement of parts of the body may differ. The superposed signal contains the reflections of the walls, which can enlarge those differences. For the comparison of this dynamic process, the focus is laid on the macroscopic information concerning the target's location and the strong reflections from the left, right, and back walls. These important and similar components are labeled as triangles, squares, and circles, respectively, in all the RAMs. Considering the measured Fig. 16(a) and the simulated Fig. 16(e) as an example, we observe the following.
The one-bounce reflection of the human is highlighted at the range of 4 m with an angle of -10°.
At the range of around 4.2 m, the reflection component by the left wall is around -25°.
At the range of around 4.5 m, the reflection component by the right wall is around 35°. The angle difference matches with the measurement environment because the human is closer to the left wall.
The terminal wall reflection is around 5.5 m with the angle 0° in all the RAMs.
2) Comparison of the RDMs:
Using the common radar convention that the velocity is negative when the distance between the target and sensor decreases and positive when it increases, the measured RDM of the motion from Site B to Site A at the range of 3 m is illustrated in Fig. 17(a), and the corresponding simulation in Fig. 17(b).
Fig. 17. Comparison of RDMs between simulation and measurement. (a) Measurement: B to A at 3 m. (b) Simulation. (c) Simulation with velocity calibration. (d) Simulation with additional material loss calibration.
There are two differences between Fig. 17(a) and (b). First, the RDM of the measurement is more focused while that of the simulation is more spread. An identical walking velocity between the imported MoCap database and the measurement cannot be guaranteed; hence the velocity of the torso differs between simulation and measurement. In Fig. 17(a), the resulting human velocity is around 0.5 m/s, though the subject tried to walk at a velocity of 1 m/s in the measurement, while in Fig. 17(b), the imported simulation velocity is around 1 m/s. The superposition of reflection signals can enlarge those differences. These effects require calibration. In this context, we reduce the Blender frame rate in (5) by a factor of two, i.e., the calibrated velocity in the simulation is halved, and the shape of the simulated Doppler in Fig. 17(c) becomes more focused and more similar to the measurement. This constant calibration is used for all frames of the scenario.
Second, the simulated reflection components are stronger than the measured ones. By adding 10 dB of loss to the target material, the simulated RDM in Fig. 17(d) matches the measurement better.
The non-zero velocities of the RDMs lie in the range of around 3–3.5 m: the human is at a range of around 3 m with a velocity of around 0.5 m/s, and the wall-reflected human components appear at a range of around 3.5 m with velocities of up to 1.5 m/s. Another strong zero-velocity component of the RDMs maps to the terminal wall at around 5.5 m in the RAMs.
Compared with the RDMs in the anechoic chamber, the velocity is more dispersed because of the superposition of multipath components.
3) Summary of the Office Corridor Scenario:
Generally, we can conclude that the simulated geometrical results, i.e., the distance, angle, and velocity of the multipath components, are in line with the office corridor measurements. However, the strengths of some reflection components are not identical, thereby requiring material-specific calibration. We also observe the interesting Doppler reflection phenomenon. Nevertheless, in this article, we emphasize the general simulation framework for dynamic channels given the material reflectivity values; calibrating the material loss parameters and quantifying the reflected Doppler are left for future modeling work.
Conclusion
This article developed an FMCW MIMO radar channel simulator based on Blender scenario animation. The simulator can generate time-varying radar signals with consideration of the MIMO modes. Field measurements of indoor pedestrians using mmWave sensors show the validity and merits of the developed simulation tool, including the enhanced estimation of range, angle, Doppler, and micro-Doppler of the targets. Further, the simulation-based method also performs well for micro-Doppler assessment when compared with measurements in the anechoic chamber. This provides a general approach to generating radar channels, which can be useful for AI-based identification applications and the coming ISAC era.
ACKNOWLEDGMENT
The author Yuan Liu would like to thank Dr. Thomas Stifter from IEE S.A., Luxembourg, for his comments and help provided in this work.
Appendix A
Derivation of (17)
Since the term $\frac{1}{2}\mu\tau_{m,n,l}^{2}$ in (16) is negligible, substituting (14) into (16) gives \begin{align*} \Delta\psi_{n}(t;l) &= \sum_{m=1}^{M} 2\pi\left(2\left(f_{l} + \mu\tilde{t}\right)\frac{R_{m,n,l} + v_{r}\tilde{t}}{c}\right) \\ &= \sum_{m=1}^{M} 2\pi\left(\frac{2 R_{m,n,l} f_{l}}{c} + \frac{2 f_{l} v_{r} + 2\mu R_{m,n,l}}{c}\,\tilde{t} + \frac{2\mu v_{r}\tilde{t}^{2}}{c}\right). \tag{24}\end{align*}
Neglecting the quadratic term $2\mu v_{r}\tilde{t}^{2}/c$, which is small since $v_{r} \ll c$, and collecting the terms linear in $\tilde{t}$ yields (17).
Appendix B
Simulation of CDM-MIMO
In FMCW, CDM refers to slow-time CDM waveforms, the commonly used ones being Doppler-division multiplexing (DDM) [55] and binary phase multiplexing (BPM) [56]. The transmitted FMCW signals can be represented as \begin{align*} s_{C}(t;l) &= \sum_{m=1}^{M} w_{m,l}\, s_{C,m}(t;l) \\ &= \sum_{m=1}^{M} w_{m,l} \sqrt{\frac{P_{0}}{2}}\exp\left(j\phi\left(t - (l-1)T_{p}\right)\right) \tag{25}\end{align*}
where $w_{m,l}$ is the slow-time code of the mth Tx in the lth chirp.
For DDM, the code is \begin{equation*} w^{\text{DDM}}_{m,l} = \exp\left(j2\pi\frac{(m-1)(l-1)}{M}\right). \tag{26}\end{equation*}
For BPM, the code matrix is the Hadamard matrix generated recursively as \begin{equation*} \mathbf{W}^{\text{BPM}}_{2^{k}} = \mathbf{W}^{\text{BPM}}_{2} \otimes \mathbf{W}^{\text{BPM}}_{2^{k-1}} \tag{27}\end{equation*}
with \begin{align*} \mathbf{W}^{\text{BPM}}_{2} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. \tag{28}\end{align*}
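A small sketch generating the DDM weights in (26) and the BPM Hadamard matrix in (27)–(28):

```python
import numpy as np

def ddm_weights(M, L):
    """DDM slow-time code per (26), with 0-indexed m and l."""
    m, l = np.meshgrid(np.arange(M), np.arange(L), indexing='ij')
    return np.exp(1j * 2 * np.pi * m * l / M)   # shape (M, L)

def bpm_matrix(k):
    """BPM Hadamard matrix W_{2^k} via the recursion (27)-(28)."""
    W2 = np.array([[1, 1], [1, -1]])
    out = np.array([[1]])
    for _ in range(k):
        out = np.kron(W2, out)                  # W_{2^k} = W_2 (x) W_{2^{k-1}}
    return out
```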
Following the assumptions of Section III-E, the simulated CDM beat signal at the nth Rx is \begin{align*} {Z_{C_{n}}}_{l,n_{s}} &= \sum_{n_{\text{az}}=1}^{N_{\text{az}}}\sum_{n_{\text{el}}=1}^{N_{\text{el}}}\sum_{m=1}^{M} w_{m,l}\, P_{r_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}} \exp\left(j2\pi\left(\frac{2\mu R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,\frac{n_{s}-1}{F_{s}}\right.\right. \\ &\quad \left.\left. -\,2\,\frac{f_{l} V_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,\frac{l-1}{L}\,T_{\text{frame}} + \frac{\left((m-1)N+n-1\right)\Delta d}{\lambda}\sin\Theta_{n_{\text{el}},n_{\text{az}}}\right)\right). \tag{29}\end{align*}
In CDM-MIMO, the decoding strategy depends on the transmitted codes; hence it is difficult to give a general decoding procedure. Coincidentally, two-Tx DDM and two-Tx BPM share the same code. Here, we derive the general formula of CDM-MIMO in Blender-based radar signal simulation and take two-Tx DDM/BPM as an example to explain the decoding procedure. Considering two-Tx DDM/BPM, the received signal is the superposition \begin{equation*} \mathbf{Z}_{C_{n}} = \mathbf{Z}_{C_{1,n}} + \mathbf{Z}_{C_{2,n}} \tag{30}\end{equation*}
where \begin{equation*} \mathbf{Z}_{C_{1,n}} \approx \exp\left(j(l-1)\pi\right)\exp\left(j2\pi\frac{\Delta d N \sin\theta}{\lambda}\right)\mathbf{h}_{C_{2,n}}. \tag{31}\end{equation*}
The channel of Tx$_1$ can be decoded by summing adjacent chirps \begin{equation*} {Z_{C_{1,n}}}_{\tilde{l},n_{s}} = \frac{1}{2}\left({Z_{C_{n}}}_{2\tilde{l}-1,n_{s}} + {Z_{C_{n}}}_{2\tilde{l},n_{s}}\right) \tag{32}\end{equation*}
and the channel of Tx$_2$ by differencing them \begin{equation*} {Z_{C_{2,n}}}_{\tilde{l},n_{s}} = \frac{1}{2}\left({Z_{C_{n}}}_{2\tilde{l}-1,n_{s}} - {Z_{C_{n}}}_{2\tilde{l},n_{s}}\right). \tag{33}\end{equation*}
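The two-Tx decoding in (32)–(33) reduces to summing and differencing adjacent chirps, e.g.:

```python
import numpy as np

def cdm2_decode(Z):
    """Z: (L, Ns) received CDM beat signal with L even.

    Returns the Tx1 and Tx2 channels per (32)-(33), each with L/2 chirps."""
    Z_tx1 = 0.5 * (Z[0::2, :] + Z[1::2, :])   # (32): sum of chirps 2l-1 and 2l
    Z_tx2 = 0.5 * (Z[0::2, :] - Z[1::2, :])   # (33): difference of the same pair
    return Z_tx1, Z_tx2
```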
In BPM/DDM-MIMO, we can thus obtain the individual channels to enable MIMO, however with some limits, e.g., only $L/2$ decoded chirps remain for each Tx, which halves the maximum unambiguous Doppler compared to the SISO case.
Appendix C
Simulation of FDM-MIMO
For FDM-MIMO, the orthogonality is in frequency, i.e., the Txs simultaneously transmit signals in nonoverlapping frequency bands. The transmitted signal of FDM-MIMO can be represented as \begin{align*} s_{F}(t;l) &= \sum_{m=1}^{M} s_{F,m}(t;l) \\ &= \sum_{m=1}^{M} \sqrt{\frac{P_{0}}{2}}\exp\left(j\phi_{F,m}\left(t - (l-1)T_{b}\right)\right) \tag{34}\end{align*}
where \begin{equation*} \phi_{F,m}(t) = 2\pi\left(\left(f_{l}+(m-1)B\right)t + \frac{1}{2}\mu t^{2}\right) - \phi_{0} \tag{35}\end{equation*}
with $B$ the frequency offset between adjacent Tx bands.
The received signal at the nth antenna can be represented as \begin{equation*} r_{F,n}(t;l) = \sum_{m=1}^{M} \sigma_{m,n,l}\, s_{F,m}\left(t - \tau_{m,n,l};l\right). \tag{36}\end{equation*}
The dechirp processing of FDM is more complex than that of TDM and CDM in hardware and RF processing. After dechirping and sampling, the beat signal between the mth Tx and the nth Rx is \begin{align*} z_{F_{m,n}}(n_{s};l) = \sigma_{m,n,l}\exp\left(j2\pi\left(\frac{2\mu R_{m,n}}{c}\,\frac{n_{s}-1}{F_{s}} - f_{D_{F},m}(l-1)T_{p}\right)\right) \tag{37}\end{align*}
where \begin{equation*} f_{D_{F},m} = -2\,\frac{\left(f_{l}+(m-1)B\right)v_{r}}{c}. \tag{38}\end{equation*}
Applying the assumptions of the Blender-based radar simulation used for TDM-MIMO, the beat signal model between the mth Tx and the nth Rx antennas for FDM-MIMO simulation is \begin{align*} {Z_{F_{m,n}}}_{l,n_{s}} &= \sum_{n_{\text{az}}=1}^{N_{\text{az}}}\sum_{n_{\text{el}}=1}^{N_{\text{el}}} P_{r_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}} \exp\left(j2\pi\left(\frac{2\mu R_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,\frac{n_{s}-1}{F_{s}}\right.\right. \\ &\quad \left.\left. -\,2\,\frac{\left(f_{l}+(m-1)B\right) V_{n_{\text{frame}},n_{\text{el}},n_{\text{az}}}}{c}\,\frac{l-1}{L}\,T_{\text{frame}} + \frac{\left((m-1)N+n-1\right)\Delta d}{\lambda}\sin\Theta_{n_{\text{el}},n_{\text{az}}}\right)\right). \tag{39}\end{align*}
For FDM-MIMO, the channels of different Txs are already separated in (39), which distinguishes it from (21) in TDM and (29) in CDM; no extra decoding approach is needed.