24 GHz FMCW MIMO radar for marine target localization: a feasibility study

Radar detection and tracking of targets in the marine environment are common tasks performed to ensure the safe navigation of ships or to monitor traffic in harbor areas. More recently, radar technology has been proposed to support the collision avoidance system of autonomous surface vehicles, which are characterized by severe constraints in terms of payload and space. The paper investigates the performance of a small and lightweight 24 GHz Frequency Modulated Continuous Wave (FMCW) Multiple-Input Multiple-Output (MIMO) radar, originally developed for automotive applications, in localizing marine targets at short range. A complete signal processing strategy is presented, combining MIMO radar imaging, detection, and tracking algorithms. The validation of the proposed signal processing chain is first performed through numerical tests based on synthetic data. Afterwards, results of experimental trials carried out in the marine environment are reported. These results demonstrate that the considered radar, together with the adopted signal processing strategy, allows the localization of static targets and the tracking of moving targets with satisfactory performance, thus encouraging its use in marine environments.


I. INTRODUCTION
The use of Autonomous Surface Vehicles (ASVs) for the exploration of water and seabed has grown significantly in recent years [1]-[2]. Indeed, ASVs are cheaper to deploy than manned vessels and can be used in dangerous situations while offering, at the same time, extended operation periods. Owing to these benefits, ASVs are currently used in military applications, environmental monitoring, and oil exploration, and serve as research units capable of providing information on various aspects of the marine environment.
A crucial task to perform during an ASV mission is avoiding collisions with moving or static objects located on the sea surface, so as to guarantee the vehicle's safety and the ability to complete the mission. To this aim, ASVs must be equipped with autonomous navigation systems that can provide remote control, obstacle detection, tracking, and mapping [3]-[4]. Different sensing technologies, such as Light Detection and Ranging (LIDAR), optical cameras, thermal cameras, and radar systems, may be used to pursue this goal. Each sensor has its advantages and disadvantages, as pointed out in [5], and the most reliable approach for ASV navigation is the multi-sensorial one, where the information produced by different types of sensors is combined before being sent to the anti-collision module [5]-[6].
Among the available sensing technologies, this paper focuses on radar, an electronic device capable of measuring the distance from a target by evaluating the time of flight of an electromagnetic signal transmitted and subsequently reflected by the target [7]-[8]. Furthermore, based on the frequency shift of the received signal caused by the relative motion between the target and the radar (Doppler effect), it is possible to estimate the target radial velocity. As is well known, radar is characterized by an excellent range coverage (from a few meters up to tens of km), is capable of operating regardless of weather and light conditions, has moderate costs, and is well-suited to operate in the marine environment [9].
The application of radar technology to support ASV navigation is a relatively new research field. Radar systems commonly installed on ships, e.g. nautical radars operating in the X-band (8-12 GHz) of the electromagnetic spectrum, are the natural solution to the problem [10]-[17]. The first documented use of an anti-collision radar onboard an ASV is reported in [14], where the double-hulled autonomous robot ROAZ II was equipped with a commercial X-band Furuno pulsed-type radar. Another anti-collision system for ASVs based on X-band marine radar is reported in [15], where a commercial pulsed radar manufactured by Raymarine was used. Frequency Modulated Continuous Wave (FMCW) X-band marine radars have also been preferred to pulsed radars [5], [16]-[17] due to their advantages, such as hardware simplicity, low peak power, detectability of very close targets, and accurate range measurements.
Recently, a 24 GHz automotive radar has been proposed as an anti-collision sensor onboard an ASV [18]-[19]. The radar has a compact size and low weight, and is well-suited to be installed on small ASVs with a limited payload mass. Moreover, it can perform better than classical nautical radars in detecting very close targets, as required when navigating in inland waters or narrow port areas. Automotive radar is a rapidly expanding research area and a complete review of the related literature is beyond the scope of this article. Interesting review/tutorial articles with a focus on Multiple-Input Multiple-Output (MIMO) architectures and signal processing algorithms have been published recently (e.g. see [20]-[24]).
Inspired by the idea proposed in [19], this paper presents a feasibility study on the application of a 24 GHz automotive FMCW MIMO radar to the detection and tracking of marine targets. A complete signal processing pipeline based on spatial-domain beamforming [25], Constant False Alarm Rate (CFAR) detection [26], and multi-target tracking [27] is herein proposed and validated by numerical simulations referring to ideal scenarios. Moreover, an experimental assessment of the system, i.e. radar plus data processing, is carried out in the marine environment.
The novelty of this manuscript compared to the current state of the art is threefold. Although radar is an established technology in the marine environment, e.g. radars operating in S-band (3 GHz) or X-band (10 GHz) are commonly used for target detection and tracking (e.g. see [12]-[13]), higher frequencies are less considered. This paper investigates the possibility of detecting and tracking multiple targets in the marine environment using a 24 GHz FMCW MIMO radar specifically designed for automotive applications. To the best of our knowledge, this is the first time that an automotive radar is employed for target tracking in a marine environment. In this frame, it is worth pointing out that the use of high frequencies makes the radar sensitive to sea wave contributions that are less significant or negligible when X- or S-band radar systems are employed, such as the capillary waves occurring even with a light breeze. These sea wave contributions give rise to phenomena affecting the backscattered signal, especially for small-sized targets, and thus may impact the radar performance. Accordingly, the detection and tracking performance of a 24 GHz FMCW MIMO radar in the marine environment is not obvious and is worth investigating.
Furthermore, even if the idea of using an FMCW MIMO radar in the marine environment was recently introduced in [19], built-in detection software was used in that work and higher-level signal processing, such as target tracking, was not considered. Conversely, a complete signal processing chain based on spatial-domain beamforming, CFAR detection, clustering, and multi-target tracking is herein provided. Therefore, the paper addresses topics overlooked in the FMCW radar literature (e.g. see [24]) and also aims at providing a comprehensive reference covering all the pertinent technical information about FMCW radar, from basic operating principles up to higher-level signal processing topics such as target detection, clustering, and tracking.
Last but not least, from the application perspective, this feasibility study can pave the way for the use of FMCW MIMO radar as an anti-collision sensor to support the navigation of small and lightweight ASVs requiring the detection of close and possibly small-sized targets. This last requirement can hardly be satisfied by conventional pulsed marine radars. The performance of FMCW MIMO radar technology in the marine environment is an open issue that deserves attention. Note that, here, we emphasize the suitability of the technology in the marine environment, while its integration on an ASV is left as future work.
The paper is organized as follows. Section II describes the radar platform and the system architecture. Section III deals with the radar signal model and, in Sec. IV, the signal processing pipeline is introduced and its main phases are discussed in detail. Section V reports a numerical validation of the data processing approach while an experimental assessment of the system by field trials in the marine environment follows in Sec. VI. Concluding remarks are reported in Sec. VII.

II. RADAR SYSTEM DESCRIPTION
The radar system is the evaluation platform RadarBook2, manufactured by Inras GmbH for automotive purposes [28]. A detailed description of the platform is reported for the reader's convenience based on the technical documentation from the manufacturer. RadarBook2 is a compact and lightweight device with an approximate size of 13.5 cm × 4 cm × 11 cm and a weight of about 0.5 kg. The platform implements an FMCW MIMO radar equipped with 2 transmitting (Tx) and 8 receiving (Rx) antennas operating in the 24-24.25 GHz frequency range (K-band). The assembled system and the antenna configuration, together with the radar coordinate reference system, are shown in the left and right panels of Fig. 1, respectively. Every antenna consists of a linear array of 8 series-fed vertically polarized metallic patches printed over a dielectric substrate (Rogers RO-435, thickness 0.25 mm). The antennas have a narrow beam in the vertical plane, as determined by the array factor, and a broad radiation pattern in the horizontal plane, which is determined by the element pattern.
The maximum gain of each antenna is equal to 13.2 dBi and the 3 dB beamwidth is equal to 12.8° in the vertical plane and 76.5° in the horizontal plane. The sidelobe level of the antenna pattern is 18 dB below the radiation maximum. The spacing between the Tx antennas is equal to 7λ/2 (λ being the free-space wavelength at the center frequency), while the spacing between the Rx antennas is fixed at d = λ/2. Accordingly, as discussed in Sec. III, the 2×8 MIMO array is equivalent to a virtual array of 16 antennas (channels), with an overlap between the eighth and ninth elements. Figure 2 displays the block diagram of the RadarBook2 architecture. The system is composed of an RF front-end board and a baseband board, which is connected to a laptop via an Ethernet connection to manage the data acquisition. The RF transceiver is based on a direct conversion architecture where the transmitted chirp is used to demodulate the signals received on the different radar channels. The monolithic microwave integrated circuit ADF5901 contains a two-channel voltage-controlled oscillator (VCO), which is used in conjunction with the frequency synthesizer ADF4159 to generate the FMCW waveform. The ADF5901 chip produces two outputs with programmable power to feed the Tx1 and Tx2 antennas through embedded power amplifiers. These outputs are activated according to a Time Division Multiplexing (TDM) scheme. The activation sequence can be programmed with the synchronization unit and the MIMO sequencer implemented in the Field Programmable Gate Array (FPGA) contained in the baseband board (see Fig. 2). The received signals are demodulated to baseband by the 4-channel ADF5904 downconverter. The local oscillator used for demodulation is derived from the signal generated by the VCO. The acquisition and conversion of the baseband signals into digital form are carried out by the analog-to-digital converter (ADC) chip AFE5801.
Once the conversion is done, the digital operations (multiplexing, filtering, data transfer, etc.) are implemented in the FPGA. The RF front-end supports synchronization functions for the system clock, the ADC clock, the synthesizer, and the frequency ramp. The baseband board is a powerful processing unit equipped with a dual-core ARM processor, an FPGA (Arria 10 SoC), and DDR3 RAM. Its main features include high memory bandwidth (> 12 Gbit/s), recording of up to 16 parallel channels at 40 MSPS, and 1 GByte of onboard memory for signal processing and buffering of measurement data. Moreover, the system is capable of running embedded signal processing algorithms to reduce the data rate in real-time applications.

This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and content may change prior to final publication.
The main electrical parameters of the radar are summarized in Tab. I. The system has a range coverage of 75 m for a target with an RCS equal to 0 dBm² when a single chirp is processed. However, an additional processing gain is obtained by coherently processing more pulses [7]. As regards the spatial resolution of the radar, i.e. the ability to discriminate two closely spaced targets, the range resolution is determined by the classical formula [24], [26] Δr = c/(2B), where c is the speed of light in free space and B is the bandwidth. Accordingly, the range resolution in eq. (1) is equal to Δr = 0.6 m. As reported in [24], [29] and detailed in Subsect. IV.B, the angular resolution can be roughly evaluated by the formula Δθ ≈ 2/(N cos θ) rad, in which N = 15 is the number of non-overlapping channels in the virtual array and θ is the target direction. The trend of the spatial resolution versus θ is illustrated in Fig. 3. As can be seen, the finest resolution (about 7.6°) is achieved when the target is illuminated at the radar boresight (θ = 0°). Moreover, the resolution degrades progressively as the target moves away from boresight, reaching a value of around 30° at the end of the nominal azimuth field of view (±75°).
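The two resolution figures quoted above can be checked numerically. The following sketch (plain NumPy, using the nominal values B = 250 MHz and N = 15 stated in the text) reproduces the 0.6 m range resolution and the roughly 7.6° to 30° angular resolution span:

```python
import numpy as np

# Nominal parameters from the text: B = 250 MHz sweep, N = 15 virtual channels.
c = 3e8          # speed of light [m/s]
B = 250e6        # sweep bandwidth [Hz]
N = 15           # non-overlapping virtual channels

# Range resolution: delta_r = c / (2B)  -> eq. (1)
delta_r = c / (2 * B)

# Angular resolution for half-wavelength spacing (d = lambda/2):
# delta_theta ~ 2 / (N cos(theta)) radians  -> eq. (2)
theta = np.deg2rad([0.0, 30.0, 75.0])
delta_theta_deg = np.rad2deg(2.0 / (N * np.cos(theta)))

print(f"range resolution: {delta_r:.2f} m")
print("angular resolution [deg]:", np.round(delta_theta_deg, 1))
```

At boresight this gives about 7.6°, degrading to roughly 29.5° at ±75°, matching the trend of Fig. 3.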

III. SIGNAL MODEL
This section describes the FMCW MIMO radar signal model, which is preliminary to the definition of the signal processing chain introduced in Sec. IV and is useful for its numerical assessment (see Sec. V).
The top panel of Fig. 4 shows the geometric arrangement of the 2×8 MIMO radar antennas, which are aligned along the x-axis of the reference system. The transmitting antennas Tx1 and Tx2 are spaced by a distance of 7d, while the receiving antennas Rxn, n = 1, …, 8, have a uniform spacing d = λ/2. The target position is described by the coordinates (r, θ), where r is the radial distance and θ the direction (angle from the y-axis). The target is located in the far zone of the antennas, so that the received echo at each antenna can be modeled as an incoming plane wave propagating along the direction θ. As shown in the top panel of Fig. 4, the distance 7d between Tx1 and Tx2 causes a phase shift 7βd sin(θ) between the transmitted signals along the radar-target path, β being the propagation constant at the central frequency. Furthermore, for a fixed transmitting antenna, the echoes recorded by two adjacent receiving antennas have a phase shift ΔΦ = βd sin(θ). Consequently, the 2×8 MIMO array is equivalent to a virtual array of 16 antennas where the eighth and ninth elements overlap. This concept is graphically represented in Fig. 4, where the virtual antennas in red refer to Tx1, the virtual antennas in blue are associated with Tx2, and the overlapping elements are enclosed in the dashed rectangle. In this study, the number of processed channels is defined by the set of indices n = 0, …, 15, n ≠ 8. Figure 5 shows the transmitted and received FMCW waveforms over successive acquisitions for a single radar channel n. In each pulse repetition interval (PRI) of duration T, the transmitted signal (solid line), of duration Tc (chirp time), is linearly modulated over the frequency band B = f2 − f1, i.e. s(t) = A_T cos[2π(f1 t + α t²/2)], 0 ≤ t ≤ Tc, where A_T is the signal amplitude and α = B/Tc is the chirp rate.
In each scan m, the received signal (dashed line) on channel n is a delayed version of the transmitted signal, where τ_n = 2r_n/c is the travel time related to a target at range r_n (Fig. 4) and A_R is the amplitude of the received signal, which depends on the antenna pattern, the target radar cross-section (RCS), and the propagation losses. The received signal is demodulated by using the transmitted signal s(t) as a local oscillator and then, once the double-frequency terms have been filtered out, the baseband signal is obtained. The residual phase term πατ_n² in eq. (6) is very small and can be neglected in practical cases since τ_n ≪ Tc [30]-[31]. Therefore, the expression of the baseband output writes as s_{b,n}(t) = A_b cos(2π f_IF,n t + 2π f1 τ_n), representing a co-sinusoid at the intermediate frequency (IF) f_IF,n = ατ_n. Based on eq. (8), it turns out that the frequency of the demodulated signal varies with the target position and the radar channel. The signals s_{b,n}(t), n = 0, …, 15, n ≠ 8, constitute a data matrix where time varies along the rows and the radar channel varies along the columns.
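As a quick numerical illustration of the dechirped signal model, the sketch below synthesizes the baseband co-sinusoid for a single channel. The chirp duration Tc and the sampling rate fs are assumed for illustration only (they are not the Tab. I values); the bandwidth matches the 250 MHz sweep of the system:

```python
import numpy as np

# Illustrative parameters (Tc and fs are assumed, not the exact Tab. I values).
c, B, Tc = 3e8, 250e6, 256e-6
alpha = B / Tc                 # chirp rate alpha = B / Tc
r = 30.0                       # target range [m]
tau = 2 * r / c                # round-trip delay
f_if = alpha * tau             # IF (beat) frequency: f_IF = 2*B*r/(c*Tc)

fs = 2e6                       # assumed ADC sampling rate [Hz]
t = np.arange(0, Tc, 1 / fs)
s_b = np.cos(2 * np.pi * f_if * t)   # ideal baseband co-sinusoid of eq. (7)

print(f"tau = {tau * 1e9:.0f} ns, f_IF = {f_if / 1e3:.1f} kHz")
```

A 30 m target thus maps to a beat tone of about 195 kHz under these assumed settings, well within the baseband sampling rate.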

IV. DATA PROCESSING CHAIN
The processing chain here proposed to perform target detection and tracking is illustrated in Fig. 6. It is based on focusing the raw signals in eq. (7) in the spatial (range-angle) domain through a beamforming algorithm [24]-[25]. For each scan m, the baseband data frame s_{b,n}(t), n = 0, …, 15, n ≠ 8, undergoes a double Fast Fourier Transform (FFT), the first one along time and the second one along the channels. At this stage, the achieved information is the focused image I(r, θ), i.e. a spatial map of the reflectivity of the scene where bright spots indicate the presence of targets. Afterwards, the targets in the focused image are automatically detected by a CFAR detection algorithm [26], which produces a binary image D(r, θ) where each pixel can take on two values (1: target present, 0: target absent). Through a segmentation (clustering) process, adjacent unitary pixels are grouped to produce the detected objects. The positions of the detected objects represent the measurements given in input to a multi-target tracking algorithm [27], which estimates the state of the targets (position and velocity) while reducing the number of false alarms. The aforementioned data processing steps repeat over time as soon as a new data frame m is recorded.
It is opportune to stress that the building blocks of the processing chain in Fig. 6 are known methods in the broad radar signal processing literature.

A. FOCUSING ALONG THE RANGE
Consider the baseband signal s_{b,n}(t) in eq. (7) and compute its Fourier transform over the chirp interval [0, Tc]. According to eq. (9), the Fourier transform of the baseband signal is the superposition of two sinc(·) functions centered at the frequencies ±f_IF. Since the target is in the far-field zone of the antenna, it is possible to approximate f_IF,n in eq. (8) as independent of the channel index, i.e. f_IF ≈ 2Br/(c Tc), r being the target range. Therefore, the target range r is related to the frequency f̂ corresponding to the peak of the baseband signal spectrum, i.e. r = c Tc f̂/(2B).
The expression of the range profiles for all the radar channels can be simplified by considering only the positive frequencies in eq. (9). The first null of the sinc(·) function in eq. (12) yields the range resolution according to the Rayleigh criterion [32], which gives the well-known formula [26] Δr = c/(2B). According to eq. (12), the range profiles are computed for every channel n by performing the FFT of the baseband signals s_{b,n}(t) with respect to t.
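The range-FFT step can be sketched as follows: an FFT of the beat signal, a frequency-to-range axis mapping, and a peak search. The chirp parameters are the same illustrative values assumed earlier (not the exact Tab. I settings):

```python
import numpy as np

# Range profile via FFT of the beat signal (illustrative parameters).
c, B, Tc, fs = 3e8, 250e6, 256e-6, 2e6
t = np.arange(0, Tc, 1 / fs)
r_true = 30.0
f_if = 2 * B * r_true / (c * Tc)        # beat frequency for the true range
s_b = np.cos(2 * np.pi * f_if * t)      # single-channel beat signal

S = np.abs(np.fft.rfft(s_b))            # positive-frequency spectrum, cf. eq. (12)
f_axis = np.fft.rfftfreq(len(t), 1 / fs)
r_axis = c * Tc * f_axis / (2 * B)      # frequency-to-range mapping, cf. eq. (11)
r_est = r_axis[np.argmax(S)]
print(f"estimated range: {r_est:.2f} m")
```

The spectral peak lands at the beat frequency, so the estimated range recovers the true 30 m within one range bin (0.6 m).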

B. FOCUSING ALONG THE AZIMUTH
The range profiles in eq. (12) allow the determination of the target range but do not provide information on the target direction θ. To meet this goal, we rewrite eq. (12) as a function of the channel index n, whose phase term varies linearly as nΔΦ = nβd sin(θ), in eq. (15), and calculate the Discrete Fourier Transform (DFT) of eq. (15) with respect to n. Upon performing the computations, it follows that the series appearing in eq. (16) is a geometric series whose sum can be expressed in the compact form of eq. (17). Accordingly, eq. (16) can be rewritten as eq. (18). From eq. (18), under the hypothesis Δθ → 0, the approximate estimate of the angular resolution is found [24] and [29], Δθ ≈ λ/(N d cos θ), which corresponds to eq. (2) by setting d = λ/2. The function S(r, θ) is computed as a double FFT of the raw signals in eq. (7) and allows the determination of the target position (r, θ) thanks to eqs. (11) and (19). In particular, I(r, θ) = |S(r, θ)| is the focused image of the scene reflectivity considered for the subsequent object detection stage (see Fig. 6).
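The azimuth DFT step can be sketched for a single range bin: with d = λ/2, the channel-to-channel phase shift is π sin(θ), so a zero-padded FFT over the channel index locates the target direction. The target angle below is an illustrative assumption:

```python
import numpy as np

# Angle estimation across the N = 15 non-overlapping virtual channels.
N = 15
theta_true = np.deg2rad(20.0)                # assumed target direction
# Channel snapshot at the target's range bin, cf. eq. (15):
x = np.exp(1j * np.pi * np.arange(N) * np.sin(theta_true))

Nfft = 1024                                  # zero padding for a fine angle grid
X = np.abs(np.fft.fftshift(np.fft.fft(x, Nfft)))
u = np.fft.fftshift(np.fft.fftfreq(Nfft))    # normalized spatial frequency [cycles/sample]
sin_theta = 2 * u                            # sin(theta) = 2 * f_spatial for d = lambda/2
theta_est = np.rad2deg(np.arcsin(sin_theta[np.argmax(X)]))
print(f"estimated angle: {theta_est:.1f} deg")
```

The peak of the channel-domain spectrum falls at the spatial frequency sin(θ)/2, from which the direction is recovered; the mainlobe width reproduces the Δθ formula above.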

C. OBJECT DETECTION
The scope of the object detection is to establish whether a scatterer at a given cell (pixel) of the image I(r, θ) represents a real target or a disturbance produced by noise or unwanted targets (clutter). To this end, the energy of every single cell of I(r, θ) is compared with a threshold h. The detection produces a binary image D(r, θ) whose elements are defined by the rule D(r, θ) = 1 if I(r, θ) > h and D(r, θ) = 0 otherwise. In this work, classical CFAR detection is considered in the processing pipeline. CFAR detection schemes are based on a local (adaptive) detection threshold for every cell under test, so as to guarantee a constant probability of false alarm P_fa [26]. CFAR detection determines the value of the detection threshold starting from an estimate of the clutter/noise power in the neighborhood of the cell under test. Specifically, the noise power is evaluated by considering N cells in a reference window. A guard window is used to exclude the cells immediately adjacent to the cell under test, since they account for both clutter/noise and target energy. The noise observations in the reference cells, denoted by the vector [x1, x2, …, xN], are used to estimate the threshold h, which depends on the P_fa and the noise power estimate P̂. The elements of the noise vector are assumed to be independent and identically distributed (i.i.d.) zero-mean complex-valued Gaussian random variables. The threshold h is expressed by the relation [26] h = αP̂, where α is a scale factor. Different versions of the CFAR detector have been developed, and the Cell Averaging (CA) type is exploited in this work. The CA-CFAR detector estimates the power of the disturbance as the sample mean of the observation vector, P̂ = (1/N) Σ_{i=1..N} x_i, and the resulting probability of false alarm [26], P_fa = (1 + α/N)^(−N), is independent of the statistical properties of the Gaussian noise model. Given eq. (25), the scale factor is related to the P_fa by the formula α = N (P_fa^(−1/N) − 1). Starting from the set of detections D, a clustering or segmentation operation is performed to group the detections that are likely to be produced by the same target.
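The CA-CFAR rule can be sketched in one dimension (the paper applies the two-dimensional analogue to the range-angle image); the window sizes and P_fa below are illustrative choices, not the paper's settings:

```python
import numpy as np

# 1-D CA-CFAR sketch: threshold h = alpha * P_hat with
# alpha = N * (Pfa**(-1/N) - 1), where N counts the reference cells.
def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-3):
    n = len(power)
    n_ref = 2 * n_train                                  # leading + lagging windows
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1)          # scale factor
    det = np.zeros(n, dtype=bool)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        p_hat = np.mean(np.concatenate([lead, lag]))     # sample-mean noise estimate
        det[i] = power[i] > alpha * p_hat                # adaptive threshold test
    return det

rng = np.random.default_rng(0)
noise = rng.exponential(1.0, 400)   # exponential power <-> complex Gaussian amplitude
noise[200] += 50.0                  # inject a strong target
hits = ca_cfar(noise)
print("detections at bins:", np.flatnonzero(hits))
```

The injected target at bin 200 clears the adaptive threshold, while the exponential noise floor yields essentially no false alarms at this P_fa.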
The clustering is based on the concept of group connectivity of the detections, i.e., the spatial relation of a detection with its neighbors. More specifically, two detections p_i and p_j belong to the same cluster (target) if they are "8-connected" [33], that is, if p_j ∈ N8(p_i), where N8(x) denotes the set of 8-neighbors of x, i.e., the set of its horizontal, vertical, and diagonal neighbors. A point-target model is considered [15], [34] and the centroids of the clusters produced by the segmentation operation, i.e. the set of positions Z_m = {z_{1,m}, z_{2,m}, …, z_{K,m}}, are assumed to be the target measurements given in input to the tracking procedure.
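One possible implementation of the 8-connected clustering and centroid extraction uses SciPy's labeling utilities (an implementation choice of this sketch, not necessarily the authors' code):

```python
import numpy as np
from scipy import ndimage

# Group adjacent detections with 8-connectivity and take each cluster's
# centroid as the point-target measurement.
det_map = np.zeros((10, 10), dtype=int)
det_map[2:4, 2:4] = 1        # first blob: a 2x2 square of detections
det_map[7, 7] = 1            # second blob: two pixels touching only diagonally
det_map[8, 8] = 1

# A full 3x3 structuring element includes horizontal, vertical AND diagonal
# neighbors, i.e. 8-connectivity.
labels, n_clusters = ndimage.label(det_map, structure=np.ones((3, 3)))
centroids = ndimage.center_of_mass(det_map, labels, range(1, n_clusters + 1))
print(n_clusters, centroids)
```

With 4-connectivity the diagonal pair would split into two clusters; the 3×3 structuring element merges it into one, matching the N8 definition above.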

D. MULTI-TARGET TRACKING
The tracking algorithm is an automatic procedure that, at each time m, provides an estimate of the state (position and velocity) of the targets starting from the set of measurements provided by the detection stage. The tracking partially mitigates the false alarms left after the detection and generates tracks marked with an identifier through which it is possible to distinguish one target from another. A multi-target tracking algorithm performs the following operations:
• gating + assignment: gating is the function that selects, for each track, the most suitable measurements to update the status of the track, discarding the unlikely ones. In practice, a validation region (gate) of the track is defined and only the measurements within that region are valid candidates for updating the track. This reduces the computational complexity of the subsequent assignment phase, which selects one or more measurements in the gate to update the track.
• track management: it regards various operations such as initialization, confirmation, and deletion of tracks. The initialization phase involves generating a tentative track when a new measurement is not associated with an existing track. The confirmation operation transforms a tentative track into a confirmed track, while the deletion operation removes a track. Both track confirmation and deletion are performed based on pre-established criteria.
• filtering: it is the phase that updates the state of the tracks.
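As an illustration of the confirmation criterion mentioned under track management, the M/N logic (the rule adopted by the tracker described later) can be sketched as:

```python
from collections import deque

# M/N confirmation logic sketch: a tentative track is confirmed if it
# received at least M measurement associations in the last N scans.
def mn_confirmed(history, M=3, N=5):
    """history: iterable of booleans, True = track was associated at that scan."""
    recent = deque(history, maxlen=N)   # keep only the last N scans
    return sum(recent) >= M

print(mn_confirmed([True, False, True, True, False]))   # 3 hits in last 5 scans
print(mn_confirmed([True, False, False, True, False]))  # only 2 hits
```

The values M = 3 and N = 5 are illustrative defaults; the actual settings used in the paper are those of Tab. IV.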

1) Target motion and measurement models
In this subsection, the target motion model and the measurement model underlying the tracking algorithm are described. The state of a generic target at scan time m is described by the state vector in Cartesian coordinates x_m = [x_m, ẋ_m, y_m, ẏ_m]^T, where x_m, y_m and ẋ_m, ẏ_m are the position and velocity components along x and y. The motion of the target is described by the nearly constant velocity model [27], x_{m+1} = F x_m + G w_m, where F and G are the state transition matrix and the gain matrix, respectively. The measurements are related to the state by the linear model z_m = H x_m + v_m, where, in eq. (28), H is the measurement matrix and v_m is a white Gaussian measurement noise term with zero mean and covariance matrix R = diag(σ_x², σ_y²), in which σ_x² and σ_y² are the variances along x and y.
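A minimal sketch of the nearly-constant-velocity matrices for the state [x, ẋ, y, ẏ]^T follows; the scan interval T = 0.5 s matches the Tint of the numerical tests, while the initial state is an illustrative assumption:

```python
import numpy as np

T = 0.5  # scan interval [s]
F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)          # state transition matrix
G = np.array([[T**2 / 2, 0],
              [T,        0],
              [0, T**2 / 2],
              [0,        T]])                      # process-noise gain matrix
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)          # position-only measurement matrix

x = np.array([0.0, 1.0, 10.0, -0.5])               # assumed initial state
x_next = F @ x                                     # one-step prediction (noise-free)
print(x_next)
```

The noise-free prediction simply advances each position by velocity × T, which is the behavior the Kalman filter's prediction step builds on.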

2) Tracking algorithm
The tracking procedure is based on the Global Nearest Neighbor (GNN) single-hypothesis assignment method [35] which, at each time m, assigns the closest measurement to a given track to update its state. The GNN algorithm is the simplest assignment method; it has a low computational cost and provides acceptable performance for tracking sparse targets. More advanced data association methods, such as Joint Probabilistic Data Association (JPDA) or Track-Oriented Multiple Hypothesis Tracking (TOMHT) [27], only to mention a few, are available, but their application is out of the scope of this work. The track management is based on the M/N logic, meaning that a track is confirmed if at least M associations are obtained in the last N scans, otherwise it is canceled. The filtering phase is carried out using the Kalman filter (KF) [27]. A measurement falls within the gate of a track if its statistical distance from the predicted track position does not exceed G, where G is the gating threshold and S = H P_{m|m−1} H^T + R is the residual covariance matrix. To perform the data association, the algorithm calculates the distances between the existing tracks and the measurements in the corresponding gating regions, forming a cost matrix whose elements represent a generalized statistical distance between the track i and the measurement j [35], c_ij = d_ij² + ln(|S|). In eq. (36), d_ij² is the Mahalanobis distance between the track i and the measurement j, and the term ln(|S|) is the logarithm of the determinant of the residual covariance matrix S, which is introduced to penalize the tracks with greater prediction uncertainty. The assignment problem is formulated as the minimization of the cost function [35], J = Σ_i Σ_j c_ij a_ij, where a_ij ∈ {0, 1} indicates whether measurement j is assigned to track i. Furthermore, a_{i0} = 1 is the hypothesis that track i is not associated with any measurement and, similarly, a_{0j} = 1 is the hypothesis that observation j is not associated with any track. The constraint in eq. (39) implies that each measurement cannot be associated with more than one track, while the constraint in eq. (40) means that each track cannot be assigned to more than one measurement.
The minimization problem defined by (38)-(40) is solved with the Munkres algorithm [36], which guarantees an optimal solution. The assignment algorithm divides the measurements and tracks into three groups: track-measurement pairs with one-to-one assignments, unassigned measurements, and unassigned tracks. Unassigned measurements initialize new tracks (tentative tracks). These tracks are updated for the next N scans, after which they are confirmed or discarded according to the M/N logic [27]. Similarly, the unassigned tracks are updated for the subsequent R scans awaiting new measurements to be assigned and, at the end of the R scans, they are confirmed or discarded according to the P/R logic. Tracks with an assigned measurement are updated by the classical Kalman filter [27].
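The cost construction and the optimal one-to-one assignment can be sketched with SciPy's Hungarian solver, `linear_sum_assignment`, which solves the same optimization as the Munkres algorithm. The track/measurement positions and the residual covariance below are illustrative values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# GNN association sketch: cost c_ij = Mahalanobis distance^2 + ln|S|,
# solved by the Munkres/Hungarian method.
tracks = np.array([[0.0, 0.0], [10.0, 5.0]])   # predicted track positions (assumed)
meas = np.array([[9.8, 5.3], [0.2, -0.1]])     # incoming measurements (assumed)
S = np.diag([0.25, 0.25])                      # residual covariance (assumed)
S_inv = np.linalg.inv(S)
log_det_S = np.log(np.linalg.det(S))

cost = np.zeros((len(tracks), len(meas)))
for i, t in enumerate(tracks):
    for j, z in enumerate(meas):
        nu = z - t                              # innovation (measurement residual)
        cost[i, j] = nu @ S_inv @ nu + log_det_S

row, col = linear_sum_assignment(cost)          # optimal one-to-one assignment
print(dict(zip(row.tolist(), col.tolist())))    # track index -> measurement index
```

Here track 0 is paired with the nearby second measurement and track 1 with the first, despite the swapped ordering of the measurement list, exactly the behavior the global (rather than greedy) assignment guarantees.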

V. NUMERICAL RESULTS
Numerical tests are performed to assess the effectiveness of the processing pipeline shown in Fig. 6. To this end, synthetic data are generated according to eq. (7) for the ideal case of targets in a free-space scenario. These data are corrupted by additive white Gaussian noise (AWGN) with an SNR equal to 10 dB. The simulation parameters adopted for the generation of the raw data are summarized in Tab. II. An incoherent integration approach is implemented to mitigate the effect of the noise. In particular, the focused image used to extract target detections is achieved by incoherently summing Np = 128 images, where each image is obtained by processing the echoes corresponding to a single chirp interval. Moreover, every Tint = 0.5 s, new detections are produced for an overall observation window Tw = 10 s. The parameter settings for the CA-CFAR detector and the GNN tracking algorithm are summarized in Tabs. III and IV, respectively. The first numerical test concerns a point target T1 moving in the plane along the path defined by eq. (42), i.e. at a constant speed along x and with a uniformly accelerated motion along y. The trajectory followed by the target in the time interval [0, 10] s is shown in Fig. 7 for the sake of clarity.
The results displayed in Fig. 8 are the focused radar images corresponding to the times t = 0, 2.5, 5, 7.5 s. The images are characterized by the presence of a well-defined spot in correspondence with the target location. Furthermore, in agreement with the theoretical resolution performance (see eqs. (1) and (2)), the spot always has the same size along the range direction, while its angular extent widens as the target moves away from boresight. The images in Fig. 9 show the CFAR detection maps corresponding to the images in Fig. 8 and the output of the clustering process, i.e. the estimates of the target positions (measurements) provided by the centroids of the clusters. Note that the CFAR algorithm detects the target at every time instant, including those not shown in Fig. 9. Figure 10 compares the target positions estimated by the GNN tracker with the ground truth. Note that, apart from an initial delay of 1.5 s required to activate the track, the target position estimated by the tracking algorithm is in good agreement with the true trajectory; a slight deviation is observed when the target moves away from the boresight direction, as the target position estimates tend to be less accurate due to the worse angular resolution. The results in Fig. 11 show the comparison between the velocity profiles along the x- and y-axes estimated by the GNN tracker and the true target velocities. In this case, a good agreement is observed for the velocity profile along x and a fair agreement along y.
In order to quantify the localization accuracy at the output of the tracking algorithm, Tab. V summarizes the minimum, maximum, and Root Mean Square (RMS) values of the positioning error along the trajectory. As can be observed, the system is characterized by a positioning accuracy better than 0.1 m when the target is at boresight and by a maximum error of 0.75 m, which occurs when the target is far from the boresight. The next numerical test aims at assessing the system performance in a more challenging scenario characterized by the simultaneous presence of three targets T1, T2, and T3. In particular, T1 moves along the trajectory defined by eq. (42), T2 moves according to a second prescribed motion law, while T3 is a static target located at (-20, 40) m. The multi-target scenario just described, shown in Fig. 12 (solid lines), reveals that target T2 approaches T3 around t = 2.5 s, while T1 and T2 are spatially close around t = 5 s. This figure also reports the results provided by the tracker, confirming a good agreement between the true and estimated trajectories. A fair agreement of the target velocity profiles is also achieved, as confirmed by the curves plotted in Fig. 13. It must be pointed out that the accuracy of the position and velocity estimation depends on the positions of the targets and, in general, the results tend to be less accurate when the targets are close, such as around t = 2.5 s when T2 approaches T3 and around t = 5 s when T2 approaches T1. This is a direct consequence of the fact that the system has greater difficulty resolving closely spaced targets lying within the same angular resolution cell. The positioning errors summarized in Tab. VI highlight an accuracy in line with the results obtained for the single-target scenario.
Indeed, the minimum error is less than 0.1 m, while the maximum error slightly exceeds 1 m: when the targets approach each other far from the boresight, the measurements of their positions become more uncertain because of the limited angular resolution.
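The error statistics reported in Tabs. V and VI reduce to elementary operations on the Euclidean distance between estimated and true positions. A minimal sketch, using made-up sample values rather than the paper's data:

```python
import numpy as np

def positioning_error_stats(est, truth):
    """est, truth: (N, 2) arrays of (x, y) positions in metres.
    Returns the minimum, maximum, and RMS Euclidean positioning error."""
    err = np.linalg.norm(est - truth, axis=1)  # per-sample error magnitude
    return err.min(), err.max(), np.sqrt(np.mean(err ** 2))

# Hypothetical three-sample trajectory, for illustration only.
truth = np.array([[0.0, 50.0], [1.0, 50.0], [2.0, 50.0]])
est = np.array([[0.1, 50.0], [1.0, 50.2], [2.3, 49.6]])
mn, mx, rms = positioning_error_stats(est, truth)
```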

VI. EXPERIMENTAL VALIDATION
This section presents the results of two experimental trials aimed at assessing the operative use of the radar prototype and its detection and tracking capabilities in the marine environment. The tests were performed on July 13th, 2021 at the Acquamorta bay in the municipality of Monte di Procida, Napoli, Italy. A satellite picture of the area under test, provided by Google Earth Pro, is shown in Fig. 14. The area is characterized by a small inlet with three breakwaters. The radar was installed in a waterproof case and mounted on a tripod at a height of 2 m above sea level and about 2 m from the water, with the antenna boresight pointed towards the sea (see Fig. 15). The geographical coordinates of the radar sensor (40.794661° N, 14.043521° E) and the antenna boresight direction with respect to north (280° NW) were evaluated by using the GNSS receiver and the magnetic compass of a smartphone, respectively. This information is useful to georeference the target tracks and to check the positioning capabilities of the system against the static targets present in the investigated area (breakwaters, quay), whose locations are known from the satellite images. Note that the true positions of the moving targets are not available since they were not cooperative during the tests. The radar configuration parameters used for the experiment are those listed in Tab. II, except for Tint and Tw, which were set to 2 s and 30 s, respectively. As regards the signal processing parameters, the probability of false alarm of the CFAR detector was set to Pfa = 7 × 10−3, and the guard and training windows were set to 5×13 and 7×15 cells, respectively, to achieve a good compromise between correct detections and false alarms. As regards the tracker settings, the adopted parameters are those listed in Tab. IV.
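Georeferencing the tracks amounts to rotating the radar-frame coordinates by the boresight heading and converting the resulting east/north offsets into latitude/longitude. The sketch below assumes a flat-earth approximation (adequate over a few hundred metres) and that the radar x axis points to the right of boresight; neither convention is stated explicitly in the text, so both are assumptions.

```python
import math

def radar_to_geo(x, y, lat0, lon0, heading_deg):
    """Map a radar-frame point (x = cross-range to the right of
    boresight, y = along boresight, in metres) to (lat, lon) degrees,
    given the radar position (lat0, lon0) and the boresight heading
    measured clockwise from north. Flat-earth approximation."""
    h = math.radians(heading_deg)
    east = x * math.cos(h) + y * math.sin(h)    # ENU east offset [m]
    north = -x * math.sin(h) + y * math.cos(h)  # ENU north offset [m]
    lat = lat0 + north / 111_320.0              # ~metres per degree latitude
    lon = lon0 + east / (111_320.0 * math.cos(math.radians(lat0)))
    return lat, lon

# Radar position and boresight heading from the trial; the target
# coordinates (63, 142) m are those of the central breakwater.
lat, lon = radar_to_geo(63.0, 142.0, 40.794661, 14.043521, 280.0)

# Sanity check with a north-pointing boresight: 100 m along boresight
# should shift only the latitude.
lat2, lon2 = radar_to_geo(0.0, 100.0, 0.0, 0.0, 0.0)
```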
The signal processing software was implemented in the MATLAB 2019 environment and run in post-processing mode on a laptop equipped with an Intel(R) Core(TM) i7-8565U CPU and 16.0 GB of DDR3 RAM. Two moving targets were present on the sea surface during the first test: an inflatable boat and a life jacket, as shown in Fig. 16. These targets were characterized by different dimensions and materials and, thus, different RCS values. In particular, the life jacket floating on the sea surface was a particularly challenging testbed for the radar due to its small size compared to the boat. During the trials, the ground truth of the scene was recorded by the video camera of a smartphone. In the first trial, as highlighted by the video frames at times t = 6, 16, 26 s (see Fig. 17), the boat (gray arrow) headed towards the life jacket (orange arrow), which floated on the sea surface. The distance between the boat and the jacket at the end of the radar acquisition was on the order of a few meters. The images in Fig. 18 display the detection maps at times t = 6, 16, 26 s produced by the CFAR detector. The maps show the presence of static and mobile scatterers. In particular, the group of static targets around x = -80 m originates from returns from the port quay; the target at x = 63 m, y = 142 m is the breakwater closest to the radar, i.e., the one visible in the middle of Fig. 14. The detections marked by arrows represent the boat (white arrow) and the life jacket (orange arrow). Regarding the motion of the boat, the results in Fig. 18 are consistent with the corresponding frames in Fig. 17. Indeed, the boat heads towards the life jacket, which moved very slowly during the acquisition due to the low intensity of the sea waves. This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and content may change prior to final publication.
Citation information: DOI 10.1109/ACCESS.2022.3186052. To verify the correct target positioning, the tracks provided by the GNN tracker, expressed in the radar coordinate system, have been georeferenced and superimposed on the Google Earth picture of the area, as shown in Fig. 19. In the figure, the tracks associated with the various scattering objects in the scene are identified by different symbols. Although the absolute positioning accuracy achievable from the Google Earth image is on the order of a few meters [37] and the GNSS accuracy of the smartphone was about 3-4 m, the estimated positions of the static targets (quay and breakwater) overlap quite well with the real objects. Moreover, the radar system can detect the quay up to a distance of about 250 m and the first breakwater on the left at a distance of about 150 m. Figure 19 also highlights the reconstructed trajectory of the boat (gray dots), which moves towards the life jacket (orange square), in agreement with the video recorded by the smartphone. During the first trial, the range of the boat varied in the interval 50-60 m, while the life jacket was about 50 m from the radar. In the following, we report the results of a more challenging trial characterized by the presence of several moving targets. The frames of the scene at t = 6, 18, 28 s reported in Fig. 20 show the presence of four targets: the inflatable motorized boat (white arrow), two kayaks, indicated by a blue arrow (kayak 1) and a green arrow (kayak 2), and the life jacket (orange arrow). During this second trial, the boat and kayak 2 moved very slowly for a few meters, kayak 1 moved towards the boat, and the life jacket floated on the sea surface. The CFAR detection maps obtained at times t = 6, 18, and 28 s are illustrated in Fig. 21 together with the centroids of the detected targets. Similar to the detection maps of the first trial (see Fig.
18), the group of scatterers around x = -80 m is associated with the returns from the quay; the target at x = 63 m, y = 142 m is the central breakwater. The detections closest to the radar, marked with colored arrows following the notation adopted in Fig. 20, are the moving targets. It is interesting to note that the boat and the two kayaks are detected on all the maps, while the life jacket is not visible on the map at t = 28 s. A possible explanation for this behavior is that the scattering returns of the jacket are affected by glint and scintillation phenomena caused by the sea waves induced by the motion of the nearby targets. The achieved target tracks are superimposed on the satellite image in Fig. 22. Also in this case, the static targets (quay and breakwater) are correctly detected. In addition, the trajectories of the three major targets (boat and kayaks) are reconstructed by the tracking algorithm in agreement with the ground truth provided by the video camera (Fig. 20). As for the life jacket, the tracker logic did not activate the corresponding track since, as mentioned above, the detections were not persistent during the observation time window.
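A full GNN tracker solves the track-to-measurement assignment globally at each scan; as a simplified stand-in, the sketch below uses a greedy nearest-neighbour association with a distance gate, which reproduces the key behaviours discussed above (an ungated measurement, like a non-persistent life-jacket detection, is left unassigned and can only seed a tentative track). The track and measurement positions are invented for illustration.

```python
import numpy as np

def nn_associate(track_preds, measurements, gate):
    """Greedy nearest-neighbour association between predicted track
    positions (N, 2) and new measurements (M, 2), both in metres.
    Candidate pairs are taken in order of increasing distance; a pair
    is accepted only if its distance is below the gate and neither
    member is already assigned. Returns a list of (track, meas) pairs;
    unassigned measurements may seed tentative new tracks."""
    if len(track_preds) == 0 or len(measurements) == 0:
        return []
    d = np.linalg.norm(track_preds[:, None, :] - measurements[None, :, :], axis=2)
    order = np.dstack(np.unravel_index(np.argsort(d, axis=None), d.shape))[0]
    pairs, used_t, used_m = [], set(), set()
    for ti, mi in order:
        if ti in used_t or mi in used_m or d[ti, mi] > gate:
            continue
        pairs.append((int(ti), int(mi)))
        used_t.add(int(ti))
        used_m.add(int(mi))
    return pairs

# Two predicted tracks, three measurements; the third measurement lies
# outside the gate of both tracks and so remains unassigned.
tracks = np.array([[0.0, 50.0], [10.0, 60.0]])
meas = np.array([[0.5, 50.2], [10.3, 59.8], [40.0, 40.0]])
pairs = nn_associate(tracks, meas, gate=5.0)
```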

VII. CONCLUSION
This paper has dealt with the possible exploitation of a compact 24 GHz FMCW MIMO radar, developed for automotive purposes, for the detection and tracking of targets in the marine environment. An ad-hoc signal processing strategy has been proposed, and its effectiveness has been tested first with numerical experiments and then through preliminary field tests. The experimental validation was carried out by placing the radar system on the shoreline and observing static and moving targets in a bay. The achieved results demonstrate for the first time the possibility of using a 24 GHz FMCW MIMO radar system, originally designed for automotive purposes, to detect and track marine targets, even small-sized ones, at very short ranges. This is a fundamental starting point suggesting that the proposed system can be a suitable technological solution for supporting the collision avoidance operations of ASVs. However, it should be remarked that the performance of automotive radar sensors for collision avoidance in the marine environment is still an open issue, and further investigations should be carried out. Moreover, the proposed measurement configuration with a single radar module has a limited field of view in the horizontal (azimuth-range) plane. To overcome this issue, the radar system should be mounted on a rotating platform or integrated with additional sensors. To further assess the performance of the proposed solution in operative conditions, future research will concern the installation of the radar prototype on an ASV and its integration with other sensors, e.g., optical cameras. The combination of microwave and optical technologies, together with the development of suitable data integration strategies, is expected to ensure more complete awareness of the surveyed scenario.
GIANLUCA GENNARELLI received the M.Sc. degree (summa cum laude) in Electronic Engineering and the Ph.D. degree in Information Engineering from the University of Salerno, Salerno, Italy, in 2006 and 2010, respectively. From 2010 to 2011, he was a Post-Doctoral Fellow at the University of Salerno. Since 2012, he has been a Research Scientist with IREA-CNR, Naples, Italy. In 2015, he was a Visiting Scientist with the NATO-CMRE, La Spezia, Italy. He has coauthored over one hundred publications in international peer-reviewed journals, conference proceedings, and book chapters. He serves as a reviewer for several international journals and conferences. He was a Guest Editor for a special issue of the MDPI Remote Sensing journal. His research interests include microwave sensors, inverse scattering problems, radar imaging and signal processing, diffraction problems, and electromagnetic simulation.
CARLO NOVIELLO received the M.S. degree in Telecommunication Engineering in 2011 and the Ph.D. degree in Telecommunication Engineering in 2015, both from the University of Naples Federico II, Naples, Italy. Since 2012, he has been collaborating with the Institute for Electromagnetic Sensing of the Environment, National Research Council of Italy (IREA-CNR), Naples, Italy, where he is currently an Associate Researcher. His research interests include statistical radar signal processing, with emphasis on Unmanned Aerial Vehicle (UAV), airborne, and spaceborne platforms, Synthetic Aperture Radar (SAR) interferometry, and inverse SAR imaging techniques. Since 2013, he has been a member of the IEEE (Institute of Electrical and Electronics Engineers); he takes part in the major scientific conferences in the remote sensing field and serves as a regular referee for the main IEEE scientific journals.