PolSAR Target Detection via Reflection Symmetry and a Wishart Classifier

Detection of man-made targets using polarimetric synthetic aperture radar (PolSAR) data has become a promising research area. Reflection symmetry is increasingly being applied to man-made target detection algorithms as a physical property that can distinguish man-made targets from natural clutter. However, the two terms related to the reflection symmetry property in the polarimetric covariance matrix, namely, the <inline-formula> <tex-math notation="LaTeX">$C_{12} $ </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">$C_{23} $ </tex-math></inline-formula> terms, are not fully exploited by traditional methods. To fully exploit the polarization information of these two terms, an image fusion strategy based on the position and scale information of scale-invariant feature transform (SIFT) key points is proposed in this paper. A new Wishart classifier based on the patch-level Wishart distance is then used to realize automatic target detection on the fused image. The experimental results on measured data show that the proposed method enhances the contrast between targets and clutter. In addition, the detection performance of the proposed method under different target-to-clutter ratios (TCRs) is verified on synthetic and measured data.


I. INTRODUCTION
Synthetic aperture radar (SAR) has become one of the most advanced remote sensing tools due to its all-weather, all-day imaging capability and fine spatial resolution. SAR images can be used to extract man-made targets from natural backgrounds, for example, in ship detection and vehicle monitoring [1], [2]. Man-made targets generally contain partial dihedral and trihedral structures, as well as metallic components; therefore, the backscattering coefficients of man-made targets are stronger than those of natural backgrounds [3]. Accordingly, man-made targets appear as bright areas in the SAR image plane. Many target detection methods based on the amplitude information of single-channel SAR images have been proposed.
For single-channel SAR image data, detecting man-made targets such as ships or vehicles is a complicated task. Speckle noise and the presence of natural phenomena, e.g., atmospheric fronts and changes in ocean backscattering, hamper SAR image interpretation and may result in false positives [4].
PolSAR has established itself as a capable and indispensable earth remote sensing instrument in recent decades, since polarimetric SAR data contain the complete electromagnetic scattering characteristics of man-made targets and natural objects [5]. Man-made target detection from polarimetric SAR data has been widely used in many applications, such as disaster assessment, city change detection, and military target surveillance [6]. For instance, several methods based on physical mechanisms have been applied to polarimetric SAR image data for ship detection [7]-[9]. (The associate editor coordinating the review of this manuscript and approving it for publication was Larbi Boubchir.)
Natural scenarios usually possess the reflection symmetry property in geophysical remote sensing [10], while man-made targets do not. As a feature that distinguishes man-made targets from natural environments, the reflection symmetry of scatterers has many applications in the detection and identification of man-made targets in polarimetric SAR images [11]-[13]. The C12 and C23 terms in the polarization covariance matrix C of a scatterer are related to its reflection symmetry.
Different polarizations contain different scattering information about a target. To obtain more comprehensive information about the target, such as its contour, this paper proposes a fusion method for the C12 and C23 terms based on the structural information of man-made targets. We then implement man-made target detection by classifying the polarimetric SAR data after the image fusion operation.
The complex Wishart distribution model is suitable for homogeneous regions in PolSAR data, while the K-Wishart distribution [14] and other mixture models are suitable for heterogeneous backgrounds. Many PolSAR image classification methods have been proposed based on the complex Wishart distribution. Lee et al. [15] proposed a distance measure based on the Wishart distribution for land classification. Lee et al. [16] then iteratively applied the Cloude-Pottier decomposition and the complex Wishart classifier. Cao et al. [17] combined the SPAN with H/α/A and used the Wishart test statistic to perform agglomerative hierarchical clustering, obtaining segmentation results with different numbers of clusters. These unsupervised classifiers are generally applied to land use classification tasks. Wei et al. [18] combined the SPAN with the complex Wishart classifier and defined an iterative termination criterion to realize automatic ship detection. However, large sea regions with different polarimetric scattering characteristics can produce false alarms [18], whereas within a patch (i.e., a small region) the sea exhibits no obvious difference in polarimetric scattering properties [18]. To avoid false alarms and improve the classification accuracy, the complex Wishart classifier should therefore be applied at the patch level.
To extract more complete target information, in this paper the reflection symmetry components of different polarimetric channels are fused based on the structural information of the targets. Target detection is then achieved automatically by a complex Wishart classifier, for which the Wishart distance between two pixels is defined over the patches centered on those pixels. The classifier runs iteratively and stops when the rate of change of the target class or clutter class falls below a set threshold. The proposed method was tested on PolSAR data and compared with a traditional constant false alarm rate (CFAR) target detector and other target detection methods, and its performance was quantitatively evaluated by the receiver operating characteristic (ROC) curve and the area under the curve (AUC). This paper is organized as follows. Section II reviews polarimetric SAR target detection methods and the concepts of PolSAR and reflection symmetry. Section III describes the target detection method using reflection symmetry and the complex Wishart classifier. Section IV presents the experimental results and comparisons with traditional approaches. The discussion and conclusion are given in Section V.

II. RELATED WORK ON POLARIMETRIC RADAR TARGET DETECTION AND REFLECTION SYMMETRY
Polarimetric SAR data are widely used in ship target detection. Marino [7] proposed a series of detection methods for ship detection in polarimetric SAR images, including the geometrical perturbation polarimetric notch filter (GP-PNF) for the problem of partial-target detection. Yang et al. [19] then proposed a saliency detector for PolSAR images based on weighted perturbation filters; Gao et al. [20], [21] proposed a CFAR ship detector for nonhomogeneous sea clutter in PolSAR data based on the notch filter, as well as a notch-filter-based ship detector for compact PolSAR data. Wang et al. [8] proposed a new PolSAR ship detector based on superpixel-level scattering mechanism distribution features to improve the target detection performance under a low target-to-clutter ratio. He et al. [22] then proposed an automatic ship detection method for PolSAR data based on a superpixel-level local information measurement. Migliaccio et al. [3] proposed a simple filter to extract artificial metallic targets at sea from full-resolution dual-polarized SAR data, which relies on physical principles instead of standard image processing. Wang et al. [11] theoretically analyzed the magnitude of the C23 term of the polarimetric coherency matrix, demonstrating that this term reveals the difference between reflection-asymmetric targets and natural clutter.
Full-polarization SAR systems characterize scattering targets by measuring the complex scattering matrix S. This matrix consists of the complex backscattering coefficient of each polarimetric channel and can be written as

$$\mathbf{S} = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix} \quad (1)$$

where the subscripts H and V represent the horizontal and vertical polarizations, respectively. The first index represents the polarization mode of the transmitted antenna, and the second index represents the polarization mode of the received antenna. In the monostatic case, we can obtain S_HV = S_VH from the reciprocity theorem, and the 3-D lexicographic target vector u can be formed as

$$\mathbf{u} = \begin{bmatrix} S_{HH} & \sqrt{2}\,S_{HV} & S_{VV} \end{bmatrix}^{T} \quad (2)$$

where the superscript T represents the matrix transpose operation.
To handle polarimetric scattering from a distributed, depolarizing target in a dynamically changing environment, the target can be described by the second-order moments of the fluctuations, which are extracted from the polarimetric coherency or covariance matrices [5]. Here, the covariance matrix C is introduced under the backscatter alignment (BSA) convention:

$$\mathbf{C} = \left\langle \mathbf{u}\mathbf{u}^{*T} \right\rangle = \frac{1}{n}\sum_{i=1}^{n} \mathbf{u}_i \mathbf{u}_i^{*T} \quad (3)$$

where n denotes the number of independent samples and ⟨·⟩ and * are the ensemble average and complex conjugate operations, respectively. C is a 3 × 3 Hermitian positive semidefinite matrix that consists of nine independent parameters. Since only the linearity and reciprocity properties are assumed, (3) represents the most general polarimetric scattering mechanism [5]. When dealing with natural scenarios, the reflection symmetry assumption generally holds, and the correlation between the co- and cross-polarized complex scattering coefficients vanishes [12]; i.e., ⟨S_HH S*_HV⟩ = 0 and ⟨S_HV S*_VV⟩ = 0. The modulus r1 of the correlation between the co- and cross-polarized scattering coefficients is

$$r_1 = \frac{\left| \left\langle S_{HH} S_{HV}^{*} \right\rangle \right|}{\sqrt{\left\langle |S_{HH}|^{2} \right\rangle \left\langle |S_{HV}|^{2} \right\rangle}} \quad (4)$$

and r2 is

$$r_2 = \frac{\left| \left\langle S_{HV} S_{VV}^{*} \right\rangle \right|}{\sqrt{\left\langle |S_{HV}|^{2} \right\rangle \left\langle |S_{VV}|^{2} \right\rangle}} \quad (5)$$

When the values of r1 and r2 tend toward 0, the current scenario is characterized by the reflection symmetry property; when they are clearly greater than 0, the reflection symmetry assumption no longer holds.
To illustrate the reflection symmetry properties of natural scenarios and man-made targets, two scenarios can be considered: the sea surface with and without man-made targets. The sea surface is a natural scenario characterized by the reflection symmetry property, which implies that r_i (i = 1, 2) ≈ 0 is expected. The slight departure from zero mainly depends on the misalignment between the radar coordinates and the scene symmetry axis [12]. When dealing with man-made targets whose shapes consist of complex structures, the reflection symmetry assumption is no longer expected to hold; therefore, r_i values significantly larger than those of the free sea surface are expected, as demonstrated in [2].
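As a concrete sketch, the moduli r1 and r2 can be estimated from single-look complex scattering coefficients by local multilook averaging; the boxcar window used below for the ensemble average is our illustrative choice, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reflection_symmetry_moduli(S_hh, S_hv, S_vv, win=5):
    """Estimate r1 and r2 from single-look complex scattering
    coefficients, replacing the ensemble average <.> with a
    win x win boxcar average (an illustrative choice)."""
    def avg(x):
        x = np.asarray(x, dtype=complex)
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)

    c12 = avg(S_hh * np.conj(S_hv))          # <S_HH S_HV*>
    c23 = avg(S_hv * np.conj(S_vv))          # <S_HV S_VV*>
    p_hh = avg(np.abs(S_hh) ** 2).real       # <|S_HH|^2>
    p_hv = avg(np.abs(S_hv) ** 2).real
    p_vv = avg(np.abs(S_vv) ** 2).real

    # By the Cauchy-Schwarz inequality both moduli lie in [0, 1];
    # values near 0 indicate reflection symmetry (natural clutter).
    r1 = np.abs(c12) / np.sqrt(p_hh * p_hv)
    r2 = np.abs(c23) / np.sqrt(p_hv * p_vv)
    return r1, r2
```

Any local estimator of the ensemble average (multilook, boxcar, or a refined speckle filter) can be substituted for `uniform_filter` without changing the structure of the computation.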

III. TARGET DETECTION
A. IMAGE FUSION BASED ON SIFT KEY POINTS
Since the r1 and r2 images contain different polarimetric information, the same target has different backscattering coefficients in the two images. As shown in Figure 1, there are two man-made targets in the boxes in the two images, and the same target is located in boxes of the same color. For the same target, the response strength differs between the two polarimetric measurements. Therefore, the responses of the target structure extracted under different polarizations can be combined to obtain a more complete target contour. A complete target contour contributes to target detection and identification and improves target recognition performance. With this in mind, an extraction and fusion algorithm based on the SIFT key point detector is proposed.
Lindeberg defined a blob-like structure as an area that is brighter or darker than the background and stands out from its surroundings [23]. Since man-made targets usually consist of complex structures such as dihedral and trihedral reflectors, they appear as a series of bright regions in polarimetric SAR images. These bright regions can be regarded as blob-like structures. The SIFT key point detector can detect blob-like structures in images [24], and its detection performance is robust to speckle noise [25]. Therefore, we use the SIFT key point detector to extract the bright areas in the r1 and r2 images. These regions are then stitched together by certain fusion criteria to obtain a more complete target contour.
The SIFT algorithm is primarily used for image matching. Lowe [26] presented the SIFT method to perform reliable matching between different views of an object or scene. In recent years, some SAR image detection algorithms have also used this method and achieved good performance [25]. The SIFT method convolves a difference-of-Gaussian (DoG) function with an input image I(x, y) to extract stable key points in scale space. The DoG response D(x, y, σ) is computed from the difference of two nearby scales separated by a constant multiplicative factor k:

$$D(x, y, \sigma) = \left( G(x, y, k\sigma) - G(x, y, \sigma) \right) * I(x, y) \quad (6)$$

where

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} \exp\!\left( -\frac{x^{2} + y^{2}}{2\sigma^{2}} \right) \quad (7)$$

The idea of the DoG function is to smooth the image with different scale-space factors (the standard deviation σ of a Gaussian kernel) and then take the difference between the smoothed images. Pixels with large differences are points with obvious features, i.e., candidate key points. Among the key points obtained, unstable ones that have low contrast or are localized along edges are eliminated; more details can be found in [26]. Figure 2 shows the SIFT key points extracted from the r1 and r2 images; they are localized at the centers of the bright regions and are not affected by speckle noise. We can therefore label the bright regions in the r1 and r2 images: the locations of the labeled regions are determined by the SIFT key points, and their sizes are determined by the scale parameter σ. In the two labeled images, a bright region is labeled 1 and the remaining area is labeled 0. To make full use of the polarimetric information and label the more complete structure of the target, the bright regions in the two label images are stitched together by a pixel-wise image OR operation.
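A minimal sketch of the DoG key point extraction follows; the contrast threshold is illustrative, and the edge-response rejection and sub-pixel refinement of the full SIFT method [26] are omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigma0=1.6, k=2 ** 0.5, n_scales=4, contrast=0.03):
    """Detect blob-like structures as 3-D local maxima of |D(x, y, sigma)|.

    Returns a list of (row, col, sigma) tuples. Low-contrast rejection
    is included; edge-response rejection and sub-pixel refinement are
    omitted for brevity.
    """
    sigmas = [sigma0 * k ** i for i in range(n_scales + 1)]
    L = [gaussian_filter(np.asarray(img, dtype=float), s) for s in sigmas]
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    dog = np.stack([L[i + 1] - L[i] for i in range(n_scales)])

    mag = np.abs(dog)
    # a key point is a local maximum of |DoG| over scale and space
    peaks = (mag == maximum_filter(mag, size=3)) & (mag > contrast)
    return [(r, c, sigmas[s]) for s, r, c in zip(*np.nonzero(peaks))]
```

Applied to a bright blob on a dark background, the detector returns a key point near the blob center together with the scale at which the DoG response peaks.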
To obtain more detailed information about the man-made target, we use the two label images as an index image to fuse the r1 and r2 images. The criterion for the fusion operation is defined as

$$I_f(x, y) = \begin{cases} \mathrm{MAX}\{I_1(x, y), I_2(x, y)\}, & Index(x, y) = 1 \\ \mathrm{MIN}\{I_1(x, y), I_2(x, y)\}, & Index(x, y) = 0 \end{cases} \quad (8)$$

where MAX{A, B} takes the maximum of A and B, MIN{A, B} takes the minimum of A and B, I_1 and I_2 represent the r1 and r2 images, respectively, Index is the image produced from the two label images by the image OR operation, (x, y) denotes the pixel in the xth row and yth column of an image, and I_f is the fused image.
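A sketch of the fusion rule, assuming it keeps the stronger of the two responses inside the indexed bright regions and the weaker response elsewhere (a reading consistent with the clutter suppression observed later in Section IV; the function name is ours):

```python
import numpy as np

def fuse_images(I1, I2, index):
    """Fuse the r1 and r2 images using the OR-combined index map.

    Inside labeled bright regions (index == 1) the stronger response
    is kept, completing the target contour; elsewhere the weaker
    response is kept, which suppresses natural clutter.
    """
    I1, I2 = np.asarray(I1, dtype=float), np.asarray(I2, dtype=float)
    return np.where(np.asarray(index, dtype=bool),
                    np.maximum(I1, I2),   # MAX{I1, I2} on target structures
                    np.minimum(I1, I2))   # MIN{I1, I2} on clutter
```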

B. DETECTION METHOD BASED ON THE WISHART SIMILARITY
The proposed unsupervised detection method is a classifier based on the complex Wishart distribution for the polarimetric covariance matrix C or coherency matrix T [27]. For convenience of representation, complex matrix Z is used to represent the covariance matrix C and coherency matrix T .
Then, the probability density function of Z is

$$p(\mathbf{Z}) = \frac{n^{qn} |\mathbf{Z}|^{n-q}}{K(n, q)\, |\boldsymbol{\Sigma}|^{n}} \exp\!\left( -n \operatorname{Tr}\!\left( \boldsymbol{\Sigma}^{-1} \mathbf{Z} \right) \right) \quad (9)$$

where Σ = E{Z}, K(n, q) is a normalization factor, K(n, q) = π^{q(q−1)/2} Γ(n) ⋯ Γ(n − q + 1), q = 3 for the reciprocal case, and Tr(·) is the trace of a matrix. Let the q × q independent Hermitian positive definite matrices X and Y be complex Wishart distributed, i.e., X ∼ W_C(n, Σ_X) and Y ∼ W_C(m, Σ_Y). The likelihood-ratio test statistic Q for the hypothesis Σ_X = Σ_Y is

$$Q = \frac{(n + m)^{q(n+m)}}{n^{qn}\, m^{qm}} \cdot \frac{|\mathbf{X}|^{n} |\mathbf{Y}|^{m}}{|\mathbf{X} + \mathbf{Y}|^{n+m}} \quad (10)$$

When comparing two polarimetric covariance matrices, n = m is the typical case. In this paper, we select a 3 × 3 patch centered on the current pixel, so n = m = 9. Taking the logarithm of (10) with n = m gives

$$\ln Q = n \left( 2q \ln 2 + \ln |\mathbf{X}| + \ln |\mathbf{Y}| - 2 \ln |\mathbf{X} + \mathbf{Y}| \right) \quad (11)$$

It can be verified that ln Q = 0 when the two complex matrices are the same. A normalized similarity d(X, Y) ∈ [0, 1] of two pixels can then be defined from (11) such that d is proportional to ln Q and d(X, Y) = 1 when comparing a pixel with itself. This similarity measurement is used in the detection method. Figure 2 shows the processing details of the automatic man-made target detection method. The main steps are as follows:
1) Import the polarimetric SAR image data and calculate the polarimetric covariance matrix C of each original PolSAR image pixel.
2) Select the median pixel intensity of the fused image I_f as a threshold to initially divide the pixels of I_f into two classes: a pixel whose value is larger than the median is assigned to the man-made target class; otherwise, it is assigned to the natural clutter class.
3) Record the number of pixels in the man-made target class as m and the number of pixels in the natural clutter class as n.
4) Compute the class center of each class as the mean covariance matrix of its pixels; then, for every pixel, compute the patch-level Wishart distance to each class center and reassign the pixel to the closer class.
5) Calculate the ratio R_c of the pixels that changed their class label to the total number of pixels in the class, and check whether R_c is less than a given threshold R_T. If not, save the updated classes and return to step 3; otherwise, stop the iteration and take the current classification result as the man-made target detection result.
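The iteration above can be sketched as follows. The helper names, the use of the mean covariance matrix as a class center, and the iteration cap are our illustrative assumptions; a full implementation would form the patch-summed covariance matrices for the similarity of Eq. (11).

```python
import numpy as np

def ln_q(X, Y, n=9, q=3):
    """ln Q of Eq. (11); 0 when X == Y, negative otherwise."""
    ld = lambda M: np.linalg.slogdet(M)[1]   # log|M|, numerically stable
    return n * (2 * q * np.log(2) + ld(X) + ld(Y) - 2 * ld(X + Y))

def wishart_detection(I_f, C, R_T=0.01, max_iter=50):
    """Sketch of steps 1-5: I_f is the fused image (H, W), and C holds
    the per-pixel (patch-averaged) 3x3 covariance matrices (H, W, 3, 3)."""
    labels = (I_f > np.median(I_f)).astype(int)      # step 2: median split
    for _ in range(max_iter):                        # cap added for safety
        # class centers: mean covariance matrix of each class
        centers = [C[labels == k].mean(axis=0) for k in (0, 1)]
        new = np.zeros_like(labels)
        H, W = labels.shape
        for i in range(H):
            for j in range(W):
                # step 4: assign to the class with the larger similarity
                new[i, j] = int(np.argmax([ln_q(C[i, j], V) for V in centers]))
        R_c = (new != labels).mean()                 # step 5: change rate
        labels = new
        if R_c < R_T:
            break
    return labels                                    # 1 = man-made target class
```

On a toy scene where one half of the image has a stronger covariance than the other, the loop converges immediately to the correct two-class split.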

IV. EXPERIMENT

A. PRESENTATION OF X-BAND SAR DATA
To demonstrate the performance of the proposed method, we performed man-made target detection on real polarimetric SAR data. The experiments use two sets of polarimetric SAR data: AFRL airborne X-band SAR data and spaceborne RADARSAT-2 data. In this subsection, the results on the quad-polarimetric AFRL data are presented. The data are complex SAR imagery from an AFRL airborne X-band SAR sensor; the observed scene contains multiple buildings, vehicles, and trees. Figure 3 (a) presents the Pauli RGB image of this scene. Because of the large spatial extent of Figure 3 (a), we selected an area containing man-made targets so that they can be seen more clearly in the Pauli RGB image. The area inside the red rectangle in Figure 3 (a) is the selected scene, which contains several vehicle targets that can be used to validate the proposed method. An enlarged image of the chosen area is shown in Figure 4 (a).
To make a quantitative comparison of the detection performance of different methods, we need the ground truth of the scene. However, since we did not have an optical image of the scene, the real ground truth is unavailable. Therefore, we manually labeled the vehicles in Figure 4 (a) by referencing the previous literature [8]. We call the manually labeled image the pseudo-ground truth, and in the subsequent experiments we use it in place of the real ground truth to compare different methods. The pseudo-ground truth is shown in Figure 4 (b).

B. VISUAL INSPECTION
Performing SIFT key point detection on the r1 and r2 images is a critical step in the process. The positions of the SIFT key points correspond to the locations of potential target structure areas. The SIFT key points detected from the r1 and r2 images are shown in Figure 5 (a) and (b). As shown there, for the same vehicle target, the SIFT key points in the r1 and r2 images are not the same. As mentioned in the previous section, different polarimetric channels contain different information about the target; therefore, the SIFT key points extracted from the r1 and r2 images differ.
After the SIFT key points have been detected, the next step is to extract the potential target structure regions based on these points. Given the location and scale information of the SIFT key points, we can obtain candidate regions that are potential target structures. When the scale σ of a SIFT key point is known, the size of the blob-like structure around the key point is 3.5σ; a detailed derivation can be found in [25]. A blob-like structure is an edge-closed structure, so both a rectangle and a circle qualify as blob-like structures. In this paper, we use a circle with a diameter of 3.5σ to approximate the blob-like structure, which is the potential target structure area. The blob-like structures extracted from the r1 and r2 images by the SIFT key points and their scale information are shown in Figure 5 (c) and (d). To extract more complete regions of potential targets, we performed an image OR operation on the two label images, obtaining the index map shown in Figure 5 (e). Based on the index image, the r1 and r2 images are merged by the fusion criterion in (8). The fused image is shown in Figure 6.
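The circular approximation and the OR merge can be sketched as follows (the function name is ours):

```python
import numpy as np

def blob_index_map(shape, keypoints, diam_factor=3.5):
    """Binary index map: a disc of diameter 3.5*sigma is drawn around
    each key point (row, col, sigma), approximating the blob-like
    structure it marks."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    index = np.zeros(shape, dtype=bool)
    for r, c, s in keypoints:
        radius = 0.5 * diam_factor * s
        index |= (yy - r) ** 2 + (xx - c) ** 2 <= radius ** 2
    return index

# the two label images are then merged with a pixel-wise OR:
# index = blob_index_map(shape, kp_r1) | blob_index_map(shape, kp_r2)
```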
To compare the contrast of the fused image with that of the original HH-channel image, the two images are shown in Figure 6 (a) and (b). Visually, the fused image looks darker than the HH-magnitude image because the pixel intensities of the natural clutter region decrease in the fused image, which appears as lower brightness in the display. In the fused image, the contrast between the man-made target areas and the natural clutter increases, and the outline of the vehicle target is more complete than in the original image.
To observe the contrast changes between the two images more intuitively, we display their three-dimensional plots, where the added dimension is the pixel intensity. The three-dimensional plots are shown in Figure 6 (c) and (d). They show more intuitively that the pixel intensities in the natural clutter regions are suppressed and that the contrast between the man-made targets and the natural clutter is enhanced. Figure 6 (c) and (d) provide a qualitative analysis of the detection performance of the HH image and the fused image. In addition, based on the pseudo-ground truth, we also quantitatively compare the detection performance of the two images by plotting their ROC curves, shown in Figure 6 (e). Briefly, the closer the ROC curve is to the upper-left corner of the coordinate axes, the better the detection performance. The ROC curve is described in detail in the next section.

C. QUANTITATIVE ANALYSIS CRITERIA
To quantitatively compare the detection performance of the fused image, we plot the ROC curve as a quantitative criterion. The ROC curve is a graphical plot that illustrates the detection performance of an image as its detection threshold is varied; it is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. In the radar detection problem, the TPR is called the target detection probability, and the FPR is called the false alarm probability. For the segmentation thresholds, we use the values of all the pixels in the current image as the threshold set. After the image is segmented, the area above the threshold is displayed in white and the area below it in black. We compare the binarized segmentation results with the pseudo-ground truth to obtain the detection probability P_d and the false alarm probability P_fa corresponding to the current threshold. P_d and P_fa are formulated as

$$P_d = \frac{N_{tt}}{N_t}, \qquad P_{fa} = \frac{N_{ct}}{N_c} \quad (13)$$

where N_tt is the number of real target pixels among the detected pixels, N_t is the total number of target pixels in the pseudo-ground truth image, N_ct is the number of false target pixels among the detected pixels, and N_c is the total number of clutter pixels in the pseudo-ground truth image. For a fixed image, different thresholds generate different P_d and P_fa values; by segmenting the image with all thresholds in the threshold set, the ROC curve can be plotted.
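The ROC construction described above can be sketched directly from the definitions in (13); the trapezoidal AUC at the end is a standard summary added for convenience.

```python
import numpy as np

def roc_curve(img, truth):
    """Sweep every distinct pixel value as a threshold and compute
    P_d = N_tt / N_t and P_fa = N_ct / N_c for each (cf. (13)).

    img: real-valued detection image; truth: boolean ground-truth mask.
    Returns (P_fa, P_d, AUC); the AUC uses the trapezoidal rule.
    """
    truth = np.asarray(truth, dtype=bool)
    N_t = truth.sum()                    # target pixels in ground truth
    N_c = truth.size - N_t               # clutter pixels in ground truth
    pd, pfa = [], []
    for th in np.unique(img):
        det = img >= th                  # binarized segmentation result
        pd.append((det & truth).sum() / N_t)
        pfa.append((det & ~truth).sum() / N_c)
    # sort by P_fa (then P_d) and integrate with the trapezoidal rule
    order = np.lexsort((pd, pfa))
    pfa, pd = np.asarray(pfa)[order], np.asarray(pd)[order]
    auc = np.sum((pfa[1:] - pfa[:-1]) * (pd[1:] + pd[:-1]) / 2)
    return pfa, pd, auc
```

For a perfectly separable image, the curve hugs the upper-left corner and the AUC equals 1.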
In the quantitative comparison, we chose the ROC curve of the test statistic image obtained by the CFAR detector as a baseline. The CFAR detector extracts the mean and variance in a training window to normalize the test pixels; the test statistic image is composed of these normalized pixels. The sizes of the training and protection windows have a direct impact on the detection performance; in practice, they are usually set according to prior information on the size of the target to be detected. In our experiment, the training and protection window sizes of the CFAR detector were finely adjusted to ensure good performance. To verify the improvement brought by the image fusion step, we compare the ROC curve of the fused image with those of the r1 and r2 images. The ROC curves of the r1 image, the r2 image, the CFAR test statistic image, and the fused image are shown in Figure 7. To quantitatively compare the detection performance of the different images, we use the AUC to summarize each ROC curve with a single number. The AUC ranges between 0 and 1; the larger the AUC, the higher the target-to-clutter ratio (TCR) of the image and the better the subsequent detection performance. The AUCs of the fused image and the other images are given in Table 1. The AUC of the fused image is higher than those of the r1 and r2 images, which shows that the contours and details of the man-made targets are improved by combining the different polarization information through SIFT key point detection and image fusion.

D. EXPERIMENT ON RADARSAT-2 DATA
In the SAR image ship detection problem, weak ship targets with low ship-sea contrast are often missed. To verify that the proposed method has better detection performance for ships with low ship-sea contrast, we artificially changed the TCR of the polarimetric SAR data, and the results of different methods on the polarimetric SAR images at different TCRs are presented. A C-band RADARSAT-2 Fine-Quad-mode fully polarimetric dataset is used in the experiments. The resolutions are approximately 8 m and 12 m in the azimuth and range directions, respectively, and the data were acquired on August 4, 2010. The coherency matrices are computed using a 3 × 3 window. The Pauli RGB image is shown in Figure 9. We use the RADARSAT-2 data in Figure 8 (a) to verify the detection performance of the proposed method at different TCRs. To quantitatively evaluate the detection performance of the different methods, we obtained the pseudo-ground truth of the current scene by manual annotation; it is shown in Figure 9 (b) and is also used when generating the simulated SAR image data at different TCRs.
To evaluate the detection performance of the proposed method on the RADARSAT-2 polarimetric SAR data with different TCRs, particularly when the TCR is low, we generate synthetic images with different TCRs in the same way as in [8]. The operation is as follows. The TCR is defined as the ratio, in decibels, of the average span of the target to that of the clutter. We change the TCR by multiplying the target pixel measurements by constants. Images with different TCRs are shown in Figure 9; under a low TCR, it is difficult to distinguish between the ship and the sea, and the low-TCR images in Figure 9 simulate the low ship-sea contrast condition. The proposed method is compared with the SPAN, PWF [28], and PTD [29] methods. To show the detection performance of the different methods at different TCRs, we generate a set of images whose TCRs are 0 dB, 2 dB, 4 dB, 6 dB, 8 dB, and 10 dB, and the AUC is used to evaluate the performance of each method on these images; a higher AUC usually implies better detection performance. Figure 10 shows the AUC versus the TCR for the different methods. It can be seen in Figure 10 that the detection performance of the proposed method is close to that of the PTD, PWF, and other methods when the TCR is relatively high, while at a low TCR the proposed method outperforms the other methods. This means that the proposed method also achieves better detection performance when detecting weak targets at sea.
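The TCR synthesis described above can be sketched as a single scale factor applied to the target pixels (the function name is ours):

```python
import numpy as np

def set_tcr(span, target_mask, tcr_db):
    """Rescale target pixels so that the ratio of the mean target span
    to the mean clutter span equals tcr_db (in decibels)."""
    span = np.asarray(span, dtype=float)
    target_mask = np.asarray(target_mask, dtype=bool)
    clutter_mean = span[~target_mask].mean()
    # scale factor driving the mean target span to the desired level
    scale = clutter_mean * 10 ** (tcr_db / 10) / span[target_mask].mean()
    out = span.copy()
    out[target_mask] *= scale
    return out
```

Because the scaling acts linearly on the mean, the resulting image has exactly the requested TCR, which makes it easy to generate the 0-10 dB test set.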
To more intuitively demonstrate the detection performance of the proposed method and the other methods on the ship target, we carried out the ship target detection on another RADARSAT-2 dataset and showed the detection results in the form of images.
The PolSAR image used in this experiment is shown as a Pauli pseudo-color image in Figure 11 (a); the scene is a sea surface near a port. There are 26 ships of different sizes and ship-sea contrasts: some have a high ship-sea contrast and some a low one, so this dataset can also verify the detection performance of the proposed method for weak targets. The different colors in the Pauli pseudo-color map represent different scattering mechanisms, where blue represents odd-bounce scattering; that is, the sea surface appears blue in the image.
Since there are no optical data for the current scene, we do not have a real ground truth image. Instead, 21 ship targets with high ship-sea contrast are marked according to their scattering mechanisms and shapes; they are marked in the polarimetric SAR image with red rectangles, as shown in Figure 11 (b). In addition to these high-contrast targets, there are some small, low-contrast objects on the sea whose scattering mechanisms differ from that of the sea surface. Observing their scattering mechanisms and shapes, we consider five of them to be small ship targets and mark them with white ellipses, also shown in Figure 11 (b). To observe these five weak ship targets more clearly, we enlarge them in Figure 11 (c-f). Beyond the 26 marked ship targets, the Pauli pseudo-color image shows that another type of object is present on the sea; because its scattering mechanism differs from that of the sea surface, it will cause false alarms in the detector.
The new method is compared with the PTD, the PWF, and SPAN methods. As revealed by the synthetic image data results shown in Figure. 10, the performance difference among these methods mainly lies in the detection of targets with low ship-sea contrast but not in the detection of targets with high ship-sea contrast. Therefore, we choose appropriate detection thresholds for these methods to try to detect all the weak ship targets; these methods can also detect all the strong ship targets.
The detection results of the different methods are displayed in Figure 12. The new method produces fewer false alarms than the PTD, PWF, and SPAN methods. Although the proposed method, the PTD method, and the PWF method all detect the 26 targets, the proposed method detects fewer false alarm pixels. The SPAN method does not detect all the weak targets: T4 is missed. In general, the new method achieves better performance than the existing methods. The reasons are analyzed as follows.
The PTD detector has many false alarms near the coastline, which may be because the scattering mechanism of the false alarm clutter is different from the scattering mechanism of the sea surface. The PWF and SPAN methods are dependent on the scattering intensity. Therefore, weak target pixels may be missed, and strong clutter pixels may be detected.
Using the detector presented in this paper, all 26 targets can be detected, although some false alarms remain in the results. For the task of detecting large ships, the size of the target can be used as an indicator to eliminate false alarms that differ greatly from the ship size. For weak targets, however, it is difficult to eliminate false alarms by size information because of their small size. In future work, we hope to extract information about the target and clutter in other domains, such as attributed scattering centers, to distinguish between weak targets and clutter false alarms.

V. CONCLUSION
We have proposed a PolSAR target detection method based on reflection symmetry and a Wishart classifier. To obtain more complete reflection symmetry information of man-made targets, we propose a fusion strategy based on the position and scale information of the SIFT key points. The fused image is then classified by a classifier based on the patch-level Wishart distance to achieve man-made target detection. The proposed method was first validated on AFRL X-band SAR data and compared with the CFAR detector. The experimental results show that the image fusion based on SIFT key points increases the contrast between the man-made targets and the natural clutter, so that better detection performance is achieved; the subsequent quantitative analysis based on the pseudo-ground truth also verified this conclusion. RADARSAT-2 SAR data were then used to test the effectiveness of the proposed method at low TCRs. The results show that the proposed method is robust to different TCRs, and its detection performance was quantitatively evaluated.
There will still be some false alarms in the detection results. It is difficult to remove these false alarms by the size information due to the presence of weak targets. In a follow-up work, we hope to introduce information from other domains to distinguish between weak targets and clutter false alarms.
MINGFEI GU received the B.S. degree in electronic and information engineering from Xidian University, Xi'an, China, in 2013, where he is currently pursuing the Ph.D. degree in signal processing with the National Laboratory of Radar Signal Processing.
His research interests include synthetic aperture radar (SAR) image change detection and polarimetric SAR target detection and discrimination. Her research interests include synthetic aperture radar (SAR) image change detection, SAR target discrimination, SAR image processing, and pattern recognition.
DONGWEN YANG received the B.S. degree in electronic and information engineering from Xidian University, Xi'an, China, in 2013, where he is currently pursuing the Ph.D. degree in signal processing with the National Laboratory of Radar Signal Processing. His main research interests include PolSAR target detection and SAR feature extraction.