Dark Area Classification From Airborne SAR Images Based on Deep Margin Learning

The dark areas in a SAR image correspond to zero-energy electromagnetic echo regions, because specular reflection occurs when the emitted electromagnetic wave (EMW) strikes a smooth object on the ground. In addition, the EMW sometimes cannot illuminate the entire ground object, owing to the incidence angle of the radar sensor and the height of the object, which creates a dark area in the SAR image called a SAR shadow. It is therefore challenging to distinguish shadows from smooth-surfaced ground objects among these dark areas. To address this problem, this paper proposes a novel Deep Margin Learning Algorithm (DMLA) for dark area classification in airborne SAR images. Specifically, a novel Snake Search Algorithm (SSA) with a chain code is deployed to generate margin slices of the dark areas in SAR images, which are used as the input to the subsequent classification network. In addition, a deep convolutional neural network with a simple architecture is constructed to train on the margin slices produced by the SSA and output the final labeled classification results. Experiments on three public airborne SAR image datasets yield an average classification accuracy of 98.775% for the dark margin slices and an accuracy of 100% in identifying dark regions in the original SAR images. The proposed method not only achieves high classification accuracy but also improves time efficiency.


I. INTRODUCTION
Carl Wiley first introduced synthetic aperture radar (SAR) in the 1950s [1] as an active microwave imaging system that conducts remote sensing missions with self-generated electromagnetic waves. As shown in Figure 1, the SAR antenna transmits an electromagnetic wave and receives the echo from the ground to complete a high-resolution imaging task during the flight of the radar carrier, i.e., while the radar platform keeps moving. Although this unique measurement system improves the imaging resolution by synthesizing a wider aperture during flight, several disadvantages arise in the imaging process. Specifically, speckle noise [2], geometric distortion [3], and shadow [4] are the three main problems in the SAR observation system. The first two are mainly caused by the coherent imaging principle of SAR [5] and can be suppressed and calibrated algorithmically. Shadows in SAR images, however, are an inevitable phenomenon owing to the geometric mechanism of SAR observation.
The raw SAR data represents the return energy of electromagnetic waves emitted by an active remote sensor and is typically stored as an 8-bit grayscale image. The brightness of each pixel in the SAR image depends on how much energy the SAR receiver collects from the ground echo. As shown in Figure 2, due to the height of an object on the ground, the emitted electromagnetic waves sometimes cannot reach the area behind the object, which means that the receiver will receive no echoes from this area. In this case, a dark region is formed in the SAR image, corresponding to the zero-energy echo region that the EMW cannot reach [6].
In addition, objects with smooth surfaces, such as roads, rivers, and lakes, may also produce dark regions in the SAR image due to the specular reflection of EM waves. In contrast, the bright areas in SAR images correspond to objects with rough surfaces, owing to the diffuse reflection of electromagnetic waves. Figure 3(a) shows a spaceborne SAR image of Porto Velho, Brazil, with a resolution of 3 m * 3 m, taken by TerraSAR-X with HH polarization. Figure 3(b) shows the optical image of the same area. Comparing these two images, the dark area in the SAR image represents the Madeira River, an essential waterway through the country. Almost no echoes return from the river's surface, making the river area appear dark in the SAR image. Areas with rough surfaces, such as farmland, are mapped by TerraSAR-X in agreement with the surface information in the optical image. Owing to the high flight altitude of spaceborne SAR, the shadowing problem in its images occurs only when SAR observes large objects, such as mountains and huge buildings [7]; the resolution of spaceborne SAR sensors cannot discriminate small objects on the ground, such as trees and houses. In other words, most of the dark areas in spaceborne SAR images are objects with smooth surfaces, not the shadows of objects, which is why no prominent shadow areas are found in Figure 3(a). The situation is quite different for airborne SAR, because its flight altitude is much lower than that of spaceborne SAR, making the incidence angle larger than that of a high-altitude carrier. As shown in Figure 2, SAR carriers flying at different altitudes above the same ground point produce different shadow areas. The flight altitude of airborne SAR ranges from 3,000 m to 12,000 m.
Recently, some miniature unmanned aerial vehicles (UAVs) with SAR sensors can fly at an altitude of 500 m to perform mapping tasks on ground targets [8]-[12]. The shadows in airborne SAR images are therefore more visible than those in spaceborne SAR images. Figure 4(a) shows the airborne SAR sensor named ''I-Master'', the most advanced lightweight airborne SAR produced by the British Thales Group in recent years. Figure 4(b) shows an unmanned helicopter equipped with the I-Master sensor. By comparing the SAR image with the optical image, the shadow of an object on the ground can be distinguished by human visual experience. As shown in Figure 4(c) and (d), for the Porton Down area imaged in (b), the dark areas in this airborne SAR image consist of tree shadows and specular reflections from roofs, smooth concrete floors, and roads, as indicated by arrows. Figure 4(e) shows that the shadows of trees cover part of the road, as shown by the yellow arrows, making the road appear hidden in the dark area, whereas Figure 4(f) shows the corresponding Google Maps optical image, in which the two can be distinguished very clearly. To describe this phenomenon, we first propose a new concept, SAR shadow contamination, for the occlusion of other objects in SAR images caused by shadows. For many SAR applications in military and civilian domains, it is therefore crucial to classify these different dark areas.
Since the 1970s, numerous classification methods for SAR images have been put forward. There are over 3,500 articles with the keywords ''SAR Classification'' on the IEEE Xplore website, covering two major research topics: target classification and terrain classification. Methods for SAR terrain classification have developed through three main stages since the 1980s [13]-[15], based on mathematical decomposition [16]-[23], pattern recognition [24]-[28], and artificial intelligence [29]. The year 2012 was significant for every scientific domain, since Hinton first introduced deep learning technology [30]. More and more methods based on deep neural networks (DNNs) have been applied to the SAR classification task, benefiting from the DNN's strong capability for feature learning and generalization [31]-[38]. Not only is the use of DNNs increasing, but the dimension of the DNN is also shifting from 2D to 3D [39]. Generally, the input of such SAR terrain classification methods is one complete raw SAR image. A review of the papers on this topic shows that the most commonly used input is the polarized spaceborne SAR image; researchers prefer to design innovative network architectures to achieve satisfying classification accuracy with polarized spaceborne SAR images as input. Public PolSAR datasets such as Flevoland and San Francisco Bay are the two most popular SAR terrain classification datasets, offered by the Jet Propulsion Laboratory-Caltech through the AIRSAR program from 1988 to 2004 [40], as shown in Figure 5. However, these two datasets are large-scene spaceborne SAR images in which the visible dark areas belong only to zero-energy echo scattering objects. The reason, as discussed above, is that the spaceborne SAR flight altitude is too high for the sensor's inherent resolution to capture small objects on the ground.
A summary of the development stages of SAR land surface classification is given in Table 1.
Although research on the shadow effect in SAR terrain classification is limited, shadows in SAR target classification have drawn researchers' attention. For a single target in a SAR image, the shadow on the ground often carries detailed information about the target's profile, which is a critical characteristic for target classification problems [41]-[45]. Moreover, the disadvantage of SAR shadow in terrain classification turns into an advantage for SAR target detection: in applications such as ground moving target indication (GMTI), researchers utilize the movement of a target's shadow on the ground to detect the corresponding trace [46]-[49]. In addition, other studies of SAR images also play an important role in the development of SAR [50]-[56].
The above summary of the SAR classification problem indicates that shadow issues in SAR images have not yet been fully discussed, especially for airborne SAR images. People can distinguish dark areas in a single-polarization airborne SAR image by visual experience. For a SAR interpretation system, however, correctly identifying the attribution of the dark areas in a single-polarization SAR (SPS) image is a major challenge. This paper focuses on dark area classification for single-polarization airborne SAR images. The edge information between dark areas and bright areas contains implicit features of different objects. For instance, the edge of a road is always straight, while the edge of a tree's shadow is curved and uneven. Moreover, the darkness of a dark area also depends on the type of scattering object: an absolute zero-energy echo area such as a lake or river looks much darker than a low-energy echo area such as a road or shadow. Hence, a novel Deep Margin Learning Algorithm (DMLA) is proposed to extract and train on the dark areas' edge slices to complete the classification task. A novel Snake Search Algorithm (SSA) with a chain code is deployed to quickly and accurately extract the dark areas' margin slices. We construct a convolutional neural network with a simple architecture to train on the margin slices, classify them, and assign each an ID number. Finally, all the identified margin slices are matched back to the raw SAR image to finish the dark area classification. The workflow diagram of the proposed DMLA is shown in Figure 6. The rest of this paper consists of the following sections: Section II introduces the proposed Deep Margin Learning Algorithm for dark area classification from airborne SAR images; Section III describes the validation experiments and results; Section IV concludes the paper and puts forward a few prospects for future research directions.

II. DEEP MARGIN LEARNING ALGORITHM FOR DARK AREA CLASSIFICATION FROM SAR IMAGES
This section comprises three main subsections to illustrate the proposed method for dark area classification from airborne SAR images. A detailed analysis of the features of the dark regions confirms that the edge information is the key feature for distinguishing shadows from zero-energy echo regions. A new continuous sampling method is proposed to generate dark area edge slices as input to the subsequent classification network. Unlike the complex deep neural networks used in current SAR terrain classification methods, this paper constructs a CNN with a simple structure to promptly train and classify these edge slices and output the final classification results.

A. FEATURE ANALYSIS OF DARK AREAS IN AIRBORNE SAR IMAGES
Figure 7(a) shows a SAR image from the SDMS [50] mapping, which contains several distinct dark areas; Figure 7(b) shows a manual annotation of the dark areas in (a).
The yellow pentagonal star marks the shadows of the trees, and the blue diamond marks the smooth surface of the pool. The goal of this section is to accurately identify and classify the dark areas in these raw SAR images. In the traditional fully polarized spaceborne SAR terrain classification task, different regions in the SAR image can be classified by the pixel-value characteristics of each region. However, this method is no longer suitable for the dark areas of airborne SAR images, where all pixels have grayscale values close to zero. As shown in Figure 8, we used three 9 × 9 pixel windows to sample the three dark areas in the original SAR image. Red squares 1 and 3 are SAR shadow samples, while blue square 2 is a pool sample. The average grayscale values of all pixels in squares 1, 2, and 3 are 12, 8, and 9, respectively. The pixels in these three squares all have low gray values, making it difficult for digital image processing algorithms to find distinctive features of the different dark areas [57], [58]. Another surface classification method widely used for spaceborne SAR images is based on the shape of the terrain. However, the shape of a shadow depends on the incidence angle of the emitted EMW: different incidence angles lead to different shadow shapes for the same ground object, which further complicates dark area classification in airborne SAR images. Therefore, traditional terrain classification methods are not suitable for our task.
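The near-indistinguishable window statistics described above can be reproduced with a few lines of NumPy. The image below is synthetic, and the window positions and gray-value ranges are illustrative assumptions, not data from Figure 8:

```python
import numpy as np

def window_mean_gray(image, center, size=9):
    """Mean grayscale value of a size x size window centered at (row, col)."""
    r, c = center
    h = size // 2
    patch = image[r - h:r + h + 1, c - h:c + h + 1]
    return float(patch.mean())

# Synthetic 8-bit SAR-like image: bright clutter with two dark regions.
rng = np.random.default_rng(0)
img = rng.integers(120, 200, size=(64, 64)).astype(np.uint8)
img[10:30, 10:30] = rng.integers(5, 20, size=(20, 20))   # "shadow" region
img[40:60, 40:60] = rng.integers(0, 15, size=(20, 20))   # "pool" region

m_shadow = window_mean_gray(img, (20, 20))
m_pool = window_mean_gray(img, (50, 50))
m_clutter = window_mean_gray(img, (20, 50))
```

The two dark-area means end up close together and far below the clutter mean, which is exactly why pixel-value statistics alone cannot separate shadow from pool.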
Surprisingly, we gained some insights by looking closely at the edges of the dark areas in the airborne SAR images. In Figure 9, the boundaries between the trees and their shadows, between the shadows and the background, and between the pool and the background are marked manually with red, green, and blue dashed lines, respectively. As labeled by the three colors of squares in Figure 10, we extracted ten edge slices and one clutter slice from the dark regions in this SAR image with a 128 × 128 window. Slices 1-5 were selected from the SAR shadow region, slices 6-10 from the specular reflection region, and slice 11 is a clutter slice. Comparing the two groups of edge slices, we found that the edges of the specular reflection area are relatively smooth and well distinguished.
In contrast, the edges of SAR shadows of trees on the ground are curved and irregular. This is because the top surfaces of ground objects are usually uneven, so the portion of the incident electromagnetic wave that is blocked depends entirely on their shape. As shown in Figure 11, although the treetops block most of the EM waves incident from above, a small amount still reaches the ground through the gaps between branches and leaves. Accordingly, this leaked EMW causes uneven blurring between shadow and background in the airborne SAR image. For areas where specular reflection occurs, the boundary between smooth and rough areas is evident, because the electromagnetic waves irradiating the smooth area are entirely reflected away, while the rough area diffusely scatters the electromagnetic waves back to the sensor, making the boundary in the SAR image obvious. Figure 12 shows the scattering model of a water pond under EM wave observation. Furthermore, we analyze the pixel intensity in these margin slices using the luminance distribution [59]. As shown in Figure 13, slices 2 and 7 represent the margin slices from the SAR shadow and the smooth-surface area, respectively. Clutter slice 11 is used as a reference to help readers better understand the pixel intensities of the different margin regions in the airborne SAR image. Low-intensity pixel areas represent dark areas in the original SAR image, and high-intensity pixel areas represent clutter. For the smooth-surface margin slice, the luminance intensity of the edge pixels falls rapidly to low values in the 2-D luminance distribution plot. However, the intensity values of the edge pixels in the SAR shadow margin slice decrease smoothly and slowly, which also reflects the presence of noise in the boundary transition region of the original SAR image.
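The contrast between the two margin types in the luminance plots can be illustrated numerically. The 1-D profiles below are synthetic stand-ins, assuming a hard step for the specular margin and a gradual ramp for the shadow margin; they are not extracted from slices 2 and 7:

```python
import numpy as np

x = np.arange(128)

# Specular-reflection margin: abrupt step from bright clutter to dark water.
specular = np.where(x < 64, 180.0, 8.0)

# SAR-shadow margin: gradual ramp, as caused by partial EMW leakage through foliage.
shadow = np.clip(180.0 - 4.0 * np.maximum(x - 44, 0), 8.0, 180.0)

# Steepest single-pixel luminance change along each profile.
grad_specular = np.max(np.abs(np.diff(specular)))
grad_shadow = np.max(np.abs(np.diff(shadow)))
```

The specular profile drops its full dynamic range in one pixel, while the shadow profile changes only a few gray levels per pixel, matching the "rapid fall" versus "smooth and slow decrease" behavior described above.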
The edge characteristics are also visible in the 3-D luminance distribution plot. Instead of using the whole dark area, we use only the dark area's margin slices as input to the classification network. The first essential step is therefore to acquire the margin slices of the different dark regions accurately.
To acquire margin slices accurately, the edge of the dark area must first be located. Conventional edge detection algorithms rely on the pixel gradient around the edge to locate the boundary [61]-[63]. However, for the SAR shadow edge region, the gradient is smooth and changes slowly, making the edge hard to detect. Figure 14 and Figure 15 show six commonly used operators applied to the dark areas' margin slices in the SAR image. The Canny edge detection algorithm performs best among the six for the specular reflection margin slices. However, the boundary extracted by the Canny algorithm is not closed, owing to the speckle in the SAR image, so it is hard to draw a coherent and complete line representing the margin slice's entire edge. The discontinuity of the Canny detection is even more evident for the SAR shadow margin slices. To address this problem, the following section discusses how to acquire a coherent edge for the entire dark area in the SAR image and generate the margin slices for those areas simultaneously.
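Why gradient-based operators struggle on shadow margins can be shown with a plain Sobel magnitude computed in NumPy. The two patches are synthetic (an abrupt step versus a slow ramp), so the exact numbers are illustrative only:

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid convolution, pure NumPy)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

cols = np.arange(32, dtype=float)
# Specular margin: one-pixel step; shadow margin: slow linear ramp.
step_edge = np.tile(np.where(cols < 16, 180.0, 8.0), (32, 1))
ramp_edge = np.tile(np.clip(180.0 - 6.0 * (cols - 8), 8.0, 180.0), (32, 1))

peak_step = sobel_mag(step_edge).max()
peak_ramp = sobel_mag(ramp_edge).max()
```

The peak response on the ramp is an order of magnitude weaker than on the step, so any fixed gradient threshold that rejects speckle will also break the shadow edge into fragments.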

B. GENERATION OF THE MARGIN SLICE FOR DARK AREA
Many image segmentation methods are based on deep neural networks to provide high-precision segmentation results [54]-[60]. In our proposed method, we do not need to extract the edge of the dark region precisely; we only need to generate a region slice centered on the edge of the dark region, which gives our segmentation a high fault tolerance. In this case, we adopt a threshold-based fast image segmentation algorithm to improve time efficiency. As shown in Fig. 16, we first applied Otsu's global threshold segmentation method to the raw airborne SAR images. Black areas represent dark areas in the original image. Owing to the ubiquitous speckle noise in the original SAR image, segmented speckle still exists in the background, so it is necessary to filter out these spots before segmentation. Fig. 16(c) is the segmentation result after Lee filtering, showing that the speckle points are no longer mixed into the background. Figure 16(d) is the inverse mask of (c), in which the blue region is the dark region extracted from the original airborne SAR image.
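Otsu's global thresholding itself is standard and easy to sketch. The NumPy implementation below maximizes the between-class variance over all 8-bit thresholds; the test image is a synthetic bright-clutter/dark-area mosaic, and the Lee filtering step is omitted:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's global threshold: maximize between-class variance over 8-bit levels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

rng = np.random.default_rng(1)
img = rng.integers(140, 200, size=(64, 64)).astype(np.uint8)  # bright clutter
img[16:48, 16:48] = rng.integers(0, 25, size=(32, 32))        # dark area
t = otsu_threshold(img)
mask = img < t   # True marks the extracted dark area
```

On a bimodal histogram like this one, the returned threshold falls in the gap between the dark-area and clutter modes, so the dark region is recovered cleanly.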
The dark areas in the segmented raw SAR images are irregularly shaped and disordered, with some connectivity. Different types of dark areas therefore differ somewhat in their image distribution. For example, as shown in Figure 16(d), the forest produces shadow areas interspersed with trees, resulting in many holes in the forest region of the segmented SAR image. In contrast, for low-energy scattering regions such as pools, the extracted dark areas are flat, solid regions with precise edges. Meanwhile, the very low-brightness pixels in the clutter of the original SAR image are removed by the global thresholding method.
Next, we extract the edges of these dark regions from the segmented SAR image. Some details of the edges, however, draw our attention. We found burrs at the edges of each dark region, mainly caused by the blurred transition between the object and the dark region, which makes the boundary between the two regions difficult to delineate accurately. Therefore, a morphological opening is performed on the segmentation results to eliminate small targets and smooth the boundaries of the prominent targets. As shown in Fig. 17(c), after the opening operation, the edges of the dark region become smooth and flat while the shape of the central region is maintained. With this operation, we also eliminate the tiny dark regions, which are usually considered clutter disturbances and are not the main targets of our study. Figure 17(h) shows the complete segmented SAR image after the opening operation. Observing the extracted dark areas, we can find some features in the segmented image: the contours of the extracted dark areas have a specific spatial commonality, and the dark areas generated by the same type of ground target are correlated in the image, forming interconnected regions. We can extract those regions' edges with an edge detection algorithm and number each region accordingly. As shown in Figure 18, each interconnected region is marked with a number in a different color. We acquire 14 connected regions from the segmented SAR image, which are the dark areas to be classified later.
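The numbering of interconnected regions can be sketched as a breadth-first 8-connected component labeling. The mask below is a toy example, not a segmented SAR image:

```python
from collections import deque

def label_regions(mask):
    """8-connected component labeling of a binary mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for sr in range(h):
        for sc in range(w):
            if mask[sr][sc] and labels[sr][sc] == 0:
                n += 1                       # new region ID
                queue = deque([(sr, sc)])
                labels[sr][sc] = n
                while queue:                 # flood the whole connected region
                    r, c = queue.popleft()
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < h and 0 <= cc < w and mask[rr][cc] and labels[rr][cc] == 0:
                                labels[rr][cc] = n
                                queue.append((rr, cc))
    return labels, n

mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]
labels, n = label_regions(mask)
```

Each dark connected region receives a unique integer ID in raster-scan order, analogous to the 14 numbered regions of Figure 18.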
To generate edge slices for each connected region, we first need to position the centroids of the edge slices on the boundary. The Snake game inspires a similar idea here. Snake is an arcade game that originated in 1976 as Blockade [64]. The player steers a long, thin line (commonly known as a snake) that keeps moving forward; the player can only turn the snake's head (up, down, left, and right), picking up objects (or ''dots'') along the way while avoiding touching the snake itself or other obstacles. For each piece of food the snake eats, it grows a bit longer and then searches for the next piece. Here, we propose a new Snake Search Algorithm (SSA) that locates each pixel and numbers its ID as the snake advances along the edge of a dark region. First, we define the food that the snake pursues during its movement, i.e., the dark region boundary block. As shown in Figure 19, there are four types of boundary blocks, i.e., four-pixel arrangements, on the boundary of the dark connected region for the snake to find. Centered on its current pixel, the snake automatically searches for edge blocks in eight directions and moves to the next boundary block. The snake's task is to chase pixels with value one, travel to each such pixel, and label each reached pixel with an ID. As shown in Figure 20(a), the purple triangle represents the snake's head, which has eight movable directions from its current position. Figure 20(b) shows a zoomed-in image of the boundary of the dark region, which contains four I-shaped boundary modules. Figure 20(c) shows the basic principle of the SSA, whose steps are listed in Table 2.
As shown in Figure 21, we choose an arbitrary pixel on the boundary as the snake's starting point. Starting from this point, it automatically searches for the next boundary pixel with value one and marks it with a number as its ID. Finally, the snake returns to the starting point, completing the boundary outline. Since the connected area is closed, the snake eventually returns to the starting point no matter which pixel it starts from. To demonstrate the application of the SSA to an authentic SAR image, we chose part of interconnected region 19 to validate the proposed method. Figure 22(a) shows dark connected region 19, and Figure 22(b) shows an enlarged portion of its edge contour. For any closed dark connected region, the snake can start from any boundary pixel and automatically search for the next pixel on the edge. Once the snake reaches a new pixel, it assigns a number to that pixel, as shown in Figure 22(c). Finally, when the snake returns to the starting point, a complete closed boundary containing the ID of each pixel is formed, as shown in Figure 22(d). In the next step, this closed region with identified pixels is used to generate an edge slice at each sampled boundary pixel.
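A minimal sketch of the SSA idea follows, assuming a generic "has a background 4-neighbor" test in place of the paper's four boundary-block templates: the snake walks the closed boundary of one region, numbering each pixel in visit order until it ends up back beside its start:

```python
def boundary_ids(mask):
    """Sketch of the Snake Search Algorithm: walk the closed boundary of one
    dark connected region, numbering each boundary pixel with an ID in visit
    order. Direction preference is fixed (E, SE, S, SW, W, NW, N, NE)."""
    h, w = len(mask), len(mask[0])

    def on_boundary(r, c):
        if not (0 <= r < h and 0 <= c < w) or not mask[r][c]:
            return False
        # A region pixel is a boundary pixel if any 4-neighbor is background.
        return any(not (0 <= r + dr < h and 0 <= c + dc < w) or not mask[r + dr][c + dc]
                   for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))

    # Arbitrary starting pixel: first boundary pixel in raster order.
    start = next((r, c) for r in range(h) for c in range(w) if on_boundary(r, c))
    dirs = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    ids, cur = {start: 1}, start
    while True:
        step = next(((cur[0] + dr, cur[1] + dc) for dr, dc in dirs
                     if on_boundary(cur[0] + dr, cur[1] + dc)
                     and (cur[0] + dr, cur[1] + dc) not in ids), None)
        if step is None:
            break                 # no unvisited neighbor: snake is beside its start
        ids[step] = len(ids) + 1
        cur = step
    closed = max(abs(cur[0] - start[0]), abs(cur[1] - start[1])) <= 1
    return ids, closed

# A 4 x 5 filled rectangle inside a 6 x 7 grid: its boundary has 14 pixels.
mask = [[1 if 1 <= r <= 4 and 1 <= c <= 5 else 0 for c in range(7)] for r in range(6)]
ids, closed = boundary_ids(mask)
```

For this convex toy region the walk visits all 14 boundary pixels exactly once and closes the loop; the full SSA handles concavities via the boundary-block templates of Figure 19 and Table 2.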
To generate edge slices centered on edge pixels, as shown in Figure 23(a), we create a 128 × 128 window centered on each selected pixel by extending 64 pixels on each side of it in the X and Y directions. We then apply this window to the pixels on the boundary of the dark connected region, where the pixels are sampled at fixed intervals. Figure 7(f) shows a schematic diagram of sampling the boundaries of the region in question with the created window. Figure 23(b) shows that the green pentagons and green squares represent the sampled pixels on the boundary and the generated edge windows, respectively. Finally, we map all the windows sampled on the boundaries of the corresponding regions onto the original SAR image to obtain the edge slices of all dark regions. Figure 23(e) and Figure 23(f) show the edge slices of the shadow and low-energy scattering regions in the actual SAR image. The sampling interval of the edge slices can be adjusted flexibly according to practical needs to control the number of outputs.
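Window generation along the traced boundary can be sketched as follows; the boundary list and sampling interval here are hypothetical stand-ins for the SSA output and the interval chosen in practice:

```python
import numpy as np

def margin_slices(image, boundary, interval=3, size=128):
    """Cut size x size windows from `image`, centered on every `interval`-th
    boundary pixel; windows that would fall outside the image are skipped."""
    half = size // 2
    h, w = image.shape
    slices = []
    for r, c in boundary[::interval]:        # sample the boundary at a fixed interval
        if half <= r < h - half and half <= c < w - half:
            slices.append(image[r - half:r + half, c - half:c + half])
    return slices

img = np.zeros((512, 512), dtype=np.uint8)
# Hypothetical ordered boundary pixel list (e.g. the output of the SSA trace).
boundary = [(200, 150 + k) for k in range(90)]
out = margin_slices(img, boundary, interval=3, size=128)
```

Changing `interval` directly scales the number of slices produced, which is the flexibility noted above.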
As shown in the schematic diagram of the DMLA in Figure 24, the four shapes represent different classes of dark regions in the original SAR image. We first use the SSA to sample the pixels on the edges of the dark regions at specific intervals to form dark region edge slices. We then label these slices with the class of the dark region they belong to and feed them into a CNN for training. Eventually, the trained network can identify new edge slices for the final classification of dark regions. The ultimate goal of our proposed method is to identify any unknown sample slice of a dark region in the original SAR image; the method can identify unknown dark regions both in the same SAR image and in different SAR images. In the next section, we build a deep neural network for this classification problem to train and classify the edge slices of dark regions.

C. OUR CLASSIFICATION NETWORK FOR GENERATED MARGIN SLICES
To classify these margin slices, we construct a convolutional neural network with a simple structure to learn and generalize the features of these images. In this paper, we use the Deep Learning Toolbox in Matlab (version R2021a, toolbox version 14.2) to construct and train the proposed deep neural network. Conventional SAR terrain classification methods rely on deep neural networks with complex structures because the network's input is the original large-scene SAR image.
Only a CNN with a complex structure can summarize the detailed features of a large scene well. Most researchers who use deep learning methods are reluctant to report the time their methods take, because training such networks can be very time-consuming. Compared with conventional deep-network-based SAR terrain classification methods, the input to our network is a 128 * 128 edge slice containing only the dark area margin information rather than the full original SAR image. This input dramatically reduces the training effort spent on the redundant information of the original SAR image, since the gray values of the dark areas are almost zero. We selected a serial convolution architecture for the feature extraction part of the CNN: two convolutional layers extract the input features, each followed by a ReLU and a max-pooling layer. The convolutional kernel sizes of the two convolutional layers are 8 * 8 and 4 * 4, respectively, each with a stride of two. The max-pooling size is 2 * 2 with a stride of two. The last max-pooling layer is followed by a fully connected layer with 1048 neurons that flattens the generalized feature vectors, and a softmax layer classifies the features into categories before the output layer. Figure 24 shows the detailed configuration of the network.
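Assuming valid (unpadded) convolutions, the feature-map sizes implied by the kernel and stride choices above can be checked with the usual formula floor((n - k)/s) + 1:

```python
def conv_out(n, k, s):
    """Output size of a valid (no padding) convolution or pooling layer."""
    return (n - k) // s + 1

n = 128                 # input margin slice is 128 x 128
c1 = conv_out(n, 8, 2)  # conv 8x8, stride 2
p1 = conv_out(c1, 2, 2) # max-pool 2x2, stride 2
c2 = conv_out(p1, 4, 2) # conv 4x4, stride 2
p2 = conv_out(c2, 2, 2) # max-pool 2x2, stride 2
```

Under this no-padding assumption the spatial sizes shrink 128 → 61 → 30 → 14 → 7, so the fully connected layer sees a compact 7 × 7 feature map per channel.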
To verify the feasibility of the proposed network, we conducted a pre-training test with a small dataset. This dataset contains only the three types of dark areas in airborne SAR images generated in Section B. We set the boundary pixel sampling interval to 3 pixels and selected 700 newly generated shadow edge slices and 500 low-energy scattering edge slices. We split the images into a training set and a validation set, with 80% of the images used for training and the rest for validation. Figure 26 shows the training process of our CNN on this small dataset: the loss function converges quickly, reaching 100% validation accuracy within 8 seconds. This pre-training result provides a feasible reference for the subsequent formal validation experiments. In the following section, we utilize several airborne SAR datasets to validate the proposed DMLA.

III. CONFIRMATORY EXPERIMENTS AND RESULT DISCUSSION
A. EXPERIMENTAL DATASETS AND MARGIN SLICE GENERATION
In this paper, we conduct experiments on the MSTAR [65], MiniSAR [66], and BSSAR [67] datasets to verify the reliability of the proposed DMLA. All SAR images in these three datasets were captured by airborne SAR. We mainly use 100 SAR land scene images from the MSTAR dataset; the size of each image is 1478 * 1784. Figure 27 shows several sample raw SAR images taken by airborne SAR at an angle of 15 degrees at the Arsenal National Space Center. Sandia National Laboratories created the MiniSAR dataset with an unmanned airborne SAR platform in 2007. The UAV flew at an altitude of 900 meters and imaged the Minnesota National Guard area at a range and azimuth resolution of 0.1 m * 0.1 m. The size of each full-scene SAR image is 2510 * 1638, and the scenes contain vehicles, airport runways, and houses. The U.K. Defence Science and Technology Laboratory developed the BSSAR dataset using the I-Master airborne SAR sensor. The SAR images in this dataset are 6656 * 5056 in size and contain rivers, bridges, roads, and farmland.
As shown in Figure 28, Figure 29, and Figure 30, we first extract the dark areas in the three datasets to obtain the closed connected regions with IDs. We then utilize the SSA to generate margin slices along those boundaries. We create a dedicated dataset called the SAR Dark Area Margin Dataset (SDAMD) to validate the proposed method. There are eight types of dark area margin slice in SDAMD: tree shadow, lake, bridge, house roof, airport, vehicle shadow, building shadow, and road, as shown in Figure 31. Table 3 shows the detailed configuration, which comprises a training set and a test set. We select 5000 margin slices per category from the three datasets for the training set of SDAMD and 1000 margin slices per category for the test set. The training set is divided into two subsets, as described in the previous section, to train our convolutional neural network and supervise the overfitting issue. The test set is used to test the generalization of the deep neural network and output the final classification results. We have uploaded this dataset to GitHub to make it publicly available.
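The per-category organization of SDAMD can be sketched as follows; the category list and the 80/20 split follow the text, while the slice IDs and the helper itself are hypothetical:

```python
import random

CATEGORIES = ["tree shadow", "lake", "bridge", "house roof",
              "airport", "vehicle shadow", "building shadow", "road"]

def split_sdamd(slices_per_class=5000, train_frac=0.8, seed=0):
    """Per-category train/validation split over hypothetical slice IDs;
    the 1000 test slices per class are assumed to live in a separate set."""
    rng = random.Random(seed)
    split = {}
    for cat in CATEGORIES:
        ids = list(range(slices_per_class))
        rng.shuffle(ids)
        cut = int(train_frac * slices_per_class)
        split[cat] = {"train": ids[:cut], "val": ids[cut:]}
    return split

split = split_sdamd()
```

Each of the eight categories contributes 4000 training and 1000 validation slices, with no overlap between the two subsets.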

B. FEATURE LEARNING AND CLASSIFICATION OF DARK AREA MARGIN
For SAR interpretation systems, the speed of target detection and classification is our main concern. It is well known that deep neural networks with complex structures take a long time to converge during training. However, our CNN has a simple structure and a single input modality: the input margin slices contain only the boundary information of each dark region. Therefore, in the initial stage of training, we can set the initial learning rate relatively high, at 0.1, to accelerate the descent of the loss function. As the number of training iterations increases, training with SGD as the optimization algorithm starts to encounter difficulties: gradient explosion or vanishing gradients during back-propagation along the network layers prevent the loss from conducting useful information to the parameters of the lower layers. The loss function then decreases slowly, and the best solution will not be reached if the initial learning rate is kept. Therefore, we decrease the learning rate by a factor of 0.1 every two epochs of training. We also use mini-batch gradient descent to speed up training. This method divides the data into batches and updates the parameters batch by batch, so that the samples in a batch jointly determine the direction of each gradient step. The gradient estimate is then less biased and less random than with a single sample; on the other hand, since each batch is much smaller than the entire dataset, each update is not computationally intensive. Table 4 describes the detailed network training options.
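The schedule and batching described above can be sketched framework-agnostically. This is a minimal illustration, assuming an initial rate of 0.1 dropped by a factor of 0.1 every two epochs as stated; the function names are ours:

```python
def learning_rate(epoch, initial_lr=0.1, drop_factor=0.1, drop_period=2):
    """Piecewise-constant schedule: start relatively high at 0.1, then
    multiply the rate by 0.1 every `drop_period` epochs once plain SGD
    slows down."""
    return initial_lr * drop_factor ** (epoch // drop_period)

def minibatches(samples, batch_size):
    """Yield successive mini-batches; the samples in each batch jointly
    determine the direction of one gradient step."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]
```

In a framework such as PyTorch, the same schedule corresponds to `torch.optim.lr_scheduler.StepLR` with `step_size=2` and `gamma=0.1` on top of an SGD optimizer.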
Since our margin slices are sampled in a single vertical direction, we use data augmentation techniques to improve the generalization of the network and reduce overfitting. First, with a certain probability, we rotate the images in the training set by a random angle within 15 degrees and translate them by up to 5 pixels in the up-down direction. We then train our CNN with the augmented data.
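Sampling the augmentation parameters can be sketched as below. This is an illustrative reading of the scheme, assuming a rotation probability of 0.5 (the paper does not state the exact probability), with names of our own choosing:

```python
import random

def sample_augmentation(rng, p_rotate=0.5, max_angle=15.0, max_shift=5):
    """With probability p_rotate, draw a rotation angle in [-15, 15]
    degrees (0 otherwise); always draw an up-down translation in
    [-5, 5] pixels."""
    angle = rng.uniform(-max_angle, max_angle) if rng.random() < p_rotate else 0.0
    shift = rng.randint(-max_shift, max_shift)
    return angle, shift
```

Each drawn (angle, shift) pair would then be applied to a training slice by the image library in use, e.g. `torchvision.transforms.functional.affine`.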
Our deep neural network is trained on one NVIDIA GeForce RTX 3090 with 10496 CUDA cores. Figure 32 shows the training process of our CNN with the augmented data, which reveals that it takes 1 minute 24 seconds and 550 iterations to converge. After four epochs of training, the network achieves a final validation accuracy of 99.64%. There is no gap between the training accuracy curve and the validation accuracy curve, indicating that no overfitting occurs during training. The training accuracy rises rapidly in the first few iterations and then climbs steadily and slowly until convergence. Once the accuracy reaches 90 percent, the rise becomes extremely slow; at this point, the decreasing learning rate comes into play, allowing the network to learn the features better and thus converge. We save the parameters of this well-trained network locally and apply it to the test set of SDAMD to validate the final classification accuracy of the proposed method. Figure 33 and Figure 34 show the classification confusion matrices of our network on the MSTAR training and test sets, Figure 35 and Figure 36 those on the MiniSAR training and test sets, and Figure 37 and Figure 38 those on the BSSAR training and test sets. Figure 39 shows DMLA's classification confusion matrix across the test sets of the entire SDAMD dataset. Figure 40 shows a t-Distributed Stochastic Neighbor Embedding (t-SNE) visualization of the classification result, and Figure 41 shows the ROC curve for each category in the SDAMD dataset. The final average classification accuracy is as high as 98.775%.
Comparing and analyzing the ROC curves shows that the classification model established in this paper achieves excellent performance on the dataset. The recognition rate for vehicle shadows is slightly lower than for the other dark areas because vehicles are small: the proposed method generates margin slices with a 128 * 128 interception window, and the shadow area of some vehicles is smaller than 128 * 128, which affects the final recognition rate. However, this accuracy refers to the classification of the margin slices of dark regions. We can classify a single dark region by mapping all classified margin slices back to the original SAR image: the center point of each classified slice corresponds to a location in the raw SAR image, and a dark connected region is judged to be of a given type if more than 90% of the slice centers within that region are classified into the same category. With this rule, the classification accuracy of the proposed method is 100% for the dark areas in the original SAR images.
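The region-level voting rule above can be sketched as follows; the function name is ours, and a region whose slice labels fail to reach the 90% agreement threshold is left undecided in this sketch, a detail the paper does not specify:

```python
from collections import Counter

def classify_dark_region(slice_labels, threshold=0.9):
    """Assign a dark connected region the label that more than
    `threshold` of its mapped margin-slice centers agree on;
    otherwise leave the region undecided (None)."""
    if not slice_labels:
        return None
    label, count = Counter(slice_labels).most_common(1)[0]
    return label if count / len(slice_labels) > threshold else None
```

Because the strict majority vote pools evidence from every slice along a region's boundary, occasional slice-level misclassifications are absorbed, which is consistent with the region-level accuracy exceeding the slice-level accuracy.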

IV. CONCLUSION
The classification of dark areas in SAR images has not received enough attention from researchers. This paper proposes a deep margin learning algorithm (DMLA) to classify dark areas in airborne SAR images, including SAR shadows and low-energy scattering targets with smooth surfaces. The proposed Snake Search Algorithm (SSA) can automatically locate dark areas in SAR images and generate margin slices of their boundaries. Unlike traditional SAR terrain classification methods, we do not use the full original SAR image as the network input; we use only the margin slices of dark areas and accomplish the classification task with a CNN of simple architecture. The experimental results show that the average classification accuracy of the proposed method is 98.775% for margin slices, a satisfactory result, and that the classification accuracy for dark areas in the original SAR images is 100%. At the same time, the proposed method has limitations: it is targeted at specific SAR scenes, and the shadow areas in SAR images need to be wholly separable from the low-scattering regions. For the case where the two are confounded, we are researching and preparing to develop a new concept, SAR shadow pollution, which describes a scene where a shadow area in a SAR image hides a target with low scattering energy.
Finally, the authors would like to point out that this paper is the first to classify dark regions in single-polarized airborne SAR images. We hope it draws researchers' attention to dark area classification in SAR images and that, with this paper as a reference, more in-depth research on dark areas in airborne SAR images will follow.

ACKNOWLEDGMENT
The author would like to thank Airbus Defense and Telis, U.K., for providing valuable access to the SAR data used in this work. He would also like to thank the anonymous reviewers and the Editor-in-Chief for their hard work and constructive comments during the review process. The datasets and code used in this article have been uploaded to the GitHub encrypted folder and are available for public download. Researchers can contact the authors to obtain the file passwords and reproduce the experiments.