Abstract:
Ship detection is a significant and challenging task in remote sensing. Because ships are arbitrarily oriented and have large aspect ratios, most existing detectors adopt rotated boxes to represent them. However, these detectors require manually designed rotation anchors, which multiply the computational cost and lead to inaccurate box regression. To address these problems, an anchor-free rotation ship detector, named GRS-Det, is proposed; it mainly consists of a feature extraction network with a selective concatenation module (SCM), a rotation Gaussian-Mask model, and a fully convolutional network-based detection module. First, a U-shaped network with SCM extracts multiscale feature maps; the SCM resolves the channel imbalance between different-level features during feature fusion. Then, a rotation Gaussian-Mask is designed to model each ship according to its geometric characteristics, which addresses the mislabeling problem of rotated bounding boxes. Meanwhile, the Gaussian-Mask leverages context information to strengthen the perception of ships. Finally, the multiscale feature maps are fed to the detection module for per-pixel classification and regression. Evaluated on the HRSC2016 and DOTA Ship detection benchmarks, the proposed method achieves state-of-the-art results.
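The abstract does not give the exact formulation of the rotation Gaussian-Mask, but the general idea — rendering a 2-D Gaussian aligned with a rotated box so that pixel responses peak at the ship center and fall off along its axes — can be sketched as follows. The function name, the sigma-to-box-size ratio, and the angle convention are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def rotated_gaussian_mask(h_img, w_img, cx, cy, bw, bh, theta):
    """Render a 2-D Gaussian aligned with a rotated box.

    (cx, cy): box center in pixels; (bw, bh): box width/height;
    theta: rotation in radians. Sigmas are tied to the box half-sizes
    (bw/4, bh/4) -- a heuristic choice, not the paper's exact setting.
    """
    ys, xs = np.mgrid[0:h_img, 0:w_img].astype(np.float64)
    dx, dy = xs - cx, ys - cy
    # Rotate image coordinates into the box-aligned frame.
    c, s = np.cos(theta), np.sin(theta)
    u = c * dx + s * dy    # offset along the box's long axis
    v = -s * dx + c * dy   # offset across the short axis
    sig_u, sig_v = bw / 4.0, bh / 4.0
    # Axis-aligned Gaussian in the rotated frame: peaks at the center,
    # decays faster across the narrow dimension (large aspect ratio).
    return np.exp(-0.5 * ((u / sig_u) ** 2 + (v / sig_v) ** 2))

# Example: a 40x10 box rotated 30 degrees, centered in a 64x64 map.
mask = rotated_gaussian_mask(64, 64, 32.0, 32.0, 40.0, 10.0, np.pi / 6)
```

Such a soft mask gives every pixel inside (and near) a ship a graded positive label, which is one plausible way the per-pixel classification/regression targets described above could be constructed.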
Published in: IEEE Transactions on Geoscience and Remote Sensing ( Volume: 59, Issue: 4, April 2021)