Comparing the robustness of U-Net, LinkNet, and FPN towards label noise for refugee dwelling extraction from satellite imagery


Abstract:

Extracting footprints of refugee dwellings from satellite imagery supports dedicated humanitarian operations. Recently, deep-learning-based approaches have proved effective for this task. However, such research is still limited by the lack of cleaned labels for supervised learning. This research compares the performance of noisy labels from past humanitarian operations and cleaned labels produced by manual annotation across three classical deep learning architectures (U-Net, LinkNet, and Feature Pyramid Network (FPN)) and twelve backbones (VGG16, VGG19, ResNet-18, ResNet-34, DenseNet-121, DenseNet-169, Inception-v3, InceptionResNet-v2, MobileNet-v1, MobileNet-v2, EfficientNet-B0, EfficientNet-B1). The results show that although cleaned labels outperform noisy labels, noisy labels have high potential to replace them: producing cleaned labels requires much more time, and the footprints predicted by models trained with noisy labels are promising for humanitarian applications. In addition, the performance of the selected architectures and backbones is broadly similar. Overall, FPN with VGG16, LinkNet with DenseNet-121, and U-Net with EfficientNet-B0 outperform other combinations. Considering both accuracy and training time, U-Net with VGG16 and LinkNet with ResNet-18 are two promising alternatives for future research.
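
To make the experimental matrix concrete, the sketch below shows one way to instantiate the 36 architecture/backbone combinations with the open-source segmentation_models Keras package (https://github.com/qubvel/segmentation_models). This is an illustrative reconstruction, not the authors' published code: the binary dwelling/background setup, ImageNet-pretrained encoders, and the loss and metric choices are all assumptions.

# Illustrative sketch, assuming the segmentation_models Keras package;
# not the authors' exact implementation.
import segmentation_models as sm

sm.set_framework('tf.keras')

# The twelve encoder backbones compared in the paper, named as in the
# segmentation_models package ('mobilenet' is MobileNet-v1).
BACKBONES = [
    'vgg16', 'vgg19', 'resnet18', 'resnet34',
    'densenet121', 'densenet169', 'inceptionv3', 'inceptionresnetv2',
    'mobilenet', 'mobilenetv2', 'efficientnetb0', 'efficientnetb1',
]

# The three decoder architectures compared in the paper.
ARCHITECTURES = {'U-Net': sm.Unet, 'LinkNet': sm.Linknet, 'FPN': sm.FPN}

def build_model(arch, backbone):
    """Build one architecture/backbone combination for dwelling extraction."""
    model = ARCHITECTURES[arch](
        backbone_name=backbone,
        encoder_weights='imagenet',  # pretrained encoder, a common default
        classes=1,                   # dwelling vs. background mask
        activation='sigmoid',
    )
    # Placeholder training setup; the paper's exact loss/optimizer may differ.
    model.compile('Adam', loss=sm.losses.bce_jaccard_loss,
                  metrics=[sm.metrics.iou_score])
    return model

# Example: one of the 36 combinations evaluated, FPN with a VGG16 encoder.
model = build_model('FPN', 'vgg16')

Trained once on the noisy operational labels and once on the manually cleaned labels, such a model pair is what the reported accuracy comparison rests on.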
Date of Conference: 08-11 September 2022
Date Added to IEEE Xplore: 11 October 2022
Print on Demand (PoD) ISSN: 2377-6919
Conference Location: Santa Clara, CA, USA
