
Multimodal Noisy Segmentation based fragmented burn scars identification in Amazon Rainforest


Abstract:

Detection of burn marks due to wildfires in inaccessible rainforests is important for disaster management and ecological studies. Diverse cropping patterns and the fragmented nature of arable landscapes amidst similar-looking land patterns often thwart the precise mapping of burn scars. Recent advances in remote sensing and the availability of multimodal data offer a viable, time-sensitive alternative to classical methods, which often require human expert intervention. However, computer vision based segmentation methods have not been used, largely due to the lack of labelled datasets. In this work we present AmazonNET, a convolutional network that extracts burn patterns from multimodal remote sensing images. The network is based on UNet, a well-known encoder-decoder architecture with skip connections commonly used in biomedical segmentation. The proposed framework utilises stacked RGB-NIR channels to segment burn scars from pastures by training on a new weakly labelled, noisy dataset from Amazonia. Our model demonstrates superior performance by correctly identifying partially labelled burn scars and rejecting incorrectly labelled samples, making our approach one of the first to effectively utilise deep learning based segmentation models for multimodal burn scar identification.
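The abstract describes feeding stacked RGB and near-infrared (NIR) channels into a UNet-style encoder-decoder with skip connections. The sketch below is not the published AmazonNET; it is a minimal illustrative PyTorch example, with hypothetical layer widths and input sizes, showing how a 4-channel RGB-NIR stack can drive such a segmentation network.

# Minimal sketch (illustrative only, not the authors' code): a small UNet-style
# encoder-decoder with skip connections that takes stacked RGB-NIR input
# (4 channels) and predicts a single-channel burn-scar mask.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard UNet building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=4, out_channels=1):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (upsampled) + 64 (skip)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (upsampled) + 32 (skip)
        self.head = nn.Conv2d(32, out_channels, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # skip connection 1
        e2 = self.enc2(self.pool(e1))     # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # raw logits for the burn-scar mask

# Stack RGB and NIR into one 4-channel tensor, as the abstract describes.
rgb = torch.rand(1, 3, 256, 256)   # placeholder RGB patch
nir = torch.rand(1, 1, 256, 256)   # placeholder NIR patch
x = torch.cat([rgb, nir], dim=1)
mask_logits = TinyUNet()(x)        # shape: (1, 1, 256, 256)

In practice the logits would be passed through a sigmoid and trained against the weakly labelled masks with a pixel-wise loss; the channel widths and patch size above are placeholders, not values from the paper.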
Date of Conference: 11-14 October 2020
Date Added to IEEE Xplore: 14 December 2020
Publisher: IEEE
Conference Location: Toronto, ON, Canada
