
Dual-Constraint Coarse-to-Fine Network for Camouflaged Object Detection


Abstract:

Camouflaged object detection (COD) is an important yet challenging task, with great application value in industrial defect detection, medical care, and other fields. The challenge mainly comes from the high intrinsic similarity between target objects and the background. In this paper, inspired by biological studies showing that object detection consists of two steps, i.e., search and identification, we propose a novel framework, named DCNet, for accurate COD. DCNet explores candidate objects and extra object-related edges under two constraints (object area and boundary) and detects camouflaged objects in a coarse-to-fine manner. Specifically, we first exploit an area-boundary decoder (ABD) to obtain initial region cues and boundary cues simultaneously by fusing multi-level features of the backbone. Then, an area search module (ASM) is embedded into each level of the backbone to adaptively search coarse object regions with the assistance of the region cues from the ABD. After the ASM, an area refinement module (ARM) is utilized to identify fine object regions by fusing adjacent-level features under the guidance of the boundary cues. Through a deep supervision strategy, DCNet can finally localize camouflaged objects precisely. Extensive experiments on three benchmark COD datasets demonstrate that DCNet is superior to 12 state-of-the-art COD methods. In addition, DCNet shows promising results on two COD-related tasks, i.e., industrial defect detection and polyp segmentation.
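To make the search-then-identify pipeline concrete, the following is a minimal, runnable PyTorch sketch of the coarse-to-fine flow described in the abstract. The module names (ABD, ASM, ARM) follow the paper, but all internals, including the toy three-level backbone, channel widths, sigmoid-gated cue fusion, and boundary weighting, are assumptions for illustration rather than the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ABD(nn.Module):
    """Area-boundary decoder: fuses multi-level backbone features into an
    initial region (area) cue and a boundary cue. Fusion form is assumed."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(sum(channels), 64, 3, padding=1)
        self.area_head = nn.Conv2d(64, 1, 1)
        self.edge_head = nn.Conv2d(64, 1, 1)

    def forward(self, feats):
        size = feats[0].shape[-2:]
        up = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
              for f in feats]
        x = F.relu(self.fuse(torch.cat(up, dim=1)))
        return self.area_head(x), self.edge_head(x)

class ASM(nn.Module):
    """Area search module: re-weights one backbone level with the region cue
    to search coarse object regions (sigmoid gating is an assumption)."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, feat, area_cue):
        cue = F.interpolate(area_cue, size=feat.shape[-2:], mode='bilinear',
                            align_corners=False)
        return F.relu(self.conv(feat + feat * torch.sigmoid(cue)))

class ARM(nn.Module):
    """Area refinement module: fuses adjacent-level features under boundary
    guidance and emits a side prediction (fusion form is an assumption)."""
    def __init__(self, c_low, c_high):
        super().__init__()
        self.conv = nn.Conv2d(c_low + c_high, c_low, 3, padding=1)
        self.pred = nn.Conv2d(c_low, 1, 1)

    def forward(self, low, high, edge_cue):
        high = F.interpolate(high, size=low.shape[-2:], mode='bilinear',
                             align_corners=False)
        edge = F.interpolate(edge_cue, size=low.shape[-2:], mode='bilinear',
                             align_corners=False)
        x = F.relu(self.conv(torch.cat([low, high], dim=1)))
        x = x * (1 + torch.sigmoid(edge))  # emphasize boundary regions
        return x, self.pred(x)

class DCNetSketch(nn.Module):
    """Toy end-to-end assembly of the search-then-identify pipeline."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        stages, c_in = [], 3
        for c in channels:  # stand-in for the paper's pretrained backbone
            stages.append(nn.Sequential(
                nn.Conv2d(c_in, c, 3, stride=2, padding=1), nn.ReLU()))
            c_in = c
        self.stages = nn.ModuleList(stages)
        self.abd = ABD(channels)
        self.asms = nn.ModuleList(ASM(c) for c in channels)
        self.arms = nn.ModuleList(
            ARM(channels[i], channels[i + 1]) for i in range(len(channels) - 1))

    def forward(self, x):
        feats, f = [], x
        for stage in self.stages:
            f = stage(f)
            feats.append(f)
        area, edge = self.abd(feats)                        # initial cues
        searched = [asm(f, area) for asm, f in zip(self.asms, feats)]
        preds, high = [area, edge], searched[-1]
        for i in reversed(range(len(self.arms))):           # coarse to fine
            high, p = self.arms[i](searched[i], high, edge)
            preds.append(p)
        # deep supervision: every side output is upsampled so each can be
        # trained against the ground-truth mask (or edge label)
        return [F.interpolate(p, size=x.shape[-2:], mode='bilinear',
                              align_corners=False) for p in preds]

model = DCNetSketch()
outputs = model(torch.randn(1, 3, 256, 256))
print([tuple(o.shape) for o in outputs])  # four 1-channel maps at 256x256

Here, deep supervision corresponds to returning every side output (region cue, boundary cue, and each refined prediction) so that each can be trained against the ground-truth mask or an edge label, a common way such multi-stage decoders are supervised.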
Page(s): 3286 - 3298
Date of Publication: 25 September 2023

