Camouflaged Object Segmentation with Omni Perception
Haiyang Mei1, Ke Xu2, Yunduo Zhou1, Yang Wang1, Haiyin Piao3, Xiaopeng Wei1, Xin Yang1,*
1Dalian University of Technology   2City University of Hong Kong   3Northwestern Polytechnical University
1. Abstract
Camouflaged object segmentation (COS) is a highly challenging task because candidate objects blend deceptively into cluttered, noisy backgrounds. Most existing state-of-the-art methods mimic the first-positioning-then-focus mechanism of predators, but they still fail to position camouflaged objects in cluttered scenes or to delineate their boundaries. The key reason is that these methods lack a comprehensive understanding of the scene when they spot and focus on the objects, and are therefore easily distracted by local surroundings. An ideal COS model should process local and global information simultaneously, i.e., maintain omni perception of the scene throughout the whole segmentation process. To this end, we propose to learn omni perception for the first-positioning-then-focus COS scheme. Specifically, we propose an omni perception network (OPNet) with two novel modules, i.e., the pyramid positioning module (PPM) and the dual focus module (DFM). They integrate local features and global representations to accurately position the camouflaged objects and to focus on their boundaries, respectively. Extensive experiments demonstrate that our method, which runs at 54 fps, significantly outperforms 15 cutting-edge models on 4 challenging datasets under 4 standard metrics.
2. Downloads
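As a loose illustration of the local-plus-global fusion idea described above (this is not the authors' implementation; the function names and the gating scheme here are hypothetical simplifications), a positioning map can be sketched by modulating a local feature map with a globally pooled scene descriptor:

```python
import numpy as np

def fuse_local_global(feat):
    """Combine a local feature map with its global context.

    feat: array of shape (C, H, W). Each spatial location is
    modulated by a scene-level descriptor, loosely mimicking the
    'omni perception' idea of weighting local responses globally.
    """
    # Global representation: average over all spatial positions.
    global_desc = feat.mean(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    # Simple gating: sigmoid of the global descriptor scales local features.
    gate = 1.0 / (1.0 + np.exp(-global_desc))
    return feat * gate + feat                                 # residual fusion

def position_map(feat):
    """Collapse channels of the fused features into a rough
    positioning map, normalized to [0, 1]."""
    fused = fuse_local_global(feat)
    pos = fused.sum(axis=0)                                   # (H, W)
    return (pos - pos.min()) / (np.ptp(pos) + 1e-8)

# Example: a random 8-channel 16x16 feature map.
feat = np.random.rand(8, 16, 16)
pos = position_map(feat)
print(pos.shape)  # (16, 16)
```

In the actual OPNet, the positioning (PPM) and boundary-focusing (DFM) stages are learned modules; this sketch only conveys the general mechanism of injecting global context into local features before positioning.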
Paper: [ OPNet.pdf ]
Experimental results: [ Google Drive ] [ OneDrive ] [ Baidu Disk, fetch code: omni ]
Pre-trained model: [ Google Drive ] [ OneDrive ] [ Baidu Disk, fetch code: omni ]
Code: [ Github ]
3. BibTex
@InProceedings{Haiyang:OPNet:2023,
  title  = {Camouflaged Object Segmentation with Omni Perception},
  author = {Mei, Haiyang and Xu, Ke and Zhou, Yunduo and Wang, Yang and Piao, Haiyin and Wei, Xiaopeng and Yang, Xin},
  year   = {2023},
}