Source Information
A Self-Supervised Learning Approach For Aerial Image Segmentation
Abstract
English
Recent advances in deep learning have had a major impact on remote sensing applications such as disaster management, smart city applications, and environmental monitoring. Unmanned Aerial Vehicles (UAVs) have emerged as a cost-effective remote sensing platform offering high-resolution, detail-oriented observations. Semantic segmentation of such detailed images enables many applications, such as urban planning, and recent advances in segmentation frameworks have served this purpose well. However, these frameworks rely heavily on annotated data, and annotation is a costly and time-consuming process; the limited availability of labelled data poses an additional challenge. To address these issues, this paper proposes a self-supervised learning architecture based on the redundancy reduction principle. An encoder-decoder architecture built around a multiple-sample redundancy reduction loss is presented. This loss is used to pre-train the encoder in an unsupervised manner to learn data representations, which are then used to initialize the encoder for the downstream task, and the encoder-decoder structure is fine-tuned for image segmentation. The efficacy of the proposed network is validated on the Urban Drone Dataset, achieving 64.90% intersection over union (IoU), 83.06% overall accuracy, and a 78.37% kappa value, outperforming other dual-sample loss-based architectures.
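The abstract does not specify the exact form of the multiple-sample redundancy reduction loss. As a rough illustration only, the sketch below shows a Barlow Twins-style cross-correlation (redundancy reduction) loss applied pairwise over several augmented views of the same unlabeled images; the function names, the lambda weight, and the averaging over view pairs are assumptions for illustration, not the authors' implementation.

```python
import torch


def redundancy_reduction_loss(z1, z2, lam=5e-3):
    """Redundancy-reduction loss between two batches of embeddings of shape (N, D).

    The diagonal of the cross-correlation matrix is pushed towards 1
    (invariance to augmentation) and the off-diagonal towards 0
    (decorrelated, non-redundant features).
    """
    n, _ = z1.shape
    # Standardize each embedding dimension across the batch
    z1n = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2n = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Cross-correlation matrix between the two views
    c = (z1n.T @ z2n) / n
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag


def multi_sample_loss(views):
    """Average the pairwise loss over all pairs drawn from multiple views."""
    loss, pairs = 0.0, 0
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            loss = loss + redundancy_reduction_loss(views[i], views[j])
            pairs += 1
    return loss / max(pairs, 1)
```

In a workflow consistent with the abstract, each unlabeled aerial image would be augmented into several views, passed through the encoder (typically with a projection head) to obtain the embeddings above, and the pre-trained encoder weights would then initialize the encoder-decoder for supervised fine-tuning on segmentation.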
Table of Contents
1. Introduction & Background
2. Proposed Methodology
3. Experimental Details
4. Experiment Results
5. Conclusions
Acknowledgement
References
