Session Ⅲ: Real-World AI Applications

A Multimodal Network For Residential Areas Semantic Segmentation

Abstract (English)

Automatic identification of residential areas from remote sensing images is beneficial to tasks such as urban planning and disaster assessment. Current residential area extraction methods are primarily deep learning models trained on single-modal data, but the information a single modality can express is limited. This paper therefore proposes an end-to-end multi-modal semantic segmentation model that extracts features from remote sensing images and mobile phone signaling data through a dual-branch encoder, fuses the two feature streams, and concatenates them with the feature maps of the decoder stage. Experimental results show that the proposed method outperforms comparison models and can effectively identify residential areas.
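The fusion scheme the abstract describes (a dual-branch encoder, channel-wise fusion of the two modalities, then concatenation with decoder feature maps) can be sketched roughly as below. This is a minimal NumPy illustration of the data flow only: the `encode` and `upsample` functions are toy stand-ins (a 1x1 channel projection with average pooling, and nearest-neighbour upsampling), not the paper's actual backbone, and all names and tensor shapes are assumptions for illustration.

```python
import numpy as np

def encode(x, out_channels, rng):
    # Toy "encoder stage": 1x1 channel projection followed by 2x2 average
    # pooling, standing in for a real convolutional backbone.
    w = rng.standard_normal((x.shape[0], out_channels))
    proj = np.einsum('chw,cd->dhw', x, w)          # channel projection
    h, wd = proj.shape[1] // 2, proj.shape[2] // 2
    return proj.reshape(out_channels, h, 2, wd, 2).mean(axis=(2, 4))

def upsample(x, factor=2):
    # Nearest-neighbour upsampling, mimicking the decoder's resolution restoring.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

rng = np.random.default_rng(0)
image = rng.standard_normal((3, 8, 8))    # remote sensing image, (C, H, W)
signal = rng.standard_normal((1, 8, 8))   # phone-signaling raster, (C, H, W)

# Dual-branch encoder: each modality gets its own feature extractor.
f_img = encode(image, 16, rng)            # -> (16, 4, 4)
f_sig = encode(signal, 16, rng)           # -> (16, 4, 4)

# Fuse the two modalities by channel-wise concatenation.
fused = np.concatenate([f_img, f_sig], axis=0)   # -> (32, 4, 4)

# Decoder stage: restore resolution, then concatenate the fused features
# with the decoder feature map, skip-connection style.
decoder_feat = upsample(fused)                           # -> (32, 8, 8)
skip = np.concatenate([decoder_feat, upsample(f_img)], axis=0)  # -> (48, 8, 8)
```

The concatenation-based fusion shown here is the simplest choice consistent with the abstract; the paper's actual fusion module may weight or transform the branches before merging them.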

Contents

Abstract
I. INTRODUCTION
II. THE PROPOSED MODEL
A. Overall Architecture
B. Encoder-Feature Extracting
C. Decoder-Resolution Restoring
III. EXPERIMENTAL RESULT
A. Dataset
B. Evaluation Metrics
C. Results
IV. CONCLUSION
REFERENCES

Author Information

  • Lei Yan, Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
  • Zhiguo Yan, Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
